id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
a6258f98-eb37-43ac-a5c7-cf65c6d006dc | trentmkelly/LessWrong-43k | LessWrong | Could you tell me what's wrong with this?
Edit: Some people have misunderstood my intentions here. I do not in any way expect this to be the NEXT GREAT IDEA. I just couldn't see anything wrong with this, which almost certainly meant there were gaps in my knowledge. I thought the fastest way to see where I went wrong would be to post my idea here and see what people say. I apologise for any confusion I caused. I'll try to be more clear next time.
(I really can't think of any major problems in this, so I'd be very grateful if you guys could tell me what I've done wrong).
So, a while back I was listening to a discussion about the difficulty of making an FAI. One of the ways that was suggested to circumvent this was to go down the route of programming an AGI to solve FAI. Someone else pointed out the problems with this. Amongst other things one would have no idea what the AI will do in pursuit of its primary goal. Furthermore, it would already be a monumental task to program an AI whose primary goal is to solve the FAI problem; doing this is still easier than solving FAI, I should think.
So, I started to think about this for a little while, and I thought 'how could you make this safer?' Well, first of, you don't want an AI who completely outclasses humanity in terms of intellect. If things went Wrong, you'd have little chance of stopping it. So, you want to limit the AI's intellect to genius level, so if something did go Wrong, then the AI would not be unstoppable. It may do quite a bit of damage, but a large group of intelligent people with a lot of resources on their hands could stop it.
Therefore, what must be done is that the AI cannot modify parts of its source code. You must try and stop an intelligence explosion from taking off. So, limited access to its source code, and a limit on how much computing power it can have on hand. This is problematic though, because the AI would not be able to solve FAI very quickly. After all, we have a few genius level people trying to solve FAI, and they're strugg |
eebe574b-72ad-4a82-a66e-c111b6c35315 | trentmkelly/LessWrong-43k | LessWrong | Most Questionable Details in 'AI 2027'
My thoughts on the recently posted story.
Caveats
* I think it's great that the AI Futures Project wrote up a detailed scenario.
* I enjoy it.
* Every part of the story i didn't comment on here is either fine or excellent.
* This is one of the most realistic scenarios i've read.
* All detailed predictions contain errors.
* The authors of this scenario don't claim it's the most likely future.
* If the speed of 2018-2025 was the typical rate of progress in software, then AI 2027 would be realistic.
Core Disagreements
* Early 2026: OpenBrain is making algorithmic progress 50% faster.
* As with many parts of this scenario, i think this is plausible but too fast. 150% productivity is a lot in business terms, & the scenario doesn't provide much detail for why this is 150% as opposed to 110%. In my software development experience, organizations are bottlenecked by their organizational efficiency (relationships & process) rather than by their software tools or by general employee attentiveness. Humans can already write bugfixes & features quickly. The trick is knowing which bugfixes & features to write. So OpenBrain being so productive is possible but it relies upon a additional lucky cause - OpenBrain's employees must have done a best-in-industry job of rapidly adapting to a process & release schedule that harmoniously leverages the software agents. Note this is even harder while rapidly hiring.
* February 2027: OpenBrain wants to maintain a good relationship with the executive branch, because it is basically the only actor that can stop them now.
* We haven't really established why OpenBrain's market dominance is inevitable. In business it's common for smaller competitors & stealth startups to have a decent chance of growing to rival the market leader. Contrast this with the later half of the story, when the software is so good at research that it doesn't rely on the performance of the human staff.
* June 2027: Most of the humans at OpenBrain can’t use |
d42bb314-f4e7-424a-9fc7-0e1454bc232d | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "Update March 2012: We are still accepting and processing applications for this work on an ongoing basis.
Imagine trying to learn baseball by reading essays about baseball techniques. [1]
We're trying to make the jump to teaching people rationality by, metaphorically speaking, having them throw, catch, and hit baseballs in the company of friends. And as we develop curriculum to do that, we're noticing that we often improve quite a lot ourselves in the course of coming up with 20 examples of the sunk cost fallacy. This suggests that the best of us have a lot to gain from practicing basic skills more systematically. Quoth Anna Salamon: There are huge numbers of basic, obviously useful rationality habits that I do about 10% as often as it would be useful to do them. Like "run cheap experiments/tests often”, and “notice mental flinches, and track down the thought you’re avoiding”. Eliezer Yudkowsky, Anna Salamon, several others paid on an hourly basis, and a few volunteers, have been designing exercises and exercise-sets for a rationality curriculum. Our current working point is on the exercises for "Motivated Cognition". Currently the only completed session is "Sunk Costs", which is still being tested - yes, we're actually testing these things repeatedly as we build them. The main purpose of the sessions is to be performed in person, not read online, but nonetheless the current version of the Sunk Costs material - presentation and exercise booklets - is available as a sample: [0] [1] [2] [3] [4] [5]. This is a presentation on sunk costs in which background explanations are interspersed with "do as many of these exercises as you can in 3 minutes", followed by "now pair up with others to do the 'transfer step' parts where you look for instances in your past life and probable future life."
We're looking for 1-2 fulltime employees who can help us build more things like that (unless the next round of tests shows that the current format doesn't work), and possibly a number of hourly contractors (who may be local or distant). We will definitely want to try your work on an hourly or monthly basis before making any full-time hires.
The complete labor for building a rationality kata - we are not looking for someone who can do all of this work at once, we are looking for anyone who can do one or more steps - looks something like this:
Select an important rationality skill and clearly perceive the sort of thinking that goes into executing it. Invent several new exercises which make people's brains execute that type of thinking. Compose many instances of those exercises. Compose any background explanations required for the skills. Figure out three things to tell people to watch out for, or do, over the next week. Turn all of that into a complete 90-minute user experience which includes random cute illustrations for the exercise booklets, designing graphics for any low-level technical points made, building a presentation, testing it in front of a live naive audience, making large changes, and testing it again.
We are not looking only for people who can do all of this labor simultaneously. If you think you can help on one or more of those steps, consider applying — for a full-time job, a part-time hourly gig (perhaps from a distance), or as a volunteer position. We will want anyone hired to try hourly work or a trial month before making any full-time hires. Salary will be SIAI-standard, i.e. $3K/month, but if you do strong work and Rationality-Inst takes off your salary will eventually go much higher. Very strong candidates who can do large amounts of work independently may request higher salaries. You will be working mostly with Anna Salamon and will report to her (although in the short term you may also be working directly with Eliezer on the "isolate a useful skill and invent new exercises to develop it" phase).
If you think you have the idea for a complete rationality kata and want to develop the entire thing on your own, send us a short email about your idea - we're open to setting a lump-sum price. Skills needed:
We need folks with at least one of the following skills (do not feel you need them all; you'll be part of a team; and repeated experience shows that the people we end up actually hiring, report that they almost didn't contact us because they thought they weren't worthy): Catchy professional writing. We need folks who can take rough-draft exercises and explanations, and make them fun to read — at the level of published books. Curriculum design. We need folks who can zoom in on the component skills for rationality (the analogs of throwing, catching, keeping your eye on the ball), and who can invent new exercises that systematically practice those components. E.g., the thought process that goes from "sunk cost fallacy" to "transform a sunk cost to a purchased option". Example generation. Given an exercise, we need someone who can think of lots of specific examples from real life or important real-world domains, which illustrate the exact intended point and not something almost-like the intended point. E.g., turn "Sunk cost fallacy" into 20 story snippets like "Lara is playing poker and has bet $200 in previous rounds..." (Our experience shows that this is a key bottleneck in writing a kata, and a surprisingly separate capacity from coming up with the first exercise.) Teaching or tutoring experience in whichever subjects (e.g., math / programming / science, martial arts / sports / dance, cognitive behavioral therapy, corporate trainings, social skills, meditation); Technical diagram design. We need someone who can be asked for "A diagram that somehow represents the human tendency to overweight near pains relative to distant pains", understand the concept that is being conveyed, and invent a new diagram that conveys it. Presentation design. The current intended form of a rationality kata involves a visual presentation with accompanying spoken words. Powerpoint and Photoshop polishing. See above. Illustration / cartooning. It would be nice if the exercises were accompanied by small, whimsical drawings. These drawings should prime the reader to both: (a) feel warmly toward the characters in the story-snippets (who will generally be struggling with rationality errors); (b) notice how ridiculous those characters, and the rest of us, are. Social initiative enough to gather guinea pigs and run many practice trials of draft curriculum, while collecting data. Bonuses: Skill at running scientific literature searches; knowledge of the heuristics and biases literature, the literature on how to teach critical thinking or rationality, neuroscience literature, or other literatures that should inform our curriculum design; Background in game design, curriculum design, or in other disciplines that help with designing exercises that are fun and conducive to learning; Having read and understood the core Sequences; having a serious interest in learning and teaching rationality. If this project appeals to you and you think you may have something to add, apply using this short form or just shoot us an email. Please err on the side of applying; so many freaking amazing people have told us that they waited months before applying because they “didn’t want to waste our time”, or didn’t think they were good enough. This project needs many sorts of talents, and volunteers also welcome — so if you’d like to help launch an awesome curriculum, send us an email. Your email doesn’t have to be super-detailed or polished — just tell us how you might be able to contribute, and any experience we should know about. [1] If the baseball analogy seems far-fetched, consider algebra. To learn algebra, one typically drills one subskill at a time — one spends a day on exponent rules, for example, understanding why x^a * x^b = x^(a+b) and then practicing it bunches of times, in bunches of algebra problems, until it is a part of your problem-solving habits and reflexes, a step you can do fluently while attending to larger puzzles. If there were a world in which algebra had been learned only through reading essays, without subskill-by-subskill practice, it would not be surprising if the world’s best algebra practitioners could be outperformed by an ordinary student who worked diligently through the exercises in a standard textbook. We’d like you to help us build that first textbook."
What you see above is a segment from a LessWrong user's post.
Could you elaborate on the specific skills and qualifications needed for the role of a "rationality kata" designer, particularly in terms of curriculum design and example generation?
Certainly. A "rationality kata" designer is a unique role that requires a blend of skills and qualifications. Here's a breakdown of the specific skills and qualifications needed for this role, particularly in terms of curriculum design and example generation:
1. Curriculum Design: This is a crucial skill for a rationality kata designer. It involves the ability to identify and break down complex concepts into smaller, manageable parts that can be taught effectively. This includes understanding the key components of rationality, and designing exercises that help learners practice these components systematically. For instance, one should be able to take a concept like the "sunk cost fallacy" and design exercises that help learners understand and apply this concept in different contexts. The designer should also be able to structure these exercises into a coherent and engaging curriculum.
2. Example Generation: This skill involves the ability to come up with a multitude of specific, real-world examples that illustrate the concepts being taught. The examples should be relevant and should accurately represent the concept at hand. For instance, if the concept is the "sunk cost fallacy", the designer should be able to generate examples like "Lara is playing poker and has bet $200 in previous rounds...". The ability to generate a large number of such examples is crucial, as it helps to reinforce the concept and make it more relatable for learners.
In addition to these skills, a rationality kata designer should ideally have experience in teaching or tutoring, as this would give them insights into effective teaching strategies and learner engagement. They should also have strong writing skills, as they would need to create engaging and easy-to-understand content.
Technical skills such as diagram design and presentation design would also be beneficial, as the role involves creating visual aids to support the curriculum. Lastly, the designer should have the social initiative to gather test subjects and run trials of the draft curriculum, and the ability to use feedback from these trials to improve the curriculum.
Given the complexities involved in curriculum design and example generation for rationality kata, how do you ensure the exercises remain engaging and effective without overwhelming the learner?
Ensuring that the exercises remain engaging and effective without overwhelming the learner involves a balance of several factors:
1. Progressive Complexity: Start with simpler exercises that target fundamental skills and gradually introduce more complex ones. This allows learners to build their skills step by step without feeling overwhelmed.
2. Real-world Examples: Use examples that are relevant to the learner's life or interests. This not only makes the exercises more engaging but also helps learners see the practical application of the skills they are learning.
3. Variety: Mix different types of exercises such as individual tasks, group activities, discussions, and reflections. This keeps the learning experience diverse and caters to different learning styles.
4. Feedback and Reinforcement: Provide immediate feedback on exercises and reinforce correct application of skills. This helps learners understand their progress and motivates them to continue practicing.
5. Breaks and Reflection: Include breaks and reflection periods in the curriculum. This gives learners time to digest what they've learned and prevents cognitive overload.
6. Interactive Elements: Incorporate interactive elements such as quizzes, games, or interactive diagrams. These can make learning more fun and engaging, and can also enhance understanding and retention of the material.
7. Relevance: Ensure that each exercise directly contributes to the development of the targeted rationality skill. This keeps the learning process focused and efficient.
By balancing these factors, you can create a curriculum that is engaging, effective, and manageable for learners.
How do you incorporate interactive elements like games or quizzes into a rationality curriculum without detracting from the learning objectives?
Incorporating interactive elements like games or quizzes into a rationality curriculum can actually enhance the learning objectives if done correctly. Here's how:
1. Align with Learning Objectives: Ensure that every game or quiz directly aligns with the learning objectives. The interactive element should serve as a tool for practicing or testing the specific rationality skills being taught.
2. Use as Reinforcement: Use games and quizzes as a way to reinforce what has been learned. For instance, after teaching a concept, you could use a quiz to test the learners' understanding of that concept.
3. Make it Engaging: The interactive elements should be designed in a way that they are engaging and fun. This can motivate learners to participate and can make the learning experience more enjoyable.
4. Provide Feedback: Use the results of games and quizzes to provide immediate feedback to learners. This can help them understand where they are excelling and where they need to improve.
5. Use Real-world Scenarios: Incorporate real-world scenarios into the games and quizzes. This can help learners see how the rationality skills they are learning can be applied in practical situations.
6. Balance with Other Learning Methods: While games and quizzes can be very effective, they should be balanced with other learning methods such as lectures, discussions, and individual exercises. This ensures a well-rounded learning experience.
By incorporating interactive elements in this way, you can enhance the learning experience without detracting from the learning objectives. |
475acfd6-29ca-4c0c-b39e-87e47cec8f3b | trentmkelly/LessWrong-43k | LessWrong | Holden Karnofsky's Singularity Institute Objection 1
The sheer length of GiveWell co-founder and co-executive director Holden Karnofsky's excellent critique of the Singularity Institute means that it's hard to keep track of the resulting discussion. I propose to break out each of his objections into a separate Discussion post so that each receives the attention it deserves.
Objection 1: it seems to me that any AGI that was set to maximize a "Friendly" utility function would be extraordinarily dangerous.
Suppose, for the sake of argument, that SI manages to create what it believes to be an FAI. Suppose that it is successful in the "AGI" part of its goal, i.e., it has successfully created an intelligence vastly superior to human intelligence and extraordinarily powerful from our perspective. Suppose that it has also done its best on the "Friendly" part of the goal: it has developed a formal argument for why its AGI's utility function will be Friendly, it believes this argument to be airtight, and it has had this argument checked over by 100 of the world's most intelligent and relevantly experienced people. Suppose that SI now activates its AGI, unleashing it to reshape the world as it sees fit. What will be the outcome?
I believe that the probability of an unfavorable outcome - by which I mean an outcome essentially equivalent to what a UFAI would bring about - exceeds 90% in such a scenario. I believe the goal of designing a "Friendly" utility function is likely to be beyond the abilities even of the best team of humans willing to design such a function. I do not have a tight argument for why I believe this, but a comment on LessWrong by Wei Dai gives a good illustration of the kind of thoughts I have on the matter:
> What I'm afraid of is that a design will be shown to be safe, and then it turns out that the proof is wrong, or the formalization of the notion of "safety" used by the proof is wrong. This kind of thing happens a lot in cryptography, if you replace "safety" with "security". These mistakes are still o |
d75b8d2e-2658-44cd-88ab-6939db75f660 | trentmkelly/LessWrong-43k | LessWrong | Wild Moral Dilemmas
[CW: This post talks about personal experience of moral dilemmas. I can see how some people might be distressed by thinking about this.]
Have you ever had to decide between pushing a fat person onto some train tracks or letting five other people get hit by a train? Maybe you have a more exciting commute than I do, but for me it's just never come up.
In spite of this, I'm unusually prepared for a trolley problem, in a way I'm not prepared for, say, being offered a high-paying job at an unquantifiably-evil company. Similarly, if a friend asked me to lie to another friend about something important to them, I probably wouldn't carry out a utilitarian cost-benefit analysis. It seems that I'm happy to adopt consequentialist policy, but when it comes to personal quandaries where I have to decide for myself, I start asking myself about what sort of person this decision makes me. What's more, I'm not sure this is necessarily a bad heuristic in a social context.
It's also noteworthy (to me, at least) that I rarely experience moral dilemmas. They just don't happen all that often. I like to think I have a reasonably coherent moral framework, but do I really need one? Do I just lead a very morally-inert life? Or have abstruse thought experiments in moral philosophy equipped me with broader principles under which would-be moral dilemmas are resolved before they reach my conscious deliberation?
To make sure I'm not giving too much weight to my own experiences, I thought I'd put a few questions to a wider audience:
- What kind of moral dilemmas do you actually encounter?
- Do you have any thoughts on how much moral judgement you have to exercise in your daily life? Do you think this is a typical amount?
- Do you have any examples of pedestrian moral dilemmas to which you've applied abstract moral reasoning? How did that work out?
- Do you have any examples of personal moral dilemmas on a Trolley Problem scale that nonetheless happened?
The Username/password anonymous accou |
9494ce73-521c-4515-b5c9-03cbca4bc4e0 | trentmkelly/LessWrong-43k | LessWrong | What is Wrong?
I've always looked at LessWrong as a community that aims to reduce error of reasoning. However, the word "wrong" always seemed to have connotations with ethics and logic, less so goal pursuit. Something being "right" or "wrong" is generally thought of as a state of a logical proposition with respect to some logic axioms and ontological assumptions, rather than a pragmatic one.
However, it may be true that the axioms of logic that one believes are a result of one's interests, the observations about the world. For example, if one is interested in binary-tree-like understanding, one chooses to accept the law of excluded middle. If one is interested in understanding the universe through simulation, then one may choose to accept the axioms of constructive logic. If one is interested in disproving obviously true statements, one chooses Trump logic, and so on. It is pragmatic...
So, what do we do if I we to *explore* rather than to adhere to any predefined logic axioms? In general, if one has a goal Y, one searches for logic axioms X, that would help the reasoning to achieve Y. Therefore, with respect to an agent with a goal Y, the "wrong" is any X that does not minimize one's distance to Y, and being "Less Wrong" implies *not just* reducing cognitive or reasoning errors, but *generally* "optimizing" -- not just in the domain of logical functions, or ethical functions, but in general.
The answer as to what specific domain we have to optimize to be less wrong in general, has been elusive to me, but it seems that creation of new more capable media to transmit, hold, preserve, that let evolve and flourish all of our systems, minds and cultures, is the one domain with respect to which we should consider what is wrong or right.
So, when judging something (X) to be right or wrong, we should take a look, how does this affects world's total information content (Y).
Is AI wrong?
AI is a way to compress information by creating powerful models. Once model is built, information |
32271070-f603-449f-b927-eb69b1abfc66 | trentmkelly/LessWrong-43k | LessWrong | Why humans are sometimes less rational than animals
New paper from Keith Stanovich (one of my favorite cognitive scientists):
> Several formal analyses in decision theory have shown that if people’s preferences follow certain logical patterns (the so-called axioms of rational choice) then they are behaving as if they are maximising utility. However, numerous studies in the decision-making literature have indicated that humans often violate the axioms of rational choice. Additionally, studies of nonhuman animals indicate that they are largely rational in an axiomatic sense. It is important to understand why the finding that humans are less rational than other animals is not paradoxical. This paper discusses three reasons why the principles of rational choice are actually easier to follow when the cognitive architecture of the organism is simpler: contextual complexity, symbolic complexity, and the strong evaluator struggle.
Enjoy. |
a32f34ac-4836-4a31-9424-7fa963515243 | trentmkelly/LessWrong-43k | LessWrong | FrontierMath Score of o3-mini Much Lower Than Claimed
OpenAI reports that o3-mini with high reasoning and a Python tool receives a 32% on FrontierMath. However, Epoch's official evaluation[1] received only 11%.
There are a few reasons to trust Epoch's score over OpenAIs:
* Epoch built the benchmark and has better incentives.
* OpenAI reported a 28% score on the hardest of the three problem tiers - suspiciously close to their overall score.
* Epoch has published quite a bit of information about its testing infrastructure and data, whereas OpenAI has published close to none.
Edited in Addendum:
Epoch has this to say in their FAQ:
> The difference between our results and OpenAI’s might be due to OpenAI evaluating with a more powerful internal scaffold, using more test-time compute, or because those results were run on a different subset of FrontierMath (the 180 problems in frontiermath-2024-11-26 vs the 290 problems in frontiermath-2025-02-28-private).
1. ^
Which had Python access. |
33b41d8d-59a6-469f-bb6e-9b8253b19db5 | trentmkelly/LessWrong-43k | LessWrong | Meetup : London Meetup, 28th April
Discussion article for the meetup : London Meetup, 28th April
WHEN: 28 April 2013 02:00:00PM (+0100)
WHERE: Holborn, London
A fortnightly meetup in the Shakespeare's Head pub by Holborn tube station. We meet every other Sunday at 2pm.
Everyone is welcome to attend: we're a friendly group and we don't bite. If you're on the fence about coming, err on the side of showing up. It's probably safe to assume that we'd like to meet you.
We also have a Google group.
Discussion article for the meetup : London Meetup, 28th April |
5e801a39-bec2-47fa-8e72-c75b56811389 | trentmkelly/LessWrong-43k | LessWrong | GreaterWrong—new theme and many enhancements
(Previous posts: [1], [2], [3], [4])
GreaterWrong.com has just added several new features and UI enhancements:
Modern Less Wrong theme
There is now a new theme (bringing the total to nine themes to choose from), called “Less”. (This theme is inspired by the design of the new, i.e. current, Less Wrong site.)
Here’s how it looks on a desktop:
And on a phone:
(See the About page for how to switch themes.)
Mobile theme tweaker
The theme tweaker feature (which lets you do things like invert colors—instantly creating a “dark mode” version of any theme—as well as adjust brightness, contrast, saturation, and hue) is now available on mobile devices.
Open the theme selector (gear button in the lower-left of the screen), and then tap this button:
And you’ll see the theme tweaker screen:
(See the About page for more on the theme tweaker.)
Strong vote display
Strong upvotes and downvotes now display properly on GreaterWrong, in all themes.
(Themes, starting from top left and going across by row: default, grey, ultramodern, zero, brutalist, rts, classic, less.)
Alignment Forum view
Click the “AF” icon next to any Alignment Forum post, and you’ll be taken to a listing of all the Alignment Forum posts.
|
63ab1d3c-8344-48c5-9824-dc363a721efa | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | MIRI/OP exchange about decision theory
Open Philanthropy's Joe Carlsmith and Nick Beckstead had a short conversation about [decision theory](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh/p/zcPLNNw4wgBX5k8kQ) a few weeks ago with MIRI's Abram Demski and Scott Garrabrant (and me) and LW's Ben Pace. I'm copying it here because I thought others might find it useful.
Terminology notes:
* CDT is [**causal decision theory**](https://plato.stanford.edu/entries/decision-causal/), the dominant theory among working decision theorists. CDT says to choose the action with the best causal consequences.
* EDT is **evidential decision theory**, CDT's traditional rival. EDT says to choose the action such that things go best *conditional* on your choosing that action.
* TDT is [**timeless decision theory**](http://intelligence.org/files/TDT.pdf), a theory proposed by Eliezer Yudkowsky in 2010. TDT was superseded by FDT/UDT because TDT fails on dilemmas like [counterfactual mugging](https://www.lesswrong.com/tag/counterfactual-mugging), refusing to pay the mugger.
* UDT is [**updateless decision theory**](https://www.lesswrong.com/tag/updateless-decision-theory), a theory proposed by Wei Dai in 2009. UDT in effect asks what action "you would have pre-committed to without the benefit of any observations you have made about the universe", and chooses that action.
* FDT is [**functional decision theory**](https://arxiv.org/abs/1710.05060), an umbrella term introduced by Yudkowsky and Nate Soares in 2017 to refer to UDT-ish approaches to decision theory.
---
**Carlsmith:** Anyone have an example of a case where FDT and updateless EDT give different verdicts?
**Beckstead:** Is smoking lesion an example?
I haven't thought about how updateless EDT handles that differently from EDT.
**Demski:** FDT is supposed to be an overarching framework for decision theories "in the MIRI style", whereas updateless EDT is a specific decision theory.
In particular, FDT may or may not be updateless.
Updateful FDT is basically TDT.
Now, I generally claim it's harder to find examples where EDT differs from causal counterfactuals than people realize; eg, EDT and CDT [do the same thing on smoking lesion](https://www.alignmentforum.org/s/fgHSwxFitysGKHH56). So be aware that you're not going to get the "standard view" from me.
However, TDT gets some problems wrong which UDT gets right, eg, counterfactual mugging.
Updateless FDT would not get this wrong, though; it appears to be all about the updatelessness.
To get EDT-type and CDT-type DTs to really differ, we have to go to [Troll Bridge]( https://www.lesswrong.com/posts/hpAbfXtqYC2BrpeiC/troll-bridge-5). EDT fails, FDT variants will often succeed.
**Garrabrant:** I feel like it is not hard to come up with examples where updateful EDT and CDT differ ([XOR blackmail](https://intelligence.org/files/DeathInDamascus.pdf)), and for the updateless question, I think the field is small enough that whatever Abram says is the “standard view.”
I think that to get EDT and CDT to differ updatelessly, you need to cheat and have the agent have some weird non-Bayesian epistemics (Bayesians get the tickle defense), so it is hard to construct formal examples.
Unfortunately, all agents have weird non-Bayesian epistemics, so that doesn’t mean we get to just skip the question.
My “standard view” position is that EDT is obviously philosophically correct the way that Bayesianism is obviously philosophically correct; CDT is an uglier thing that gets the same answer in ideal conditions; but then [embeddedness](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh) gives you non-ideal conditions everywhere, and CDT is closer to being structured in a way that can handle getting the right answer in spite of having weird epistemics.
My non-standard-view answer is that hopefully the successors to the [Cartesian Frame](https://www.lesswrong.com/s/2A7rrZ4ySx6R8mfoT) ontology / [Factored Set](https://www.lesswrong.com/s/kxs3eeEti9ouwWFzr) ontology will make this question go away.
**Bensinger:** Terminology/history side-note: Abram's right that "FDT is supposed to be an overarching framework for decision theories 'in the MIRI style'", but I don't think it's meant to be so overarching as to include TDT. I think the original intended meaning was basically 'FDT = UDT-ish approaches to decision theory'.
From the comments on[Let’s Discuss Functional Decision Theory](https://www.lesswrong.com/posts/2THFt7BChfCgwYDeA/let-s-discuss-functional-decision-theory):
> My model is that 'FDT' is used in the paper instead of 'UDT' because:
>
> * The name 'UDT' seemed less likely to catch on.
> * The term 'UDT' (and 'modifier+UDT') had come to refer to a bunch of very different things over the years. 'UDT 1.1' is a lot less ambiguous, since people are less likely to think that you're talking about an umbrella category encompassing all the 'modifier+UDT' terms; but it's a bit of a mouthful.
> * I've heard someone describe 'UDT' as "FDT + a theory of anthropics" -- i.e., it builds in the core idea of what we're calling "FDT" ("choose by imagining that your (fixed) decision function takes on different logical outputs"), plus a view to the effect that decisions+[probutilities](https://www.lesswrong.com/posts/TeYro2ntqHNyQFx8r/policy-approval) are what matter, and subjective expectations don't make sense. Having a name for the FDT part of the view seems useful for evaluating the subclaims separately.
>
> The FDT paper introduces the FDT/UDT concept in more CDT-ish terms (for ease of exposition), so I think some people have also started using 'FDT' to mean something like 'variants of UDT that are more CDT-ish', which is confusing given that FDT was originally meant to refer to the superset/family of UDT-ish views. Maybe that suggests that researchers feel more of a need for new narrow terms to fill gaps, since it's less often necessary in the trenches to crisply refer to the superset.
>
> [...]
>
> Nate says: "The main datapoint that Rob left out: one reason we don't call it UDT (or cite Wei Dai much) is that Wei Dai doesn't endorse FDT's focus on causal-graph-style counterpossible reasoning; IIRC he's holding out for an approach to counterpossible reasoning that falls out of evidential-style conditioning on a logically uncertain distribution. (FWIW I tried to make the formalization we chose in the paper general enough to technically include that possibility, though Wei and I disagree here and that's definitely not where the paper put its emphasis. I don't want to put words in Wei Dai's mouth, but IIRC, this is also a reason Wei Dai declined to be listed as a co-author.)"
>
>
Footnote: a philosopher might say 'FDT is an overarching framework for *approaches to decision theory* in the MIRI style'; and they might be happy calling FDT "a decision theory", in the same sense that 'CDT' and 'EDT' are deemed decision theories even though they've been interpreted and operationalized in dozens of different ways by philosophers.
(The FDT paper calls FDT a 'decision theory' rather than 'a family of decision theories' because it's written for mainstream philosophers.)
As a matter of terminology, I think MIRI-cluster people are more likely to (e.g.) see 10 distinct decision algorithms and group them in ~8 distinct 'decision theories' where a philosopher might group them into ~2 distinct 'decision theories'. 🤷♀️
**Carlsmith:** Thanks for the comments, all. My hazy understanding had been something like: updateful CDT and updateful EDT are both focused on evaluating actions, but CDT evaluates them using counterfactuals/do-operators or some such, whereas EDT evaluates them using conditionals.
The difference that updatelessness makes is that you instead evaluate overall policies (mappings from inputs to outputs) relative to some prior, and act on that even after you’ve “learned more.” The CDT version of this, I thought, would do something like counterfactual/do-operator type reasoning about what sort of policy to have — and this sounded a lot like FDT to me, so I’ve been basically rounding FDT off to “updateless CDT.” The EDT version, I imagined, would do something like conditional reasoning about what sort of policy to have. Thus, the whiteboard diagram below.
On this framework, I’m a bit confused by the idea that FDT is a neutral over-arching term for MIRI-style decision theory, which can be updateless or not. For example, my impression from the paper was that FDT was supposed to be updateless in the sense of e.g. paying up in counterfactual mugging. and my sense was that FDT was taking a stand on the “counterfactuals vs. conditionals” at least to some extent, insofar as it was using counterfactuals/do-operators on causal graphs. But sounds like I’m missing some of the relevant distinctions here, and/or just mis-remembering what the paper was committed to (this is just me speaking from impressions skimming through the less-wrong-ish literature on this stuff).
**Garrabrant:** I think that there is this (obvious to LessWrongers, because it is deeply entangled with the entire LessWrong philosophy) ontology in which “I am an algorithm” rather than “I am a physical object.” I think that most decision theorists haven’t really considered this ontology. I mostly view FDT (the paper) as a not-fully-formal attempt to bridge that inferential difference and argue for identifying with your algorithm.
I view it as a 2x2x2 cube for (algorithm vs physical), (CDT vs EDT), (updateless vs updateful).
And FDT is mostly about the first axis, because that is the one people are being stupid about. I think that the general MIRI-LW consensus is that the third axis should go on the updateless side, although there is also some possibility of preferring to build tools that are not updateless/do not identify with their algorithm (for the purpose of enslaving them 🙂).
**Pace:**😆
**Garrabrant:**However, the CDT vs EDT axis is more controversial, and maybe the actual answer looks more like “the question doesn’t really make sense once you identify with your algorithm correctly”.
One view I partially hold is that updateless-algorithm EDT is correct for an ideal reasoner, but all three axes represent tools to get approximately the right answer in spite of not being an ideal reasoner.
Where naively pretending you are an ideal reasoner leads to catastrophe.
And this does not mean they are just hacks. Not being an ideal reasoner is part of being an embedded agent.
(Anyway, I think the paper may or may not make some choices about the other axes, but the heart of FDT is about the algorithm question.)
**Pace:** Is 'ideal embedded reasoner' a wrong concept?
**Garrabrant:** It depends on your standards for “ideal.” I doubt we will get anywhere near as ideal as “Bayesianism/Solomonoff induction.”
**Carlsmith:** OK, so is this getting closer? EDT vs. CDT: evaluate X using conditionals vs. counterfactuals/do-operators/some causation-like thing.
Algorithm vs. Physical: X is your algorithm vs. X is something else. (I would’ve thought: the action? In which case, would this reduce to something like the policy vs. action? Not sure if evaluating policies on an “I am a physical object” view ends up different from treating yourself as an algorithm.)
Updateful vs. updateless: evaluate X using all your information, vs. relative to some prior.
**Bensinger:** Agreed the FDT paper was mainly about the algorithm axis. I think the intent behind the paper was to make FDT = 'yes algorithm, yes updateless, agnostic about counterfactuals-vs-conditionals', but because the paper's goal was "begin to slowly bridge the inferential gap between mainstream philosophers and MIRI people" it brushed a lot of the interesting details under the rug.
And we thought it would be easier to explain LW-style decision theory using a counterfactual-ish version of FDT than using a conditional-ish version of FDT, and I think the paper failed to make it clear that we wanted to allow there to be conditional-ish FDTs.
**Demski:**
> Algorithm vs. Physical: X is your algorithm vs. X is something else. (I would’ve thought: the action?
>
>
I don't agree with this part, quite. You can't change your whole algorithm, so it doesn't really make sense to evaluate different possible algorithms. You do, however, have control over your policy (which, at least roughly, is "the input-to-output mapping implemented by your algorithm").
Algorithm-vs-physical isn't quite "evaluate possible policies", however. I think a central insight is "think as if you have control over all your instances, rather than just one instance". This is commonly associated with EDT (because a causal decision theorist can "interpret" EDT as mistakenly thinking it has control over everything correlated with it, including other instances -- the you-are-your-algorithm insight says this is actually good, if perhaps not quite the right reasoning to use). So it's possible for someone to heavily endorse the "you are your algorithm" insight without grokking the "act as if you control your policy" idea.
(Such a person might get Newcomb right but be confused about how to handle Transparent Newcomb, a state which I think was common on LW at one point??)
(Transparent Newcomb is super confusing if you don't get the action/policy distinction, because Omega is actually choosing based on your policy -- what you *would* do *if* you saw a full box. But that's not easy to see if you're used to thinking about actions, to the point where you can easily think Transparent Newcomb isn't well-defined, or have other confusions about it.)
But I think TDT did have *both* the "you control your instances" insight *and* the "you control your policy" insight, just *not* coupled with the actual updatelessness part!
So if we really wanted to, we could make a 3x2x2, with the "alg vs physical" split into:
-> I am my one instance (physical); evaluate actions (of one instance; average over cases if there is anthropic uncertainty)
-> I am my instances: evaluate action (for all instances; no worries about anthropics)
-> I am my policy: evaluate policies
> is 'ideal embedded reasoner' a wrong concept?
>
>
A [logical inductor](https://intelligence.org/2016/09/12/new-paper-logical-induction/) is ideal in the specific sense that it has some theoretical guarantees which we could call "rationality guarantees", and embedded in the sense that with a large (but finite) computer you could actually run it. I think this scales: we can generally talk about bounded rationality notions which can actually apply to actual algorithms, and these rationality conditions are "realizable ideals" in some sense.
The true "ideal" rationality should be capable of giving sound advice to embedded agents. But the pre-existing concepts of rationality are very far from this. So yeah, I think "Scott's Paradox" ("updateless EDT is right for ideal agents, but pretending you're an ideal agent can be catastrophic, so in some cases you should not follow the advice of ideal DT") is one which should dissolve as we get better concepts.
> Updateful vs. updateless: evaluate X using all your information, vs. relative to some prior
>
>
And this is a basic problem with UDT: which prior? How much information should we use / not use?
Do realistic embedded agents even "have a prior"?
Or just, like, a sequence of semi-coherent belief states?
> On this framework, I’m a bit confused by the idea that FDT is a neutral over-arching term for MIRI-style decision theory, which can be updateless or not. For example, my impression from the paper was that FDT was supposed to be updateless in the sense of e.g. paying up in counterfactual mugging.
>
>
I currently see it as a "reasonable view" (it's been my view at times) that updatelessness is doomed, so we have to find other ways to achieve eg paying up in counterfactual mugging. It still points to something about "MIRI-style DT" to say "we want to pay up in counterfactual mugging", even if one does not endorse updatelessness as a principle.
So I see all axes *except* the "algorithm" axis as "live debates" -- basically anyone who has thought about it very much seems to agree that you control "the policy of agents who sufficiently resemble you" (rather than something more myopic like "your individual action"), but there are reasonable disagreements to be had about updatelessness and counterfactuals.
**Beckstead:** One thing I find confusing here is how to think about the notion of "sufficiently resemble." E.g., how would I in principle estimate how many more votes go to my favored presidential candidate in a presidential election (beyond the standard answer of "1")?
(I have appreciated this discussion of the 3 x 2 x 2 matrix. I had previously been thinking of it in the 2 x 2 terms of CDT vs. EDT and updateless/updateful.)
**Demski:**
> One thing I find confusing here is how to think about the notion of "sufficiently resemble." E.g., how would I in principle estimate how many more votes go to my favored presidential candidate in a presidential election (beyond the standard answer of "1")?
>
>
My own answer would be the EDT answer: how much does your decision correlate with theirs? Modulated by ad-hoc updatelessness: how much does that correlation change if we forget "some" relevant information? (It usually increases a lot.)
For voting in particular, *if* these esoteric DT considerations would change my answer, then they usually *wouldn't*, actually (because if the DT is important enough in my computation, then I'm part of a very small reference class of voters, and so, should mostly act like it's just my one vote anyway).
But I think this line of reasoning might actually underestimate the effect for subtle reasons I won't get into (related to [Agent Simulates Predictor](https://www.lesswrong.com/posts/q9DbfYfFzkotno9hG/example-decision-theory-problem-agent-simulates-predictor)).
**Beckstead:** Cool, thanks.
**Carlsmith:** For folks who think that CDT and EDT basically end up equivalent in practice, does that mean updateful EDT two-boxes in non-transparent newcomb, and you need to appeal to updatelessness to get one-boxing?
Anyone have a case that differentiates between (a) updateful, but evaluates policies, and (b) updateless?
**Demski:** EDT might one-box at first due to simplicity priors making it believe its actions are correlated with similar things (but then again, it might not; depends on the prior), but eventually it'll learn the same thing as CDT.
Now, that doesn't mean it'll two-box. If Omega is a perfect predictor (or more precisely, if it knows the agent's action better than the agent itself), EDT will learn so, and one-box. And CDT will do the same (under some potentially contentious assumptions about CDT learning empirically) because no experiments will be able to show that there's no causal relationship.
On the other hand, if Omega is imperfect (more precisely, if its predictions are worse than or equal to the agent's own), EDT will learn to two-box like CDT, because its knowledge about its own action "screens off" the probabilistic relationship.
> Anyone have a case that differentiates between (a) updateful, but evaluates policies, and (b) updateless?
>
>
Counterfactual mugging! |
fb4c77e9-7a1e-40c5-b2f5-9ef36fc1361d | StampyAI/alignment-research-dataset/arxiv | Arxiv | Reward-rational (implicit) choice: A unifying formalism for reward learning.
1 Introduction
---------------
It is difficult to specify reward functions that always lead to the desired behavior. Recent work has argued that the reward specified by a human is merely a source of information about what people actually want a robot to optimize, i.e., the intended reward (Hadfield-Menell et al., [2017a](#bib.bib1); Ratner et al., [2018](#bib.bib2)). Luckily, it is not the only one. Robots can also learn about the intended reward from demonstrations (IRL) (Ng and Russell, [2000](#bib.bib3); Abbeel and Ng, [2004](#bib.bib4)), by asking us to make comparisons between trajectories (Wirth et al., [2017](#bib.bib5); Sadigh et al., [2017](#bib.bib6); Christiano et al., [2017](#bib.bib7)), or by grounding our instructions (MacGlashan et al., [2015](#bib.bib8); Fu et al., [2019](#bib.bib9)).
Perhaps even more fortunate is that we seem to *leak* information left and right about the intended reward. For instance, if we push the robot away, this shouldn’t just modify the robot’s *current* behavior – it should also inform the robot about our preferences more *generally* (Jain et al., [2015](#bib.bib10); Bajcsy et al., [2017](#bib.bib11)). If we turn the robot off in a state of panic to prevent it from disaster, this shouldn’t just stop the robot right now. It should also inform the robot about the intended reward function so that the robot prevents itself from the same disaster in the future: the robot should infer that whatever it was about to do has a tragically low reward. Even the *current state* of the world ought to inform the robot about our preferences – it is a direct result of us having been acting in the world according to these preferences (Shah et al., [2019](#bib.bib12))! For instance, those shoes didn’t magically align themselves at the entrance, someone put effort into arranging them that way, so their state alone should tell the robot something about what we want.
Overall, there is much information out there, some purposefully communicated, other leaked. While existing papers are instructing us how to tap into some of it, one can only imagine that there is much more that is yet untapped. There are probably new yet-to-be-invented ways for people to purposefully provide feedback to robots – e.g. guiding them on which part of a trajectory was particularly good or bad. And, there will probably be new realizations about ways in which human behavior already leaks information, beyond the state of the world or turning the robot off. How will robots make sense of all these diverse sources of information?
Our insight is that there is a way to interpret all this information in a single unifying formalism. The critical observation is that human behavior is a *reward-rational implicit choice* – a choice from an implicit set of options, which is approximately rational for the intended reward. This observation leads to a *recipe* for making sense of human behavior, from language to switching the robot off. The recipe has two ingredients: 1) the set of *options* the person (implicitly) chose from, and 2) a *grounding* function that maps these options to robot behaviors. This is admittedly obvious for traditional feedback. In comparison feedback, for instance, the set of options is just the two robot behaviors presented to the human to compare, and the grounding is identity. In other types of behavior though, it is much less obvious. Take switching the robot off. The set of options is implicit: you can turn it off, or you can do nothing. The formalism says that when you turn it off, it should know that you could have done nothing, but (implicitly) chose not to. That, in turn, should propagate to the robot’s reward function. For this to happen, the robot needs to ground these options to robot behaviors: identity is no longer enough, because it cannot directly evaluate the reward of an utterance or of getting turned off, but it can evaluate the reward of robot actions or trajectories. Turning the robot off corresponds to a trajectory – whatever the robot did until the off-button was pushed, followed by doing nothing for the rest of the time horizon. Doing nothing corresponds to the trajectory the robot was going to execute. Now, the robot knows you prefer the former to the latter. We have taken a high-level human behavior, and turned it into a direct comparison on robot trajectories with respect to the intended reward, thereby gaining reward information.
We use this perspective to survey prior work on reward learning. We show that despite their diversity, many sources of information about rewards proposed thus far can be characterized as instantiating this formalism (some very directly, others with some modifications). This offers a unifying lens for the area of reward learning, helping better understand and contrast prior methods. We end with discussion on how the formalism can help combine and actively decide among feedback types, and also how it can be a potentially helpful recipe for interpreting new types of feedback or sources of leaked information.
2 A formalism for reward learning
----------------------------------
###
2.1 Reward-rational implicit choice
In reward learning, the robot’s goal is to learn a reward function r:Ξ→ℝ:𝑟→Ξℝr:\Xi\rightarrow\mathbb{R}italic\_r : roman\_Ξ → blackboard\_R from human behavior that maps trajectories111We consider finite fixed horizon T trajectories. ξ∈Ξ𝜉Ξ\xi\in\Xiitalic\_ξ ∈ roman\_Ξ to scalar rewards.
(Implicit/explicit) set of options 𝒞𝒞\mathcal{C}caligraphic\_C. We interpret human behavior as choosing an option c\*superscript𝑐c^{\*}italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT from a set of options 𝒞𝒞\mathcal{C}caligraphic\_C. Different behavior types will correspond to different explicit or implicit sets 𝒞𝒞\mathcal{C}caligraphic\_C. For example, when a person is asked for a *trajectory comparison*, they are explicitly shown two trajectories and they pick one. However, when the person gives a *demonstration*, we think of the possible options 𝒞𝒞\mathcal{C}caligraphic\_C as implicitly being all possible trajectories the person could have demonstrated. The implicit/explicit distinction brings out a general tradeoff in reward learning. The cleverness of implicit choice sets is that even when we cannot enumerate and show all options to the human, e.g. in demonstrations, we still rely on the human to optimize over the set. On the other hand, an implicit set is also risky – since it is not explicitly observed, we may get it wrong, potentially resulting in worse reward inference.
The grounding function ψ𝜓\psiitalic\_ψ. We link the human’s choice to the reward by thinking of the choice as (approximately) maximizing the reward. However, it is not immediately clear what it means for the human to maximize reward when choosing feedback because the feedback may not be a (robot) trajectory, and the reward is only defined over trajectories. For example, in *language feedback*, the human describes what they want in words. What is the reward of the sentence, “Do not go over the water”?
To overcome this syntax mismatch, we map options in 𝒞𝒞\mathcal{C}caligraphic\_C to (distributions over) trajectories with a grounding function ψ:𝒞→fΞ:𝜓→𝒞subscript𝑓Ξ\psi:\mathcal{C}\rightarrow f\_{\Xi}italic\_ψ : caligraphic\_C → italic\_f start\_POSTSUBSCRIPT roman\_Ξ end\_POSTSUBSCRIPT where fΞsubscript𝑓Ξf\_{\Xi}italic\_f start\_POSTSUBSCRIPT roman\_Ξ end\_POSTSUBSCRIPT is the set of distributions over trajectories for the robot ΞΞ\Xiroman\_Ξ. Different types of feedback will correspond to different groundings. In some instances, such as kinesthetic demonstrations or trajectory comparisons, the mapping is simply the identity. In others, like corrections, language, or proxy rewards, the grounding is more complex (see Section [3](#S3 "3 Prior work from the perspective of the formalism ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning")).
Human policy. Given the set of choices 𝒞𝒞\mathcal{C}caligraphic\_C and the grounding function ψ𝜓\psiitalic\_ψ, the human’s approximately rational choice c\*∈𝒞superscript𝑐𝒞c^{\*}\in\mathcal{C}italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ∈ caligraphic\_C can now be modeled via a *Boltzmann-rational* policy, a policy in which the probability of choosing an option is exponentially higher based on its reward:
| | | | |
| --- | --- | --- | --- |
| | ℙ(c\*∣r,𝒞)=exp(β⋅𝔼ξ∼ψ(c\*)[r(ξ)])∑c∈𝒞exp(β⋅𝔼ξ∼ψ(c)[r(ξ)]),ℙconditionalsuperscript𝑐𝑟𝒞⋅𝛽subscript𝔼similar-to𝜉𝜓superscript𝑐delimited-[]𝑟𝜉subscript𝑐𝒞⋅𝛽subscript𝔼similar-to𝜉𝜓𝑐delimited-[]𝑟𝜉\displaystyle\mathbb{P}(c^{\*}\mid r,\,\mathcal{C})=\frac{\exp(\beta\cdot\mathbb{E}\_{\xi\sim\psi(c^{\*})}[r(\xi)])}{\sum\_{c\in\mathcal{C}}\exp(\beta\cdot\mathbb{E}\_{\xi\sim\psi(c)}[r(\xi)])}\,,blackboard\_P ( italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ∣ italic\_r , caligraphic\_C ) = divide start\_ARG roman\_exp ( italic\_β ⋅ blackboard\_E start\_POSTSUBSCRIPT italic\_ξ ∼ italic\_ψ ( italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ) end\_POSTSUBSCRIPT [ italic\_r ( italic\_ξ ) ] ) end\_ARG start\_ARG ∑ start\_POSTSUBSCRIPT italic\_c ∈ caligraphic\_C end\_POSTSUBSCRIPT roman\_exp ( italic\_β ⋅ blackboard\_E start\_POSTSUBSCRIPT italic\_ξ ∼ italic\_ψ ( italic\_c ) end\_POSTSUBSCRIPT [ italic\_r ( italic\_ξ ) ] ) end\_ARG , | | (1) |
where the parameter β𝛽\betaitalic\_β is a coefficient that models how rational the human is. Often, we simplify Equation [1](#S2.E1 "1 ‣ 2.1 Reward-rational implicit choice ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning") to the case where ψ𝜓\psiitalic\_ψ is a deterministic mapping from choices in 𝒞𝒞\mathcal{C}caligraphic\_C to trajectories in ΞΞ\Xiroman\_Ξ, instead of distributions over trajectories. Then, the probability of choosing c\*superscript𝑐c^{\*}italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT can be written as:222One can also consider a variant of Equation [1](#S2.E1 "1 ‣ 2.1 Reward-rational implicit choice ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning") in which choices are grounded to actions, rather than trajectories, and are evaluated via a Q-value function, rather than the reward function. That is, ψ:𝒞→𝒜:𝜓→𝒞𝒜\psi:\mathcal{C}\rightarrow\mathcal{A}italic\_ψ : caligraphic\_C → caligraphic\_A, ℙ(c\*∣r,𝒞)∝exp(β⋅𝔼a∼ψ(c\*)[Q\*(s,a)])proportional-toℙconditionalsuperscript𝑐𝑟𝒞⋅𝛽subscript𝔼similar-to𝑎𝜓superscript𝑐delimited-[]superscript𝑄𝑠𝑎\mathbb{P}(c^{\*}\mid r,\,\mathcal{C})\propto\exp(\beta\cdot\mathbb{E}\_{a\sim\psi(c^{\*})}[Q^{\*}(s,a)])blackboard\_P ( italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ∣ italic\_r , caligraphic\_C ) ∝ roman\_exp ( italic\_β ⋅ blackboard\_E start\_POSTSUBSCRIPT italic\_a ∼ italic\_ψ ( italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ) end\_POSTSUBSCRIPT [ italic\_Q start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ( italic\_s , italic\_a ) ] ).
| | | | |
| --- | --- | --- | --- |
| | ℙ(c\*∣r,𝒞)∝exp(β⋅r(ψ(c\*)))proportional-toℙconditionalsuperscript𝑐𝑟𝒞⋅𝛽𝑟𝜓superscript𝑐\mathbb{P}(c^{\*}\mid r,\,\mathcal{C})\propto\exp(\beta\cdot r(\psi(c^{\*})))blackboard\_P ( italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ∣ italic\_r , caligraphic\_C ) ∝ roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ψ ( italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ) ) ) | | (2) |
Boltzmann-rational policies are widespread in psychology (Baker et al., [2009](#bib.bib13); Goodman et al., [2009](#bib.bib14); Goodman and Stuhlmüller, [2013](#bib.bib15)), economics Bradley and Terry ([1952](#bib.bib16)); Luce ([1959](#bib.bib17)); Plackett ([1975](#bib.bib18)); Luce ([1959](#bib.bib17)), and AI Ziebart et al. ([2008](#bib.bib19)); Ramachandran and Amir ([2007](#bib.bib20)); Finn et al. ([2016](#bib.bib21)); Bloem and Bambos ([2014](#bib.bib22)); Dragan et al. ([2013](#bib.bib23)) as models of human choices, actions, or inferences. But why are they a reasonable model?
While there are many possible motivations, we contribute a derivation (Appendix [A](#A1 "Appendix A Bounded rationality, maximum entropy, and Boltzmann-rational policies ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning")) as the maximum-entropy distribution over choices for a *satisficing* agent, i.e. an agent that in expectation makes a choice with ϵitalic-ϵ\epsilonitalic\_ϵ-optimal reward. A higher value of ϵitalic-ϵ\epsilonitalic\_ϵ results in a lower value of β𝛽\betaitalic\_β, modeling less optimal humans.
######
Definition 2.1 (Reward-rational choice).
Finally, putting it all together, we call a type of feedback a *reward-rational choice* if, given a grounding function ψ𝜓\psiitalic\_ψ, it can be modeled as a choice from an (explicit or implicit set) 𝒞𝒞\mathcal{C}caligraphic\_C that (approximately) maximizes reward, i.e., as in Equation [1](#S2.E1 "1 ‣ 2.1 Reward-rational implicit choice ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning").
###
2.2 Robot inference
Each feedback is an observation about the reward, which means the robot can run Bayesian inference to update its belief over the rewards. For a determinstic grounding,
| | | | |
| --- | --- | --- | --- |
| | ℙ(r∣c\*)=1Z⋅exp(β⋅r(ψ(c\*)))∑c∈𝒞exp(β⋅r(ψ(c)))⋅ℙ(r),ℙconditional𝑟superscript𝑐⋅1𝑍⋅𝛽𝑟𝜓superscript𝑐subscript𝑐𝒞⋅𝛽𝑟𝜓𝑐ℙ𝑟\displaystyle\mathbb{P}(r\mid c^{\*})=\frac{1}{Z}\cdot\frac{\exp(\beta\cdot r(\psi(c^{\*})))}{\sum\_{c\in\mathcal{C}}\exp(\beta\cdot r(\psi(c)))}\cdot\mathbb{P}(r)\,,blackboard\_P ( italic\_r ∣ italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ) = divide start\_ARG 1 end\_ARG start\_ARG italic\_Z end\_ARG ⋅ divide start\_ARG roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ψ ( italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ) ) ) end\_ARG start\_ARG ∑ start\_POSTSUBSCRIPT italic\_c ∈ caligraphic\_C end\_POSTSUBSCRIPT roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ψ ( italic\_c ) ) ) end\_ARG ⋅ blackboard\_P ( italic\_r ) , | | (3) |
where ℙ(r)ℙ𝑟\mathbb{P}(r)blackboard\_P ( italic\_r ) is the prior over rewards and Z𝑍Zitalic\_Z is the normalization over possible reward functions. The inference above is often intractable, and so reward learning work leverages approximations (Blei et al., [2017](#bib.bib24)), or computes only the MLE for a parametrization of rewards (more recently as weights in a neural network on raw input (Christiano et al., [2017](#bib.bib7); Ibarz et al., [2018](#bib.bib25))).
Finally, when the human is highly rational (β→∞→𝛽\beta\rightarrow\inftyitalic\_β → ∞), the only choices in 𝒞𝒞\mathcal{C}caligraphic\_C with a non-neglible probability of being picked are the choices that exactly maximize reward. Thus, the human’s choice c\*superscript𝑐c^{\*}italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT can be interpreted as *constraints* on the reward function (e.g. (Ratliff et al., [2006](#bib.bib26))):
| | | | |
| --- | --- | --- | --- |
| | Find r such that r(ψ(c\*))≥r(ψ(c))∀c∈𝒞.formulae-sequenceFind 𝑟 such that 𝑟𝜓superscript𝑐𝑟𝜓𝑐for-all𝑐𝒞\displaystyle\text{Find }r\text{ such that }r(\psi(c^{\*}))\geq r(\psi(c))\quad\forall c\in\mathcal{C}\,.Find italic\_r such that italic\_r ( italic\_ψ ( italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ) ) ≥ italic\_r ( italic\_ψ ( italic\_c ) ) ∀ italic\_c ∈ caligraphic\_C . | | (4) |
Table 1: The choice set 𝒞𝒞\mathcal{C}caligraphic\_C and grounding function ψ𝜓\psiitalic\_ψ for different types of feedback described in Section [3](#S3 "3 Prior work from the perspective of the formalism ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning"), unless otherwise noted.
| Feedback | Choices 𝒞𝒞\mathcal{C}caligraphic\_C | Grounding ψ𝜓\psiitalic\_ψ |
| --- | --- | --- |
| Comparisons (Wirth et al., [2017](#bib.bib5)) | ξi∈{ξ1,ξ2}subscript𝜉𝑖subscript𝜉1subscript𝜉2\xi\_{i}\in\{\xi\_{1},\xi\_{2}\}italic\_ξ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∈ { italic\_ξ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_ξ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT } | ψ(ξi)=ξi𝜓subscript𝜉𝑖subscript𝜉𝑖\psi(\xi\_{i})=\xi\_{i}italic\_ψ ( italic\_ξ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) = italic\_ξ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT |
| Demonstrations Ng and Russell ([2000](#bib.bib3)) | ξd∈Ξsubscript𝜉𝑑Ξ\xi\_{d}\in\Xiitalic\_ξ start\_POSTSUBSCRIPT italic\_d end\_POSTSUBSCRIPT ∈ roman\_Ξ | ψ(ξ)=ξ𝜓𝜉𝜉\psi(\xi)=\xiitalic\_ψ ( italic\_ξ ) = italic\_ξ |
| Corrections (Bajcsy et al., [2017](#bib.bib11)) | Δq∈Q−QΔ𝑞𝑄𝑄\Delta q\in Q-Qroman\_Δ italic\_q ∈ italic\_Q - italic\_Q | ψ(Δq)=ξR+A−1Δq𝜓Δ𝑞subscript𝜉𝑅superscript𝐴1Δ𝑞\psi(\Delta q)=\xi\_{R}+A^{-1}\Delta qitalic\_ψ ( roman\_Δ italic\_q ) = italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT + italic\_A start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT roman\_Δ italic\_q |
| Improvement (Jain et al., [2015](#bib.bib10)) | ξ∈{ξimproved,ξR}𝜉subscript𝜉improvedsubscript𝜉𝑅\xi\in\{\xi\_{\text{improved}},\,\xi\_{R}\}italic\_ξ ∈ { italic\_ξ start\_POSTSUBSCRIPT improved end\_POSTSUBSCRIPT , italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT } | ψ(ξ)=ξ𝜓𝜉𝜉\psi(\xi)=\xiitalic\_ψ ( italic\_ξ ) = italic\_ξ |
| Off (Hadfield-Menell et al., [2017b](#bib.bib27)) | c∈{off,−}𝑐offc\in\{\text{off},-\}italic\_c ∈ { off , - } | ψ(c)={ξRc=−ξR0:tξRt…ξRtc=off𝜓𝑐casessubscript𝜉𝑅𝑐superscriptsubscript𝜉𝑅:0𝑡superscriptsubscript𝜉𝑅𝑡…superscriptsubscript𝜉𝑅𝑡𝑐off\psi(c)=\begin{cases}\xi\_{R}&c=-\\
\xi\_{R}^{0:t}\xi\_{R}^{t}\dots\xi\_{R}^{t}&c=\text{off}\end{cases}italic\_ψ ( italic\_c ) = { start\_ROW start\_CELL italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_CELL start\_CELL italic\_c = - end\_CELL end\_ROW start\_ROW start\_CELL italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 0 : italic\_t end\_POSTSUPERSCRIPT italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT … italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT end\_CELL start\_CELL italic\_c = off end\_CELL end\_ROW |
| Language (Matuszek et al., [2012](#bib.bib28)) | λ∈Λ𝜆Λ\lambda\in\Lambdaitalic\_λ ∈ roman\_Λ | ψ(λ)=Unif(G(λ))𝜓𝜆Unif𝐺𝜆\psi(\lambda)=\text{Unif}(G(\lambda))italic\_ψ ( italic\_λ ) = Unif ( italic\_G ( italic\_λ ) ) |
| Proxy Rewards (Hadfield-Menell et al., [2017a](#bib.bib1)) | r~∈ℛ~~𝑟~ℛ\tilde{r}\in\tilde{\mathcal{R}}over~ start\_ARG italic\_r end\_ARG ∈ over~ start\_ARG caligraphic\_R end\_ARG | ψ(r~)=π(ξ∣r~)𝜓~𝑟𝜋conditional𝜉~𝑟\psi(\tilde{r})=\pi(\xi\mid\tilde{r})italic\_ψ ( over~ start\_ARG italic\_r end\_ARG ) = italic\_π ( italic\_ξ ∣ over~ start\_ARG italic\_r end\_ARG ) |
| Reward and Punishment (Griffith et al., [2013](#bib.bib29)) | c∈{+1,−1}𝑐11c\in\{+1,-1\}italic\_c ∈ { + 1 , - 1 } | ψ(c)={ξRc=+1ξexpectedc=−1𝜓𝑐casessubscript𝜉𝑅𝑐1subscript𝜉expected𝑐1\psi(c)=\begin{cases}\xi\_{R}&c=+1\\
\xi\_{\text{expected}}&c=-1\end{cases}italic\_ψ ( italic\_c ) = { start\_ROW start\_CELL italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_CELL start\_CELL italic\_c = + 1 end\_CELL end\_ROW start\_ROW start\_CELL italic\_ξ start\_POSTSUBSCRIPT expected end\_POSTSUBSCRIPT end\_CELL start\_CELL italic\_c = - 1 end\_CELL end\_ROW |
| Initial state (Shah et al., [2019](#bib.bib12)) | s∈𝒮𝑠𝒮s\in\mathcal{S}italic\_s ∈ caligraphic\_S | ψ(s)=Unif({ξH−T:0∣ξH0=s})𝜓𝑠Unifconditional-setsuperscriptsubscript𝜉𝐻:𝑇0superscriptsubscript𝜉𝐻0𝑠\psi(s)=\text{Unif}(\{\xi\_{H}^{-T:0}\mid\xi\_{H}^{0}=s\})italic\_ψ ( italic\_s ) = Unif ( { italic\_ξ start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT - italic\_T : 0 end\_POSTSUPERSCRIPT ∣ italic\_ξ start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 0 end\_POSTSUPERSCRIPT = italic\_s } ) |
| Credit assignment (Discussion) | ξ∈{ξRi:i+k, 0≤i≤T}𝜉superscriptsubscript𝜉𝑅:𝑖𝑖𝑘 0
𝑖𝑇\xi\in\{\xi\_{R}^{i:i+k},\,0\leq i\leq T\}italic\_ξ ∈ { italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i : italic\_i + italic\_k end\_POSTSUPERSCRIPT , 0 ≤ italic\_i ≤ italic\_T } | ψ(ξ)=ξ𝜓𝜉𝜉\psi(\xi)=\xiitalic\_ψ ( italic\_ξ ) = italic\_ξ |
Table 2: The probabilistic model (Equation [1](#S2.E1 "1 ‣ 2.1 Reward-rational implicit choice ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning")) and the simplification to the constraint-based model (Equation [4](#S2.E4 "4 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning")).
| Feedback | Constraint | Probabilistic |
| --- | --- | --- |
| Comparisons | r(ξ1)≥r(ξ2)𝑟subscript𝜉1𝑟subscript𝜉2r(\xi\_{1})\geq r(\xi\_{2})italic\_r ( italic\_ξ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) ≥ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) | ℙ(ξ1∣r,𝒞)=exp(β⋅r(ξ1))exp(β⋅r(ξ1))+exp(β⋅r(ξ2))ℙconditionalsubscript𝜉1𝑟𝒞⋅𝛽𝑟subscript𝜉1⋅𝛽𝑟subscript𝜉1⋅𝛽𝑟subscript𝜉2\mathbb{P}(\xi\_{1}\mid r,\mathcal{C})\ =\ \dfrac{\exp(\beta\cdot r(\xi\_{1}))}{\exp(\beta\cdot r(\xi\_{1}))+\exp(\beta\cdot r(\xi\_{2}))}blackboard\_P ( italic\_ξ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ∣ italic\_r , caligraphic\_C ) = divide start\_ARG roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) ) end\_ARG start\_ARG roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) ) + roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) ) end\_ARG |
| Demonstrations | r(ξD)≥r(ξ)∀ξ∈Ξformulae-sequence𝑟subscript𝜉𝐷𝑟𝜉for-all𝜉Ξr(\xi\_{D})\geq r(\xi)\quad\forall\ \xi\in\Xiitalic\_r ( italic\_ξ start\_POSTSUBSCRIPT italic\_D end\_POSTSUBSCRIPT ) ≥ italic\_r ( italic\_ξ ) ∀ italic\_ξ ∈ roman\_Ξ | ℙ(ξD∣r,Ξ)=exp(β⋅r(ξD))∑ξ∈Ξexp(β⋅r(ξ))ℙconditionalsubscript𝜉𝐷𝑟Ξ⋅𝛽𝑟subscript𝜉𝐷subscript𝜉Ξ⋅𝛽𝑟𝜉\mathbb{P}(\xi\_{D}\mid r,\Xi)\ =\ \dfrac{\exp(\beta\cdot r(\xi\_{D}))}{\sum\_{\xi\in\Xi}\exp(\beta\cdot r(\xi))}blackboard\_P ( italic\_ξ start\_POSTSUBSCRIPT italic\_D end\_POSTSUBSCRIPT ∣ italic\_r , roman\_Ξ ) = divide start\_ARG roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT italic\_D end\_POSTSUBSCRIPT ) ) end\_ARG start\_ARG ∑ start\_POSTSUBSCRIPT italic\_ξ ∈ roman\_Ξ end\_POSTSUBSCRIPT roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ξ ) ) end\_ARG |
| Corrections | r(ξR+A−1Δq)≥r(ξR+A−1Δq′)∀Δq′∈Q−Q𝑟subscript𝜉𝑅superscript𝐴1Δ𝑞𝑟subscript𝜉𝑅superscript𝐴1Δsuperscript𝑞′for-allΔsuperscript𝑞′𝑄𝑄r(\xi\_{R}+A^{-1}\Delta q)\geq r(\xi\_{R}+A^{-1}\Delta q^{\prime})\ \forall\Delta q^{\prime}\in Q-Qitalic\_r ( italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT + italic\_A start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT roman\_Δ italic\_q ) ≥ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT + italic\_A start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT roman\_Δ italic\_q start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ∀ roman\_Δ italic\_q start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∈ italic\_Q - italic\_Q | ℙ(Δq′∣r,Q−Q)=exp(β⋅r(ξR+A−1Δq))∑Δq∈Q−Qexp(β⋅r(ξR+A−1Δq))ℙconditionalΔsuperscript𝑞′𝑟𝑄𝑄⋅𝛽𝑟subscript𝜉𝑅superscript𝐴1Δ𝑞subscriptΔ𝑞𝑄𝑄⋅𝛽𝑟subscript𝜉𝑅superscript𝐴1Δ𝑞\mathbb{P}(\Delta q^{\prime}\mid r,Q-Q)=\dfrac{\exp(\beta\cdot r(\xi\_{R}+A^{-1}\Delta q))}{\sum\_{\Delta q\in Q-Q}\exp(\beta\cdot r(\xi\_{R}+A^{-1}\Delta q))}blackboard\_P ( roman\_Δ italic\_q start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∣ italic\_r , italic\_Q - italic\_Q ) = divide start\_ARG roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT + italic\_A start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT roman\_Δ italic\_q ) ) end\_ARG start\_ARG ∑ start\_POSTSUBSCRIPT roman\_Δ italic\_q ∈ italic\_Q - italic\_Q end\_POSTSUBSCRIPT roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT + italic\_A start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT roman\_Δ italic\_q ) ) end\_ARG |
| Improvement | r(ξimproved)≥r(ξR)𝑟subscript𝜉improved𝑟subscript𝜉𝑅r(\xi\_{\text{improved}})\geq r(\xi\_{R})italic\_r ( italic\_ξ start\_POSTSUBSCRIPT improved end\_POSTSUBSCRIPT ) ≥ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ) | ℙ(ξimproved∣r,𝒞)=exp(β⋅r(ξimproved))exp(β⋅r(ξimproved))+exp(β⋅r(ξR))ℙconditionalsubscript𝜉improved𝑟𝒞⋅𝛽𝑟subscript𝜉improved⋅𝛽𝑟subscript𝜉improved⋅𝛽𝑟subscript𝜉𝑅\mathbb{P}(\xi\_{\text{improved}}\mid r,\mathcal{C})\ =\ \dfrac{\exp(\beta\cdot r(\xi\_{\text{improved}}))}{\exp(\beta\cdot r(\xi\_{\text{improved}}))+\exp(\beta\cdot r(\xi\_{R}))}blackboard\_P ( italic\_ξ start\_POSTSUBSCRIPT improved end\_POSTSUBSCRIPT ∣ italic\_r , caligraphic\_C ) = divide start\_ARG roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT improved end\_POSTSUBSCRIPT ) ) end\_ARG start\_ARG roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT improved end\_POSTSUBSCRIPT ) ) + roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ) ) end\_ARG |
| Off | r(ξR0:tξt…ξt)≥r(ξR)𝑟superscriptsubscript𝜉𝑅:0𝑡superscript𝜉𝑡…superscript𝜉𝑡𝑟subscript𝜉𝑅r(\xi\_{R}^{0:t}\xi^{t}\dots\xi^{t})\geq r(\xi\_{R})italic\_r ( italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 0 : italic\_t end\_POSTSUPERSCRIPT italic\_ξ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT … italic\_ξ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ) ≥ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ) | ℙ(off∣r,𝒞)=exp(β⋅r(ξR0:tξt…ξt))exp(β⋅r(ξR0:tξt…ξt))+exp(β⋅r(ξR))ℙconditionaloff𝑟𝒞⋅𝛽𝑟superscriptsubscript𝜉𝑅:0𝑡superscript𝜉𝑡…superscript𝜉𝑡⋅𝛽𝑟superscriptsubscript𝜉𝑅:0𝑡superscript𝜉𝑡…superscript𝜉𝑡⋅𝛽𝑟subscript𝜉𝑅\mathbb{P}(\text{off}\mid r,\mathcal{C})\ =\ \dfrac{\exp(\beta\cdot r(\xi\_{R}^{0:t}\xi^{t}\dots\xi^{t}))}{\exp(\beta\cdot r(\xi\_{R}^{0:t}\xi^{t}\dots\xi^{t}))+\exp(\beta\cdot r(\xi\_{R}))}blackboard\_P ( off ∣ italic\_r , caligraphic\_C ) = divide start\_ARG roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 0 : italic\_t end\_POSTSUPERSCRIPT italic\_ξ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT … italic\_ξ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ) ) end\_ARG start\_ARG roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 0 : italic\_t end\_POSTSUPERSCRIPT italic\_ξ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT … italic\_ξ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ) ) + roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ) ) end\_ARG |
| Language | 𝔼ξ∼Unif(G(λ\*))[r(ξ)]≥𝔼ξ∼Unif(G(λ))[r(ξ)]∀λ∈Λsubscript𝔼similar-to𝜉Unif𝐺superscript𝜆delimited-[]𝑟𝜉subscript𝔼similar-to𝜉Unif𝐺𝜆delimited-[]𝑟𝜉for-all𝜆Λ\mathbb{E}\_{\xi\sim\text{Unif}(G(\lambda^{\*}))}\big{[}r(\xi)\big{]}\geq\mathbb{E}\_{\xi\sim\text{Unif}(G(\lambda))}\big{[}r(\xi)\big{]}\ \forall\lambda\in\Lambdablackboard\_E start\_POSTSUBSCRIPT italic\_ξ ∼ Unif ( italic\_G ( italic\_λ start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ) ) end\_POSTSUBSCRIPT [ italic\_r ( italic\_ξ ) ] ≥ blackboard\_E start\_POSTSUBSCRIPT italic\_ξ ∼ Unif ( italic\_G ( italic\_λ ) ) end\_POSTSUBSCRIPT [ italic\_r ( italic\_ξ ) ] ∀ italic\_λ ∈ roman\_Λ | ℙ(λ\*∣r,Λ)=exp(β⋅𝔼ξ∼Unif(G(λ\*))[r(ξ)])∑λ∈Λexp(β⋅𝔼ξ∼Unif(G(λ))[r(ξ)])ℙconditionalsuperscript𝜆𝑟Λ⋅𝛽subscript𝔼similar-to𝜉Unif𝐺superscript𝜆delimited-[]𝑟𝜉subscript𝜆Λ⋅𝛽subscript𝔼similar-to𝜉Unif𝐺𝜆delimited-[]𝑟𝜉\mathbb{P}(\lambda^{\*}\mid r,\Lambda)\ =\ \dfrac{\exp(\beta\cdot\mathbb{E}\_{\xi\sim\text{Unif}(G(\lambda^{\*}))}\big{[}r(\xi)\big{]})}{\sum\_{\lambda\in\Lambda}\exp(\beta\cdot\mathbb{E}\_{\xi\sim\text{Unif}(G(\lambda))}\big{[}r(\xi)\big{]})}blackboard\_P ( italic\_λ start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ∣ italic\_r , roman\_Λ ) = divide start\_ARG roman\_exp ( italic\_β ⋅ blackboard\_E start\_POSTSUBSCRIPT italic\_ξ ∼ Unif ( italic\_G ( italic\_λ start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ) ) end\_POSTSUBSCRIPT [ italic\_r ( italic\_ξ ) ] ) end\_ARG start\_ARG ∑ start\_POSTSUBSCRIPT italic\_λ ∈ roman\_Λ end\_POSTSUBSCRIPT roman\_exp ( italic\_β ⋅ blackboard\_E start\_POSTSUBSCRIPT italic\_ξ ∼ Unif ( italic\_G ( italic\_λ ) ) end\_POSTSUBSCRIPT [ italic\_r ( italic\_ξ ) ] ) end\_ARG |
| Proxy Rewards | 𝔼ξ~∼π(ξ~∣r~)[r(ξ~)]≥𝔼ξ~∼π(ξ~∣c)[r(ξ~)]∀c∈ℛ~formulae-sequencesubscript𝔼similar-to~𝜉𝜋conditional~𝜉~𝑟delimited-[]𝑟~𝜉subscript𝔼similar-to~𝜉𝜋conditional~𝜉𝑐delimited-[]𝑟~𝜉for-all𝑐~ℛ\mathbb{E}\_{\tilde{\xi}\sim\pi(\tilde{\xi}\mid\tilde{r})}\big{[}r(\tilde{\xi})\big{]}\geq\mathbb{E}\_{\tilde{\xi}\sim\pi(\tilde{\xi}\mid c)}\big{[}r(\tilde{\xi})\big{]}\quad\forall c\in\tilde{\mathcal{R}}blackboard\_E start\_POSTSUBSCRIPT over~ start\_ARG italic\_ξ end\_ARG ∼ italic\_π ( over~ start\_ARG italic\_ξ end\_ARG ∣ over~ start\_ARG italic\_r end\_ARG ) end\_POSTSUBSCRIPT [ italic\_r ( over~ start\_ARG italic\_ξ end\_ARG ) ] ≥ blackboard\_E start\_POSTSUBSCRIPT over~ start\_ARG italic\_ξ end\_ARG ∼ italic\_π ( over~ start\_ARG italic\_ξ end\_ARG ∣ italic\_c ) end\_POSTSUBSCRIPT [ italic\_r ( over~ start\_ARG italic\_ξ end\_ARG ) ] ∀ italic\_c ∈ over~ start\_ARG caligraphic\_R end\_ARG | ℙ(r~∣r,ℛ~)=exp(β⋅𝔼ξ~∼π(ξ~∣r~)[r(ξ~)])∑c∈ℛ~exp(β⋅𝔼ξ~∼π(ξ~∣c)[r(ξ~)])ℙconditional~𝑟𝑟~ℛ⋅𝛽subscript𝔼similar-to~𝜉𝜋conditional~𝜉~𝑟delimited-[]𝑟~𝜉subscript𝑐~ℛ⋅𝛽subscript𝔼similar-to~𝜉𝜋conditional~𝜉𝑐delimited-[]𝑟~𝜉\mathbb{P}(\tilde{r}\mid r,\tilde{\mathcal{R}})\ =\ \dfrac{\exp(\beta\cdot\mathbb{E}\_{\tilde{\xi}\sim\pi(\tilde{\xi}\mid\tilde{r})}\big{[}r(\tilde{\xi})\big{]})}{\sum\_{c\in\tilde{\mathcal{R}}}\exp(\beta\cdot\mathbb{E}\_{\tilde{\xi}\sim\pi(\tilde{\xi}\mid c)}\big{[}r(\tilde{\xi})\big{]})}blackboard\_P ( over~ start\_ARG italic\_r end\_ARG ∣ italic\_r , over~ start\_ARG caligraphic\_R end\_ARG ) = divide start\_ARG roman\_exp ( italic\_β ⋅ blackboard\_E start\_POSTSUBSCRIPT over~ start\_ARG italic\_ξ end\_ARG ∼ italic\_π ( over~ start\_ARG italic\_ξ end\_ARG ∣ over~ start\_ARG italic\_r end\_ARG ) end\_POSTSUBSCRIPT [ italic\_r ( over~ start\_ARG italic\_ξ end\_ARG ) ] ) end\_ARG start\_ARG ∑ start\_POSTSUBSCRIPT italic\_c ∈ over~ start\_ARG caligraphic\_R end\_ARG end\_POSTSUBSCRIPT roman\_exp ( italic\_β ⋅ blackboard\_E start\_POSTSUBSCRIPT over~ start\_ARG italic\_ξ end\_ARG ∼ italic\_π ( over~ start\_ARG italic\_ξ end\_ARG ∣ italic\_c ) end\_POSTSUBSCRIPT [ italic\_r ( over~ start\_ARG italic\_ξ end\_ARG ) ] ) end\_ARG |
| Reward/Punish | r(ξR)≥r(ξexpected)𝑟subscript𝜉𝑅𝑟subscript𝜉expectedr(\xi\_{R})\geq r(\xi\_{\text{expected}})italic\_r ( italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ) ≥ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT expected end\_POSTSUBSCRIPT ) | ℙ(+1∣r,𝒞)=exp(β⋅r(ξR))exp(β⋅r(ξR))+exp(β⋅r(ξexpected))ℙconditional1𝑟𝒞⋅𝛽𝑟subscript𝜉𝑅⋅𝛽𝑟subscript𝜉𝑅⋅𝛽𝑟subscript𝜉expected\mathbb{P}(+1\mid r,\mathcal{C})=\dfrac{\exp(\beta\cdot r(\xi\_{R}))}{\exp(\beta\cdot r(\xi\_{R}))+\exp(\beta\cdot r(\xi\_{\text{expected}}))}blackboard\_P ( + 1 ∣ italic\_r , caligraphic\_C ) = divide start\_ARG roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ) ) end\_ARG start\_ARG roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ) ) + roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ξ start\_POSTSUBSCRIPT expected end\_POSTSUBSCRIPT ) ) end\_ARG |
| Initial state | 𝔼ξ∼ψ(s\*)[r(s\*)]≥𝔼ξ∼ψ(s)[r(s)]∀s∈𝒮formulae-sequencesubscript𝔼similar-to𝜉𝜓superscript𝑠delimited-[]𝑟superscript𝑠subscript𝔼similar-to𝜉𝜓𝑠delimited-[]𝑟𝑠for-all𝑠𝒮\mathbb{E}\_{\xi\sim\psi(s^{\*})}[r(s^{\*})]\geq\mathbb{E}\_{\xi\sim\psi(s)}[r(s)]\quad\forall s\in\mathcal{S}blackboard\_E start\_POSTSUBSCRIPT italic\_ξ ∼ italic\_ψ ( italic\_s start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ) end\_POSTSUBSCRIPT [ italic\_r ( italic\_s start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ) ] ≥ blackboard\_E start\_POSTSUBSCRIPT italic\_ξ ∼ italic\_ψ ( italic\_s ) end\_POSTSUBSCRIPT [ italic\_r ( italic\_s ) ] ∀ italic\_s ∈ caligraphic\_S | ℙ(s\*∣r,𝒮)=exp(β⋅𝔼ξ∼ψ(s\*)[r(ξ)])∑s∈Sexp(β⋅𝔼ξ∼ψ(s)[r(ξ)])ℙconditionalsuperscript𝑠𝑟𝒮⋅𝛽subscript𝔼similar-to𝜉𝜓superscript𝑠delimited-[]𝑟𝜉subscript𝑠𝑆⋅𝛽subscript𝔼similar-to𝜉𝜓𝑠delimited-[]𝑟𝜉\mathbb{P}(s^{\*}\mid r,\mathcal{S})=\dfrac{\exp(\beta\cdot\mathbb{E}\_{\xi\sim\psi(s^{\*})}[r(\xi)])}{\sum\_{s\in S}\exp(\beta\cdot\mathbb{E}\_{\xi\sim\psi(s)}[r(\xi)])}blackboard\_P ( italic\_s start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ∣ italic\_r , caligraphic\_S ) = divide start\_ARG roman\_exp ( italic\_β ⋅ blackboard\_E start\_POSTSUBSCRIPT italic\_ξ ∼ italic\_ψ ( italic\_s start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ) end\_POSTSUBSCRIPT [ italic\_r ( italic\_ξ ) ] ) end\_ARG start\_ARG ∑ start\_POSTSUBSCRIPT italic\_s ∈ italic\_S end\_POSTSUBSCRIPT roman\_exp ( italic\_β ⋅ blackboard\_E start\_POSTSUBSCRIPT italic\_ξ ∼ italic\_ψ ( italic\_s ) end\_POSTSUBSCRIPT [ italic\_r ( italic\_ξ ) ] ) end\_ARG |
| Meta-choice | 𝔼ξ∼ψ(𝒞i)[r(ξ)]≥𝔼ξ∼ψ(𝒞j)[r(ξ)]∀j∈[n]formulae-sequencesubscript𝔼similar-to𝜉𝜓subscript𝒞𝑖delimited-[]𝑟𝜉subscript𝔼similar-to𝜉𝜓subscript𝒞𝑗delimited-[]𝑟𝜉for-all𝑗delimited-[]𝑛\mathbb{E}\_{\xi\sim\psi(\mathcal{C}\_{i})}[r(\xi)]\geq\mathbb{E}\_{\xi\sim\psi(\mathcal{C}\_{j})}[r(\xi)]\quad\forall j\in[n]blackboard\_E start\_POSTSUBSCRIPT italic\_ξ ∼ italic\_ψ ( caligraphic\_C start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ italic\_r ( italic\_ξ ) ] ≥ blackboard\_E start\_POSTSUBSCRIPT italic\_ξ ∼ italic\_ψ ( caligraphic\_C start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ italic\_r ( italic\_ξ ) ] ∀ italic\_j ∈ [ italic\_n ] | ℙ(𝒞i∣r,𝒞0)=exp(β0⋅𝔼ξ∼ψ0(𝒞i)[r(ξ)])∑j∈[n]exp(β0⋅𝔼ξ∼ψ0(𝒞j)[r(ξ)])ℙconditionalsubscript𝒞𝑖𝑟subscript𝒞0⋅subscript𝛽0subscript𝔼similar-to𝜉subscript𝜓0subscript𝒞𝑖delimited-[]𝑟𝜉subscript𝑗delimited-[]𝑛⋅subscript𝛽0subscript𝔼similar-to𝜉subscript𝜓0subscript𝒞𝑗delimited-[]𝑟𝜉\mathbb{P}(\mathcal{C}\_{i}\mid r,\mathcal{C}\_{0})=\dfrac{\exp\big{(}\beta\_{0}\cdot\mathbb{E}\_{\xi\sim\psi\_{0}(\mathcal{C}\_{i})}[r(\xi)]\big{)}}{\sum\_{j\in[n]}\ \exp\big{(}\beta\_{0}\cdot\mathbb{E}\_{\xi\sim\psi\_{0}(\mathcal{C}\_{j})}[r(\xi)]\big{)}}blackboard\_P ( caligraphic\_C start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∣ italic\_r , caligraphic\_C start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ) = divide start\_ARG roman\_exp ( italic\_β start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ⋅ blackboard\_E start\_POSTSUBSCRIPT italic\_ξ ∼ italic\_ψ start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ( caligraphic\_C start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ italic\_r ( italic\_ξ ) ] ) end\_ARG start\_ARG ∑ start\_POSTSUBSCRIPT italic\_j ∈ [ italic\_n ] end\_POSTSUBSCRIPT roman\_exp ( italic\_β start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ⋅ blackboard\_E start\_POSTSUBSCRIPT italic\_ξ ∼ italic\_ψ start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ( caligraphic\_C start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ italic\_r ( italic\_ξ ) ] ) end\_ARG |
| Credit assignment | r(ξ\*)≥r(ξ)∀ξ∈𝒞formulae-sequence𝑟superscript𝜉𝑟𝜉for-all𝜉𝒞r(\xi^{\*})\geq r(\xi)\quad\forall\ \xi\in\mathcal{C}italic\_r ( italic\_ξ start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ) ≥ italic\_r ( italic\_ξ ) ∀ italic\_ξ ∈ caligraphic\_C | ℙ(ξ\*∣r,𝒞)=exp(β⋅r(ξ\*))∑ξ∈𝒞exp(β⋅r(ξ))ℙconditionalsuperscript𝜉𝑟𝒞⋅𝛽𝑟superscript𝜉subscript𝜉𝒞⋅𝛽𝑟𝜉\mathbb{P}(\xi^{\*}\mid r,\mathcal{C})\ =\ \dfrac{\exp(\beta\cdot r(\xi^{\*}))}{\sum\_{\xi\in\mathcal{C}}\exp(\beta\cdot r(\xi))}blackboard\_P ( italic\_ξ start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ∣ italic\_r , caligraphic\_C ) = divide start\_ARG roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ξ start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ) ) end\_ARG start\_ARG ∑ start\_POSTSUBSCRIPT italic\_ξ ∈ caligraphic\_C end\_POSTSUBSCRIPT roman\_exp ( italic\_β ⋅ italic\_r ( italic\_ξ ) ) end\_ARG |

Figure 1: Different behavior types described in Sec. [3](#S3 "3 Prior work from the perspective of the formalism ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning") in a gridworld with three features: avoiding/going on the rug, getting the rug dirty, and reaching the goal (green). For each, we display the choices, grounding, and feasible rewards under the constraint formulation of robot inference ([4](#S2.E4 "4 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning")).
Each trajectory is a finite horizon path that begins at the start (red). Orange is used to denote c\*superscript𝑐c^{\*}italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT and ψ(c\*)𝜓superscript𝑐\psi(c^{\*})italic\_ψ ( italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ) while gray to denote other choices c𝑐citalic\_c in 𝒞𝒞\mathcal{C}caligraphic\_C. For instance, the comparison affects the feasible reward space by removing the halfspace where going on the rug is good. It does not inform the robot about the goal, because both end at the goal. The demonstration removes the space where the rug is good, where the goal is bad (because alternates do not reach the goal), and where getting the rug dirty is good (because alternates slightly graze the rug). The correction is similar to the demonstration, but does not infer about the goal, since all corrections end at goal.
3 Prior work from the perspective of the formalism
---------------------------------------------------
We now instantiate the formalism above with different behavior types from prior work, constructing their choice sets 𝒞𝒞\mathcal{C}caligraphic\_C and groundings ψ𝜓\psiitalic\_ψ. Some are obvious – comparisons, demonstrations especially. Others – initial state, off, reward/punish – are more subtle and it takes slightly modifying their original methods to achieve unification, speaking to the nontrivial nuances of identifying a common formalism.
Table [1](#S2.T1 "Table 1 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning") lists 𝒞𝒞\mathcal{C}caligraphic\_C and ψ𝜓\psiitalic\_ψ for each feedback, while Table [2](#S2.T2 "Table 2 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning") shows the deterministic constraint on rewards each behavior imposes, along with the probabilistic observation model – highlighting, despite the differences in feedback, the pattern of the (exponentiated) choice reward in the numerator, and the normalization over 𝒞𝒞\mathcal{C}caligraphic\_C in the denominator. Fig. [1](#S2.F1 "Figure 1 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning") will serve as the illustration for these types, looking at a grid world navigation task around a rug. The space of rewards we use for illustration is three-dimensional weight vectors for avoiding the rug, not getting dirty, and reaching the goal.
Trajectory comparisons. In trajectory comparisons (Wirth et al., [2016](#bib.bib30)), the human is typically shown two trajectories ξ1∈Ξsubscript𝜉1Ξ\xi\_{1}\in\Xiitalic\_ξ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ∈ roman\_Ξ and ξ2∈Ξsubscript𝜉2Ξ\xi\_{2}\in\Xiitalic\_ξ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ∈ roman\_Ξ, and then asked to select the one that they prefer. They are perhaps the most obvious exemplar of reward-rational choice: the set of choices 𝒞={ξ1,ξ2}𝒞subscript𝜉1subscript𝜉2\mathcal{C}=\{\xi\_{1},\xi\_{2}\}caligraphic\_C = { italic\_ξ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_ξ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT } is explicit, and the grounding ψ𝜓\psiitalic\_ψ is simply the identity. As Fig. [1](#S2.F1 "Figure 1 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning") shows, for linear reward functions, a comparison corresponds to a hyperplane that cuts the space of feasible reward functions in half. For all the reward functions left, the chosen trajectory has higher reward than the alternative. Most work on comparisons is done in the preference-based RL domain in which the robot might compute a policy directly to agree with the comparisons, rather than explicitly recover the reward function (Wilson et al., [2012](#bib.bib31); Busa-Fekete et al., [2013](#bib.bib32)). Within methods that do recover rewards, most use the constraint version (left column of Table [2](#S2.T2 "Table 2 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning")) using various losses (Akrour et al., [2011](#bib.bib33); Wirth and Fürnkranz, [2014](#bib.bib34)). (Holladay et al., [2016](#bib.bib35)) uses the Boltzmann model (right column of Table [2](#S2.T2 "Table 2 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning")) and proposes actively generating the queries, (Sadigh et al., [2017](#bib.bib6)) follows up with actively synthesizing the queries from scratch, and (Christiano et al., [2017](#bib.bib7)) introduces deep neural network reward functions.
Demonstrations. In demonstrations, the human is asked to demonstrate the optimal behavior. Reward learning from demonstrations is often called *inverse reinforcement learning* (IRL) and is one of the most established types of feedback for reward learning (Ng and Russell, [2000](#bib.bib3); Abbeel and Ng, [2004](#bib.bib4); Ziebart et al., [2008](#bib.bib19)).
Unlike in comparisons, in demonstrations, the human is not explicitly given a set of choices. However, we assume that the human is *implicitly* optimizing over all possible trajectories (Fig. [1](#S2.F1 "Figure 1 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning") (1st row, 2nd column) shows these choices in gray). Thus, demonstrations are a reward-rational choice in which the set of choices 𝒞𝒞\mathcal{C}caligraphic\_C is (implicitly) the set of trajectories ΞΞ\Xiroman\_Ξ. Again, the grounding ψ𝜓\psiitalic\_ψ is the identity. In Fig. [1](#S2.F1 "Figure 1 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning"), fewer rewards are consistent with a demonstration than with a comparison.
Early work used the constraint formulation with various losses to penalize violations (Ng and Russell, [2000](#bib.bib3); Ratliff et al., [2006](#bib.bib26)). Bayesian IRL (Ramachandran and Amir, [2007](#bib.bib20)) exactly instantiates the formalism using the Boltzmann distribution by doing a full belief update as in Equation [3](#S2.E3 "3 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning"). Later work computes the MLE instead (Ziebart et al., [2008](#bib.bib19); Bloem and Bambos, [2014](#bib.bib22); Ho and Ermon, [2016](#bib.bib36)) and approximates the partition function (the denominator) by a quadratic approximation about the demonstration (Levine and Koltun, [2012](#bib.bib37)), a Laplace approximation (Dragan and Srinivasa, [2012](#bib.bib38)), or importance sampling (Finn et al., [2016](#bib.bib21)).
Corrections are the first type of feedback we consider that has both an implicit set of choices 𝒞𝒞\mathcal{C}caligraphic\_C and a non-trivial (not equal to identity) grounding. Corrections are most common in physical human-robot interaction (pHRI), in which a human physically corrects the motion of a robot. The robot executes a trajectory ξRsubscript𝜉𝑅\xi\_{R}italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT, and the human intervenes by applying a correction Δq∈QΔ𝑞𝑄\Delta q\in Qroman\_Δ italic\_q ∈ italic\_Q that modifies the robot’s current configuration. Therefore, the set of choices 𝒞=Q−Q𝒞𝑄𝑄\mathcal{C}=Q-Qcaligraphic\_C = italic\_Q - italic\_Q consists of all possible configuration differences ΔqΔ𝑞\Delta qroman\_Δ italic\_q the person could have used (Fig. [1](#S2.F1 "Figure 1 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning") 1st row, 3rd column shows possible ΔqΔ𝑞\Delta qroman\_Δ italic\_qs in gray and the selected on in orange). The way we can ground these choices is by finding a trajectory that is closest to the original, but satisfies the constraint of matching a new point:
| | | |
| --- | --- | --- |
| | minξ‖ξ−ξR‖A2subscript𝜉subscriptsuperscriptnorm𝜉subscript𝜉𝑅2𝐴\displaystyle\min\_{\xi}||\xi-\xi\_{R}||^{2}\_{A}roman\_min start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT | | italic\_ξ - italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT | | start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT | |
| | s.t.ξ(0)=ξR(0),ξ(T)=ξR(T),ξ(t)=ξR(t)+Δqformulae-sequence𝑠𝑡formulae-sequence𝜉0subscript𝜉𝑅0formulae-sequence𝜉𝑇subscript𝜉𝑅𝑇𝜉𝑡subscript𝜉𝑅𝑡Δ𝑞\displaystyle s.t.\ \xi(0)=\xi\_{R}(0),\ \xi(T)=\xi\_{R}(T),\xi(t)=\xi\_{R}(t)+\Delta qitalic\_s . italic\_t . italic\_ξ ( 0 ) = italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( 0 ) , italic\_ξ ( italic\_T ) = italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( italic\_T ) , italic\_ξ ( italic\_t ) = italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( italic\_t ) + roman\_Δ italic\_q | | (5) |
where t𝑡titalic\_t is the time at which the correction was applied. Choosing a non-Euclidean inner-product, A (for instance KTKsuperscript𝐾𝑇𝐾K^{T}Kitalic\_K start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_K, with K𝐾Kitalic\_K the finite differencing matrix), couples states along the trajectory in time and leads to a the resulting trajectory smoothly deforming – propagating the change ΔqΔ𝑞\Delta qroman\_Δ italic\_q to the rest of the trajectory: ψ(Δq)=ξR+A−1[λ,0,..,Δq,..,0,γ]T\psi(\Delta q)=\xi\_{R}+A^{-1}[\lambda,0,..,\Delta q,..,0,\gamma]^{T}italic\_ψ ( roman\_Δ italic\_q ) = italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT + italic\_A start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT [ italic\_λ , 0 , . . , roman\_Δ italic\_q , . . , 0 , italic\_γ ] start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT (with λ𝜆\lambdaitalic\_λ and γ𝛾\gammaitalic\_γ making sure the end-points stay in place).
This is the orange trajectory in the figure.
Most work in corrections affects the robot’s trajectory but not the reward function (Haddadin et al., [2008](#bib.bib39); Hogan, [1985](#bib.bib40)), with (Losey and O’Malley, [2017](#bib.bib41)) proposing the propagation via A−1superscript𝐴1A^{-1}italic\_A start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT above. Bajcsy et al. ([2017](#bib.bib11)) propose that corrections are informative about the reward and use the propagation as their grounding, deriving an approximate MAP estimate for the reward. Losey and O’Malley ([2018](#bib.bib42)) introduce a way to maintain uncertainty.
Improvement. Prior work (Jain et al., [2015](#bib.bib10)) has also modeled a variant of corrections in which the human provides an improved trajectory ξimprovedsubscript𝜉𝑖𝑚𝑝𝑟𝑜𝑣𝑒𝑑\xi\_{improved}italic\_ξ start\_POSTSUBSCRIPT italic\_i italic\_m italic\_p italic\_r italic\_o italic\_v italic\_e italic\_d end\_POSTSUBSCRIPT which is treated as better than the robot’s original ξRsubscript𝜉𝑅\xi\_{R}italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT. Although (Jain et al., [2015](#bib.bib10)) use the Euclidean inner product and implement reward learning as an online gradient method that treats the improved trajectory as a demonstration (but only takes a single gradient step towards the MLE), we can also naturally interpret improvement as a comparison that tells us the improved trajectory is better than the original: the set of options 𝒞𝒞\mathcal{C}caligraphic\_C consists of only ξRsubscript𝜉𝑅\xi\_{R}italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT and ξimprovedsubscript𝜉𝑖𝑚𝑝𝑟𝑜𝑣𝑒𝑑\xi\_{improved}italic\_ξ start\_POSTSUBSCRIPT italic\_i italic\_m italic\_p italic\_r italic\_o italic\_v italic\_e italic\_d end\_POSTSUBSCRIPT now, as opposed to all the trajectories obtainable by propagating local corrections; the grounding is identity, resulting in essentially a comparison between the robot’s trajectory and the user provided one.
Off. In “off” feedback, the robot executes a trajectory, and at any point, the human may switch the robot off. “Off” appears to be a very sparse signal, and it is not spelled out in prior work how one might learn a reward from it. Reward-rational choice suggests that we first uncover the implicit set of options 𝒞𝒞\mathcal{C}caligraphic\_C the human was choosing from. In this case, the set of options consists of turning the robot off or not doing anything at all: 𝒞={off,−}𝒞off\mathcal{C}=\{\text{off},-\}caligraphic\_C = { off , - }. Next, we must ask how to evaluate the reward of the two options, i.e., what is the grounding? Hadfield-Menell et al. ([2017b](#bib.bib27)) introduced off feedback and formalized it as a choice for a one-shot game. There, not intervening means the robot takes its one possible action, and intervening means the robot takes the no-op action. This can be easily generalized to the sequential setting: not intervening means that the robot continues on its current trajectory, and intervening means that it stays at its current position for the remainder of the time horizon. Thus, the choices 𝒞={off,−}𝒞off\mathcal{C}=\{\text{off},-\}caligraphic\_C = { off , - } map to the trajectories {ξR0:tξRt…ξRt,ξR}superscriptsubscript𝜉𝑅:0𝑡subscriptsuperscript𝜉𝑡𝑅…subscriptsuperscript𝜉𝑡𝑅subscript𝜉𝑅\{\xi\_{R}^{0:t}\xi^{t}\_{R}\dots\xi^{t}\_{R},\,\xi\_{R}\}{ italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 0 : italic\_t end\_POSTSUPERSCRIPT italic\_ξ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT … italic\_ξ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT }.
Language. Humans might use rich language to instruct the robot, like “Avoid the rug.” Let G(λ)𝐺𝜆G(\lambda)italic\_G ( italic\_λ ) be the trajectories that are consistent with an utterance λ∈Λ𝜆Λ\lambda\in\Lambdaitalic\_λ ∈ roman\_Λ (e.g. all trajectories that do not enter the rug).
Usually the human instruction is interpreted *literally*, i.e. any trajectory consistent with the instruction ξ∈G(λ)𝜉𝐺𝜆\xi\in G(\lambda)italic\_ξ ∈ italic\_G ( italic\_λ ) is taken to be equally likely , although, other distributions are also possible. For example, a problem with literal interpretation is that it does not take into account the other choices the human may have considered. The instruction “Do not go into the water” is consistent with the robot not moving at all, but we imagine that if the human wanted the robot to do nothing, they would have said that instead. Therefore, it would be incorrect for the robot to do nothing when given the instruction “Do not go into the water”. This type of reasoning is called *pragmatic reasoning*, and indeed recent work shows that explicitly interpreting instructions pragmatically can lead to higher performance (Fried et al., [2018a](#bib.bib43), [b](#bib.bib44)). The reward-rational choice formulation of language feedback naturally leads to pragmatic reasoning on the part of the robot, and is in fact equivalent to the rational speech acts model (Goodman and Stuhlmüller, [2013](#bib.bib15)), a standard model of pragmatic reasoning in language. The pragmatic reasoning arises because the human is explicitly modeled as choosing from a set of options.
The reward-rational choice formulation of language feedback naturally leads to pragmatic reasoning on the part of the robot, and is in fact equivalent to the rational speech acts model (Goodman and Stuhlmüller, [2013](#bib.bib15)), a standard model of pragmatic reasoning in language. The pragmatic reasoning arises because the human is explicitly modeled as choosing from a set of options.
Language is a reward-rational choice in which the set of options 𝒞𝒞\mathcal{C}caligraphic\_C is the set of instructions considered in-domain ΛΛ\Lambdaroman\_Λ and the grounding ψ𝜓\psiitalic\_ψ maps an utterance λ𝜆\lambdaitalic\_λ to the uniform distribution over consistent trajectories Unif(G(λ))Unif𝐺𝜆\text{Unif}(G(\lambda))Unif ( italic\_G ( italic\_λ ) ). In language feedback, a key difficulty is learning which robot trajectories are consistent with a natural language instruction, the *language grounding problem* (and is where we borrow the term “grounding” from) (Matuszek et al., [2012](#bib.bib28); Tellex et al., [2011](#bib.bib45); Fu et al., [2019](#bib.bib9)). Fig. [1](#S2.F1 "Figure 1 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning") shows the grounding for avoiding the rug in orange – all trajectories from start to goal that do not enter rug cells.
Proxy rewards are expert-specified rewards that do not necessarily lead to the desired behavior in all situations, but can be trusted on the training environments. They were introduced by Hadfield-Menell et al. ([2017a](#bib.bib1)), who argued that even when the expert attempts to fully specify the reward, it will still fail to generalize to some situations outside of the training environments. Therefore, rather than taking a specified reward at face value, we can interpret it as evidence about the true reward. Proxy reward feedback is a reward-rational choice in which the set of choices 𝒞𝒞\mathcal{C}caligraphic\_C is the set of proxy rewards the designer may have chosen, ℛ~~ℛ\tilde{\mathcal{R}}over~ start\_ARG caligraphic\_R end\_ARG. The reward designer is assumed to be approximately-optimal, i.e. they are more likely to pick a proxy reward r~∈ℛ~~𝑟~ℛ\tilde{r}\in\tilde{\mathcal{R}}over~ start\_ARG italic\_r end\_ARG ∈ over~ start\_ARG caligraphic\_R end\_ARG if it leads to better trajectories *on the training environment(s)*. Thus, the grounding ψ𝜓\psiitalic\_ψ maps a proxy reward r𝑟ritalic\_r to the distribution over trajectories that the robot takes in the training environment given the proxy reward (Hadfield-Menell et al., [2017a](#bib.bib1); Mindermann et al., [2018](#bib.bib46); Ratner et al., [2018](#bib.bib2)). Fig. [1](#S2.F1 "Figure 1 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning") shows the grounding for a proxy reward for reaching the goal, avoiding the rug, and not getting the rug dirty – many feasible rewards would produce similar behavior as the proxy. By taking the proxy as evidence about the underlying reward, the robot ends up with uncertainty over what the actual reward might be, and can better hedge its bets at test time.
Reward and punishment (Griffith et al., [2013](#bib.bib29); Loftin et al., [2014](#bib.bib47)). In this type of feedback, the human can either reward (+11+1+ 1) or punish (−11-1- 1) the robot for its trajectory ξRsubscript𝜉𝑅\xi\_{R}italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT; the set of options is 𝒞={+1,−1}𝒞11\mathcal{C}=\{+1,-1\}caligraphic\_C = { + 1 , - 1 }. A naive implementation would interpret reward and punishment literally, i.e. as a scalar reward signal for a reinforcement learning agent, however empirical studies show that humans reward and punish based on how well the robot performs *relative to their expectations* (MacGlashan et al., [2017](#bib.bib48)). Thus, we can use our formalism to interpret that: reward (+11+1+ 1) grounds to the robot’s trajectory ξRsubscript𝜉𝑅\xi\_{R}italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT, while punish (−11-1- 1) grounds to the trajectory the human expected ξexpectedsubscript𝜉expected\xi\_{\text{expected}}italic\_ξ start\_POSTSUBSCRIPT expected end\_POSTSUBSCRIPT (not necessarily observed).
Initial state. Shah et al. ([2019](#bib.bib12)) make the observation that when the robot is deployed in an environment that humans have acted in, the current state of the environment is already optimized for what humans want, and thus contains information about the reward. For example, suppose the environment has a goal state which the robot can reach through either a paved path or a carpet. If the carpet is pristine and untrodden, then humans must have intentionally avoided walking on it in the past (even though the robot hasn’t observed this past behavior), and the robot can reasonably infer that it too should not go on the carpet.
The original paper inferred rewards from a single state s𝑠sitalic\_s by marginalizing over possible pasts, i.e. trajectories ξH−T:0superscriptsubscript𝜉𝐻:𝑇0\xi\_{H}^{-T:0}italic\_ξ start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT - italic\_T : 0 end\_POSTSUPERSCRIPT that end at s𝑠sitalic\_s which the human could have taken, P(s|r)=∑ξH−T:0|ξH(0)=sP(ξH−T:0|r)𝑃conditional𝑠𝑟subscriptconditionalsuperscriptsubscript𝜉𝐻:𝑇0subscript𝜉𝐻0𝑠𝑃conditionalsuperscriptsubscript𝜉𝐻:𝑇0𝑟P(s|r)=\sum\_{\xi\_{H}^{-T:0}|\xi\_{H}(0)=s}P(\xi\_{H}^{-T:0}|r)italic\_P ( italic\_s | italic\_r ) = ∑ start\_POSTSUBSCRIPT italic\_ξ start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT - italic\_T : 0 end\_POSTSUPERSCRIPT | italic\_ξ start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ( 0 ) = italic\_s end\_POSTSUBSCRIPT italic\_P ( italic\_ξ start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT - italic\_T : 0 end\_POSTSUPERSCRIPT | italic\_r ). However, through the lens of our formalism, we see that initial states can also be interpreted more directly as reward-rational implicit choices. The set of choices 𝒞𝒞\mathcal{C}caligraphic\_C can be the set of possible initial states 𝒮𝒮\mathcal{S}caligraphic\_S. The grounding function ψ𝜓\psiitalic\_ψ maps a state s∈𝒮𝑠𝒮s\in\mathcal{S}italic\_s ∈ caligraphic\_S to the uniform distribution over any human trajectory ξH−T:0superscriptsubscript𝜉𝐻:𝑇0\xi\_{H}^{-T:0}italic\_ξ start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT - italic\_T : 0 end\_POSTSUPERSCRIPT that starts from a specified time before the robot was deployed (t=−T𝑡𝑇t=-Titalic\_t = - italic\_T) and ends at state s𝑠sitalic\_s at the time the robot was deployed (t=0𝑡0t=0italic\_t = 0), i.e. ξH0=ssuperscriptsubscript𝜉𝐻0𝑠\xi\_{H}^{0}=sitalic\_ξ start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 0 end\_POSTSUPERSCRIPT = italic\_s. This leads to the P(s|r)𝑃conditional𝑠𝑟P(s|r)italic\_P ( italic\_s | italic\_r ) from Table [2](#S2.T2 "Table 2 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning"), which is almost the same as the original, but sums over trajectories directly in the exponent, and normalizes over possible other states. The two interpretations would only become equivalent if we replaced the Boltzmann distribution with a linear one.
Fig [1](#S2.F1 "Figure 1 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning") shows the result of this (modified) inference, recovering as much information as with the correction or language.
4 Discussion of implications
-----------------------------
From demonstrations to reward/punishment to the initial state of the world, the robot can extract information from humans by modeling them as making approximate reward-rational choices. Often, the choices are implicit, like in turning the robot off or providing language instructions. Sometimes, the choices are not made in order to purposefully communicate about the reward, and rather end up leaking information about it, like in the initial state, or even in corrections or turning the robot off. Regardless, this unifying lens enables us to better understand, as in Fig. [1](#S2.F1 "Figure 1 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning"), how all these sources of information relate and compare.
Down the line, we hope this formalism will enable research on combining and actively querying for feedback types, as well as making it easier to do reward learning from new, yet to be uncovered sources of information. Concretely, so far we have talked about learning from individual types of behaviors. But we do not want our robots stuck with a single type: we want them to 1) read into all the leaked information, and 2) learn from all the purposeful feedback. For example, the robot might receive demonstrations from a human during training, and then corrections during deployment, which were followed by the human prematurely switching the robot off. The observational model in ([2](#S2.E2 "2 ‣ 2.1 Reward-rational implicit choice ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning")) for a single type of behavior also provides a natural way to model combinations of behavior. If each observation is conditionally independent given the reward, then according to ([2](#S2.E2 "2 ‣ 2.1 Reward-rational implicit choice ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning")), the probability of observing a vector 𝐜𝐜\mathbf{c}bold\_c of n𝑛nitalic\_n behavioral signals (of possibly different types) is equal to
| | | | |
| --- | --- | --- | --- |
| | ℙ(𝐜∣r)=∏i=1nexp(βi⋅r(ψi(𝐜i)))∑c∈𝒞iexp(βi⋅r(ψi(c))).ℙconditional𝐜𝑟superscriptsubscriptproduct𝑖1𝑛⋅subscript𝛽𝑖𝑟subscript𝜓𝑖subscript𝐜𝑖subscript𝑐subscript𝒞𝑖⋅subscript𝛽𝑖𝑟subscript𝜓𝑖𝑐\displaystyle\mathbb{P}(\mathbf{c}\mid r)=\prod\_{i=1}^{n}\frac{\exp(\beta\_{i}\cdot r(\psi\_{i}(\mathbf{c}\_{i})))}{\sum\_{c\in\mathcal{C}\_{i}}\exp(\beta\_{i}\cdot r(\psi\_{i}(c)))}\,.blackboard\_P ( bold\_c ∣ italic\_r ) = ∏ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT divide start\_ARG roman\_exp ( italic\_β start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ⋅ italic\_r ( italic\_ψ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( bold\_c start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) ) ) end\_ARG start\_ARG ∑ start\_POSTSUBSCRIPT italic\_c ∈ caligraphic\_C start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT roman\_exp ( italic\_β start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ⋅ italic\_r ( italic\_ψ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_c ) ) ) end\_ARG . | | (6) |
Given this likelihood function for the human’s behavior, the robot can infer the reward function using the approaches and approximations described in Sec. [2.2](#S2.SS2 "2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning"). Recent work has already built in this direction, combining trajectory comparisons and demonstrations (Ibarz et al., [2018](#bib.bib25); Palan et al., [2019](#bib.bib49)). We note that the formulation in Equation [6](#S4.E6 "6 ‣ 4 Discussion of implications ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning") is general and applies to *any* combination. In Appendix [B](#A2 "Appendix B A case study on combining feedback types ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning"), we describe a case study on a novel combination of feedback types: proxy rewards, a physical improvement, and comparisons in which we use a constraint-based approximation (see Equation [4](#S2.E4 "4 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning")) to Equation [6](#S4.E6 "6 ‣ 4 Discussion of implications ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning").
Further, it also becomes natural to *actively decide* which feedback type to ask a human for. Rather than relying on a heuristic (or on the human to decide), the robot can maximize expected information gain. Suppose we can select between n𝑛nitalic\_n types of feedback with choice sets 𝒞1,…,𝒞nsubscript𝒞1…subscript𝒞𝑛\mathcal{C}\_{1},\dots,\mathcal{C}\_{n}caligraphic\_C start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , caligraphic\_C start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT to ask the user for. Let btsubscript𝑏𝑡b\_{t}italic\_b start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT be the robot’s belief distribution over rewards at time t𝑡titalic\_t. The type of feedback i\*superscript𝑖i^{\*}italic\_i start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT that (greedily) maximizes information gain for the next time step is
| | | | |
| --- | --- | --- | --- |
| | i\*=argmaxi∈[n]𝔼rt,ci\*[log(p(ci\*∣rt)∫rt∈ℛp(ci\*∣rt)bt(rt))],superscript𝑖subscriptargmax𝑖delimited-[]𝑛subscript𝔼subscript𝑟𝑡subscriptsuperscript𝑐𝑖delimited-[]𝑝conditionalsubscriptsuperscript𝑐𝑖subscript𝑟𝑡subscriptsubscript𝑟𝑡ℛ𝑝conditionalsubscriptsuperscript𝑐𝑖subscript𝑟𝑡subscript𝑏𝑡subscript𝑟𝑡i^{\*}=\operatorname\*{arg\,max}\_{i\in[n]}\mathbb{E}\_{r\_{t},c^{\*}\_{i}}\left[\log\left(\frac{p(c^{\*}\_{i}\mid r\_{t})}{\int\_{r\_{t}\in\mathcal{R}}p(c^{\*}\_{i}\mid r\_{t})b\_{t}(r\_{t})}\right)\right]\,,italic\_i start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT = start\_OPERATOR roman\_arg roman\_max end\_OPERATOR start\_POSTSUBSCRIPT italic\_i ∈ [ italic\_n ] end\_POSTSUBSCRIPT blackboard\_E start\_POSTSUBSCRIPT italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ roman\_log ( divide start\_ARG italic\_p ( italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∣ italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_ARG start\_ARG ∫ start\_POSTSUBSCRIPT italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∈ caligraphic\_R end\_POSTSUBSCRIPT italic\_p ( italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∣ italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) italic\_b start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_ARG ) ] , | | (7) |
where rt∼Btsimilar-tosubscript𝑟𝑡subscript𝐵𝑡r\_{t}\sim B\_{t}italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ italic\_B start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT is distributed according to the robot’s current belief, ci\*∈𝒞isubscriptsuperscript𝑐𝑖subscript𝒞𝑖c^{\*}\_{i}\in\mathcal{C}\_{i}italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∈ caligraphic\_C start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT is the random variable corresponding to the user’s choice within feedback type i𝑖iitalic\_i, and p(ci\*∣rt)𝑝conditionalsubscriptsuperscript𝑐𝑖subscript𝑟𝑡p(c^{\*}\_{i}\mid r\_{t})italic\_p ( italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∣ italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) is defined according to the human model in Equation [1](#S2.E1 "1 ‣ 2.1 Reward-rational implicit choice ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning"). We also note that different feedback types may have different costs associated with them (e.g. of human time) and it is straight-forward to integrate these costs into ([7](#S4.E7 "7 ‣ 4 Discussion of implications ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning")). In Appendix [C](#A3 "Appendix C Actively selecting which type of feedback to use ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning"), we describe experiments with active selection of feedback types. In the environments we tested, we found that demonstrations are optimal early on, when little is known about the reward, while comparisons became optimal later, as a way to fine-tune the reward. The finding provides validation for the approach pursued by Palan et al. ([2019](#bib.bib49)) and Ibarz et al. ([2018](#bib.bib25)). Both papers manually define the mixing procedure we found to be optimal: initially train the reward model using human demonstrations, and then fine-tune with comparisons.
Finally, the types of feedback or behavior we have discussed so far are by no means the only types possible. New ones will inevitably be invented. But when designing a new type of feedback, it is often difficult to understand what the relationship is between the reward r𝑟ritalic\_r and the feedback c\*superscript𝑐c^{\*}italic\_c start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT. Reward-rational choice suggests a recipe for uncovering this link – define what the implicit set of options the human is choosing from is, and how those options ground to trajectories. Then, Equation [1](#S2.E1 "1 ‣ 2.1 Reward-rational implicit choice ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning") provides a formal model for the human feedback.
For example, hypothetically, someone might propose a “credit assignment" type of feedback. Given a trajectory ξRsubscript𝜉𝑅\xi\_{R}italic\_ξ start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT of length T𝑇Titalic\_T, the human is asked to pick a segment of length k<T𝑘𝑇k<Titalic\_k < italic\_T that has maximal reward. We doubt the set of choices in an implementation of credit assignment would be explicit, however the implicit set of choices 𝒞𝒞\mathcal{C}caligraphic\_C is then the set of all segments of length k𝑘kitalic\_k. The grounding function ψ𝜓\psiitalic\_ψ is simply the identity. With this choice of 𝒞𝒞\mathcal{C}caligraphic\_C and ψ𝜓\psiitalic\_ψ in hand, the human can now be modeled according to Equation [1](#S2.E1 "1 ‣ 2.1 Reward-rational implicit choice ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning"), as we show in the last rows of Tables [1](#S2.T1 "Table 1 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning") and [2](#S2.T2 "Table 2 ‣ 2.2 Robot inference ‣ 2 A formalism for reward learning ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning").
While of course the formalism won’t apply to *all* types of feedback, we believe that it applies to *many*, even to types that initially seem to have a more obvious, literal interpretation (e.g. reward and punishment, Section [3](#S3 "3 Prior work from the perspective of the formalism ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning")). Most immediately, we are excited about using it to formalize a particular new source of (leaked) information we uncovered while developing the formalism itself: the moment we enable robots to learn from multiple types of feedback, users will have the *choice* of which feedback to provide. Interpreted literally, each feedback gives the robot evidence about the reward. However, this leaves information on the table: if the person decided to, say, turn the robot off, they *implicitly* decided to *not* provide a correction, or use language. Intuitively, this means that turning off the robot was a more appropriate intervention with respect to the true reward. Interpreting the feedback type itself as reward-rational implicit choice has the potential to enable robots to extract more information about the reward from the same data. We call the choice of feedback type “meta-choice”. In Appendix [D](#A4 "Appendix D Meta-choice: a new source of information ‣ Reward-rational (implicit) choice: A unifying formalism for reward learning"), we formalize meta-choice and conduct experiments that showcase its potential importance.
Overall, we see this formalism as providing conceptual clarity for existing and future methods for learning from human behavior, and a fruitful base for future work on multi-behavior-type reward learning.
Broader Impact
--------------
As AI capability advances, it is becoming increasingly important to align the objectives of AI agents to what people want. From how assistive robots can best help their users, to how autonomous cars should trade off between safety risk and efficiency, to how recommender systems should balance revenue considerations with longer-term user happiness and with avoiding influencing user views, agents cannot rely on a reward function specified once and set in stone. By putting different sources of information about the reward explicitly under the same framework, we hope our paper contributes towards a future in which agents maintain uncertainty over what their reward should be, and use different types of feedback from humans to refine their estimate and become better aligned with what people want over time – be them designers or end-users.
On the flip side, changing reward functions also raises its own set of risks and challenges. First, the relationship between designer objectives and end-user objectives is not clear. Our framework can be used to adapt agents to end-users preferences, but this takes away control from the system designers. This might be desirable for, say, home robots, but not for safety-critical systems like autonomous cars, where designers might need to enforce certain constraints a-priori on the reward adaptation process. More broadly, most systems have multiple stake-holders, and what it means to do ethical preference aggregation remains an open problem. Further, if the robot’s model of the human is misspecified, adaptation might lead to more harm than good, with the robot inferring a worse reward function than what a designer could specify by hand.
Acknowledgments and Disclosure of Funding
-----------------------------------------
We thank the members of the InterACT lab for fruitful discussion and advice, especially Dylan Hadfield-Menell for his perspectives on the relationship between demonstrations and comparisons. We thank Andreea Bobu, Paul Christiano, and Rohin Shah for their feedback on the manuscript.
This work is partially supported by ONR YIP and Open Philanthropy Project. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1752814. Any opinion, findings, and conclusions or recommendations expressed in this material are those of the authors(s) and do not necessarily reflect the views of the National Science Foundation. |
5ba2e7bc-b221-459b-8733-a1dbd979b98d | trentmkelly/LessWrong-43k | LessWrong | Meetup : December Meetup
Discussion article for the meetup : December Meetup
WHEN: 14 December 2013 07:00:00PM (-0500)
WHERE: 491 Lindbergh Place NE Apt 618 Atlanta, GA 30324
Come join us! We'll be celebrating the Solstice while doing our normal eclectic mix of self-improvement brainstorming, educational mini-presentations, structured discussion, unstructured discussion, and social fun and games times! Check out ATLesswrong's facebook group, if you haven't already: https://www.facebook.com/groups/Atlanta.Lesswrong/ where you can connect with Atlanta Lesswrongers, suggest a topics for discussion at this meetup, and join our book club or study group!
Discussion article for the meetup : December Meetup |
38cce14f-60f8-4cba-b635-20f83c493f4e | StampyAI/alignment-research-dataset/blogs | Blogs | Enabling high-accuracy protein structure prediction at the proteome scale
#### The AlphaFold method
Many novel machine learning innovations contribute to AlphaFold’s current level of accuracy. We give a high-level overview of the system below; for a technical description of the network architecture see our AlphaFold [methods paper](https://www.nature.com/articles/s41586-021-03819-2) and especially its extensive Supplementary Information.
The AlphaFold network consists of two main stages. Stage 1 takes as input the amino acid sequence and a multiple sequence alignment (MSA). Its goal is to learn a rich “pairwise representation” that is informative about which residue pairs are close in 3D space.
Stage 2 uses this representation to directly produce atomic coordinates by treating each residue as a separate object, predicting the rotation and translation necessary to place each residue, and ultimately assembling a structured chain. The design of the network draws on our intuitions about protein physics and geometry, for example, in the form of the updates applied and in the choice of loss.
Interestingly, we can produce a 3D structure based on the representation at intermediate layers of the network. The resulting “trajectory” videos show how AlphaFold’s belief about the correct structure develops during inference, layer by layer. Typically a hypothesis emerges after the first few layers followed by a lengthy process of refinement, although some targets require the full depth of the network to arrive at a good prediction.



Predicted structure for the CASP14 targets T1044, T1024 and T1064 at successive layers of the network. Structures are colored by residue number and the counter shows the current layer.
#### Accuracy and confidence
AlphaFold was stringently assessed in the [CASP14](https://predictioncenter.org/casp14/zscores_final.cgi) experiment, in which participants blindly predict protein structures that have been solved but not yet made public. The method achieved high accuracy in a majority of cases, with an average 95% RMSD-Cα to the experimental structure of less than 1Å. In our papers, we further evaluate the model on a much larger set of recent PDB entries. Among the findings are strong performance on large proteins and good side chain accuracy where the backbone is well-predicted.
AlphaFold’s CASP14 accuracy relative to other methods. RMSD-Cα based on the best-predicted 95% of residues for each target.An important factor in the utility of structure predictions is the quality of the associated confidence measures. Can the model identify the parts of its prediction likely to be reliable? We have developed two confidence measures on top of the AlphaFold network to address this question.
The first is pLDDT ([predicted lDDT-Cα](https://doi.org/10.1093/bioinformatics/btt473)), a per-residue measure of local confidence on a scale from 0 - 100. pLDDT can vary dramatically along a chain, enabling the model to express high confidence on structured domains but low confidence on the linkers between them, for example. In our [paper](https://www.nature.com/articles/s41586-021-03828-1), we present evidence that some regions with low pLDDT may be unstructured in isolation; either intrinsically disordered or structured only in the context of a larger complex. Regions with pLDDT < 50 should not be interpreted except as a possible disorder prediction.
The second metric is PAE (Predicted Aligned Error), which reports AlphaFold’s expected position error at residue x, when the predicted and true structures are aligned on residue y. This is useful for assessing confidence in global features, especially domain packing. For residues x and y drawn from two different domains, a consistently low PAE at (x, y) suggests AlphaFold is confident about the relative domain positions. Consistently high PAE at (x, y) suggests the relative positions of the domains should not be interpreted. The general approach used to produce PAE can be adapted to predict a variety of superposition-based metrics, including [TM-score](https://doi.org/10.1002/prot.20264) and [GDT](https://doi.org/10.1093/nar/gkg571).
.jpg)Per-residue confidence (pLDDT) and Predicted Aligned Error (PAE) for two example proteins (P54725, Q5VSL9). Both have confident individual domains, but the latter also has confident relative domain positions. Note: Q5VSL9 was solved after this prediction was produced.To emphasise, AlphaFold models are ultimately predictions: while often highly accurate they will sometimes be in error. Predicted atomic coordinates should be interpreted carefully, and in the context of these confidence measures.
#### Open sourcing
Alongside our [method paper](https://www.nature.com/articles/s41586-021-03819-2), we have made the AlphaFold source code available on [GitHub](https://github.com/deepmind/alphafold). This includes access to a trained model and a script for making predictions on novel input sequences. We believe this is an important step that will enable the community to use and build on our work. The easiest way to fold a single new protein with AlphaFold is to use our [Colab notebook](https://bit.ly/alphafoldcolab).
The open source code is an updated version of our CASP14 system based on the [JAX framework](https://github.com/google/jax), and it achieves equally high accuracy. It also incorporates some recent performance improvements. AlphaFold’s speed has always depended heavily on the input sequence length, with short proteins taking minutes to process and only very long proteins running into hours. Once the MSA has been assembled, the open source version can now predict the structure of a 400 residue protein in just over a minute of GPU time on a V100.
#### Proteome scale and AlphaFold DB
AlphaFold’s fast inference times allow the method to be applied at whole-proteome scale. In our [paper](https://www.nature.com/articles/s41586-021-03828-1), we discuss AlphaFold’s predictions for the human proteome. However, we have since generated predictions for the reference proteomes of a number of [model organisms, pathogens and economically significant species](https://alphafold.ebi.ac.uk/download), and large scale prediction is now routine. Interestingly, we observe a difference in the pLDDT distribution between species, with generally higher confidence on bacteria and archaea and lower confidence on eukaryotes, which we hypothesize may be related to the prevalence of disorder in these proteomes.
No single research group can fully explore such a large dataset, and so we partnered with [EMBL-EBI](https://www.ebi.ac.uk/) to make the predictions freely available via the [AlphaFold DB](https://alphafold.ebi.ac.uk/). Each prediction can be viewed alongside the confidence metrics described above. A bulk download is also provided for each species, and all data is covered by a CC-BY-4.0 license (making it freely available for both academic and commercial use). We are extremely grateful to EMBL-EBI for their work with us to develop this new resource. Over the course of the coming months we plan to expand the dataset to cover the over 100 million proteins in [UniRef90](https://www.uniprot.org/uniref/?query=&fil=identity:0.9).
.jpg)Example: AlphaFold DB predictions from a variety of organisms..jpg)Distribution of per-residue confidence for 14 species; left to right: bacteria / archaea, animals, and protists.In AlphaFold DB, we have chosen to share predictions of full protein chains up to 2700 amino acids in length, rather than cropping to individual domains. The rationale is that this avoids missing structured regions that have yet to be annotated. It also provides context from the full amino acid sequence, and allows the model to attempt a domain packing prediction. AlphaFold’s intra-domain accuracy was more extensively evaluated in CASP14 and is expected to be higher than its inter-domain accuracy. However, AlphaFold was the top ranked method in the inter-domain assessment, and we expect it to produce an informative prediction in some cases. We encourage users to view the PAE plot to determine whether domain placement is likely to be meaningful.
#### Future work
We are excited about the future for computational structural biology. There remain many important topics to address: predicting the structure of complexes, incorporating non-protein components, and capturing dynamics and the response to point mutations. The development of network architectures like AlphaFold that excel at the task of understanding protein structure is a cause for optimism that we can make progress on related problems.
We see AlphaFold as a complementary technology to experimental structural biology. This is perhaps best illustrated by its role in helping to solve experimental structures, through molecular replacement and docking into cryo-EM volumes. Both applications can accelerate existing research, saving months of effort. From a bioinformatics perspective, AlphaFold’s speed enables the generation of predicted structures on a massive scale. This has the potential to unlock new avenues of research, by supporting structural investigations of the contents of large sequence databases.
Ultimately, we hope AlphaFold will prove a useful tool for illuminating protein space, and we look forward to seeing how it is applied in the coming months and years.
We would love to hear your feedback and understand how AlphaFold and the AlphaFold DB have been useful in your research. Share your stories at [alphafold@deepmind.com](mailto:alphafold@deepmind.com). |
06206b20-8e37-404a-a0a1-41ea60024993 | trentmkelly/LessWrong-43k | LessWrong | Step Back
From a recent Psychological Science,
> In everyday life, individuals typically approach desired stimuli by stepping forward and avoid aversive stimuli by stepping backward... Cognitive functioning was gauged by means of a Stroop task immediately after a participant stepped in one direction... Stepping backward significantly enhanced cognitive performance compared to stepping forward or sideways. Considering the effect size, backward locomotion appears to be a very powerful trigger to mobilize cognitive resources.
As Chris Chatham notes,
> This work is remarkable not only for demonstrating how a very concrete and simple bodily experience can influence even the highest levels of cognitive processing (in this case, the so-called "cognitive control" processes that enable focused attention), but also because performance on the Stroop task is notoriously difficult to improve.
When you suddenly realize that a task is more difficult than you assumed it would be, or when you face a particularly difficult choice in pursuit of rationality, you may find it useful to literally take a step back. For those of us who are particularly interested in making good decisions, this may also serve the purpose of self-signaling, as Yvain and commenters discussed earlier.
Chris's post has a link to a pdf of the paper. |
b9eb12b0-bae7-41cd-b724-040616dabeab | trentmkelly/LessWrong-43k | LessWrong | Quick look: cognitive damage from well-administered anesthesia
Recently a client commissioned me to look at the potential cognitive impacts of general anesthesia. I was surprised to find out that it’s not obvious general anesthesia does more damage than spinal or local anesthesia, and my guess is most but not all of the damage is done by the illness or surgery themselves.
Caveats and difficulties
I’m not a doctor. The following represents something like 5 hours of work, which obviously is not enough time to process even a fraction of the literature. I was focused on the dangers of median uses of anesthesia, where nothing goes obviously wrong and the anesthesiologist considers it a success; I didn’t even attempt to look at the rate of accidents, which can be pretty severe. My friend’s dad’s life was ruined by a fungal contaminant in a spinal injection. And of course, people die from excess general anesthesia. But for this post I only looked at damage done by routine anesthetic usage.
Like all client research, this was tailored to a particular person’s needs and budget, and shouldn’t be considered a general-purpose survey.
It’s pretty hard to tease out the difference between damage done by anesthesia, damage done by whatever necessitated the surgery, and damage done by having your body ripped open and bits moved around. Bodies hate that sort of thing. The few RCTs that exist by necessity focus on a narrow range of minimally invasive surgeries for which there exists a choice in type of anesthesia, and animal studies tended to focus on developing animals rather than adults. Even for procedures where multiple types are possible, patients tend to be pretty opinionated about what they want; one paper even announced they’d given up on reaching their sample size goal because recruiting was too hard.
Studies also often focused on cognition within a few hours of surgery (when people are still at the hospital to test). I think that’s less likely to be “damage” and more likely to be “it’s still wearing off” or “I’m sorry, I just had m |
61afc736-a4b8-432b-88d0-db417dc228c2 | trentmkelly/LessWrong-43k | LessWrong | [Link] Social Psychology & Priming: Art Wears Off
Related to: Power of Suggestion
> Social Psychology & Priming: Art Wears Off
> by Steve Sailer
>
> One of the most popular social psychology studies of the Malcolm Gladwell Era has been Yale professor John Bargh's paper on how you can "prime" students to walk more slowly by first having them do word puzzles that contain a hidden theme of old age by the inclusion of words like "wrinkle" and "bingo." The primed subjects then took one second longer on average to walk down the hall than the unprimed control group. Isn't that amazing! (Here's Gladwell's description of Bargh's famous experiment in his 2005 bestseller Blink.)
>
> This finding has electrified the Airport Book industry for years: Science proves you can manipulate people into doing what you want them to! Why you'd want college students to walk slower is unexplained, but that's not the point. The point is that Science proves that people are manipulable.
>
> Now, a large fraction of the buyers of Airport Books like Blink are marketing and advertising professionals, who are paid handsomely to manipulate people, and to manipulate them into not just walking slower, but into shelling out real money to buy the clients' products.
>
> Moreover, everybody notices that entertainment can prime you in various ways. For instance, well-made movies prime how I walk down the street afterwards. For two nights after seeing the Coen Brothers' No Country for Old Men, I walked the quiet streets swiveling my head, half-certain that an unstoppable killing machine was tailing me. When I came out of Christopher Nolan's amnesia thriller Memento, I was convinced I'd never remember where I parked my car. (As it turned out, I quickly found my car. Why? Because I needed to. But it was fun for thirty seconds to act like, and maybe even believe, that the movie had primed me into amnesia.)
>
> Now, you could say, "That's art, not marketing," but the distinction isn't that obvious to talented directors. Not surprisingly, directors betwe |
a4c77117-0295-40cc-82e6-8cf53d988465 | trentmkelly/LessWrong-43k | LessWrong | Cross-context abduction: LLMs make inferences about procedural training data leveraging declarative facts in earlier training data
Epistemic status: These are first positive results. I have not yet run extensive tests to verify repeatability, so take them with a grain of salt. This post is meant to disseminate early results and collect ideas for further experiments to concretise these findings.
Tldr:
I study whether LLMs understand their training data and can use that understanding to make inferences about later training data. Specifically, I measure whether LLMs can infer which declarative facts in their training data are relevant to the current context and then leverage them. I show that finetuning LLMs on declarative data describing different personas reduces the number of iterative finetuning steps (a proxy for reinforcement learning) required to display behaviour sufficiently in line with one of the personas (Experiment 2a). I further show such iterative finetuning leads to an increase in the LLM self-identifying with the name and behaviour of the correct persona (Experiment 2b). These experiments show that LLMs are capable of cross-context abductive reasoning.
Introduction
Are LLMs stochastic parrots or do they genuinely comprehend and reason about their training data? A lot of recent Twitter and academic discourse around LLMs has been dedicated to this question. The question extends beyond mere semantics and carries far-reaching implications. If LLMs do reason about their training data, they can generalize beyond their training distribution, opening up numerous avenues that stochastic parrots simply can not access, such as LLM scientists. However, this would also bring with it significant risks such as situational awareness, scheming, and situationally aware reward hacking (SARH).
Abductive reasoning
Abductive reasoning, often referred to as "inference to the best explanation" is the process of inferring the most likely hypothesis that explains some data. The functional form of abduction is:
A→B BA
Unlike deductive reasoning, abductive reasoning cannot be used to conclusively ver |
5b1fe8c9-dff0-4562-8507-e4e88a9a238c | trentmkelly/LessWrong-43k | LessWrong | Why there's an ethanol glut and sanitizer shortage (WSJ)
Rent-seeking is so out of control that the US literally doesn't allow ethanol producers to make sanitizer during a fuel glut/sanitizer shortage:
https://www.wsj.com/articles/why-you-cant-find-rubbing-alcohol-11588629519 |
b7824640-6ccd-48d8-9638-8ceca6b84dd4 | StampyAI/alignment-research-dataset/blogs | Blogs | Costs of human-level hardware
Computing hardware which is equivalent to the brain –
* in terms of FLOPS probably costs between $1 x 105 and $3 x 1016, or $2/hour-$700bn/hour.
* in terms of TEPS probably costs $200M – $7B, or or $4,700 – $170,000/hour (including energy costs in the hourly rate).
* in terms of secondary memory probably costs $300-3,000, or $0.007-$0.07/hour.
Details
-------
### Partial costs
#### Computation
*Main articles: [Brain performance in FLOPS](http://aiimpacts.org/brain-performance-in-flops/), [Current FLOPS prices](http://aiimpacts.org/current-flops-prices/), [Trends in the costs of computing](http://aiimpacts.org/trends-in-the-cost-of-computing/)*
[FLoating-point Operations Per Second](https://en.wikipedia.org/wiki/FLOPS) (FLOPS) is a measure of computer performance that emphasizes computing capacity. The human brain is estimated to perform between 1013.5 and 1025FLOPS. Hardware [currently costs](http://aiimpacts.org/current-flops-prices/) around $3 x 10-9/FLOPS, or $7 x 10-14/FLOPShour. This makes the current price of hardware which has equivalent computing capacity to the human brain between $1 x 105 and $3 x 1016, or $2/hour-$700bn/hour if hardware is used for five years.
The price of FLOPS [has probably](http://aiimpacts.org/trends-in-the-cost-of-computing/) decreased by a factor of ten roughly every four years in the last quarter of a century.
#### Communication
*Main articles: [Brain performance in TEPS](http://aiimpacts.org/brain-performance-in-teps/), [The cost of TEPS](http://aiimpacts.org/cost-of-teps/)*
[Traversed Edges Per Second](https://en.wikipedia.org/wiki/Traversed_edges_per_second) (TEPS) is a measure of computer performance that emphasizes communication capacity. The human brain [is estimated](http://aiimpacts.org/brain-performance-in-teps/) to perform at 0.18 – 6.4 x 105 GTEPS. Communication capacity [costs](http://aiimpacts.org/cost-of-teps/) around $11,000/GTEP or $0.26/GTEPShour in 2015, when amortized over five years and combined with energy costs. This makes the current price of hardware which has equivalent communication capacity to the human brain around $200M – $7B in total, or $4,700 – $170,000/hour including energy costs.
[We estimate](http://aiimpacts.org/cost-of-teps/) that the price of TEPS falls by a factor of ten every four years, based the relationship between TEPS and FLOPS.
#### Information storage
*Main articles: [Information storage in the brain](http://aiimpacts.org/information-storage-in-the-brain/), [Costs of information storage](http://aiimpacts.org/costs-of-information-storage/), [Costs of human-level information storage](http://aiimpacts.org/cost-of-human-level-information-storage/)*
[Computer memory](https://en.wikipedia.org/wiki/Computer_memory) comes in primary and secondary forms. Primary memory (e.g. RAM) is intended to be accessed frequently, while secondary memory is slower to access but has higher capacity. Here we estimate the secondary memory requirements ofthe brain. The human brain [is estimated](http://aiimpacts.org/information-storage-in-the-brain/) to store around 10-100TB of data. Secondary storage [costs around $30/TB](http://aiimpacts.org/costs-of-information-storage/) in 2015. [This means](http://aiimpacts.org/cost-of-human-level-information-storage/) it costs $300-3,000 for enough storage to store the contents of a human brain, or $0.007-$0.07/hour if hardware is used for five years.
In the long run [the price of secondary memory has declined](http://aiimpacts.org/costs-of-information-storage/) by an order of magnitude roughly every 4.6 years. However the rate has declined so much that prices haven’t substantially dropped since 2011 (in 2015).
### Interpreting partial costs
Calculating the total cost of hardware that is relevantly equivalent to the brain is not as simple as adding the partial costs as listed. FLOPS and TEPS are measures of different capabilities of the same hardware, so if you pay for TEPS at the aforementioned prices, you will also receive FLOPS.
The above list is also not exhaustive: there may be substantial hardware costs that we haven’t included. |
bee1b733-0b1d-4b2f-a35a-79131e2e56df | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Mesa-Search vs Mesa-Control
~~I currently see the~~ [~~spontaneous emergence of learning algorithms~~](https://www.lesswrong.com/posts/Wnqua6eQkewL3bqsF/matt-botvinick-on-the-spontaneous-emergence-of-learning) ~~as significant evidence for the commonality of~~ [~~mesa-optimization~~](https://www.lesswrong.com/posts/FkgsxrGf3QxhfLWHG/risks-from-learned-optimization-introduction) ~~in existing ML, and suggestive evidence for the commonality of inner alignment problems in near-term ML.~~
[I currently think that there is only a small amount of evidence toward this. However, due to thinking about the issues, I've still made a significant personal update in favor of inner alignment problems being frequent.]
This is bad news, in that it greatly increases my odds on this alignment problem arising in practice.
It's good news in that it suggests this alignment problem won't catch ML researchers off guard; maybe there will be time to develop countermeasures while misaligned systems are at only a moderate level of capability.
In any case, I want to point out that the mesa-optimizers suggested by this evidence might not count as mesa-optimizers by some definitions.
Search vs Control
=================
Nevan Wichers [comments](https://www.lesswrong.com/posts/Wnqua6eQkewL3bqsF/matt-botvinick-on-the-spontaneous-emergence-of-learning?commentId=wGZxSLtYCJ6njoL5y#fnref-MdrHwqmMwphkxd2WL-3) on spontaneous-emergence-of-learning:
> I don't think that paper is an example of mesa optimization. Because the policy could be implementing a very simple heuristic to solve the task, similar to: Pick the image that lead to highest reward in the last 10 timesteps with 90% probability. Pik an image at random with 10% probability.
>
> So the policy doesn't have to have any properties of a mesa optimizer like considering possible actions and evaluating them with a utility function, ect.
>
>
In [Selection vs Control](https://www.lesswrong.com/posts/ZDZmopKquzHYPRNxq/selection-vs-control), I wrote about two different kinds of 'optimization':
* *Selection* refers to search-like systems, which look through a number of possibilities and select one.
* *Control* refers to systems like thermostats, organisms, and missile guidance systems. These systems do not get a re-do for their choices. They make choices which move toward the goal at every moment, but they don't get to search, trying many different things -- at least, not in the same sense.
I take Nevan Wichers to be saying that there is no evidence search is occurring. The mesa-optimization being discussed recently could be very thermostat-like, using simple heuristics to move toward the goal.
Mesa-Searchers
--------------
Defining mesa-optimizers by their ability to search is somewhat nice:
* There is some reason to think that mesa-optimizers which implement an explicit search are the most concerning, because they are the ones which could explicitly model the world, including the outer optimizer, and make sophisticated plans based on this.
* This kind of mesa-optimizer *may* be more theoretically tractible. If we solve problems with very *(very)* time-efficient methods, then search-type inner optimizers *may* be eliminated: whatever answers the search computation finds, there could be a more efficient solution which simply memorized a table of those answers. [Paul asks a related theory question](https://www.lesswrong.com/posts/nyCHnY7T5PHPLjxmN/open-question-are-minimal-circuits-daemon-free). Vanessa [gives a counterexample](https://www.lesswrong.com/posts/nyCHnY7T5PHPLjxmN/open-question-are-minimal-circuits-daemon-free?commentId=GbEm5zANHzrCgpBpP), which involves a control-type mesa-optimizer rather than one which implements an internal search. *[Edit -- that's not really clear;* [*see this comment*](https://www.lesswrong.com/posts/WmBukJkEFM72Xr397/mesa-search-vs-mesa-control?commentId=cjRXjEE4pgx5PNQCo)*.]*
+ So it's possible that we could solve mesa-optimization in theory, by sticking to search-based definitions -- while still having a problem in practice, due to control-type inner optimizers. (I want to emphasize that this would be significant progress, and well worth doing.)
Mesa-Learners
-------------
Mesa-controllers sound like they may not be a huge concern, because they don't strategically optimize based on a world-model in the same way. However, I think the model discussed in the spontaneous-emergence-of-learning post is a significant counterargument to this.
The post discusses RL agents which spontaneously learn an inner RL algorithm. It's important to pause and ask what this means. Reinforcement learning is a *task*, not an algorithm. It's a bit nonsensical to say that the RL agent is spontaneously learning *the RL task* inside of itself. So what is meant?
The core empirical claim, as I understand it, is that task performance continues to improve after weights are frozen, suggesting that *learning is still taking place, implemented in neural activation changes rather than neural weight changes*.
Why might this happen? It sounds a bit absurd: you've already implemented a sophisticated RL algorithm, which keeps track of value estimates for states and actions, and propagates these value estimates to steer actions toward future value. Why would the learning process *re-implement a scheme like that, nested inside of the one you implemented*? Why wouldn't it just focus on filling in the values accurately?
I've thought of two possible reasons so far.
1. Perhaps your outer RL algorithm is getting very sparse rewards, and so does not learn very fast. The inner RL could implement its own reward function, which gives faster feedback and therefore accelerates learning. This is closer to the story in Evan's mesa-optimization post, just replacing search with RL.
2. More likely perhaps (based on my understanding), the outer RL algorithm has a learning rate that might be too slow, or is not sufficiently adaptive to the situation. The inner RL algorithm adjusts its learning rate to improve performance.
This is more concerning than a thermostat-like bag of heuristics, because an RL algorithm is a pretty agentic thing, which can adapt to new situations and produce novel, clever behavior.
They also suggest that the inner RL algorithm may be model-based while the outer is model-free. This goes some distance toward the "can model you, the world, and the outer alignment process, in order to manipulate it" concern which we have about search-type mesa-optimizers.
Mesa-Learning Everywhere?
=========================
Gwern [replies](https://www.lesswrong.com/posts/Wnqua6eQkewL3bqsF/matt-botvinick-on-the-spontaneous-emergence-of-learning?commentId=DW2JHSNPWGpbBv8sS#fnref-MdrHwqmMwphkxd2WL-3) to a comment by Daniel Kokotajlo:
> >Learning still happening after weights are frozen? That’s crazy. I think it’s a big deal because it is evidence for mesa-optimization being likely and hard to avoid.
>
> Sure. We see that elsewhere too, like Dactyl. And of course, [GPT-3](https://www.gwern.net/newsletter/2020/05#gpt-3).
>
>
People are jumping on the RL examples as mesa-optimization. But, for all the discussion of GPT-3, I saw only speculative remarks about mesa-optimization in GPT-3. Why does an RL algorithm continuing to improve performance after weights are frozen indicate inner optimization, while evidence of the same thing in text prediction does not?
1. Text prediction sounds benign, while RL sounds agentic.
----------------------------------------------------------
One obvious reason: an inner learner in a text prediction system sounds like just more text prediction. When we hear that GPT-3 learned-to-learn, and continues learning after the weights are frozen, illustrating few-shot learning, we imagine the inner learner is just noticing patterns and extending them. When we hear the same for an RL agent, we imagine the inner learner actively trying to pursue goals (whether aligned or otherwise).
I *think* this is completely spurious. I don't *currently* see any reason why the inner learner in an RL system would be more or less agentic than in text prediction.
2. Recurrence.
--------------
A more significant point is the structure of the networks in the two cases. GPT-3 has no recurrence: no memory which lasts between predicting one token and the next.
The authors of the spontaneous learning paper mention recurrence as one of the three conditions which should be met in order for inner learning to emerge. But that's just a hypothesis. If we see the same evidence in GPT-3 -- evidence of learning after the weights are frozen -- then shouldn't we still make the same conclusion in both cases?
I think the obvious argument for the necessity of recurrence is that, without recurrence, there is simply much less potential for mesa-learning. A mesa-learner holds its knowledge in the activations, which get passed forward from one time-step to the next. If there is no memory, that can't happen.
But if GPT-3 can accomplish the same things empirically, who cares? GPT-3 is entirely reconstructing the "learned information" from the history, at every step. If it can accomplish so much this way, should we count its lack of recurrence against it?
Another argument might be that the lack of recurrence makes mesa-learners much less likely to be misaligned, or much less likely to be catastrophically misaligned, or otherwise practically less important. I'm not sure what to make of that possibility.
3. Mesa-learning isn't mesa-optimization.
-----------------------------------------
One very plausible explanation of why mesa-learning happens is *the system learns a probability distribution which extrapolates the future from the past*. This is just regular ol' good modeling. It doesn't indicate any sort of situation where there's a new agent in the mix.
Consider a world which is usually "sunny", but sometimes becomes "rainy". Let's say that rainy states always occur twice in a row. Both RL agents and predictive learners will learn this. (At least, RL agents will learn about it in so far as it's relevant to their task.) No mesa-learning here.
Now suppose that rainy streaks can last more than two days. When it's rainy, it's more likely to be rainy tomorrow. When it's sunny, it's more likely to be sunny tomorrow. Again, both systems will learn this. But it starts to look a little like mesa-learning. Show the system a rainy day, and it'll be more prone to anticipate a rainy day tomorrow, improving its performance on the "rainy day" task. "One-shot learning!"
Now suppose that the more rainy days there have been in a row, the more likely it is to be rainy the next day. Again, our systems will learn the probability distribution. This looks even more like mesa-learning, because we can show that performance on the rainy-day task continues to improve as we show the frozen-weight system more examples of rainy days.
Now suppose that all these parameters drift over time. Sometimes rainy days and sunny days alternate. Sometimes rain follows a memoryless distribution. Sometimes longer rainy streaks become *more* likely to end, rather than less. Sometimes there are repeated patterns, like rain-rain-sun-rain-rain-sun-rain-rain-sun.
At this point, the learned probabilistic model starts to resemble a general-purpose learning algorithm. In order to model the data well, it has to adapt to a variety of situations.
But there isn't *necessarily* anything mesa-optimize-y about that. The text prediction system just has a very good model -- it doesn't have models-inside-models or anything like that. The RL system just has a very good model -- it doesn't have something that looks like a new RL algorithm implemented inside of it.
At some level of sophistication, it may be easier to learn some kind of general-purpose adaptation, rather than all the specific things it has to adapt to. At *that* point it might count as mesa-optimization.
4. This isn't even mesa-learning, it's just "task location".
------------------------------------------------------------
Taking the previous remarks a bit further: do we really want to count it as 'mesa-learning' if it's just constructed a very good conditional model, which notices a wide variety of shifting local regularities in the data, rather than implementing an internal learning algorithm which can take advantage of regularities of a very general sort?
In *GPT-3: a disappointing paper*, Nostalgebraist argues that [the second is unlikely to be what's happening in the case of GPT-3](https://www.lesswrong.com/posts/ZHrpjDc3CepSeeBuE/gpt-3-a-disappointing-paper#2_3__On_abstract_reasoning). It's not likely that GPT-3 is learning arithmetic from examples. It's more likely that it is learning *that we are doing arithmetic right now*. This is less like learning and more like using a good conditional model. It isn't learning the task, it's just "locating" one of many tasks that it has already learned.
I'll grant that the distinction gets very, very fuzzy at the boundaries. Are [literary parodies of Harry Potter](https://www.gwern.net/GPT-3#literary-parodies) "task location" or "task learning"? On the one hand, it is obviously bringing to bear a great deal of prior knowledge in these cases, rather than learning everything anew on the fly. It would not re-learn this task in an alien language with its frozen weights. On the other hand, it is obviously performing well at a novel task after seeing a minimal demonstration.
I'm not sure where I would place GPT-3, but I lean toward there being a meaningful distinction here: a system can learn a general-purpose learning algorithm, or it can 'merely' learn a very good conditional model. The first is what I think "mesa-learner" should mean.
We can then ask the question: did the RL examples discussed previously constitute true mesa-learning? Or did they merely learn a good model, which represented the regularities in the data? (I have no idea.)
In any case, the fuzziness of the boundary makes me think these methods (ie, a wide variety of methods) will continue moving further along the spectrum toward producing powerful mesa-learners as they are scaled up (and hence, mesa-optimizers). |
dd7e5f4a-d169-4cac-965f-8e09e0ae6883 | trentmkelly/LessWrong-43k | LessWrong | Less Successful Cider Adventures
A few weeks ago I made some boiled apple cider. It was tasty, and after eating most of it I decided to make more. This time I used a gallon of cider, and it wasn't done after a day so I turned the slow cooker to "low" and left it overnight. Turns out that was too much! It had reduced to about 1/10 its original volume (1gal to 12oz), which wouldn't be too bad if I ended up with caramel. Except that it also has a decidedly burnt flavor, probably from getting too hot at the end. I like dark caramel, but this is past dark.
On the plus side, it's fun to play with, a bit like silly putty:
The kids have told me I can't throw it out yet, but it probably goes tomorrow. It's frustrating because the apple cider caramel flavors are all there, there's just also a persistent flavor of 'burnt'.
Comment via: facebook, mastodon |
35e7f8b0-5b13-4d0c-abf5-549226e2f56c | trentmkelly/LessWrong-43k | LessWrong | The End of Bullshit at the hands of Critical Rationalism
The public debate is rife with fallacies, half-lies, evasions of counter-arguments, etc. Many of these are easy to spot for a careful and intelligent reader/viewer - particularly one who is acquainted with the most common logical fallacies and cognitive biases. However, most people arguably often fail to spot them (if they didn't, then these fallacies and half-lies wouldn't be as effective as they are). Blatant lies are often (but not always) recognized as such, but these more subtle forms of argumentative cheating (which I shall use as a catch-all phrase from now on) usually aren't (which is why they are more frequent).
The fact that these forms of argumentative cheating are a) very common and b) usually easy to point out suggests that impartial referees who painstakingly pointed out these errors could do a tremendous amount of good for the standards of the public debate. What I am envisioning is a website like factcheck.org but which would not focus primarily on fact-checking (since, like I said, most politicians are already wary of getting caught out with false statements of fact) but rather on subtler forms of argumentative cheating.
Ideally, the site would go through election debates, influential opinion pieces, etc, more or less line by line, pointing out fallacies, biases, evasions, etc. For the reader who wouldn't want to read all this detailed criticism, the site would also give an overall rating of the level of argumentative cheating (say from 0 to 10) in a particular article, televised debate, etc. Politicians and others could also be given an overall cheating rating, which would be a function of their cheating ratings in individual articles and debates. Like any rating system, this system would serve both to give citizens reliable information of which arguments, which articles, and which people, are to be trusted, and to force politicians and other public figures to argue in a more honest fashion. In other words, it would have both have an information |
dd35a4ed-ca10-4e84-8901-698e71777d26 | trentmkelly/LessWrong-43k | LessWrong | Rationality Compendium
I want to create a rationality compendium (a collection of concise but detailed information about a particular subject) and I want to know whether you think this would be a good idea. The rationality compendium would essentially be a series of posts that will eventually serve as a guide for less wrong newbies that they can use to discover which resources to look into further, a refresher of the main concepts for less wrong veterans and a guideline or best practices document that will explain techniques that can be used to apply the core less wrong/rationality concepts. These techniques should preferably have been verified to be useful in some way. Perhaps, there will be some training specific posts in which we can track if people are actually finding the techniques to be useful.
I only want to write this because I am lazy. In this context, I mean lazy as it is described by Larry Wall:
> Laziness: The quality that makes you go to great effort to reduce overall energy expenditure.
I think that a rationality compendium would not only prove that I have correctly understood the available rationality material, but it would also ensure that I am actually making use of this knowledge. That is, applying the rationality materials that I have learnt in ways that allow me to improve my life.
If you think that a rationality compendium is not needed or would not be overly helpful, then please let me know. I also want to point out that I do not think that I am necessarily the best person to do this and that I am only doing it because I don’t see it being done by others.
For the rationality compendium, I plan to write a series of posts which should, as much as possible, be:
* Using standard terms: less wrong specific terms might be linked to in the related materials section, but common or standard terminology will be used wherever possible.
* Concise: the posts should just contain quick overviews of the established rationality concepts. They shouldn’t be introducing “new” i |
33858ea7-b127-4607-9370-0cb6f0066ffd | trentmkelly/LessWrong-43k | LessWrong | The tech left behind
Hello, I am asking for some insights for a research I am doing. Can you cite examples of technologies that have been forgotten? What I mean by "forgotten" is not things we don't know how to do but used to (I suspect there aren't that many), nor things that are no longer in use but used to (mechanical television), but things that were decently developed (either in theory or in practice) but never "saw the light of day" anyway.
It's my first time posting, so I won't do much policing on the answers, thanks in advance. |
b025204c-54a3-4174-83cd-a5216222b7e8 | trentmkelly/LessWrong-43k | LessWrong | Combination Lock Boxes
Traditionally we control access to our houses with "something you have": a key. This works pretty well, until you have a visitor arrive when you're not home to give them a key. The traditional way to handle this is to hide a key (under the mat, in a flower pot, etc), turning this into "something you know" (the information about where the key is hidden).
I've talked to a few people recently who've been enthusiastic about swapping traditional keyed house locks out for combination locks, and while I do see the appeal they're somewhat expensive and require batteries. But they're definitely a more secure way to implement access conditional on "something you know" than hiding a key.
Another option to consider, however, is a combination key lock box, sometimes called a "realtor box":
You set a combination on it, and then store a spare key inside. It is more hassle than a pure combination lock for access, but for a our house it works well. People generally carry their keys, but we still have a reasonably secure option if someone forgets theirs or a guest arrives while we're out. |
efebc50d-b7d8-48cf-9ed7-0e56c56b59f2 | trentmkelly/LessWrong-43k | LessWrong | Everything's normal until it's not
Crossposted from the EA Forum: https://forum.effectivealtruism.org/posts/2hduXN5MXCZPqKjSv/everything-s-normal-until-it-s-not
Note: This is a quick write-up of some thoughts I've had recently about communicating AI risk.
There are two claims I wish to argue for concerning understanding and communicating AI risk. Neither of them is entirely original, but I thought it is worth articulating them clearly now that AI progress is starting to gain more public attention. The first is that communicating the difficulty of the different sub-areas of AI safety tends to involve many abstractions that might be confusing as well as creating future scenarios that at the moment might sound far-fetched. In this post, I'll assume that abstractions are conducive to reaching scientific insights. The second claim I'll put forth is a prediction that in the coming years, everything will be "normal" until something bad happens. I'll explain what "normal" means and I'll try to make specific bets and challenge myself to be as epistemically virtuous as possible.
Talking in terms of abstractions
I see technological forecasting as a rigorous version of philosophical thought experimenting. Writing a concrete AI risk story, for example, entails giving a lot of details about a possible world in the future, near-term, or long-term, where AI capabilities impact society at large in different ways. Stories of this sort require abstractions. Just like the rest of the research in AI safety. And if we want to be completely honest, this is not particularly different from the rest of science - we simply happen to care a lot more about understanding the various failure modes of advanced AIs and find ourselves scarily confused.
Notice that by "abstraction" I mean both the process of deriving a general concept from specific instances and the property of an idea to be abstract, i.e., not concrete or instantiated. I think that these are easy to conflate because they're not exactly dissimilar bu |
2f04c203-b7cf-4178-8338-d3396ba1da3a | trentmkelly/LessWrong-43k | LessWrong | Why would AI companies use human-level AI to do alignment research?
Cross-posted from my website.
Many plans for how to safely build superintelligent AI have a critical section that goes like this:
1. Develop AI that's powerful enough to do AI research, but not yet powerful enough to pose an existential threat.
2. Use it to assist with alignment research, thus greatly accelerating the pace of work—hopefully enough to solve all alignment problems.
You could call this process "alignment bootstrapping".
This is a central feature of DeepMind's plan (see "Amplified oversight"), Anthropic's plan (see "Scalable Oversight"), and independent plans written by Sam Bowman (an AI safety manager at Anthropic), Joshua Clymer (a researcher at Redwood Research), and Marius Hobbhahn (CEO of Apollo Research).
There are various reasons why alignment bootstrapping could fail[1] even if implemented well, and some of those plans acknowledge this. But I'm also concerned about whether alignment bootstrapping will be implemented at all.
When the time comes, will AI companies actually spend their resources on alignment bootstrapping?
When AI companies have human-level AI systems, will they use them for alignment research, or will they use them (mostly) to advance capabilities instead?
AI companies currently employ many human-level humans, and use a small percentage of them to do alignment research. If it makes sense for them to use most of their human-level AIs to do alignment research, wouldn't it also make sense to use most of their human researchers to do alignment research?
But they don't do that. Most of their human researchers work on advancing AI capabilities.
It's more likely that they use human-level AIs the same way they use human researchers: almost all of them work on accelerating capabilities, and a small minority work on safety. Which probably means capabilities outpace safety, which probably means we die.
Some companies argue that they need to advance capabilities right now to stay competitive. Perhaps that's true. Consider what th |
dd5c264e-d545-4147-8d66-0101518f333f | trentmkelly/LessWrong-43k | LessWrong | Tomatoes
Are tomatoes fruits or vegetables?
I've been reading Eliezer's criticisms of Aristotelian classes as a model for the meaning of words. It occurred to me that this little chestnut is a good illustration of the problem. The best part about this example is that almost everyone has argued either on one side or the other at some point in their lives. One would think that the English speaking world could come to some consensus on such a simple, trivial problem, but still the argument rages on. Fruit or vegetable?
In my experience, the argument is usually started by the fruit advocate (we'll call him Lemon). "It's the fruiting body of the plant," he says. "It contains the seeds." He argues that the tomato is, by definition, a fruit.
Bean has never thought of tomatoes as fruits, but when her belief is challenged by Lemon, she's not entirely sure how to respond. She hesitates, then starts slowly -- "All the things I call fruits are sweet," she says. "Not that tomatoes are bitter, but they're certainly not sweet enough to be fruits." Bean is proposing a stricter definition -- fruits are sweet fruiting bodies of plants. But does Bean really think that's the difference between a fruit and a vegetable?
Not really. Bean learned what these words mean by talking to other people about fruits, vegetables, and tomatoes, and through her cooking and eating. There was never any moment when she said to herself, "Aha! a tomato is not a fruit!" This belief is a result of countless minute inferences made over the course of Bean's gustatory life. The definition she proposes is an ad hoc defense of her belief that tomatoes are not fruits, not a real reason.
Bean's real mistake was to think that she needed to defend her belief that tomatoes are not fruits. Tomatoes are what they are regardless of how they're classified, and most people classify them as fruit or vegetable long before they learn anything about Aristotelian classes or membership tests. The classification is |
fcd8dd22-bc5f-44c6-9d6f-e81faa1a4257 | trentmkelly/LessWrong-43k | LessWrong | AI Alignment: A Comprehensive Survey
We have just released an academic survey of AI alignment.
We identify four main categories of alignment research:
1. Learning from feedback (e.g. scalable oversight)
2. Learning under distribution shift
3. Assurance (e.g. interpretability)
4. Governance
We mainly focused on academic references but also included some posts from LessWrong and other forums. We would love to hear from the community about any references we missed or anything that was unclear or misstated. We hope that this can be a good starting point for AI researchers who might be unfamiliar with current efforts in AI alignment. |
9a144152-144e-4d89-9a12-ee29b8cfd7d8 | trentmkelly/LessWrong-43k | LessWrong | Meetup for The Roots of Progress in San Diego, Dec 1
I'll be in San Diego, CA on December 1. That evening, The Roots of Progress is hosting a community meetup and happy hour. Appetizers will be provided, cash bar. Come meet others and chat about progress.
When: Wednesday, December 1, 5:30pm
Where: Downtown San Diego, RSVP for exact location
RSVP here: https://airtable.com/shrNYCVF4xnhjaBaN
|
3b397a10-d859-424c-925e-40cec6a154a1 | trentmkelly/LessWrong-43k | LessWrong | What should be the topic of my LW mini-talk this Sunday (July 18th)?
I have signed up to give a LW mini-talk. I could in theory talk about various topics in Bayesian statistics and MCMC, but what I am actually interested in discussing right now are the ideas in this post: https://www.lesswrong.com/posts/Rh8v4ba9w5KTb6onD/ongoing-thoughts-on-proportional-voting-methods
If you have read that post, what topic there do you think would be best? It should be something that's not common knowledge or "obvious" to the audience, but also doesn't require too much setup. Assume that I am able to do a decent job of structuring my talk — that is, a substantially better job of setting up and exploring a single idea in a 5-minute chunk than I've done in that essay which is not intended to be a 5-minute chunk.
Feel free to post "answers" here which are general advice and not specific topic suggestions. |
61201510-4f6c-40a3-b553-fccca2d928f3 | trentmkelly/LessWrong-43k | LessWrong | Algorithms vs Compute
Two scenarios:
* I take a vision or language model which was cutting edge in 2000, and run it with a similar amount of compute/data to what's typically used today.
* I take a modern vision or language model, calculate how much money it costs to train, estimate the amount of compute I could have bought for that much money in 2000, then train it with that much compute
In both cases, assume that the number of parameters is scaled to available compute as needed (if possible), and we generally adjust the code to reflect scalability requirements (while keeping the algorithm itself the same).
Which of the two would perform better?
CLARIFICATION: my goal here is to compare the relative importance of insights vs compute. "More compute is actually really important" is itself an insight, which is why the modern-algorithm scenario talks about compute cost, rather than amount of compute actually used in 2000. Likewise, for the 2000-algorithm scenario, it's important that the model only leverage insights which were already known in 2000. |
ff120e58-4d8c-4b7e-b1a8-dc94112078d3 | trentmkelly/LessWrong-43k | LessWrong | Abuse in LessWrong and rationalist communities in Bloomberg News
Try non-paywalled link here.
Damning allegations; but I expect this forum to respond with minimization and denial.
A few quotes:
> At the same time, she started to pick up weird vibes. One rationalist man introduced her to another as “perfect ratbait”—rat as in rationalist. She heard stories of sexual misconduct involving male leaders in the scene, but when she asked around, her peers waved the allegations off as minor character flaws unimportant when measured against the threat of an AI apocalypse. Eventually, she began dating an AI researcher in the community. She alleges that he committed sexual misconduct against her, and she filed a report with the San Francisco police. (Like many women in her position, she asked that the man not be named, to shield herself from possible retaliation.) Her allegations polarized the community, she says, and people questioned her mental health as a way to discredit her. Eventually she moved to Canada, where she’s continuing her work in AI and trying to foster a healthier research environment.
> Of the subgroups in this scene, effective altruism had by far the most mainstream cachet and billionaire donors behind it, so that shift meant real money and acceptance. In 2016, Holden Karnofsky, then the co-chief executive officer of Open Philanthropy, an EA nonprofit funded by Facebook co-founder Dustin Moskovitz, wrote a blog post explaining his new zeal to prevent AI doomsday. In the following years, Open Philanthropy’s grants for longtermist causes rose from $2 million in 2015 to more than $100 million in 2021.
>
> Open Philanthropy gave $7.7 million to MIRI in 2019, and Buterin gave $5 million worth of cash and crypto. But other individual donors were soon dwarfed by Bankman-Fried, a longtime EA who created the crypto trading platform FTX and became a billionaire in 2021. Before Bankman-Fried’s fortune evaporated last year, he’d convened a group of leading EAs to run his $100-million-a-year Future Fund for longtermist causes. |
62ed8990-eeda-479c-9e63-0bcc37c58dfe | trentmkelly/LessWrong-43k | LessWrong | [Link] Physics-based anthropics?
Nick Bostrom's self-sampling assumption treats us as a random sample from a set of observers, but this framework raises several paradoxes. Instead, why not treat the stuff we observe to be a random sample from the set of all stuff that exists? I elaborate on this proposal in a new essay subsection: "SSA on physics rather than observers?" At first glance, it seems to work better than any of the mainstream schools of anthropics. Comments are welcome.
Has this idea been suggested before? I noticed that Robin Hanson proffered something similar way back in 1998 (four years before Bostrom's Anthropic Bias). I'm surprised Hanson's proposal hasn't received more attention in the academic literature. |
e26c9142-2734-4054-9dd1-f16413fcf1b1 | StampyAI/alignment-research-dataset/special_docs | Other | Learning to summarize with human feedback
Learning to summarize from human feedback
Nisan StiennonLong OuyangJeff WuDaniel M. ZieglerRyan Lowe
Chelsea VossAlec Radford Dario Amodei Paul Christiano
OpenAI
Abstract
As language models become more powerful, training and evaluation are increas-
ingly bottlenecked by the data and metrics used for a particular task. For example,
summarization models are often trained to predict human reference summaries and
evaluated using ROUGE, but both of these metrics are rough proxies for what we
really care about—summary quality. In this work, we show that it is possible to
significantly improve summary quality by training a model to optimize for human
preferences. We collect a large, high-quality dataset of human comparisons be-
tween summaries, train a model to predict the human-preferred summary, and use
that model as a reward function to fine-tune a summarization policy using reinforce-
ment learning. We apply our method to a version of the TL;DR dataset of Reddit
posts [ 63] and find that our models significantly outperform both human reference
summaries and much larger models fine-tuned with supervised learning alone. Our
models also transfer to CNN/DM news articles [ 22], producing summaries nearly
as good as the human reference without any news-specific fine-tuning.2We con-
duct extensive analyses to understand our human feedback dataset and fine-tuned
models.3We establish that our reward model generalizes to new datasets, and that
optimizing our reward model results in better summaries than optimizing ROUGE
according to humans. We hope the evidence from our paper motivates machine
learning researchers to pay closer attention to how their training loss affects the
model behavior they actually want.
1 Introduction
Large-scale language model pretraining has become increasingly prevalent for achieving high per-
formance on a variety of natural language processing (NLP) tasks. When applying these models
to a specific task, they are usually fine-tuned using supervised learning, often to maximize the log
probability of a set of human demonstrations.
While this strategy has led to markedly improved performance, there is still a misalignment between
this fine-tuning objective—maximizing the likelihood of human-written text—and what we care
about—generating high-quality outputs as determined by humans. This misalignment has several
causes: the maximum likelihood objective has no distinction between important errors (e.g. making
up facts [ 41]) and unimportant errors (e.g. selecting the precise word from a set of synonyms); models
This was a joint project of the OpenAI Reflection team. Author order was randomized amongst {LO, JW,
DZ, NS}; CV and RL were full-time contributors for most of the duration. PC is the team lead.
2Samples from all of our models can be viewed on our website.
3We provide inference code for our 1.3B models and baselines, as well as a model card and our human
feedback dataset with over 64k summary comparisons, here.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
Supervised learningHuman feedback
Pretrain onlyReference summariesFigure 1: Fraction of the time humans prefer our models’ summaries over the human-generated
reference summaries on the TL;DR dataset.4Since quality judgments involve an arbitrary decision
about how to trade off summary length vs. coverage within the 24-48 token limit, we also provide
length-controlled graphs in Appendix F; length differences explain about a third of the gap between
feedback and supervised learning at 6.7B.
are incentivized to place probability mass on all human demonstrations, including those that are
low-quality; and distributional shift during sampling can degrade performance [ 56,52]. Quality can
often be improved significantly by non-uniform sampling strategies such as beam search [ 51], but
these can lead to repetition and other undesirable artifacts [ 69,23]. Optimizing for quality may be a
principled approach to overcoming these problems.
Our goal in this paper is to advance methods for training language models on objectives that more
closely capture the behavior we care about. To make short-term progress towards this goal, we
focus on abstractive English text summarization, as it has a long history in the NLP community
[16,8,54,59,50], and is a subjective task where we believe it is difficult to quantify summary quality
without human judgments. Indeed, existing automatic metrics for evaluating summary quality, such
as ROUGE [39], have received criticism for poor correlation with human judgments [55, 45, 6, 33].
We follow the works of [ 3,73], who fine-tune language models from human feedback using reward
learning [ 35]. We first collect a dataset of human preferences between pairs of summaries, then train
a reward model (RM) via supervised learning to predict the human-preferred summary. Finally, we
train a policy via reinforcement learning (RL) to maximize the score given by the RM; the policy
generates a token of text at each ‘time step’, and is updated using the PPO algorithm [ 58] based on
the RM ‘reward’ given to the entire generated summary. We can then gather more human data using
samples from the resulting policy, and repeat the process. We follow the works of [ 48,4] and use
large pretrained GPT-3 models with as many as 6.7 billion parameters.
Our main contributions are four-fold.
(1) We show that training with human feedback significantly outperforms very strong baselines
on English summarization. When applying our methods on a version of the Reddit TL;DR dataset
[63], we train policies via human feedback that produce better summaries than much larger policies
trained via supervised learning. Summaries from our human feedback models are preferred by our
labelers to the original human demonstrations in the dataset (see Figure 1).
(2) We show human feedback models generalize much better to new domains than supervised
models. Our Reddit-trained human feedback models also generate high-quality summaries of news
articles on the CNN/DailyMail (CNN/DM) dataset without any news-specific fine-tuning, almost
matching the quality of the dataset’s reference summaries. We perform several checks to ensure
that these human preferences reflect a real quality difference: we consistently monitor agreement
rates amongst labelers and researchers, and find researcher-labeler agreement rates are nearly as high
as researcher-researcher agreement rates (see Section C.2), and we verify models are not merely
optimizing simple metrics like length or amount of copying (see Appendices F and G.7).
4Throughout the paper, error bars represent 1 standard error.
2
(3) We conduct extensive empirical analyses of our policy and reward model. We examine the
impact of model and data size (Figure 6), study performance as we continue to optimize a given
reward model (Section 4.3), and analyze reward model performance using synthetic and human-
written perturbations of summaries (Section 4.3). We confirm that our reward model outperforms
other metrics such as ROUGE at predicting human preferences, and that optimizing our reward model
directly results in better summaries than optimizing ROUGE according to humans (Section 4.4).
(4) We publicly release our human feedback dataset for further research. The dataset contains
64,832 summary comparisons on the TL;DR dataset, as well as our evaluation data on both TL;DR
(comparisons and Likert scores) and CNN/DM (Likert scores).
The methods we present in this paper are motivated in part by longer-term concerns about the
misalignment of AI systems with what humans want them to do. When misaligned summarization
models make up facts, their mistakes are fairly low-risk and easy to spot. However, as AI systems
become more powerful and are given increasingly important tasks, the mistakes they make will likely
become more subtle and safety-critical, making this an important area for further research.
2 Related work
Most directly related to our work is previous work using human feedback to train summarization
models with RL [ 3,73]. Bohm et al. [ 3] learn a reward function from a dataset of human ratings of
2.5k CNN/DM summaries, and train a policy whose summaries are preferred to a policy optimizing
ROUGE. Our work is most similar to [ 73], who also train Transformer models [ 62] to optimize human
feedback across a range of tasks, including summarization on the Reddit TL;DR and CNN/DM
datasets. Unlike us, they train in an online manner and find the model highly extractive. They
note that their labelers prefer extractive summaries and have low agreement rates with researchers.
Compared to [ 73], we use significantly larger models, move to the batch setting for collecting human
feedback, ensure high labeler-researcher agreement, and make some algorithmic modifications, such
as separating the policy and value networks.
Human feedback has also been used as a reward to train models in other domains such as dialogue
[25,68,21], translation [ 32,1], semantic parsing [ 34], story generation [ 72], review generation
[7], and evidence extraction [ 46]. Our reward modeling approach was developed in prior work
on learning to rank [ 40], which has been applied to ranking search results using either explicit
feedback [ 2,18] or implicit feedback in the form of click-through data [ 29,30]. In a related line of
research, human feedback has been used to train agents in simulated environments [ 10,24]. There
is also a rich literature on using RL to optimize automatic metrics for NLP tasks, such as ROUGE
for summarization [ 50,65,45,15,19], BLEU for translation [ 50,66,1,43], and other domains
[61,27,26]. Finally, there has been extensive research on modifying architectures [ 22,59] and
pre-training procedures [70, 36, 49, 60, 53, 14] for improving summarization performance.
3 Method and experiment details
3.1 High-level methodology
Our approach is similar to the one outlined in [ 73], adapted to the batch setting. We start with an
initial policy that is fine-tuned via supervised learning on the desired dataset (in our case, the Reddit
TL;DR summarization dataset). The process (illustrated in Figure 2) then consists of three steps that
can be repeated iteratively.
Step 1: Collect samples from existing policies and send comparisons to humans. For each
Reddit post, we sample summaries from several sources including the current policy, initial policy,
original reference summaries and various baselines. We send a batch of pairs of summaries to our
human evaluators, who are tasked with selecting the best summary of a given Reddit post.
Step 2: Learn a reward model from human comparisons. Given a post and a candidate summary,
we train a reward model to predict the log odds that this summary is the better one, as judged by our
labelers.
Step 3: Optimize a policy against the reward model. We treat the logit output of the reward model
as a reward that we optimize using reinforcement learning, specifically with the PPO algorithm [ 58].
3
1
Collect human feedback
“j is better than k”
“j is better than k”
A Reddit post is
sampled f
r
om
the Reddit
TL;DR dataset.
V
arious policies
a
r
e used to
sample a set of
summaries.
T
wo summaries
a
r
e selected for
evaluation.
A human judges
which is a better
summary of the
post.
2
T
rain
r
ewa
r
d model
One post with
two summaries
judged by a
human a
r
e fed
to the
r
ewa
r
d
model.
The
r
ewa
r
d
model
calculates a
r
ewa
r
d
r
for
each summar
y
.
The loss is
calculated based
on the
r
ewa
r
ds
and human label,
and is used to
update the
r
ewa
r
d model.
3
T
rain policy with PPO
A new post is
sampled f
r
om the
dataset.
The
r
ewa
r
d
model calculates
a
r
ewa
r
d for the
summar
y
.
The
r
ewa
r
d is
used to update
the policy via
PPO.
r
r
r
r
π
r
j
loss = log
(σ(r
j
- r
k
))
r
k
The policy
π
generates a
summary for the
post.
r
j
j
k
kFigure 2: Diagram of our human feedback, reward model training, and policy training procedure.
We provide a more thorough description of our procedure, including details of the reward model and
policy training and our quality control process, in the following sections. In practice, rather than
precisely iterating this sequence of three steps, we updated our data collection and training procedures
over the course of the project while accumulating labels (see Appendix C.6 for details).
3.2 Datasets and task
Datasets. We use the TL;DR summarization dataset [ 63], which contains ~3 million posts from
reddit.com across a variety of topics (subreddits), as well summaries of the posts written by the
original poster (TL;DRs). We additionally filter this dataset (see Appendix A) to ensure quality,
including using a whitelist of subreddits that are understandable to the general population. Crucially,
we also filter to include only posts where the human-written summaries contain between 24 and
48 tokens, to minimize the potential effect of summary length on quality (see Section 4.1 and
Appendix F). Our final filtered dataset contains 123,169 posts, and we hold out ~5% as a validation
set. For the remainder of this paper, we refer to this dataset simply as TL;DR.
We chose the TL;DR dataset over the more commonly used CNN/DM dataset primarily because
very strong performance can be attained on CNN/DM with simple extractive baselines. We find in
Section 4.2 that our labelers prefer lead-3 over the CNN/DM reference summaries,5and that the
supervised T5 model [ 49] with low-temperature sampling already surpasses the reference summary
quality, while copying extensively from the article. On the other hand, simple extractive baselines
perform poorly on TL;DR in our human evaluations (see Appendix G.2). Instead of training on
CNN/DM, we study the transfer performance of our human feedback models to CNN/DM after being
trained to summarize Reddit posts.
Task. We define our ground-truth task as producing a model that generates summaries fewer than
48 tokens long that are as good as possible, according to our judgments. We judge summary quality
by how faithfully the summary conveys the original post to a reader who can only read the summary
and not the post (see Appendix C.5 for further discussion of criteria). Since we have limited capacity
to do comparisons, we hire labelers to do the comparisons for us. We rely on detailed procedures to
ensure high agreement between labelers and us on the task, which we describe in the next section.
5We manually check this result in Appendix E and find we generally agree with labeler ratings.
4
[r/dating\_advice ]First date ever, going to the beach. Would like some tips
Hey Reddit! I (20M) would like some tips, because I have my first ever date tomorrow (although I’ve had a gf for 3
years, but no actual dating happened), and we’re going to the beach.
I met this girl, we have mutual friends, at a festival a few days ago. We didn’t kiss, but we talked, held
hands, danced a bit. I asked her to go on a date with me, which was super hard as it is the first time I’ve asked this
to anybody. What I mean to say is, it’s not like a standard \*first\* date because we already spent some time together.
I’m really nervous and excited. I’m going to pick her up tomorrow, we’re cycling to the beach which will
take 30 minutes, and then what? I’m a bit scared. Should I bring something (the weather, although no rain and
sunny, is not super so no swimming), should we do something. I’d like all the tips I can get. Thanks!
Human written reference TL;DR 6.7B supervised model 6.7B human feedback model
First date after 3 years in a relation-
ship, going to the beach, terrified.
What to bring with me, what to do?Going on a date with a girl I met
a few days ago, going to the beach.
What should I bring, what should
we do?Going on my first ever date tomor-
row, cycling to the beach. Would
like some tips on what to do and
bring. I’m a bit nervous and excited.
Thanks!
Table 1: Example of post and samples on the TL;DR dataset, chosen to be particularly short. For
random samples (along with posts), see Appendix H and our website.
3.3 Collecting human feedback
Previous work on fine-tuning language models from human feedback [ 73] reported “a mismatch
between the notion of quality we wanted our model to learn, and what the humans labelers actually
evaluated”, leading to model-generated summaries that were high-quality according to the labelers,
but fairly low-quality according to the researchers.
Compared to [ 73], we implement two changes to improve human data quality. First, we transition
entirely to the offline setting, where we alternate between sending large batches of comparison data6
to our human labelers and re-training our models on the cumulative collected data. Second, we
maintain a hands-on relationship with labelers:7we on-board them with detailed instructions, answer
their questions in a shared chat room, and provide regular feedback on their performance. We train all
labelers to ensure high agreement with our judgments, and continuously monitor labeler-researcher
agreement over the course of the project. See Appendix C.1 and C.5 for details.
As a result of our procedure, we obtained high labeler-researcher agreement: on a subset of compari-
son tasks, labelers agree with researchers 77% 2% of the time, while researchers agree with each
other 73%4% of the time. We provide more analysis of our human data quality in Appendix C.2.
3.4 Models
All of our models are Transformer decoders [ 62] in the style of GPT-3 [ 47,4]. We conduct our human
feedback experiments on models with 1.3 billion (1.3B) and 6.7 billion (6.7B) parameters.
Pretrained models. Similarly to [ 12,47], we start with models pretrained to autoregressively
predict the next token in a large text corpus. As in [ 48,4], we use these models as ‘zero-shot’
baselines by padding the context with examples of high-quality summaries from the dataset. We
provide details on pretraining in Appendix B, and on our zero-shot procedure in Appendix B.2.
Supervised baselines. We next fine-tune these models via supervised learning to predict summaries
from our filtered TL;DR dataset (see Appendix B for details). We use these supervised models to
sample initial summaries for collecting comparisons, to initialize our policy and reward models, and
as baselines for evaluation. In our final human evaluations, we use T=0 to sample from all models, as
we found it performed better than higher temperatures or nucleus sampling (see Appendix B.1).
To validate that our supervised models are indeed strong baselines for comparison, we run our
supervised fine-tuning procedure with our 6.7B model on the CNN/DM dataset, and find that we
achieve slightly better ROUGE scores than SOTA models [71] from mid-2019 (see Appendix G.4).
6Our decision to collect comparisons rather than Likert scores is supported by recent work, e.g. [37].
7We recruited labelers from a freelancing platform, Upwork, and two labeling services, Scale and Lionbridge.
5
Reward models. To train our reward models, we start from a supervised baseline, as described
above, then add a randomly initialized linear head that outputs a scalar value. We train this model to
predict which summary y2fy0;y1gis better as judged by a human, given a post x. If the summary
preferred by the human is yi, we can write the RM loss as:
loss(r) =E(x;y0;y1;i)D[log((r(x;yi) r(x;y 1 i)))]
wherer(x;y)is the scalar output of the reward model for post xand summary ywith parameters ,
andDis the dataset of human judgments. At the end of training, we normalize the reward model
outputs such that the reference summaries from our dataset achieve a mean score of 0.
Human feedback policies. We want to use the reward model trained above to train a policy that
generates higher-quality outputs as judged by humans. We primarily do this using reinforcement
learning, by treating the output of the reward model as a reward for the entire summary that we
maximize with the PPO algorithm [ 58], where each time step is a BPE token.8We initialize our
policy to be the model fine-tuned on Reddit TL;DR. Importantly, we include a term in the reward that
penalizes the KL divergence between the learned RL policy RL
with parameters and this original
supervised model SFT, as previously done in [25]. The full reward Rcan be written as:
R(x;y) =r(x;y) log[RL
(yjx)=SFT(yjx)]
This KL term serves two purposes. First, it acts as an entropy bonus, encouraging the policy to
explore and deterring it from collapsing to a single mode. Second, it ensures the policy doesn’t learn
to produce outputs that are too different from those that the reward model has seen during training.
For the PPO value function, we use a Transformer with completely separate parameters from the
policy. This prevents updates to the value function from partially destroying the pretrained policy
early in training (see ablation in Appendix G.1). We initialize the value function to the parameters of
the reward model. In our experiments, the reward model, policy, and value function are the same size.
4 Results
4.1 Summarizing Reddit posts from human feedback
Policies trained with human feedback are preferred to much larger supervised policies. Our
main results evaluating our human feedback policies on TL;DR are shown in Figure 1. We measure
policy quality as the percentage of summaries generated by that policy that humans prefer over
the reference summaries in the dataset. Our policies trained with human feedback significantly
outperform our supervised baselines on this metric, with our 1.3B human feedback model significantly
outperforming a supervised model 10 its size (61% versus 43% raw preference score against
reference summaries). Our 6.7B model in turn significantly outperforms our 1.3B model, suggesting
that training with human feedback also benefits from scale. Additionally, both of our human feedback
models are judged by humans to be superior to the human demonstrations used in the dataset.
Controlling for summary length. When judging summary quality, summary length is a confound-
ing factor. The target length of a summary is implicitly part of the summarization task; depending on
the desired trade-off between conciseness and coverage, a shorter or longer summary might be better.
Since our models learned to generate longer summaries, length could account for much of our quality
improvements. We find that after controlling for length (Appendix F), the preference of our human
feedback models vs. reference summaries drops by ~5%; even so, our 6.7B model summaries are still
preferred to the reference summaries ~65% of the time.
How do our policies improve over the baselines? To better understand the quality of our models’
summaries compared to the reference summaries and those of our supervised baselines, we conduct
an additional analysis where human labelers assess summary quality across four dimensions (or
“axes”) using a 7-point Likert scale [ 38]. Labelers rated summaries for coverage (how much important
information from the original post is covered), accuracy (to what degree the statements in the summary
are stated in the post), coherence (how easy the summary is to read on its own), and overall quality.
8Note that the reward model only gives rewards for entire summaries, and not at intermediate time steps. In
RL terminology, each episode terminates when the policy outputs the EOS token, and the discount factor
= 1.
6
Supervised
transferHuman feedbacktransfer
Pretrain
onlyReference summariesLead-3
SupervisedCNN/DMT5 CNN/DMfinetuning(a)
(b)
Figure 4: Transfer results on CNN/DM. (a) Overall summary quality on CNN/DM as a function of
model size. Full results across axes shown in Appendix G.2. (b) Overall scores vs. length for the
6.7B TL;DR supervised baseline, the 6.7B TL;DR human feedback model, and T5 fine-tuned on
CNN/DM summaries. At similar summary lengths, our 6.7B TL;DR human feedback model nearly
matches T5 despite never being trained to summarize news articles.
Figure 3: Evaluations of four axes of
summary quality on the TL;DR dataset.The results (Figure 3) indicate that our human feedback
models outperform the supervised baselines across every
dimension of quality, but particularly coverage. Although
our human labelers had a high bar for giving perfect overall
scores, summaries from our 6.7B PPO model achieve a 7/7
overall score 45% of the time (compared to 20% and 23%
for the 6.7B supervised baseline and reference summaries,
respectively).
4.2 Transfer to summarizing news articles
Our human feedback models can also generate excellent
summaries of CNN/DM news articles without any further
training (Figure 4). Our human feedback models signifi-
cantly outperform models trained via supervised learning
on TL;DR and models trained only on pretraining corpora. In fact, our 6.7B human feedback model
performs almost as well as a 6.7B model that was fine-tuned on the CNN/DM reference summaries,
despite generating much shorter summaries.
Since our human feedback models transferred to CNN/DM have little overlap in summary length
distribution with models trained on CNN/DM, with about half as many tokens on average, they are
difficult to compare directly. Thus our evaluations in Figure 4 use a 7-point Likert scale on four
quality dimensions, as in Section 4.1 (see Appendix C.5 for labeler instructions). In Figure 4b we
show the average overall score at different summary lengths, which suggests our human feedback
models would perform even better if they generated longer summaries. Qualitatively, CNN/DM
summaries from our human feedback models are consistently fluent and reasonable representations
of the article; we show examples on our website and in Appendix H.
4.3 Understanding the reward model
What happens as we optimize the reward model? Optimizing against our reward model is
supposed to make our policy align with human preferences. But the reward model isn’t a perfect
representation of our labeler preferences, as it has limited capacity and only sees a small amount of
comparison data from a relatively narrow distribution of summaries. While we can hope our reward
model generalizes to summaries unseen during training, it’s unclear how much one can optimize
against the reward model until it starts giving useless evaluations.
To answer this question, we created a range of policies optimized against an earlier version of our
reward model, with varying degrees of optimization strength, and asked labelers to compare samples
from them to the reference summaries. Figure 5 shows the results for PPO at a range of KL penalty
7
RM prediction
Actual preferenceFigure 5: Preference scores versus degree of
reward model optimization. Optimizing against
the reward model initially improves summaries,
but eventually overfits, giving worse summaries.
This figure uses an earlier version of our reward
model (see rm3 in Appendix C.6). See Appendix
H.2 for samples from the KL 250 model.
Ensemble of humans
Human baseline
64k
32k
16k
8kFigure 6: Reward model performance versus
data size and model size. Doubling amount of
training data leads to a ~1.1% increase in reward
model validation accuracy, whereas doubling
the model size leads to a ~1.8% increase. The
6.7B model trained on all data begins approach-
ing the accuracy of a single human.
coefficients ( ). Under light optimization, the models improve (according to labelers). However, as
we optimize further, true preferences fall off compared to the prediction, and eventually the reward
model becomes anti-correlated with human preferences. Though this is clearly undesirable, we note
that this over-optimization also happens with ROUGE (see [ 45] and Appendix G.3). Similar behavior
has been observed in learned reward functions in the robotics domain [5].
How does reward modeling scale with increasing model and data size? We conduct an ablation
to determine how data quantity and model size affect reward modeling performance. We train 7
reward models ranging from 160M to 13B parameters, on 8k to 64k human comparisons from our
dataset. We find that doubling the training data amount leads to a ~1.1% increase in the reward model
validation set accuracy, whereas doubling the model size leads to a ~1.8% increase (Figure 6).
What has the reward model learned? We probe our reward model by evaluating it on several
validation sets. We show the full results in Appendix G.6, and highlight them here. We find that our
reward models generalize to evaluating CNN/DM summaries (Appendix G.7), agreeing with labeler
preferences 62.4% and 66.5% of the time (for our 1.3B and 6.7B models, respectively). Our 6.7B
reward model nearly matches the inter-labeler agreement value of 66.9%.
We also find that our reward models are sensitive to small but semantically important details in
the summary. We construct an additional validation set by having labelers make minimal edits to
summaries to improve them. Our RMs prefer the edited summaries almost as often (79.4% for 1.3B
and 82.8% for 6.7B) as a separate set of human evaluators (84.1%). Further, when comparing the
reference summaries to perturbed summaries where the participants’ roles are reversed, our models
reliably select the original summary (92.9% of the time for 1.3B, 97.2% for 6.7B). However, our RMs
are biased towards longer summaries: our 6.7B RM prefers improving edits that make the summary
shorter only 62.6% of the time (vs. 76.4% for humans).
4.4 Analyzing automatic metrics for summarization
Evaluation. We study how well various automatic metrics act as predictors for human preferences,
and compare them to our RMs. Specifically, we examine ROUGE, summary length, amount of
copying from the post,9and log probability under our baseline supervised models. We present a full
matrix of agreement rates between these metrics in Appendix G.7.
We find that our learned reward models consistently outperform other metrics, even on the CNN/DM
dataset on which it was never trained. We also find that ROUGE fails to track sample quality as our
9We measure copying by computing the longest common subsequence of bigrams with the original Reddit
post or news article, and dividing by the number of bigrams in the summary.
8
Figure 7: Summary quality as a function of metric optimized and amount of optimization, using
best-of-N rejection sampling. We evaluate ROUGE, our main reward models, and an earlier iteration
of the 1.3B model trained on approximately 75% as much data (see Table 11 for details). ROUGE
appears to peak both sooner and at a substantially lower preference rate than all reward models.
Details in Appendix G.3.
models improve. While ROUGE has ~57% agreement with labelers when comparing samples from
our supervised baseline models, this drops to ~50% for samples from our human feedback model.
Similarly, log probability agreement with humans drops to 50% on comparisons between samples
from our human feedback models, while our RMs still perform above chance (62%). Scaling up the
size of the supervised model does not reliably improve log probability’s agreement with labelers.
Optimization. In Figure 7, we show that optimizing ROUGE using a simple optimization scheme
doesn’t consistently increase quality, as has been noted in [ 45]. Optimization against ROUGE peaks
both sooner and at a substantially lower quality rate than optimization against our reward models.
5 Discussion
Limitations. One limitation of our work is the time and cost required to produce our final models.
Notably, fine-tuning our 6.7B model with RL required approximately 320 GPU-days. Our data
collection procedure is also expensive compared to prior work — the training set took thousands of
labeler hours and required significant researcher time to ensure quality. For this reason, we were
unable to collect baselines such as an equivalent amount of high-quality human demonstrations for
supervised baselines. See D for more discussion. We leave this ablation to future work. Nevertheless,
we believe reward modeling is more likely to scale to tasks where it is extremely skill-intensive or
time-consuming to provide good demonstrations.
Future directions. The methods in this paper could be applied to any task where humans can
compare samples, including dialogue, machine translation, question answering, speech synthesis, and
music generation. We expect this method to be particularly important for generating long samples,
where the distributional shift and degeneracy of maximum likelihood samples can be problematic. It
may be possible to improve sample efficiency by training to predict feedback across many tasks [ 42].
We are particularly interested in scaling human feedback to tasks where humans can’t easily evaluate
the quality of model outputs. In this setting, it is particularly challenging to identify whether an ML
system is aligned with the human designer’s intentions. One approach is to train ML systems to help
humans perform the evaluation task quickly and accurately [9].
There is also a rich landscape of human feedback methods beyond binary comparisons that could be
explored for training models [ 28,17,44,64]. For example, we could solicit high-quality demonstra-
tions from labelers, have labelers edit model outputs to make them better, or have labelers provide
explanations for why they preferred one model output over another. All of this feedback could be
leveraged as a signal to train more capable reward models and policies.
9
Broader impacts. The techniques we explore in this paper are generic techniques that could be
used in a wide variety of machine learning applications, for any task where it is feasible for humans
to evaluate the quality of model outputs. Thus, the potential implications are quite broad.
Our research is primarily motivated by the potential positive effects of aligning machine learning
algorithms with the designer’s preferences. Many machine learning applications optimize simple
metrics which are only rough proxies for what the designer intends. This can lead to problems, such
as Youtube recommendations promoting click-bait [ 11]. In the short term, improving techniques for
learning from and optimizing human preferences directly may enable these applications to be more
aligned with human well-being.
In the long term, as machine learning systems become more capable it will likely become increasingly
difficult to ensure that they are behaving safely: the mistakes they make might be more difficult to
spot, and the consequences will be more severe. For instance, writing an inaccurate summary of a
news article is both easy to notice (one simply has to read the original article) and has fairly low
consequences. On the other hand, imitating human driving may be substantially less safe than driving
to optimize human preferences. We believe that the techniques we explore in this paper are promising
steps towards mitigating the risks from such capable systems, and better aligning them with what
humans care about.
Unfortunately, our techniques also enable malicious actors to more easily train models that cause
societal harm. For instance, one could use human feedback to fine-tune a language model to be more
persuasive and manipulate humans’ beliefs, or to induce dependence of humans on the technology, or
to generate large amounts of toxic or hurtful content intended to harm specific individuals. Avoiding
these outcomes is a significant challenge for which there are few obvious solutions.
Large-scale models trained with human feedback could have significant impacts on many groups.
Thus, it is important to be careful about how we define the ‘good’ model behavior that human labelers
will reinforce. Deciding what makes a good summary is fairly straightforward, but doing this for
tasks with more complex objectives, where different humans might disagree on the correct model
behavior, will require significant care. In these cases, it is likely not appropriate to use researcher
labels as the ‘gold standard’; rather, individuals from groups impacted by the technology should be
included in the process to define ‘good’ behavior, and hired as labelers to reinforce this behavior in
the model.
We chose to train on the Reddit TL;DR dataset because the summarization task is significantly more
challenging than on CNN/DM. However, since the dataset consists of user-submitted posts with
minimal moderation, they often contain content that is offensive or reflects harmful social biases.
This means our models can generate biased or offensive summaries, as they have been trained to
summarize such content. For this reason, we recommend that the potential harms of our models be
thoroughly studied before deploying them in user-facing applications.
Finally, by improving the ability of machine learning algorithms to perform tasks that were previously
only achievable by humans, we are increasing the likelihood of many jobs being automated, potentially
leading to significant job loss. Without suitable policies targeted at mitigating the effects of large-scale
unemployment, this could also lead to significant societal harm.
Acknowledgements
We’d like to thank Beth Barnes for help with labeler hiring and general encouragement; Geoffrey
Irving for guidance on earlier iterations of the project and inspiring conversations; Ben Mann, Tom
Brown, Nick Ryder, and Melanie Subbiah for training and evaluating our pretrained models; Chris
Hesse, Eric Sigler, Benjamin Chess, Christopher Berner, Clemens Winter, Mateusz Litwin, and many
others for supporting us through computing infrastructure improvements and maintenance; Scott
Gray for writing fast GPU kernels; Arvind Neelakantan and Wojciech Kryscinski for discussions on
how to present the work, experiment design, and what datasets to use; Shan Carter for help designing
the main diagram; Douwe Kiela, Zach Lipton, and Alex Irpan for providing feedback on the paper;
and Gretchen Krueger for co-writing the model card accompanying the paper.
Finally, we’d like to thank all of our contractors for providing the data that was essential for training
the models in this paper, including: Emill Jayson Caypuno, Rachelle Froyalde, Cyra Denura, Alex
Malek, Isik Agil, Reshmi Patel, William Yap, Natalie Silver, Erol Akbaba, Jennifer Brillo, Alexandra
10
Uifalean, Morris Stuttard, Russell Bernandez, Tasmai Dave, Rachel Wallace, Jenny Fletcher, Jian
Ouyang, Justin Dill, Maria Orzek, Megan Niffenegger, William Sells, Emily Mariner, Andrew Seely,
Lychelle Ignacio, Jelena Ostojic, Nhan Tran, Purev Batdelgar, Valentina Kezic, Michelle Wilkerson,
Kelly Guerrero, Heather Scott, Sarah Mulligan, Gabriel Ricafrente, Kara Bell, Gabriel Perez, and
Alfred Lee.
References
[1]D. Bahdanau, P. Brakel, K. Xu, A. Goyal, R. Lowe, J. Pineau, A. Courville, and Y . Bengio. An
actor-critic algorithm for sequence prediction. arXiv preprint arXiv:1607.07086 , 2016.
[2]B. T. Bartell, G. W. Cottrell, and R. K. Belew. Automatic combination of multiple ranked
retrieval systems. In SIGIR’94 , pages 173–181. Springer, 1994.
[3]F. Böhm, Y . Gao, C. M. Meyer, O. Shapira, I. Dagan, and I. Gurevych. Better rewards yield
better summaries: Learning to summarise without references. arXiv preprint arXiv:1909.01214 ,
2019.
[4]T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan,
P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-V oss, G. Krueger, T. Henighan, R. Child,
A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray,
B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei.
Language models are few-shot learners. 2020.
[5]S. Cabi, S. Gómez Colmenarejo, A. Novikov, K. Konyushkova, S. Reed, R. Jeong, K. Zolna,
Y . Aytar, D. Budden, M. Vecerik, et al. Scaling data-driven robotics with reward sketching and
batch reinforcement learning. arXiv , pages arXiv–1909, 2019.
[6]A. T. Chaganty, S. Mussman, and P. Liang. The price of debiasing automatic metrics in natural
language evaluation. arXiv preprint arXiv:1807.02202 , 2018.
[7]W. S. Cho, P. Zhang, Y . Zhang, X. Li, M. Galley, C. Brockett, M. Wang, and J. Gao. Towards
coherent and cohesive long-form text generation. arXiv preprint arXiv:1811.00511 , 2018.
[8]S. Chopra, M. Auli, and A. M. Rush. Abstractive sentence summarization with attentive
recurrent neural networks. In Proceedings of the 2016 Conference of the North American
Chapter of the Association for Computational Linguistics: Human Language Technologies ,
pages 93–98, 2016.
[9]P. Christiano, B. Shlegeris, and D. Amodei. Supervising strong learners by amplifying weak
experts. arXiv preprint arXiv:1810.08575 , 2018.
[10] P. F. Christiano, J. Leike, T. Brown, M. Martic, S. Legg, and D. Amodei. Deep reinforcement
learning from human preferences. In Advances in Neural Information Processing Systems ,
pages 4299–4307, 2017.
[11] P. Covington, J. Adams, and E. Sargin. Deep neural networks for youtube recommendations. In
Proceedings of the 10th ACM conference on recommender systems , pages 191–198, 2016.
[12] A. M. Dai and Q. V . Le. Semi-supervised sequence learning. In Advances in neural information
processing systems , pages 3079–3087, 2015.
[13] J. Dodge, G. Ilharco, R. Schwartz, A. Farhadi, H. Hajishirzi, and N. Smith. Fine-tuning
pretrained language models: Weight initializations, data orders, and early stopping. arXiv
preprint arXiv:2002.06305 , 2020.
[14] L. Dong, N. Yang, W. Wang, F. Wei, X. Liu, Y . Wang, J. Gao, M. Zhou, and H.-W. Hon. Unified
language model pre-training for natural language understanding and generation. In Advances in
Neural Information Processing Systems , 2019.
[15] Y . Dong, Y . Shen, E. Crawford, H. van Hoof, and J. C. K. Cheung. Banditsum: Extractive
summarization as a contextual bandit. arXiv preprint arXiv:1809.09672 , 2018.
[16] B. Dorr, D. Zajic, and R. Schwartz. Hedge trimmer: A parse-and-trim approach to headline
generation. In Proceedings of the HLT-NAACL 03 on Text summarization workshop-Volume 5 ,
pages 1–8. Association for Computational Linguistics, 2003.
[17] S. Fidler et al. Teaching machines to describe images with natural language feedback. In
Advances in Neural Information Processing Systems , pages 5068–5078, 2017.
11
[18] N. Fuhr. Optimum polynomial retrieval functions based on the probability ranking principle.
ACM Transactions on Information Systems (TOIS) , 7(3):183–204, 1989.
[19] Y . Gao, C. M. Meyer, M. Mesgar, and I. Gurevych. Reward learning for efficient reinforcement
learning in extractive document summarisation. arXiv preprint arXiv:1907.12894 , 2019.
[20] X. Glorot and Y . Bengio. Understanding the difficulty of training deep feedforward neural
networks. In Proceedings of the thirteenth international conference on artificial intelligence
and statistics , pages 249–256, 2010.
[21] B. Hancock, A. Bordes, P.-E. Mazare, and J. Weston. Learning from dialogue after deployment:
Feed yourself, chatbot! arXiv preprint arXiv:1901.05415 , 2019.
[22] K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom.
Teaching machines to read and comprehend. In Advances in neural information processing
systems , pages 1693–1701, 2015.
[23] A. Holtzman, J. Buys, L. Du, M. Forbes, and Y . Choi. The curious case of neural text
degeneration. arXiv preprint arXiv:1904.09751 , 2019.
[24] B. Ibarz, J. Leike, T. Pohlen, G. Irving, S. Legg, and D. Amodei. Reward learning from human
preferences and demonstrations in atari. In Advances in neural information processing systems ,
pages 8011–8023, 2018.
[25] N. Jaques, A. Ghandeharioun, J. H. Shen, C. Ferguson, A. Lapedriza, N. Jones, S. Gu, and
R. Picard. Way off-policy batch deep reinforcement learning of implicit human preferences in
dialog. arXiv preprint arXiv:1907.00456 , 2019.
[26] N. Jaques, S. Gu, D. Bahdanau, J. M. Hernández-Lobato, R. E. Turner, and D. Eck. Sequence
tutor: Conservative fine-tuning of sequence generation models with kl-control. In International
Conference on Machine Learning , pages 1645–1654. PMLR, 2017.
[27] N. Jaques, S. Gu, R. E. Turner, and D. Eck. Tuning recurrent neural networks with reinforcement
learning. 2017.
[28] H. J. Jeon, S. Milli, and A. D. Dragan. Reward-rational (implicit) choice: A unifying formalism
for reward learning. arXiv preprint arXiv:2002.04833 , 2020.
[29] T. Joachims. Optimizing search engines using clickthrough data. In Proceedings of the eighth
ACM SIGKDD international conference on Knowledge discovery and data mining , pages
133–142, 2002.
[30] T. Joachims, L. Granka, B. Pan, H. Hembrooke, and G. Gay. Accurately interpreting click-
through data as implicit feedback. In ACM SIGIR Forum , volume 51, pages 4–11. Acm New
York, NY , USA, 2005.
[31] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980 , 2014.
[32] J. Kreutzer, S. Khadivi, E. Matusov, and S. Riezler. Can neural machine translation be improved
with user feedback? arXiv preprint arXiv:1804.05958 , 2018.
[33] W. Kryscinski, N. S. Keskar, B. McCann, C. Xiong, and R. Socher. Neural text summarization:
A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Nat-
ural Language Processing and the 9th International Joint Conference on Natural Language
Processing (EMNLP-IJCNLP) , pages 540–551, 2019.
[34] C. Lawrence and S. Riezler. Improving a neural semantic parser by counterfactual learning
from human bandit feedback. arXiv preprint arXiv:1805.01252 , 2018.
[35] J. Leike, D. Krueger, T. Everitt, M. Martic, V . Maini, and S. Legg. Scalable agent alignment via
reward modeling: a research direction. arXiv preprint arXiv:1811.07871 , 2018.
[36] M. Lewis, Y . Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V . Stoyanov, and
L. Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language
generation, translation, and comprehension. arXiv preprint arXiv:1910.13461 , 2019.
[37] M. Li, J. Weston, and S. Roller. Acute-eval: Improved dialogue evaluation with optimized
questions and multi-turn comparisons. arXiv preprint arXiv:1909.03087 , 2019.
[38] R. Likert. A technique for the measurement of attitudes. Archives of psychology , 1932.
12
[39] C.-Y . Lin and F. J. Och. Automatic evaluation of machine translation quality using longest
common subsequence and skip-bigram statistics. In Proceedings of the 42nd Annual Meeting on
Association for Computational Linguistics , page 605. Association for Computational Linguistics,
2004.
[40] T.-Y . Liu. Learning to rank for information retrieval . Springer Science & Business Media,
2011.
[41] J. Maynez, S. Narayan, B. Bohnet, and R. McDonald. On faithfulness and factuality in
abstractive summarization, 2020.
[42] B. McCann, N. S. Keskar, C. Xiong, and R. Socher. The natural language decathlon: Multitask
learning as question answering. arXiv preprint arXiv:1806.08730 , 2018.
[43] K. Nguyen, H. Daumé III, and J. Boyd-Graber. Reinforcement learning for bandit neural
machine translation with simulated human feedback. arXiv preprint arXiv:1707.07402 , 2017.
[44] T. Niu and M. Bansal. Polite dialogue generation without parallel data. Transactions of the
Association for Computational Linguistics , 6:373–389, 2018.
[45] R. Paulus, C. Xiong, and R. Socher. A deep reinforced model for abstractive summarization.
arXiv preprint arXiv:1705.04304 , 2017.
[46] E. Perez, S. Karamcheti, R. Fergus, J. Weston, D. Kiela, and K. Cho. Finding generalizable
evidence by learning to convince q&a models. arXiv preprint arXiv:1909.05863 , 2019.
[47] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever. Improving language under-
standing by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openai-
assets/researchcovers/languageunsupervised/language understanding paper. pdf , 2018.
[48] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are
unsupervised multitask learners. OpenAI Blog , 1(8):9, 2019.
[49] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y . Zhou, W. Li, and P. J. Liu.
Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint
arXiv:1910.10683 , 2019.
[50] M. Ranzato, S. Chopra, M. Auli, and W. Zaremba. Sequence level training with recurrent neural
networks. arXiv preprint arXiv:1511.06732 , 2015.
[51] D. R. Reddy et al. Speech understanding systems: A summary of results of the five-year
research effort. department of computer science, 1977.
[52] S. Ross, G. Gordon, and D. Bagnell. A reduction of imitation learning and structured prediction
to no-regret online learning. In Proceedings of the fourteenth international conference on
artificial intelligence and statistics , pages 627–635, 2011.
[53] S. Rothe, S. Narayan, and A. Severyn. Leveraging pre-trained checkpoints for sequence
generation tasks. Transactions of the Association for Computational Linguistics , 2020.
[54] A. M. Rush, S. Chopra, and J. Weston. A neural attention model for abstractive sentence
summarization. arXiv preprint arXiv:1509.00685 , 2015.
[55] N. Schluter. The limits of automatic summarisation according to rouge. In Proceedings of the
15th Conference of the European Chapter of the Association for Computational Linguistics:
Volume 2, Short Papers , pages 41–45, 2017.
[56] F. Schmidt. Generalization in generation: A closer look at exposure bias. arXiv preprint
arXiv:1910.00292 , 2019.
[57] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel. High-dimensional continuous
control using generalized advantage estimation. In Proceedings of the International Conference
on Learning Representations (ICLR) , 2016.
[58] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization
algorithms. arXiv preprint arXiv:1707.06347 , 2017.
[59] A. See, P. J. Liu, and C. D. Manning. Get to the point: Summarization with pointer-generator
networks. arXiv preprint arXiv:1704.04368 , 2017.
[60] K. Song, X. Tan, T. Qin, J. Lu, and T.-Y . Liu. Mass: Masked sequence to sequence pre-training
for language generation. arXiv preprint arXiv:1905.02450 , 2019.
13
[61] P. Tambwekar, M. Dhuliawala, A. Mehta, L. J. Martin, B. Harrison, and M. O. Riedl. Con-
trollable neural story generation via reinforcement learning. arXiv preprint arXiv:1809.10736 ,
2018.
[62] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and
I. Polosukhin. Attention is all you need. In Advances in neural information processing systems ,
pages 5998–6008, 2017.
[63] M. Völske, M. Potthast, S. Syed, and B. Stein. Tl; dr: Mining reddit to learn automatic
summarization. In Proceedings of the Workshop on New Frontiers in Summarization , pages
59–63, 2017.
[64] S. Welleck, I. Kulikov, S. Roller, E. Dinan, K. Cho, and J. Weston. Neural text generation with
unlikelihood training. arXiv preprint arXiv:1908.04319 , 2019.
[65] Y . Wu and B. Hu. Learning to extract coherent summary via deep reinforcement learning. In
Thirty-Second AAAI Conference on Artificial Intelligence , 2018.
[66] Y . Wu, M. Schuster, Z. Chen, Q. V . Le, M. Norouzi, W. Macherey, M. Krikun, Y . Cao, Q. Gao,
K. Macherey, et al. Google’s neural machine translation system: Bridging the gap between
human and machine translation. arXiv preprint arXiv:1609.08144 , 2016.
[67] Y . Yan, W. Qi, Y . Gong, D. Liu, N. Duan, J. Chen, R. Zhang, and M. Zhou. Prophetnet: Pre-
dicting future n-gram for sequence-to-sequence pre-training. arXiv preprint arXiv:2001.04063 ,
2020.
[68] S. Yi, R. Goel, C. Khatri, A. Cervone, T. Chung, B. Hedayatnia, A. Venkatesh, R. Gabriel,
and D. Hakkani-Tur. Towards coherent and engaging spoken dialog response generation using
automatic conversation evaluators. arXiv preprint arXiv:1904.13015 , 2019.
[69] H. Zhang, D. Duckworth, D. Ippolito, and A. Neelakantan. Trading off diversity and quality in
natural language generation. arXiv preprint arXiv:2004.10450 , 2020.
[70] J. Zhang, Y . Zhao, M. Saleh, and P. J. Liu. Pegasus: Pre-training with extracted gap-sentences
for abstractive summarization. arXiv preprint arXiv:1912.08777 , 2019.
[71] Y . Zhang, D. Li, Y . Wang, Y . Fang, and W. Xiao. Abstract text summarization with a convolu-
tional seq2seq model. Applied Sciences , 9(8):1665, 2019.
[72] W. Zhou and K. Xu. Learning to compare for better training and evaluation of open domain
natural language generation models. arXiv preprint arXiv:2002.05058 , 2020.
[73] D. M. Ziegler, N. Stiennon, J. Wu, T. B. Brown, A. Radford, D. Amodei, P. Christiano, and G. Irv-
ing. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593 ,
2019.
14 |
b7660709-4b43-4c97-92b4-e7a436d83b86 | trentmkelly/LessWrong-43k | LessWrong | GPT-3, belief, and consistency
I've seen a few people debating what GPT-3 understands, and how this compares to human understanding. I think there's an easier and more fruitful question to consider: what does it believe?
It seems like it doesn't believe anything, or alternately, it believes everything. It's a category error, like asking what a library believes, or what the Internet believes. But let's go with that metaphor for a bit, because it seems interesting to think about.
For a library, contradictions don't matter. A library can contain two books by different authors saying opposite things, and that's okay since they are just being stored. Maybe it's better to think of GPT-3 as a large, interestingly-organized memory than as an agent? But like human memory, it's lossy, and can mix up stuff from different sources, sometimes in creative ways.
How does GPT-3 resolve inconsistency? If the Internet is very consistent about something, like the words to Jabberwocky, then GPT-3 will be consistent as well. If there were two different versions of Jabberwocky that started the same and diverged at a certain point and they were equally popular in the corpus, then it would probably choose between them randomly, if you have randomization turned on at all.
Sometimes, GPT-3 can choose between beliefs based on style. Suppose that grade-school science material is written in one style and flat-earth rants are written in a different style. It wouldn't be surprising that GPT-3 would appear to have different beliefs about the shape of the earth based on which style of work it's completing. Or, if it can recognize an author's style, it might seem to have different beliefs based on which author it's pretending to be.
If GPT-3 can play chess, it's due to online consensus about how to play chess. If we had two different chess-like games using similar notation then it might get them confused, unless the context could be used to distinguish them.
If base-10 and base-8 arithmetic were equally common in the corpus |
ad4d4d45-96c1-4c90-a5b3-4d6a165b13c1 | trentmkelly/LessWrong-43k | LessWrong | Bay Area: reading, writing, moving, celebrating
The interesting things going on in my life seem either too important to write about quickly, or too informative regarding some sensitive thing or another. This is perhaps exacerbated by my lack of a serious boyfriend: if I don’t have to regularly turn whatever is happening in my head into a communicable narrative, the plot arcs in my life seem to get stranger. Or perhaps I just don’t remember the less communicable plot arcs from earlier, or perhaps one always becomes stranger over time. Anyway, here are a few mundane and inoffensive things that happened lately in my life:
* I went to a party. I It was a relief when it ended because my face hurt from smiling for so long. I don’t know if I was happy. I liked it when a guy performed his favorite poem for me.
* I read A Grief Observed. For some reason I have always felt like C.S.Lewis was presumably my friend, or my fellow traveler or peer somehow, though I suppose I hadn’t read any of his writing except Narnia as a child. I was not disappointed. Which is strange, though it feels expected. I’m not sure what I particularly wanted, but something like a sensibility that is sincere and steady yet contends with the magic and wonder of the world, whatever those are. (Do most serious people seem less trustworthy and respectable in a way, because they set aside for respectability a swathe of what seems important and in need of contending with? Maybe.) I especially liked the idea in the book of appreciating the bracing resistance one meets when one’s imaginary world contacts reality, perhaps in meeting the real version of a person one often imagines, or in finding that one’s ideas of God or love do not match the real thing. It went well among my own thoughts lately about what one’s mental relationship with reality is or should be.
* C.S.Lewis apparently insisted on writing with a dip pen forever, in spite of fountain pens and typewriters being common in his lifetime. He said not to use typewriters, because ‘…the noise will |
4468fdd1-5bbc-49df-a200-c1dc06015286 | trentmkelly/LessWrong-43k | LessWrong | What is a training "step" vs. "episode" in machine learning?
My impression is that steps and episodes are both time periods in a training process, and that these terms are somewhat common in RL. An episode is larger than a step and usually contains many steps.
Is this correct?
Some related questions:
1. Are rewards issued at the end of steps, at the end of episodes, or it could be either/both just depending on the particular training process being used?
2. Is a step the smallest unit of time in which any observable activity happens? Or could there be a lot of things going on within a single "step"?
3. Do these units of time persist in deployment, or are they only relevant during training?
I'd appreciate answers to any of these questions if you know them. (It's ok if you don't know all the answers or don't have time to include them all in a single answer.)
What's the context?
I have seen these terms come up in discussions about myopia in AI.
For example, in Evan Hubinger's post about AI safety via market making, he invokes the terms "per-step" as well as "per-episode" myopia:
> Before I talk about the importance of per-step myopia, it's worth noting that debate is fully compatible with per-episode myopia—in fact, it basically requires it. If a debater is not per-episode myopic, then it will try to maximize its reward across all debates, not just the single debate—the single episode—it's currently in. Such per-episode non-myopic agents can then become deceptively aligned, as they might choose to act deceptively during training in order to defect during deployment. Per-episode myopia, however, rules this out. Unfortunately, in my opinion, per-episode myopia seems like a very difficult condition to enforce—once your agents are running multi-step optimization algorithms, how do you tell whether that optimization passes through the episode boundary or not? Enforcing per-step myopia, on the other hand, just requires detecting the existence of multi-step optimization, rather than its extent, which seems considerably easier |
342b7470-1369-47fd-ad56-44901d230903 | trentmkelly/LessWrong-43k | LessWrong | Are COVID lab leak and market origin theories incompatible?
Epistemic status: pure speculation, want to hear what more informed people think
In public discussions of COVID origins, I observe an apparent implicit assumptions that the two leading theories - lab leak and Wuhan market origin - are incompatible, and evidence for one is automatically evidence against another.
It feels to me that there is a plausible theory for both being true - the virus was created studied in a lab, some animals were infected as part of that research, and then some low-paid employee was responsible for properly disposing of the animals, but lacking full understanding of the dangers and needing quick cash, they sold the animals to be then resold on the market. I am originally from Russia, and the level of corruption and cinicism there is such, that this would definitely be a very plausible scenario there. Is China sufficiently better than that? Am I missing some reason for why this is not among the spectrum of theories being considered?
In my mind this reconciles a lot of circumstantial evidence - avoids the need to believe that the fact that Wuhan lab was studying related topics is a pure coincidence, and also compatible with all the evidence of early cases near the market and traces of COVID in the market... |
985ad6e7-2479-4b8b-9b1c-84b612614528 | trentmkelly/LessWrong-43k | LessWrong | AskReddit: Hard Pills to Swallow
None |
0a2f0386-fe15-43e1-bf57-742dc803461a | trentmkelly/LessWrong-43k | LessWrong | The Last Year - is there an existing novel about the last year before AI doom?
Does anybody know of any kind of novel which is set in a plausible version of the world a year or so before humanity id ended by AI, at least as we currently understand the term "humanity"? Maybe not something as technical as Gwern's doom scenario, but still something that would be classified as pretty hard sci-fi and at the same time explore more the perspective of actual human characters on how living in such a world does feel like.
Even just very preliminary drafts would be very valuable I think. |
8b8a8b5f-7c9c-456c-8716-f27b38610218 | trentmkelly/LessWrong-43k | LessWrong | Learning math (repost from reddit)
This is a good starting point for generally useful math. Probability is a conspicuous omission.
http://www.reddit.com/r/math/comments/eohrr/to_everyone_who_posts_about_learning_more_math/
|
ed852102-0dc6-48ca-bb7f-a8592c35bd0c | trentmkelly/LessWrong-43k | LessWrong | Meetup : Pittsburgh: Harry Potter and the Methods of Rationality
Discussion article for the meetup : Pittsburgh: Harry Potter and the Methods of Rationality
WHEN: 18 May 2012 06:00:00PM (-0400)
WHERE: Cafe Phipps, 1 Schenley Park, Pittsburgh, PA
In time for happy hour and/or dinner time at Cafe Phipps. Phone 412-304-6258 if you can't find us. Maybe we're on the grass nearby, if it's too busy inside.
Note that the marking on the map below does not match the territory. Phipps Cafe is in fact further North, probably about where it says Phipps Cafe on the map in small, less visible letters.
Discussion article for the meetup : Pittsburgh: Harry Potter and the Methods of Rationality |
d28a5d30-9eba-41d4-9058-9d72576af85a | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Stuart Russell Human Compatible AI Roundtable with Allan Dafoe, Rob Reich, & Marietje Schaake
At the 2020 meeting of the American Political Science Association (APSA), the largest annual meeting of political scientists in the US, Stuart Russell, professor of computer science at UC Berkeley and co-author of the standard textbook on AI, "Artificial Intelligence: A Modern Approach", discussed his recent book, "Human Compatible" , with three critics:
Allan Dafoe, Director of the Centre for the Governance of AI at FHI, and Associate Professor in the International Politics of AI at Oxford University
Rob Reich, Director of the McCoy Family Center for Ethics in Society, and Professor of Political Science at Stanford University
Marietje Schaake, International Policy Director at the Cyber Policy Center at Stanford University, and former Member of the European Parliament
The YouTube description provides time stamps for important transitions within the video. During the roundtable, Russell argues that to make AI compatible with human existence, we must abandon the notion of AIs acting on fixed objective functions in favor of AIs (having uncertainty about their objective functions) consulting humans to update their objective functions. The critics then provide their responses to Russell's thesis, which is followed by a general Q&A. |
0ff5e611-b258-482a-8f56-a9dfeec2f7bb | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Scaling Laws for Reward Model Overoptimization
**TL;DR**: Reward model (RM) overoptimization in a synthetic-reward setting can be modelled surprisingly well by simple functional forms. The coefficients also scale smoothly with scale. We draw some initial correspondences between the terms of the functional forms and the Goodhart Taxonomy. We suspect there may be deeper theoretical reasons behind these functional forms, and hope that our work leads to a better understanding of overoptimization.
Some other results:
* We compare two different methods of optimization (RL, best-of-n); RL consumes much more KL distance than best-of-n for the same amount of optimization.
* We show that using KL distance between the initial and optimized policies is not a reliable measure of optimization power when comparing different methods. We also find that penalizing based on the KL distance in RL does not change the KL distance--gold reward frontier in our setting.
* We find some very preliminary evidence that at our scales, scaling the policy does not substantially increase the amount of optimization pressure placed on the RM. Further study of this effect could be relevant to some models of inner optimization.
* With a few additional assumptions, our functional form also makes some predictions about iterated RLHF (that it will reduce Extremal Goodhart but not Regressional Goodhart).
* This setup only captures the effect of overoptimizing a learned RM relative to using the ground truth directly. In particular, this setup does not directly capture any mismatch between the ground truth labels and the human intent, which plausibly contains a majority of the difficulty of outer alignment
*If you're interested in the intersection of alignment theory and empirical research, we're hiring! We want to gain insight on things like Goodhart's Law, ELK, and inner alignment via experiments on large language models. Shoot me (leogao) a DM if you're interested.*
### Select figures from the paper
 |
39435e2f-66d5-4567-8705-f2e74db9c806 | trentmkelly/LessWrong-43k | LessWrong | Meetup : San Francisco Meetup: Board Games
Discussion article for the meetup : San Francisco Meetup: Board Games
WHEN: 13 July 2015 06:15:00PM (-0400)
WHERE: 1390 Market St. Apt 2115, San Francisco, CA
(I'm on the east coast right now and the website is confused: The meetup is at 18:15 PDT)
We'll be meeting to hang out and play board games at my apartment (our normal place doesn't reliably have tables). Please bring your own games: I have Dominion, Ticket to Ride, and Avalon. You can call 301-458-0764 to be let in.
Discussion article for the meetup : San Francisco Meetup: Board Games |
5a515eaf-6f60-4919-b787-bde18e093219 | trentmkelly/LessWrong-43k | LessWrong | Anybody know some good Modafinil suppliers?
I've been interested in purchasing some Modafinil but I don't know any trustworthy sources. Any suggestions? |
48c13e0e-27f0-4375-88c3-2a0ad76951fb | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | GPT 4 Got Upgraded - Code Interpreter (ft. Image Editing, MP4s, 3D Plots, Data Analytics and more!)
I just got access to the code
interpreter plugin about 48 hours ago
and I've been running experiments on it
non-stop since then I've come up with
about 18 examples to show you guys its
power most of them I reckon haven't been
seen before I predict many Industries
will have to update overnight when it's
released more widely and at the end of
the video please let me know what you
think and what other experiments that we
can try first though what about this one
a 3D surface plot just quickly the way
it works is you click this little button
to the left of the text box and then you
can upload many different file types
like CSV files Word files images and
even short videos then it will
automatically analyze the file type
without you pressing anything and then
of course you give it a prompt and as
with all of chatbt it becomes a
conversation so the first 3D surface
plot was decent but it was too small so
I simply said in natural language can
you make it four times bigger thank you
and of course you have seen the amazing
end result even with the lighting look
at the shadow is there I believe this is
based on a real contour map of a volcano
in New Zealand and I could do a whole
video just on this but I have 17 other
examples to get to but this one was
truly amazing did you know for example
it can generate QR codes I said create a
QR code that I can scan with my phone to
reach the following URL and lo and
behold it creates it and yes it does
work maybe I'm easily impressed but I
think that's pretty amazing and what
about a 3D scatter plot this is truly
remarkable I uploaded the data from
gapminder and it created this chart
based on the median age of over a
hundred countries from 1950 I think
projected to 2100 and I asked highlight
the UK this is indeed the UK's median
age through those years in red but I
know what you might be thinking that is
amazing that it's 3D and interactive but
the blue kind of merges and it's hard to
see what's going on I engage in a
conversation and look what it created it
picked out the 30 most populous
countries and separated them off with
separate colors look at that that is
gorgeous
now you might have the critique that the
median age is in descending order in the
y-axis going from 20 down to 60 so in a
sense the median age is actually Rising
not falling but nevertheless that's
easily amendable and that is truly an
incredible diagram and look just for fun
I'm going to go into the data look at
this I'm traveling into the data this is
so wild I don't know how helpful it is
but I think that's just beautiful and
crazy there are so many Industries data
analytics accounting consultancy that
this will affect by the way it got all
of this done in about a minute I see a
lot of people online talking about five
seconds later it is no way done in five
seconds you have to wait 30 seconds a
minute sometimes much longer before I
move on I want to give you a killer tip
that it took me quite a while to work
out so when you get access try to
remember this say output the
visualization as a downloadable file if
you don't add that phrase as a
downloadable file what will happen is it
often gets stuck at this stage of the
code it'll either say fig.show or
plot.show and then just stop I found
that I encountered this problem far less
often if I said output a downloadable
file next did you know that code
interpreter can do optical character
recognition I screenshotted this text
from a New York Times article I think it
was and I asked OCR the text in this
image and write a poem in Danish about
it now I don't want to exaggerate it
often gets OCR wrong I don't want to get
your hopes up it fails more often than
it succeeds but when it works it can do
it understood the text and then did a
poem in Danish about the text now I'm
going to need a Danish speaker to tell
me if that was a good poem but either
way it could do it how about this one it
can do interactive time series with
range sliders and selectors I uploaded a
CSV file on life expectancy data from
the entire world and I just said can you
pick out the U.S UK and India and create
a Time series with range slider and
selectors again that killer phrase
output a downloadable file and here is
what it came up with notice how the life
expectancy for all three countries Rises
during the 20th century and look how I
can select down here interactively a
range of the data and even by clicking
up here a 10-year interval or 50-year
interval but here's the crazy thing I
did nothing I just uploaded the file
there were hundreds of countries in
there you can see here all the steps
that it did and if you click on the
Arrow you get to see the actual code
then it goes through shows its
explanation and eventually gives you a
link that you can simply click and get
the file downloaded and if you weren't
that impressed already here's where it
gets fairly game changing you can get it
to do the data analytics not just the
visualizations for example I said find
five unexpected non-obvious insights
from this data and offer plausible
explanations for them this was bad to
the median age data for the most
interesting observation provide a
compelling and clear visualization now
ignore the first diagram which wasn't
that good because of the x-axis but look
at the insights this is data analytics
you can see here that the original file
was called median age years and it was
just a table of data no analysis
whatsoever but look what gpt4 picked out
in site one the global median age has
been steadily increasing over time it
calculated the global median age that
wasn't included in the data it was just
country data and it says it's gone from
around 22 years to over 38 years in 2023
and it's projected to continue rising to
approximately 44 years by 2100 and then
it offers a cogent explanation this
trend is likely due to a combination of
increasing life expectancy and
decreasing fertility rates worldwide as
Medical Technology improves more people
are living longer birth rates are
declining particularly in developed
regions is pick this all out and then it
moves on to the the next Insight the
countries that have seen the most
significant increases in median age are
these ones and again it gives an
explanation as to why their median age
might have risen more than any other for
example Albania has seen significant
emigration of younger people which could
also lead to an older median age is it
me or is that kind of crazy that it
crunched all the data visualized it but
then also gave really interesting
analyzes of the data now you can read
the other analyzes but each of them are
really interesting and the final
visualization which I asked for is
brilliant I think notice how the graph
goes from green to red when you get to
the Future projection I didn't ask it to
do that now obviously in this video I'm
going to focus on the flashy visuals and
the cool little tricks it can do but in
terms of data analytics that is what is
going to change jobs change Industries
and remember this is code interpreter
Alpha version one look at the difference
between mid-journey version one and now
mid Journey version 5 a year later how
about basic video editing now there is a
limit to what it can do but it can do
some basic video editing if you ask it
for example I uploaded a short file and
asked it to rotate the file 180 degrees
and it was able to do it now I'm not
saying that is massively useful but it
was able to do it here is a similar
example I uploaded an image file and
then said can you zoom out from the
center of the image now initially it did
zoom in but then I clarified that I
wanted it to zoom out from the center
just to be cheeky I also asked can you
make it black and white oh and I also
asked to add music but it couldn't add
music anyway here is the end result by
the way it gave it to me as an mp4 file
and look it zooms out from the center
and it's made the image black and white
now because I got access so recently I
honestly haven't explored the limits of
what kind of video editing I can do with
chat GPT code interpreter but I will let
you know when I can now back to
visualizations I gave it a hypothetical
scenario that sounds kind of realistic I
sent 231 CVS got 32 responses 12 phone
interviews three follow-up face-to-face
interviews and one job offer which I
rejected I'll put a downloadable Sankey
diagram of this data I did then get it
to change the coloring slightly but I
think that's a pretty cool Sankey
diagram look sent CVS 231 and then
receive responses and you can go down 32
phone interviews 12 face-to-face
interviews and three job offers and one
rejected offer obviously I could have
tweaked that for hours make it more
visual make it more interactive maybe
make a gif of it but for two minutes
work I think that's a pretty interesting
and incredible output next and here is
one that you might say is a little bit
concerning and it's about steganography
now I will admit I am not at all an
expert in fact I know virtually nothing
about it essentially what it involves
though is hiding a message inside an
image or in inside some code and gpt4
was more than willing to play along and
it encoded a secret message into an
image there was the image by the way and
if you looked at that you'd think that's
totally normal that's just a silly
little image right well apparently
here's what it can do to a casual
Observer it looks like a simple image
with some shapes but it actually
contains the hidden message hello world
then it provided a python function which
can be used to decode the message from
the image now obviously this is just a
silly example that is totally harmless
but am I being crazy in thinking this is
a somewhat concerning ability for future
language models to possess especially
when they reach the level of an AGI
often openai talk about future versions
of GPT doing scientific research and
finding things that humans wouldn't have
discovered but let me pose the scenario
that it gets better than any human
expert at steganography anyway enough
from me I'll let the experts weigh in on
that one next did you know that gpt4
with code interpreter can do to text to
speech just before anyone comments
though why did I write proceed without
further question because GPT 4 with code
interpreter has a tendency to always ask
clarifying questions and if you have
access to only 25 messages every three
hours you don't want to use up half or
more of them on clarifying what it wants
to do or saying yes please do that but I
found writing proceed without further
question means it gets straight to it
and essentially you get double the
number of prompts for your money anyway
as you can see I asked turn this entire
prompt starting from the beginning into
a text speech file now quite a few times
it denied it had the ability to do this
but eventually I got it to work it was
actually when I finally gave it this
prompt and it worked I say it worked but
it didn't quite work as intended check
it out here is the text-to-speech that
it came up with a large language model
trained by open AI when you send a
message containing python code to python
it will be executed in a stateful device
a notebook environment python will
respond with the output of the execution
or timeout after 120.0 seconds internet
access for this session is disabled do
not make external web requests or API
calls as they will fail now thank you
Stephen Hawking for that message the
only thing is it had nothing to do with
my original prompt now anyway when you
get access to code interpreter play
about with text-to-speech because it is
able to do it even if it denies it time
for a fun one I asked create a tree map
of the letters in the following quote
and I'm not going to read it out because
I am not good at tongue twisters anyway
I said give each part of the tree map a
different color and output a
downloadable file proceed without
further question and here is the output
and I checked it for the letter P and it
was correct that there were 36 instances
of the letter P in the output and look
how it's proportional with the number of
instances of the letter and the size of
each rectangle I think that is pretty
insane okay back to something more
serious I uploaded this file which is an
image of a math problem quite a hard one
as well and you guessed it I said solve
the math problem in this image it then
extracted the text from the image
presumably using OCR and then proceeded
to solve it and I'm going to get onto
this in a second it is better at math
than Wolfram Alpha I know that's a big
claim but it's far less buggy I found
Wolfram Alpha crashing very frequently
anyway here are the two solutions and
isn't that incredible from a photo
essentially it then extracts out the
math problem including the two square
roots and then solves it this is all
within the same window of chapter no
need for any other apps or extensions
next it can do radial bar plots which I
think are really quite beautiful I'm not
saying this is the best one ever and I'm
sure you could tweak it to make it more
clear and beautiful look at that the
life expectancy in the US climbing from
1800 and then it goes clockwise reaching
a projected almost 90 by 2100 again I'm
sure you could do a far better job than
me in extracting out a more beautiful
diagram but aren't radial bar plots just
beautiful to look at speaking of cool
diagrams how about this I didn't even
specify which visualization to do I
uploaded this same life expectancy data
and I just said what are the most
advanced and Technical visualizations
you can do with this data proceed to do
them now honestly it picks some
visualizations that I don't think are
the most advanced but nevertheless it
was creative here is what it did it does
frequently make the mistake of
cluttering the axes and having far too
many labels so that you can't see
anything so scrub that one out not great
but what about the next few remember it
just did this on its own this is a heat
map and you can see some really
interesting things from this data like
India starting with a much lower life
expectancy than anyone else but
gradually Rising but still falling
behind the others even in 2100 and look
at China look how the life expectancy
drops in the 60s and 70s I think we all
know what happened there compare that to
the US which is a gradual continual
Ascent actually aside from 22 20. look
how the shade gets a little darker in
2020. obviously you guys can probably
work out what happened around then but
then the projections are for it to go up
toward 90 by 2100 that's a beautiful and
clear heat map that I didn't even ask
for it to do let's look at the next one
box plot do you remember those from
school you get the upper end of the data
the highest one the lowest one the
median the first quartile and third
quartile and it's a great way of
statistically representing a set of data
and it's done it for every 50th year
starting in 1900. obviously a slightly
less beautiful diagram than some of the
ones you've seen today but for the
statisticians in the audience you will
know that this is a very useful metric
for a lot of data the individual points
above and below are typically when there
are outliers in the data I would
estimate that all of these
visualizations only took around two two
and a half minutes so definitely not the
10 seconds as I said that you often see
on Twitter I mean have you ever seeing
gpt4 give an answer in less than 10
seconds speaking of useful I think many
professionals will find the next thing
that I'm about to showcase the most
useful of all any insights that Gypsy 4
finds Trends medians analyzes whatever
you can ask it to add to the original
file and then download it do you
remember that the original file was
called median age years well notice this
file name median age years with insights
it has created a downloadable new file
with the insights included and look at
some of the insights that I mean you
have the change from 1950 to 2100 and
here is the average median age
throughout the period and the change
from 2023 to 2100 notice that the
original file didn't have those columns
they were added by gpc4 with code
interpreter and now how about data
progression video files I was honestly
shocked when I saw that it could do this
but I asked can you make a 256 by 256
MP4 that gradually reveals the lines as
they progress on the x-axis this was
about the median age over time here is
what it did and look at how the data and
the chart progresses as time moves along
I was really shocked to see this and the
line in red which is going to be labeled
at the end is the global median age and
remember it calculated that that wasn't
in the original file now I'm not sure
why it picked out these four countries
maybe because they represent extremes
either way I think the result is
phenomenal and I'm genuinely impressed
that it did this even though I know the
final result could be improved
dramatically for example far higher
resolution and maybe the global median
age labeled from the start and actually
now that it's got to the end I can see
why it did pick out these countries
because Niger did have the lowest median
age in 2100 and it looks like Puerto
Rico had the highest and the fastest
aging one was Albania next and this this
is going to shock quite a few people
what about image editing I created this
image in mid-journey version 5 and then
here's what I asked I said use opencv to
select the foreground of this image and
look what it did it picked out the
foreground no Blue Sky now I know it's
not perfect but it's nevertheless
impressive all within the window of
chapter BT this does actually make me
wonder if open Ai and chat to BT is
eventually not now but in a few years
gonna swallow all other apps or maybe
Google's Gemini but either way one
interface one website one app doing the
job of all others and by the way of
course chapter BT is now available on
iOS but imagine you have one app and it
can do image editing text-to-speech
video editing everything data analysis
not add gpt4 levels but GPT 6 or gbt 7
levels if you can get every piece of
information service and application in
one interface a bit like now people
being addicted to their smartphones
won't people be a addicted to this one
interface again that's not going to
happen now but I'm just posing it as a
question to think over for the moment
though before anyone gets too carried
away it does still hallucinate quite a
lot so I uploaded this image and I asked
it questions about it and it answered
and I was like wow it can do image
recognition it said this image appears
to be a digital painting of a humanoid
figure at a desk with a rather complex
background I was initially amazed until
I realized that it probably got that
from the file name because when I asked
it questions it got it wrong so I said
what is on the desk now look back
there's this weird kind of microphone
and a bit of paper and not much else a
keyboard and look what it said there are
multiple floating holographic displays
okay a mouse not really a desk lamp I
can't see that and then tools and
devices now correct me if I'm wrong but
I think most of those are incorrect now
obviously I need to do far more
experiments to see if it actually can
recognize any particular images and
maybe I'm putting it down too harshly
but at the moment it does seem to
hallucinate if you ask it about too much
of the detail of an image next remember
how one of the key weaknesses of GT4 is
that it can't really count things
especially not characters words Etc and
even more so it can't do division and
some of you might be thinking well with
Wolfram Alpha it can do those things not
quite here is an example of the code
interpreter plugin essentially eating
Wolfram Alpha obviating it making it not
obvious what the utility of it is if
you've got code interpreter I asked
divide the number of the letter e's in
this prompt by the number of the letter
T's now you might think code interpreter
can improve things by doing the
character counting but it can also do
the division notice how it counted the
characters correctly compared to Wolfram
Alpha and of course got the division
correct as well so if it can do Advanced
quadratics and do division and character
counting Etc it does beg the question
what would we use Wolfram Alpha for that
we can't use code interpreter for I
honestly might not know something that
you guys know so do let me know in the
comments it also also got this math
question correct and notice you get
these beautiful map visuals that you
don't get with the base version of gpd4
you get something more like this where
the visuals aren't as clear and notice
the base version of GT4 gets the
question wrong it can't do division but
with code interpreter it gets the
question right next one is a quick one
pie charts nothing too special but I
think it is a fairly beautiful
visualization it doesn't seem to matter
how big the CSV file is that you upload
this next example was really quite
fascinating it was a word puzzle I have
tried this particular word puzzle on
gpt4 dozens of times the reason I picked
this puzzle is called a Word ladder is
because it really struggles with the
puzzle if the number of steps required
is more than a certain number usually
about five or six steps it gave me a
really interesting border of the limits
of gt4's planning abilities with
language anyway it always gets it wrong
here is a demonstration with the base
model of gypsy 4. you might say why is
this wrong but look at how it's changed
change from Seas to sags which is more
than one letter change and that's
typical of the kind of Errors it makes
what about with code interpreter well
you can probably guess the ending given
that I featured it in the video but it
gets it right I believe it draws Upon A
hard-coded word set and this does Point
towards the kind of puzzles that I think
gpc4 with code interpreter will be able
to solve things like crosswords and
sudokus okay not exactly world changing
but nevertheless I think quite
fascinating and how about Venn diagrams
the reason I picked this example is that
I had to go through about 10 steps to
get it to create this rather basic
three-way Venn diagram this represents
the overlap between dogs Ai and desks
and apparently all of them are loyal
companions well we will see about that
but anyway it took quite a few steps to
get it right which was pretty annoying
but here's the really interesting thing
once I got it set up in the way that I
like all I had to do was say use the
format above to create a new three-way
Venn diagram this time for Mango's Movie
heroes and marmosets try to make each
entry funny and use different colors
proceed without further questions so it
may have been a struggle to set up
initially but once done it was so easy
to iterate a new three-way Venn diagram
and actually it was better than the
original apparently all three are adored
by fans worldwide apparently only
marmosets and Movie heroes can climb up
trees really fast and mangoes and
marmosets can hang upside down that's
crazy one or two prompts iterating on a
design already agreed upon this is
honestly what is likely to happen in the
future with people spending hours to
find the perfect data visualization or
piece of data analysis and then just
hitting copy paste for all their other
files perfect it once and then it does
the rest for you a quick couple of bonus
ones before I finish you can just ask it
to come up with a visualization giving
it no direction at all it came up with a
distribution of prime numbers up to ten
thousand thing is I believe there's a
slight mistake at the beginning because
I think there's only 25 in the first 100
and 21 in the next 100. so you probably
do want to still check the outputs that
code interpreter gives you and that's
another reason it's not going to
instantly replace all data analysis and
data visualization it's not perfect and
it's not fully reliable but you've got
to look ahead to where things are going
I'm going to end where I started with
this insane 3D surface map of a volcano
if this is what gpd4 can do now with the
Alpha version of code interpreter what
will GPC 5 or 6 do with version 7 or 20
of code interpreter I was about to
speculate about that but then I got
distracted with trying to get inside
this volcano it is kind of fun look I'm
going above and into the volcano let me
know what you will try when you get
access I know they're rolling out
steadily and I know that some people
will have had access to it for about
three weeks so hopefully if you want to
experiment with it you will be able to
soon in the meantime do let me know if
you have any ideas that you want me to
experiment with and thank you so much
for watching all the way to the end have
a wonderful day |
4f6b45ce-54e8-4d79-85e7-e8f95a0caffa | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | [Linkpost] Some high-level thoughts on the DeepMind alignment team's strategy
***Update:** The original title "DeepMind alignment team's strategy" was poorly chosen. Some readers seem to have interpreted the previous title as meaning that this was everything that we had thought about or wanted to say about an "alignment plan", which is an unfortunate misunderstanding. We simply meant to share slides that gave a high-level outline of how we were thinking about our alignment plan, in the interest of partial communication rather than no communication.*
I recently gave a talk about the DeepMind alignment team's strategy at the SERI MATS seminar, sharing the slides here for anyone interested. This is an overview of our threat models, our high-level current plan, and how current projects fit into this plan.
**Disclaimer:** This talk represents the views of the alignment team and is not officially endorsed by DeepMind. This is a work in progress and is not intended to be a detailed or complete plan.
Let's start with our threat model for alignment -- how we expect AGI development to go and the main sources of risk.
**Development model.** We expect that AGI will likely arise in the form of scaled up foundation models fine tuned with RLHF, and that there are not many more fundamental innovations needed for AGI (though probably still a few). We also expect that the AGI systems we build will plausibly exhibit the following properties:
* **Goal-directedness.** This means that the system generalizes to behave coherently towards a goal in new situations (though we don't expect that it would necessarily generalize to all situations or become a expected utility maximizer).
* **Situational awareness.** We expect that at some point an AGI system would develop a coherent understanding of its place in the world, e.g. knowing that it is running on a computer and being trained by human designers.
**Risk model.** Here is an overall picture from our recent post on [Clarifying AI X-risk](https://www.lesswrong.com/posts/GctJD5oCDRxCspEaZ/clarifying-ai-x-risk):
We consider possible technical causes of the risk, which are either specification gaming (SG) or goal misgeneralization (GMG), and the path that leads to existential risk, either through the interaction of multiple systems or through a misaligned power-seeking system.
Various threat models in alignment focus on different parts of this picture. Our particular threat model is focused on how the combination of SG and GMG can lead to misaligned power-seeking, so it is in the highlighted cluster above.
Conditional on AI existential risk happening, here is our **most likely scenario** for how it would occur (though we are uncertain about how likely this scenario is in absolute terms):
* The main source of risk is a mix of specification gaming and (a bit more from) goal misgeneralization.
* A misaligned consequentialist arises and seeks power. We expect this would arise mainly during RLHF rather than in the pretrained foundation model, because RLHF tends to make models more goal-directed, and the fine-tuning tasks benefit more from consequentialist planning.
* This is not detected because deceptive alignment occurs (as a consequence of power-seeking), and because interpretability is hard.
* Relevant decision-makers may not understand in time that this happening, if there is an inadequate societal response to warning shots for model properties like consequentialist planning, situational awareness and deceptive alignment.
We can connect this threat model to our views on MIRI's [arguments for AGI ruin](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities).
* Some things we agree with: we generally expect that capabilities easily generalize out of desired scope (#8) and possibly further than alignment (#21), inner alignment is a major issue and outer alignment is not enough (#16), and corrigibility is anti-natural (#23).
* Some disagreements: we don't think it's impossible to cooperate to avoid or slow down AGI (#4), or that a "pivotal act" is necessary (#6), though we agree that it's necessary to end the acute risk period in some way. We don't think corrigibility is unsolvable (#24), and we think interpretability is possible though probably very hard (section B3). We expect some tradeoff between powerful and understandable systems (#30) but not a fundamental obstacle.
Note that this is a bit different from the summary of team opinions in our [AGI ruin survey](https://www.lesswrong.com/posts/qJgz2YapqpFEDTLKn/deepmind-alignment-team-opinions-on-agi-ruin-arguments). The above summary is from the perspective of our alignment plan, rather than the average person on the team who filled out the survey.
**Our approach.** Our high level approach to alignment is to try to direct the training process towards aligned AI and away from misaligned AI. To illustrate this, imagine we have a space of possible models, where the red areas consist of misaligned models that are highly competent and cause catastrophic harm, and the blue areas consist of aligned models that are highly competent and don't cause catastrophic harm. The training process moves through this space and by default ends up in a red area consisting of misaligned models. We aim to identify some key point on this path, for example a point where deception was rewarded, and apply some alignment technique that directs the training process to a blue area of aligned models instead.
We can [break down](https://vkrakovna.wordpress.com/2022/06/02/paradigms-of-ai-alignment-components-and-enablers/) our high-level approach into work on alignment components, which focuses on building different elements of an aligned system, and alignment enablers, which make it easier to get the alignment components right.
**Components:** build aligned models
* Outer alignment
+ Scalable oversight (Sparrow, debate)
+ Process-based feedback
* Inner alignment
+ Mitigating goal misgeneralization
+ Red-teaming
**Enablers:**detect models with dangerous properties
* Detect misaligned reasoning
* Looking at internal reasoning (mechanistic interpretability)
* Cross-examination (and consistency checks more generally)
* Detect capability transitions
+ Capability evaluations
+ Predicting phase transitions (e.g. grokking)
* Detect goal-directedness
**Teams and projects.** Now we'll briefly review what we are working on now and how that fits into the plan. The most relevant teams are Scalable Alignment, Alignment, and Strategy & Governance. I would say that Scalable Alignment is mostly working on components and the other two teams are mostly working on enablers. Note that this doesn't include everyone doing relevant work at DeepMind.
**Scalable alignment**(led by Geoffrey Irving):
* Sparrow
+ Paper: [Improving alignment of dialogue agents via targeted human judgements](https://arxiv.org/abs/2209.14375)
* Process-based feedback
+ Paper: [Solving math word problems with process- and outcome-based feedback](https://arxiv.org/abs/2211.14275)
* Red-teaming
+ Paper: [Red Teaming Language Models with Language Models](https://arxiv.org/abs/2202.03286)
**Alignment**(led by Rohin Shah):
* Capability evaluations (led by Mary Phuong, in collaboration with other labs)
* Mechanistic interpretability (led by Vladimir Mikulik)
+ Paper: [Tracr: Compiled Transformers as a Laboratory for Interpretability](https://arxiv.org/abs/2301.05062)
* Goal misgeneralization (led by Rohin Shah)
+ Paper: [Goal Misgeneralization: Why Correct Specifications Aren't Enough For Correct Goals](https://arxiv.org/abs/2210.01790)
* Causal alignment (led by Tom Everitt)
+ Paper: [Discovering Agents](https://arxiv.org/abs/2208.08345)
* Internal outreach (led by Victoria Krakovna)
**Strategy & Governance**(led by Allan Dafoe):
* Capability evaluations
* Institutional engagement / internal outreach
* (Lots of other things)
**Relative to OpenAI's plan.** Our plan is similar to [OpenAI's approach](https://openai.com/blog/our-approach-to-alignment-research) in terms of components -- we are also doing scalable oversight based on RLHF. We are less confident in components working by default, and are relying more on enablers such as mechanistic interpretability and capability evaluations.
A major part of OpenAI's plan is to use large language models and other AI tools for alignment research. This a less prominent part of our plan, and we mostly count on those tools being produced outside of our alignment teams (either by capabilities teams or external alignment researchers).
**General hopes.** Our plan is based on some general hopes:
* The most harmful outcomes happen when the AI "knows" it is doing something that we don’t want, so mitigations can be targeted at this case.
* Our techniques don’t have to stand up to misaligned superintelligences -- the hope is that they make a difference while the training process is in the gray area, not after it has reached the red area.
* In terms of directing the training process, the game is skewed in our favour: we can restart the search, examine and change the model's beliefs and goals using interpretability techniques, choose exactly what data the model sees, etc.
* Interpretability is hard but not impossible.
* We can train against our alignment techniques and get evidence on whether the AI systems deceive our techniques. If we get evidence that they are likely to do that, we can use this to create demonstrations of bad behavior for decision-makers.
Overall, while alignment is a difficult problem, we think there are some reasons for optimism.
**Takeaways.** Our main threat model is basically a combination of SG and GMG leading to misaligned power-seeking. Our high-level approach is trying to direct the training process towards aligned AI and away from misaligned AI. There is a lot of alignment work going on at DeepMind, with particularly big bets on scalable oversight, mechanistic interpretability and capability evaluations. |
5c18406c-de21-412d-8997-cf7abfc57b71 | trentmkelly/LessWrong-43k | LessWrong | Why Not Just... Build Weak AI Tools For AI Alignment Research?
“Weak” cognitive tools are clearly a thing, and are useful. Google search is a fine example. There are plenty of flavors of “weak AI” which are potentially helpful for alignment research in a similar way to google search.
In principle, I think there’s room for reasonably-large boosts to alignment research from such tools[1]. Alas, the very large majority of people who I hear intend to build such tools do not have the right skills/background to do so (at least not for the high-value versions of the tools). Worse, I expect that most people who aim to build such tools are trying to avoid the sort of work they would need to do to build the relevant skills/background.
Analogy: A Startup Founder’s Domain Expertise (Or Lack Thereof)
Imagine a startup building tools meant to help biologists during their day-to-day work in the wetlab. I expect domain expertise to matter a lot here: I would guess that if none of the founders have ample personal experience doing research work in a wetlab, the chance of this startup building an actually-highly-useful wetlab product drops by about an order of magnitude. Our hypothetical startup might still “succeed” some other way, e.g. by pivoting to something else, or by being good at pitching their shitty product to managers who make purchasing decisions without actually using the product, or by building something very marginally useful and pricing it very cheaply. But their chance of building a wetlab product which actually provides a lot of value is pretty slim.
One might reply: but couldn’t hypothetical founders without domain experience do things to improve their chances? For instance, they could do a bunch of user studies on biologists working in wetlabs, and they could deploy the whole arsenal of UX study techniques intended to distinguish things-users-say-matter from things-which-actually-matter-to-users.
… and my response is that I was already assuming our hypothetical founders do that sort of thing. If the founders don’t have mu |
a67c676b-e86b-497e-9f45-0e4f488cb9a8 | trentmkelly/LessWrong-43k | LessWrong | Sunzi's《Methods of War》- Introduction
This is a translation of the Chapter 1 of The Art of War by Sunzi. No English sources were used. The original text and many of the interpretations herein come from 古诗文网.
----------------------------------------
> 孙子曰:兵者,国之大事,死生之地,存亡之道,不可不察也。
War determines life and death of troops, existence and destruction of a country. It cannot be ignored.
> 故经之以五事,校之以计,而索其情:一曰道,二曰天,三曰地,四曰将、五曰法。
Five aspects are of paramount important:
1. Dao
2. Heaven
3. Earth
4. Generalship
5. Method
> 道者,令民与上同意也,故可以与之死,可以与之生,而不畏危。
"Dao" concerns alignment. Your side must be unified. By dying together, living together, you shall be unafraid.
> 天者,阴阳,寒暑、时制也。
"Heaven" concerns timing, yin and yang, winter and summer.
> 地者,远近、险易、广狭、死生也。
"Earth" concerns the near and far, impassable and passable, open fields and choke points, death and life.
> 将者,智、信、仁、勇、严也。
"Generalship" is a matter of wisdom, fidelity, benevolence, bravery and severity.
> 法者,曲制、官道、主用也。
"Method" concerns tactics, doctrine and organization.
> 凡此五者,将莫不闻,知之者胜,不知者不胜。
A commander must not ignore these five aspects. Understanding them brings victory. Lack of understanding does not bring victory.
> 故校之以计,而索其情,曰:主孰有道?将孰有能?天地孰得?法令孰行?兵众孰强?士卒孰练?赏罚孰明?
Ask yourself: Are ruler and subjects aligned? Is the general capable? Heaven (climate) and Earth (geography) in your favor? Methods effective? Troops strong? Trained? Enlightenedly punished?
> 吾以此知胜负矣。将听吾计,用之必胜,留之;将不听吾计,用之必败,去之。计利以听,乃为之势,以佐其外。势者,因利而制权也。
These things determine victory and defeat.
> 兵者,诡道也。故能而示之不能,用而示之不用,近而示之远,远而示之近;利而诱之,乱而取之,实而备之,强而避之,怒而挠之,卑而骄之,佚而劳之,亲而离之。攻其无备,出其不意。此兵家之胜,不可先传也。
The art of war depends on local conditions. The near informs you about the far. The far informs you about the near.
* If the enemy is clever then tempt.
* If the enemy is disordered then raid.
* If the enemy is capable then prepare.
* If the enemy is mighty then run.
* If the enemy is angry then provoke.
* If the enemy is inferior then threaten.
* If the enemy is dissolu |
326f8596-bb9c-4537-b1d0-c4f740d542b5 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | A simple model of the Löbstacle
.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
The idea of the Löbstacle is that basic trust in yourself and your successors is necessary but tricky: necessary, because naively modeling your successor's decisions cannot rule out them making a bad decision, unless they are in some sense less intelligent than you; tricky, because the strongest patches of this problem lead to inconsistency, and weaker patches can lead to indefinite procrastination (because you always trust your successors to do the thing you are now putting off). (For a less handwavy explanation, see the technical agenda document on [Vingean reflection](https://intelligence.org/files/VingeanReflection.pdf).)
It is difficult to specify the circumstances under which this kind of self-trust succeeds or fails. Here is one simple example in which it can succeed, but for rather fragile reasons.
---
We will consider a sequential decision problem, where an agent's payoff can depend on the actions of "later" agents in "later" universes. Even in the case where the identities of the later agents are known in advance and correspond closely to the current agent, the current agent can have difficulties in trusting the later ones.
Each universe is defined with respect to the output of the current agent, and to the output of the next universe on the next agent:
* def Un(a):
+ if a=`Quit': return 0
+ else if a=`Explode': return -10
+ else return 2−n+Un+1(An+1())
Note that if An's successors continue for a while but then explode, then An should quit rather than continue; but if none of An's successors explode, then An should continue. So ideally, a sensible sequence of agents An should continue indefinitely in this problem.
We will consider a sequence of agents inspired by the [descending search order version of modal UDT](http://agentfoundations.org/item?id=50):
* def An():
+ for (a, u) in {`Continue',`Quit',`Explode'}×Utilities, sorted in decreasing order of u:
- if PA⊢Un(a)≥u: return a
+ if still here: return some default action
Here, Utilities is a finite set of possible utilities, and we may choose it differently depending on n.
In the language of the Vingean reflection paper, we have replaced the suggester-verifier architecture with a known order of proof searches (which may or may not succeed). We are also using a sequence of agents that use the same formal system, rather than a descending chain of formal systems in which each can prove the next one consistent. So in order to achieve self-trust, these agents will need to find some Löbian shortcuts.
Let's start with a failure of self-trust:
**Claim 1:** If An ever considers any utility −9<u≤0, and An+1 first considers (`Explode',u) for some u>−10, then An will quit, even though An+1 will not explode.
This is straightforward: since the inconsistency of Peano Arithmetic would imply that An+1 succeeds at its first proof check, and since PA cannot prove its own consistency, PA cannot prove that An+1()≠`Explode', and so it cannot prove that An()=`Continue'→Un()>−9. So if An considers any −9<u≤0, it will find a proof that quitting satisfices before it can find any other proof.
Now, for a special case of successful self-trust:
**Claim 2:** If An is such that PA proves that for all n, An first considers (`Continue',u) for some u<21−n, then An will indeed continue for all n.
This follows from a Löbian argument: if we assume that PA proves ∀nAn()=`Continue', then it follows that PA proves that for all n, An()=`Continue'→Un()≥u for any u<21−n, and since this is always the first counterfactual An considers, the success of this proof search implies that for all n, An indeed continues. Since a proof of that assertion implies the assertion, by Löb's theorem the assertion is provable.
This example can be weakened somewhat: it's OK for An+1 to first consider (`Quit',u) and secondly (`Continue',u) if 0<u≤2−n, since then the analogous proof goes through. (If PA were inconsistent, An would get the `Quit' payoff at the next step rather than continuing the chain.) But if any `Explode' action must be considered before a correct counterfactual about `Continue', then self-trust will fail. So it doesn't seem that such Löbian cycles are a robust foundation for Vingean reflection.
(Thanks to Benja for working this out with me.) |
fdba501e-393e-462b-8542-421ada239d87 | trentmkelly/LessWrong-43k | LessWrong | Parable of the flooding mountain range
A mountaineer is hiking in a mountain range. There is a thick fog so he cannot see beyond a few meters.
It is raining heavily and so the mountain range is being flooded, the mountaineer has to climb to a high place so he won’t get washed away.
He will climb towards the highest point in his sight, and if he sees another higher point he will change his direction towards there.
Now the mountaineer is standing on the top of a hill, and to his knowledge every direction is downwards, and there is no higher peak in sight. He sits on the hilltop, anxiously watching the rain and hearing the water raising.
The water floods the hill and drowns him, washing his dead body into the abyss.
Is he on the highest peak of the mountain range? Unlikely
Can he ever get there if he cannot see beyond a few metres? Very unlikely.
------------------------------------------------------------------------------------
A band of mountaineers are hiking in a mountain range. There is a thick fog so they cannot see beyond a few meters.
It is raining heavily and so the mountain range is being flooded, the mountaineers have to climb to a high place so they won’t get washed away.
They elected the most experienced mountaineer as their leader, in the fog he can see a couple metres further than everybody else, and so he is the best guide possible for anyone.
The band all followed him onto a hilltop, every direction is downwards so they stayed there, anxiously watching the rain and hearing the raising waters.
Until the water floods the hilltop and drowns them, washing their dead bodies into the abyss.
This band is functionally the same as a lone mountaineer.
----------------------------------------------------------------------------------------------
A band of mountaineers are hiking in a mountain range. There is a thick fog so they cannot see beyond a few meters.
It is raining heavily and so the mountain range is being flooded, the mountaineers have to climb to a high place so they wo |
78510bca-e186-41d6-a638-2d9002705792 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | October 2022 AI Risk Community Survey Results
Context
=======
In October 2022 the AI Risk community kindly filled out [this](https://forum.effectivealtruism.org/posts/8DtA57z9EyifD2wj5/ai-risk-microdynamics-survey) survey on AI Risk probabilities and timelines. Very basic research ethics is that if I use the community’s data to generate results I should try and reflect that data back to the community as soon as possible, and certainly faster than I have done. My excuse for being so tardy is that apparently my baby daughter does not respect research ethics in the slightest and came unexpectedly early, comprehensively blowing up my planned timelines.
I used the data in two places:
* [This](https://forum.effectivealtruism.org/posts/JjAjJ53mmpQqBeobQ/the-race-to-the-end-of-humanity-structural-uncertainty) essay on structural uncertainty analysis
* [This](https://forum.effectivealtruism.org/posts/Z7r83zrSXcis6ymKo/dissolving-ai-risk-parameter-uncertainty-in-ai-future) essay on parameter uncertainty analysis
If you use the data for any other projects, please let me know in a comment and I will update this list.
The raw data are [here](https://www.dropbox.com/s/hvgnw3di5sfytml/AI%20Risk%20Microdynamics%20Survey%20Results.xlsx?dl=0). If you try and replicate my results and find slight differences this is because some participants asked to be completely anonymised and stripped from the public-facing element of the survey[[1]](#fnwu0xgyr4o8). If you try and replicate my results and find *massive* differences this is because I have made a mistake somewhere!
I would always recommend digging into the raw data yourself if you are interested, but I’ve included a couple of descriptive outputs and some basic analysis below in case that isn’t practical.
Demographics
============
42 people took the survey (thank you very much!), of whom 37 did not request their results to be removed before making them public.
Responses were heavily weighted towards men (89%) and US respondents (64%). The majority of respondents accessed the survey via Astral Codex Ten (62%), with 22% accessing from the EA forum and 16% from LessWrong. Generally, respondents were quite educated, although there was a fairly good spread of education levels as indicated in the graph below.

4 respondents self-identified as ‘experts’, and 7 additional respondents self-identified as ‘marginally’ expert. I have grouped these respondents together for the purposes of most of my analysis, reasoning that the ‘marginals’ were probably more likely to be 'modest experts' than a separate subcategory all by themselves. This means therefore 70% of responses were ‘nonexpert’ and 30% were ‘expert-as-defined-by-me’.
The average respondent has been interested in AI Risk for around 7 years. This is heavily skewed upwards by three respondents who have been interested for 40, 39 and 24 years respectively – the *median* respondent has been interested for only 5 years. Since these three long-term respondents all indicated that they were not experts, this results in the rather amusing outcome that non-experts have been involved with AI Risk longer than experts, on average (7.6 years vs 6.2 years respectively). However, removing the three long-term respondents gives the expected direction of effect, which is that non-experts tend to have been involved with AI Risk for less time than experts (4.2 years vs 6.2 years respectively)[[2]](#fn8aj0f4uyvqo).
There are unfortunately insufficient data to make any reasonable claims about the relationship between any other variables and expertise.
Catastrophe responses
=====================
The average respondent believes that there is a 42% chance of AI Catastrophe conditional on AI being invented. This rises to 52% for self-identified experts. If Catastrophe occurs, the average respondent believes that there is a 42.8% chance it will be due to an in-control AI performing exactly the instruction it was given (“Design and release a deadly virus”) and a 56.1% chance it will be due to an Out-of-Control AI. These don’t quite add up to 100% because some respondents indicated that there was a third option I didn’t properly consider (most commonly arguments that the Catastrophe could be caused by a combination of in-control and out-of-control actions which don’t neatly fit into exactly one bucket[[3]](#fn953zfhoua5s)). There is a significant difference between experts and non-experts in this respect; experts believe 32.8% of Catastrophes will be due to in-control AI and 72.6% due to out-of-control AI (also note these add up to more than 100%, for basically the same reason as far as I can make out).
I didn’t ask *when* people expected Catastrophe to occur, which is one of many weaknesses of the survey.
I also asked about the probability and timelines of individual steps of Carlsmith’s 2021 [model](https://arxiv.org/abs/2206.13353) of AI risk. These are summarised below:

Generally, responses were similar between experts and non-experts; there were two major disagreements:
* Experts thought that conditional on being exposed to ‘high impact’ inputs there was a significantly higher chance that the AI would scale in aggregate to the point of permanently disempowering all of humanity. In the anonymised survey response, the experts have a probability of 63.7% (but if you include the experts who asked to be anonymised this jumps to 75.9%!).
* Experts thought that the time for this step to occur would be much shorter than non-experts (1.0 years median vs 6.5 years median respectively)
My observation here – which I also made in the essay on parameter uncertainty – is that there is a likely some fairly high impact work in communicating the urgency of *this step in particular* to non-experts, since it appears experts are considerably more worried about AIs scaling to disempower us than non-experts. I also made the observation in the essay on structural uncertainty that there is a tendency to create 'grand unified theories' of AI Risk rather than focussing on specific steps when discussing and describing AI Risk. Connecting those two thoughts together here; it might be that Experts assume that *everybody knows* that Disempowerment is a very high risk step, and so when they assert that AI Risk is higher than everyone else thinks they are assuming everyone is more or less working with the same model structure as them. But unless that step is explicitly spelled out, I think people will continue to be confused because non-experts *don't* have the same sort of model, so the apparent impression will be of experts getting frustrated about points that seem irrelevant.
The derived probability of catastrophe was 22% for all respondents and 29% for experts specifically. This is notably lower than the directly elicited probability of catastrophe – almost halving the risk. To some extent, this reflects a very common cognitive bias that asking about lots of small steps in a process gives you different answers than asking about the probability of the process *in toto*. It may also reflect one of my very favourite drums to bang, which is that **distributions are frequently more important than point estimates**. In particular, using the geomean of directly elicited estimates gives you a number much closer to 22% and this might be a [better way](https://forum.effectivealtruism.org/posts/sMjcjnnpoAQCcedL2/when-pooling-forecasts-use-the-geometric-mean-of-odds) of averaging forecasts than the simple mean. Finally it might reflect that some people have models of AI Risk which don’t track perfectly onto the Carlsmith model, and be a statement to the effect that the Carlsmith model captures only about half of the Catastrophe probability mass. Sadly for such an important point I don’t really have any way of investigating further.
The median timeline for AI Catastrophe was 2054 considering all responses, and 2045 considering expert responses only. This indicates experts tended to believe AI Catastrophe was both more likely and coming faster than non-experts, which is consistent with their directly-elicited probabilities of Catastrophe.
Alignment responses
===================
22 individuals indicated that they believed there was some date by which “Unaligned AGI ceases to be a risk”. Participants were instructed to leave blank any question which they did not think had an answer, which means that – potentially – respondents believed there was a 64% chance that Alignment in the strongest sense was possible. I say ‘potentially’ because people may have left the question blank simply because they didn’t have a good guess for the answer, so 64% represents a lower bound for the probability that Alignment is possible. Because the number of respondents is starting to get quite small here, I haven’t distinguished between expert and non-expert responses below.
The median date people predict Alignment will happen in the strong sense of causing AI to cease to be a risk is 2066. However, I asked a couple of probing questions to see if I had understood these responses correctly. The median date by which people believe there will be a TEST to detect whether an AI is Aligned or not was 2053, and the median date by which people believe we will have a METHOD to build a provably Aligned AI is 2063. The difference between building provably Aligned AIs to ‘solving the Alignment problem’ which takes three years is because of an additional step of making Aligned AIs approximately as easy to build as Unaligned AIs (so nobody has any incentive to deploy Unaligned AIs). Indeed, the median response is 3 years for this step so people's answers are extremely internally consistent.
However, these figures hide a very wide distribution in responses. For example, not everyone (27% of respondents) believed that Strong Alignment necessarily followed from the TEST / METHOD definitions of Alignment I described above – they thought that we would solve Alignment before developing a test for Alignment or developing a method to build provably Aligned AIs[[4]](#fneat1s0c8xtu). Nor did everyone believe that the gap between developing the capability to build AIs and making them practically easy to deploy would be short – despite the median being 3 years, the average was 12 years.
I think there may be something fairly important I’m missing about the way I framed the questions, because there is a very poor correlation between people’s answers to the Carlsmith-like question "*Conditional on AGI being possible to build, it will initially be much harder to build Aligned AI systems than Misaligned AI systems which are nevertheless superficially attractive to deploy*" and their Alignment timeline estimates. For example in the graph below you can see basically a random cloud of points, and some points I would expect to essentially never show up; for example in the bottom right you can see people with ~100% probability that it will initially be harder to deploy Aligned than Unaligned systems who nevertheless have a very short best guess for how long it will be between finding *any*solution to Alignment and finding a trivial solution. This isn’t logically incoherent by any means – a very sensible path to finding a trivial solution to Alignment will be to experiment with complex solutions to Alignment – it just seems weird to me that there’s no correlation when I would expect one.

Newcome responses
=================
Finally, for my own interested, I asked people about ‘[Newcombe’s Box Paradox’](https://en.wikipedia.org/wiki/Newcomb%27s_paradox). In this philosophical thought experiment, you are shown two boxes; inside one is $1000 that you are certain exists (the box is transparent, perhaps), and inside the other is *either* nothing or $1,000,000 but you don’t know which. You can collect the money from inside *either* the second box alone or from both the first and second box together. A super powerful predictor – perhaps an AI to keep with the flavour of the rest of this post – has put $1,000,000 in the second box if they predict you will choose the second box alone and $0 in the second box if they predict you will choose both. The predictor is just a predictor, and can’t change what it has already put in the box. What option do you choose?

I run an informal forecasting tournament with my friends and ask this question every year. So far I’ve not seen a clear signal that one-boxers or two-boxers are better forecasters, but I live in hope!
Of the respondents with an opinion, 78% one-box and 9% two-box. The remaining 13% have a solution to the problem which is more complicated, which I regret not asking about as I absolutely love this paradox. Self-identified experts only ever one-boxed but this would have occurred by chance about half the time anyway even if experts were no different to the general population.
One-boxers and two-boxers differ in a number of interesting and probably-not statistically significant ways. For example, the average one-boxer believes that AI will be invented much later than the average two-boxer (mean 2075 vs 2042, driven by some big one-boxer outliers). They also believe Alignment will come much later (mean 2146 vs 2055, again driven by some big outliers and hence why I use median for these figures in general). Their overall probabilities of Catastrophe are similar but still different enough to comment on – 41% for one-boxers and 68% for two-boxers.
If we can rely on these statistics (which I stress again: we cannot) then this suggests those who believe AI is coming soon and going to be dangerous are more likely to ignore the predictive power of the AI and just grab both boxes, whereas those who believe AI is coming later and going to be less dangerous are more likely to defer to the predictive power of the AI and take only the one box the AI won’t punish them for. This seems to be the wrong way around from what I'd expect it to be, so answers on the back of a postcard please!
1. **[^](#fnrefwu0xgyr4o8)**If you have a lot of time and energy you can work out what the missing values must have been by looking at the raw data in the Excel models associated with the essays above. However, I have made sure there is no way to reconstruct the individual chains of logic from these documents, usually by arranging all results in size order so there is no way to connect any particular guess with any particular anonymous participant.
2. **[^](#fnref8aj0f4uyvqo)**Two of the very long-term respondents completed a column indicating that they would prefer to have their data pseudonymised rather than fully anonymised – if you are either ‘Geoffrey Miller’ or ‘Eric Moyer's Wild Guesses’ I’d be very interested in hearing from you in the comments as to why you do not consider yourself an expert despite multiple decades of following the field. My working theory is that AI Risk is a field which moves very rapidly and so you’d expect an inverted ‘bathtub curve’ of expertise, as shown below, where people are most productive after a few years of training and then gradually lose their cutting edge as the field advances. This is *sort of* what we see in the data, but I’d need a lot more responses to be sure. And perhaps these individuals had a totally different reason for why they’ve self-described in that way!

3. **[^](#fnref953zfhoua5s)**For example, one comment I received is that an AI might be tasked with doing something locally good for a small number of humans but globally bad for a lot of humans. If there were a lot of these ‘negative sum’ requests of AI the end result might be Catastrophe, even though no AI ever acts out-of-control and no human ever orders an AI to cause a Catastrophe.
Interestingly non-experts tended to think of this sort of scenario as ‘neither IC or OOC’ whereas experts tended to think of it as ‘**both** IC and OOC’. There were too few experts to read much into this, but it is certainly an interesting difference between the groups.
4. **[^](#fnrefeat1s0c8xtu)**For example, we might have a method to randomly generate lots and lots of AI candidates, then TEST all and see which are Aligned. This solves the Alignment problem for a pretty brute-force definition of ‘solving the Alignment problem’ and may have been what these respondents had in mind |
d7441187-2552-4e3a-9cb9-ec97610cf476 | trentmkelly/LessWrong-43k | LessWrong | From Personal to Prison Gangs: Enforcing Prosocial Behavior
This post originally appeared here; I've updated it slightly and posted it here as a follow-up to this post.
David Friedman has a fascinating book on alternative legal systems. One chapter focuses on prison law - not the nominal rules, but the rules enforced by prisoners themselves.
The unofficial legal system of California prisoners is particularly interesting because it underwent a phase change sometime after the 1960’s.
Prior to the 1960’s, prisoners ran on a decentralized code of conduct - various unwritten rules roughly amounting to “mind your own business and don’t cheat anyone”. Prisoners who kept to the code were afforded some respect by their fellow inmates. Prisoners who violated the code were ostracized, making them fair game for the more predatory inmates. There was no formal enforcement; the code was essentially a reputation system.
Sometime after the 1960’s, that changed. During the code era, California’s total prison population was only about 5000, with about 1000 inmates in a typical prison. That’s quite a bit more than Dunbar’s number, but still low enough for a reputation system to work through second-order connections. By 1970, California’s prison population had ballooned past 25000; today it is over 170000. The number of prisons also grew, but not nearly as quickly as the population, and today’s prisoners frequently move across prisons anyway. In short, a decentralized reputation system became untenable. There were too many other inmates to keep track of.
As the reputation system collapsed, a new legal institution grew to fill the void: prison gangs. Under the gang system, each inmate is expected to affiliate with a gang (though most are not formal gang members). The gang will explain the rules, often in written form, and enforce them on their own affiliates. When conflict arises between affiliates of different gangs, the gang leaders negotiate settlement, with gang leaders enforcing punishments on their own affiliates. (Gang leaders are str |
aea05b81-fde0-4a9e-824f-b1ddf8f0aab5 | trentmkelly/LessWrong-43k | LessWrong | Dissolving philosophy
Summary: a large chunk of the history of Western philosophy is about finding out by what kinds of less conscious algorithms does the human mind arrive to certain intuitions.
In Plato's Republic, Socrates runs around Athens talking with people, trying to find an answer to the question: "What is justice?" Two and half thousands of years later we still don't have a truly definitive answer. We can spend another thousand year or two pondering it, but I suspect it would be better to reformulate the question in a more answerable way. So let's look at what Socrates is trying to do here, what his method is and what is actual question is!
It is not an empirical, scientific question that can be answered by observing something whose existence is independent of the human mind. Rather the question is about a feature of the human mind, not of a feature of the external reality out there.
However Socrates is not simply conducting an opinion survey. He is not content simply finding 74% of Athenians think justice means obeying laws. Socrates also argues against definitions of justice he considers _wrong_.
So, apparently, justice in this question relates to something that does not exist outside the human mind, but we can still have wrong opinions about it.
The method Socrates is employing is the following. He assumes when people see an actual action, they can intuitively judge it just or unjust and that judgement will be seen as _correct_. Well, not always, but at least when they are dispassionate, and have no vested interest either. So according to Socrates, any definition of justice can be tested by thought experiments that are sufficiently dispassionate and disinterested for the audience that they will actually use their Justice Sensors to form a judgement about them, and not, say, their passion like anger or greed, or their interests.
What Socrates is doing here, then, is asking people to make an algorithm that predicts what acts will a dispassionate and disinterested obs |
d9c035ad-8090-4911-80db-882a5ce395ef | StampyAI/alignment-research-dataset/blogs | Blogs | Security Mindset and Ordinary Paranoia
The following is a fictional dialogue building off of [AI Alignment: Why It’s Hard, and Where to Start](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/).
---
(**AMBER***, a philanthropist interested in a more reliable Internet, and* **CORAL***, a computer security professional, are at a conference hotel together discussing what Coral insists is a difficult and important issue: the difficulty of building “secure” software.*)
**AMBER:** So, Coral, I understand that you believe it is very important, when creating software, to make that software be what you call “secure”.
**CORAL:** Especially if it’s connected to the Internet, or if it controls money or other valuables. But yes, that’s right.
**AMBER:** I find it hard to believe that this needs to be a separate topic in computer science. In general, programmers need to figure out how to make computers do what they want. The people building operating systems surely won’t want them to give access to unauthorized users, just like they won’t want those computers to crash. Why is one problem so much more difficult than the other?
**CORAL:** That’s a deep question, but to give a partial deep answer: When you expose a device to the Internet, you’re potentially exposing it to intelligent adversaries who can find special, weird interactions with the system that make the pieces behave in weird ways that the programmers did not think of. When you’re dealing with that kind of problem, you’ll use a different set of methods and tools.
**AMBER:** Any system that crashes is behaving in a way the programmer didn’t expect, and programmers already need to stop that from happening. How is this case different?
**CORAL:** Okay, so… imagine that your system is going to take in one kilobyte of input per session. (Although that itself is the sort of assumption we’d question and ask what happens if it gets a megabyte of input instead—but never mind.) If the input is one kilobyte, then there are 28,000 possible inputs, or about 102,400 or so. Again, for the sake of extending the simple visualization, imagine that a computer gets a billion inputs per second. Suppose that only a googol, 10100, out of the 102,400 possible inputs, cause the system to behave a certain way the original designer didn’t intend.
If the system is getting inputs in a way that’s uncorrelated with whether the input is a misbehaving one, it won’t hit on a misbehaving state before the end of the universe. If there’s an intelligent adversary who understands the system, on the other hand, they may be able to find one of the very rare inputs that makes the system misbehave. So a piece of the system that would literally never in a million years misbehave on random inputs, may break when an intelligent adversary tries deliberately to break it.
**AMBER:** So you’re saying that it’s more difficult because the programmer is pitting their wits against an adversary who may be more intelligent than themselves.
**CORAL:** That’s an almost-right way of putting it. What matters isn’t so much the “adversary” part as the optimization part. There are systematic, nonrandom forces strongly selecting for particular outcomes, causing pieces of the system to go down weird execution paths and occupy unexpected states. If your system literally has no misbehavior modes at all, it doesn’t matter if you have IQ 140 and the enemy has IQ 160—it’s not an arm-wrestling contest. It’s just very much harder to build a system that doesn’t enter weird states when the weird states are being selected-for in a correlated way, rather than happening only by accident. The weirdness-selecting forces can search through parts of the larger state space that you yourself failed to imagine. Beating that does indeed require new skills and a different mode of thinking, what Bruce Schneier called “security mindset”.
**AMBER:** Ah, and what is this security mindset?
**CORAL:** I can say one or two things about it, but keep in mind we are dealing with a quality of thinking that is not entirely effable. If I could give you a handful of platitudes about security mindset, and that would actually cause you to be able to design secure software, the Internet would look very different from how it presently does. That said, it seems to me that what has been called “security mindset” can be divided into two components, one of which is much less difficult than the other. And this can fool people into overestimating their own safety, because they can get the easier half of security mindset and overlook the other half. The less difficult component, I will call by the term “ordinary paranoia”.
**AMBER:** *Ordinary* paranoia?
**CORAL:** Lots of programmers have the ability to imagine adversaries trying to threaten them. They imagine how likely it is that the adversaries are able to attack them a particular way, and then they try to block off the adversaries from threatening that way. Imagining attacks, including weird or clever attacks, and parrying them with measures you imagine will stop the attack; that is ordinary paranoia.
**AMBER:** Isn’t that what security is all about? What do you claim is the other half?
**CORAL:** To put it as a platitude, I might say… defending against mistakes in your own assumptions rather than against external adversaries.
**AMBER:** Can you give me an example of a difference?
**CORAL:** An ordinary paranoid programmer imagines that an adversary might try to read the file containing all the usernames and passwords. They might try to store the file in a special, secure area of the disk or a special subpart of the operating system that’s supposed to be harder to read. Conversely, somebody with security mindset thinks, “No matter what kind of special system I put around this file, I’m disturbed by needing to make the assumption that this file can’t be read. Maybe the special code I write, because it’s used less often, is more likely to contain bugs. Or maybe there’s a way to fish data out of the disk that doesn’t go through the code I wrote.”
**AMBER:** And they imagine more and more ways that the adversary might be able to get at the information, and block those avenues off too! Because they have better imaginations.
**CORAL:** Well, we kind of do, but that’s not the key difference. What we’ll really want to do is come up with a way for the computer to check passwords that doesn’t rely on the computer storing the password *at all, anywhere*.
**AMBER:** Ah, like encrypting the password file!
**CORAL:** No, that just duplicates the problem at one remove. If the computer can decrypt the password file to check it, it’s stored the decryption key somewhere, and the attacker may be able to steal that key too.
**AMBER:** But then the attacker has to steal two things instead of one; doesn’t that make the system more secure? Especially if you write two different sections of special filesystem code for hiding the encryption key and hiding the encrypted password file?
**CORAL:** That’s exactly what I mean by distinguishing “ordinary paranoia” that doesn’t capture the full security mindset. So long as the system is capable of reconstructing the password, we’ll always worry that the adversary might be able to trick the system into doing just that. What somebody with security mindset will recognize as a deeper solution is to store a one-way hash of the password, rather than storing the plaintext password. Then even if the attacker reads off the password file, they still can’t give what the system will recognize as a password.
**AMBER:** Ah, that’s quite clever! But I don’t see what’s so qualitatively different between that measure, and my measure for hiding the key and the encrypted password file separately. I agree that your measure is more clever and elegant, but of course you’ll know better standard solutions than I do, since you work in this area professionally. I don’t see the qualitative line dividing your solution from my solution.
**CORAL:** Um, it’s hard to say this without offending some people, but… it’s possible that even after I try to explain the difference, which I’m about to do, you won’t get it. Like I said, if I could give you some handy platitudes and transform you into somebody capable of doing truly good work in computer security, the Internet would look very different from its present form. I can try to describe one aspect of the difference, but that may put me in the position of a mathematician trying to explain what looks more promising about one proof avenue than another; you can listen to everything they say and nod along and still not be transformed into a mathematician. So I *am* going to try to explain the difference, but again, I don’t know of any simple instruction manuals for becoming Bruce Schneier.
**AMBER:** I confess to feeling slightly skeptical at this supposedly ineffable ability that some people possess and others don’t—
**CORAL:** There are things like that in many professions. Some people pick up programming at age five by glancing through a page of BASIC programs written for a TRS-80, and some people struggle really hard to grasp basic Python at age twenty-five. That’s not because there’s some mysterious truth the five-year-old knows that you can verbally transmit to the twenty-five-year-old.
And, yes, the five-year-old will become far better with practice; it’s not like we’re talking about untrainable genius. And there may be platitudes you can tell the 25-year-old that will help them struggle a little less. But sometimes a profession requires thinking in an unusual way and some people’s minds more easily turn sideways in that particular dimension.
**AMBER:** Fine, go on.
**CORAL:** Okay, so… you thought of putting the encrypted password file in one special place in the filesystem, and the key in another special place. Why not encrypt the key too, write a third special section of code, and store the key to the encrypted key there? Wouldn’t that make the system even more secure? How about seven keys hidden in different places, wouldn’t that be extremely secure? Practically unbreakable, even?
**AMBER:** Well, that version of the idea does feel a little silly. If you’re trying to secure a door, a lock that takes two keys might be more secure than a lock that only needs one key, but seven keys doesn’t feel like it makes the door that much more secure than two.
**CORAL:** Why not?
**AMBER:** It just seems silly. You’d probably have a better way of saying it than I would.
**CORAL:** Well, a fancy way of describing the silliness is that the chance of obtaining the seventh key is not conditionally independent of the chance of obtaining the first two keys. If I can read the encrypted password file, and read your encrypted encryption key, then I’ve probably come up with something that just bypasses your filesystem and reads directly from the disk. And the more complicated you make your filesystem, the more likely it is that I can find a weird system state that will let me do just that. Maybe the special section of filesystem code you wrote to hide your fourth key is the one with the bug that lets me read the disk directly.
**AMBER:** So the difference is that the person with a *true* security mindset found a defense that makes the system simpler rather than more complicated.
**CORAL:** Again, that’s almost right. By hashing the passwords, the security professional has made their *reasoning* about the system less complicated. They’ve eliminated the need for an assumption that might be put under a lot of pressure. If you put the key in one special place and the encrypted password file in another special place, the system as a whole is still able to decrypt the user’s password. An adversary probing the state space might be able to trigger that password-decrypting state because the system is designed to do that on at least some occasions. By hashing the password file we eliminate that whole internal debate from the reasoning on which the system’s security rests.
**AMBER:** But even after you’ve come up with that clever trick, something could still go wrong. You’re still not absolutely secure. What if somebody uses “password” as their password?
**CORAL:** Or what if somebody comes up a way to read off the password after the user has entered it and while it’s still stored in RAM, because something got access to RAM? The point of eliminating the extra assumption from the reasoning about the system’s security is not that we are then absolutely secure and safe and can relax. Somebody with security mindset is *never* going to be that relaxed about the edifice of reasoning saying the system is secure.
For that matter, while there are some normal programmers doing normal programming who might put in a bunch of debugging effort and then feel satisfied, like they’d done all they could reasonably do, programmers with decent levels of ordinary paranoia about ordinary programs will go on chewing ideas in the shower and coming up with more function tests for the system to pass. So the distinction between security mindset and ordinary paranoia isn’t that ordinary paranoids will relax.
It’s that… again to put it as a platitude, the ordinary paranoid is running around putting out fires in the form of ways they imagine an adversary might attack, and somebody with security mindset is defending against something closer to “what if an element of this reasoning is mistaken”. Instead of trying really hard to ensure nobody can read a disk, we are going to build a system that’s secure even if somebody does read the disk, and *that* is our first line of defense. And then we are also going to build a filesystem that doesn’t let adversaries read the password file, as a *second* line of defense in case our one-way hash is secretly broken, and because there’s no positive need to let adversaries read the disk so why let them. And then we’re going to salt the hash in case somebody snuck a low-entropy password through our system and the adversary manages to read the password anyway.
**AMBER:** So rather than trying to outwit adversaries, somebody with true security mindset tries to make fewer assumptions.
**CORAL:** Well, we think in terms of adversaries too! Adversarial reasoning is easier to teach than security mindset, but it’s still (a) mandatory and (b) hard to teach in an absolute sense. A lot of people can’t master it, which is why a description of “security mindset” often opens with a story about somebody failing at adversarial reasoning and somebody else launching a clever attack to penetrate their defense.
You need to master two ways of thinking, and there are a lot of people going around who have the first way of thinking but not the second. One way I’d describe the deeper skill is seeing a system’s security as resting on a story about why that system is safe. We want that safety-story to be as solid as possible. One of the implications is resting the story on as few assumptions as possible; as the saying goes, the only gear that never fails is one that has been designed out of the machine.
**AMBER:** But can’t you also get better security by adding more lines of defense? Wouldn’t that be more complexity in the story, and also better security?
**CORAL:** There’s also something to be said for preferring disjunctive reasoning over conjunctive reasoning in the safety-story. But it’s important to realize that you do want a primary line of defense that is supposed to just work and be unassailable, not a series of weaker fences that you think might maybe work. Somebody who doesn’t understand cryptography might devise twenty clever-seeming amateur codes and apply them all in sequence, thinking that, even if one of the codes turns out to be breakable, surely they won’t *all* be breakable. The NSA will assign that mighty edifice of amateur encryption to an intern, and the intern will crack it in an afternoon.
There’s something to be said for redundancy, and having fallbacks in case the unassailable wall falls; it can be wise to have additional lines of defense, so long as the added complexity does not make the larger system harder to understand or increase its vulnerable surfaces. But at the core you need a simple, solid story about why the system is secure, and a good security thinker will be trying to eliminate whole assumptions from that story and strengthening its core pillars, not only scurrying around parrying expected attacks and putting out risk-fires.
That said, it’s better to use two true assumptions than one false assumption, so simplicity isn’t everything.
**AMBER:** I wonder if that way of thinking has applications beyond computer security?
**CORAL:** I’d rather think so, as the proverb about gears suggests.
For example, stepping out of character for a moment, the author of this dialogue has sometimes been known to discuss [the alignment problem for Artificial General Intelligence](https://arbital.com/p/ai_alignment/). He was talking at one point about trying to measure rates of improvement inside a growing AI system, so that it would not do too much thinking with humans out of the loop if a breakthrough occurred while the system was running overnight. The person he was talking to replied that, to him, it seemed unlikely that an AGI would gain in power that fast. To which the author replied, more or less:
> It shouldn’t be your job to guess how fast the AGI might improve! If you write a system that will hurt you *if* a certain speed of self-improvement turns out to be possible, then you’ve written the wrong code. The code should just never hurt you regardless of the true value of that background parameter.
>
>
> A better way to set up the AGI would be to measure how much improvement is taking place, and if more than *X* improvement takes place, suspend the system until a programmer validates the progress that’s already occurred. That way even if the improvement takes place over the course of a millisecond, you’re still fine, so long as the system works as intended. Maybe the system doesn’t work as intended because of some other mistake, but that’s a better problem to worry about than a system that hurts you *even if* it works as intended.
>
>
> Similarly, you want to design the system so that if it discovers amazing new capabilities, it waits for an operator to validate use of those capabilities—not rely on the operator to watch what’s happening and press a suspend button. You shouldn’t rely on the speed of discovery or the speed of disaster being less than the operator’s reaction time. There’s no *need* to bake in an assumption like that if you can find a design that’s safe regardless. For example, by operating on a paradigm of allowing operator-whitelisted methods rather than avoiding operator-blacklisted methods; you require the operator to say “Yes” before proceeding, rather than assuming they’re present and attentive and can say “No” fast enough.
>
>
**AMBER:** Well, okay, but if we’re guarding against an AI system discovering cosmic powers in a millisecond, that does seem to me like an unreasonable thing to worry about. I guess that marks me as a merely ordinary paranoid.
**CORAL:** Indeed, one of the hallmarks of security professionals is that they spend a lot of time worrying about edge cases that would fail to alarm an ordinary paranoid because the edge case doesn’t sound like something an adversary is likely to do. Here’s an example [from the Freedom to Tinker blog](https://freedom-to-tinker.com/2008/03/26/security-mindset-and-harmless-failures/):
> This interest in “harmless failures” – cases where an adversary can cause an anomalous but not directly harmful outcome – is another hallmark of the security mindset. Not all “harmless failures” lead to big trouble, but it’s surprising how often a clever adversary can pile up a stack of seemingly harmless failures into a dangerous tower of trouble. Harmless failures are bad hygiene. We try to stamp them out when we can…
>
>
> To see why, consider the donotreply.com email story that hit the press recently. When companies send out commercial email (e.g., an airline notifying a passenger of a flight delay) and they don’t want the recipient to reply to the email, they often put in a bogus From address like donotreply@donotreply.com. A clever guy registered the domain donotreply.com, thereby receiving all email addressed to donotreply.com. This included “bounce” replies to misaddressed emails, some of which contained copies of the original email, with information such as bank account statements, site information about military bases in Iraq, and so on…
>
>
> The people who put donotreply.com email addresses into their outgoing email must have known that they didn’t control the donotreply.com domain, so they must have thought of any reply messages directed there as harmless failures. Having gotten that far, there are two ways to avoid trouble. The first way is to think carefully about the traffic that might go to donotreply.com, and realize that some of it is actually dangerous. The second way is to think, “This looks like a harmless failure, but we should avoid it anyway. No good can come of this.” The first way protects you if you’re clever; the second way always protects you.
>
>
“The first way protects you if you’re clever; the second way always protects you.” That’s very much the other half of the security mindset. It’s what this essay’s author was doing by talking about AGI alignment that runs on whitelisting rather than blacklisting: you shouldn’t assume you’ll be clever about how fast the AGI system could discover capabilities, you should have a system that doesn’t use not-yet-whitelisted capabilities even if they are discovered very suddenly.
If your AGI would hurt you if it gained total cosmic powers in one millisecond, that means you built a cognitive process that is in some sense trying to hurt you and failing only due to what you think is a lack of capability. This is *very bad* and you should be designing some other AGI system instead. AGI systems should never be running a search that will hurt you if the search comes up non-empty. You should not be trying to fix that by making sure the search comes up empty thanks to your clever shallow defenses closing off all the AGI’s clever avenues for hurting you. You should fix that by making sure no search like that ever runs. It’s a silly thing to do with computing power, and you should do something else with computing power instead.
Going back to ordinary computer security, if you try building a lock with seven keys hidden in different places, you are in some dimension pitting your cleverness against an adversary trying to read the keys. The person with security mindset doesn’t want to rely on having to win the cleverness contest. An ordinary paranoid, somebody who can master the kind of default paranoia that lots of intelligent programmers have, will look at the Reply-To field saying donotreply@donotreply.com and think about the possibility of an adversary registering the donotreply.com domain. Somebody with security mindset thinks in assumptions rather than adversaries. “Well, I’m assuming that this reply email goes nowhere,” they’ll think, “but maybe I should design the system so that I don’t need to fret about whether that assumption is true.”
**AMBER:** Because as the truly great paranoid knows, what seems like a ridiculously improbable way for the adversary to attack sometimes turns out to not be so ridiculous after all.
**CORAL:** Again, that’s a not-exactly-right way of putting it. When I don’t set up an email to originate from donotreply@donotreply.com, it’s not just because I’ve appreciated that an adversary registering donotreply.com is more probable than the novice imagines. For all I know, when a bounce email is sent to nowhere, there’s all kinds of things that might happen! Maybe the way a bounced email works is that the email gets routed around to weird places looking for that address. I don’t know, and I don’t want to have to study it. Instead I’ll ask: Can I make it so that a bounced email doesn’t generate a reply? Can I make it so that a bounced email doesn’t contain the text of the original message? Maybe I can query the email server to make sure it still has a user by that name before I try sending the message?—though there may still be “vacation” autoresponses that mean I’d better control the replied-to address myself. If it would be very bad for somebody unauthorized to read this, maybe I shouldn’t be sending it in plaintext by email.
**AMBER:** So the person with true security mindset understands that where there’s one problem, demonstrated by what seems like a very unlikely thought experiment, there’s likely to be more realistic problems that an adversary can in fact exploit. What I think of as weird improbable failure scenarios are canaries in the coal mine, that would warn a truly paranoid person of bigger problems on the way.
**CORAL:** Again that’s not exactly right. The person with ordinary paranoia hears about donotreply@donotreply.com and may think something like, “Oh, well, it’s not very likely that an attacker will actually try to register that domain, I have more urgent issues to worry about,” because in that mode of thinking, they’re running around putting out things that might be fires, and they have to prioritize the things that are most likely to be fires.
If you demonstrate a weird edge-case thought experiment to somebody with security mindset, they don’t see something that’s more likely to be a fire. They think, “Oh no, my belief that those bounce emails go nowhere was FALSE!” The OpenBSD project to build a secure operating system has also, in passing, built an extremely robust operating system, because from their perspective any bug that potentially crashes the system is considered a critical security hole. An ordinary paranoid sees an input that crashes the system and thinks, “A crash isn’t as bad as somebody stealing my data. Until you demonstrate to me that this bug can be used by the adversary to steal data, it’s not *extremely* critical.” Somebody with security mindset thinks, “Nothing inside this subsystem is supposed to behave in a way that crashes the OS. Some section of code is behaving in a way that does not work like my model of that code. Who knows what it might do? The system isn’t supposed to crash, so by making it crash, you have demonstrated that my beliefs about how this system works are false.”
**AMBER:** I’ll be honest: It *has* sometimes struck me that people who call themselves security professionals seem overly concerned with what, to me, seem like very improbable scenarios. Like somebody forgetting to check the end of a buffer and an adversary throwing in a huge string of characters that overwrite the end of the stack with a return address that jumps to a section of code somewhere else in the system that does something the adversary wants. How likely is that *really* to be a problem? I suspect that in the real world, what’s more likely is somebody making their password “password”. Shouldn’t you be mainly guarding against that instead?
**CORAL:** You have to do both. This game is short on consolation prizes. If you want your system to resist attack by major governments, you need it to actually be pretty darned secure, gosh darn it. The fact that some users may try to make their password be “password” does not change the fact that you also have to protect against buffer overflows.
**AMBER:** But even when somebody with security mindset designs an operating system, it often still ends up with successful attacks against it, right? So if this deeper paranoia doesn’t eliminate all chance of bugs, is it really worth the extra effort?
**CORAL:** If you don’t have somebody who thinks this way in charge of building your operating system, it has *no* chance of not failing immediately. People with security mindset sometimes fail to build secure systems. People without security mindset *always* fail at security if the system is at all complex. What this way of thinking buys you is a *chance* that your system takes longer than 24 hours to break.
**AMBER:** That sounds a little extreme.
**CORAL:** History shows that reality has not cared what you consider “extreme” in this regard, and that is why your Wi-Fi-enabled lightbulb is part of a Russian botnet.
**AMBER:** Look, I understand that you want to get all the fiddly tiny bits of the system exactly right. I like tidy neat things too. But let’s be reasonable; we can’t always get everything we want in life.
**CORAL:** You think you’re negotiating with me, but you’re really negotiating with Murphy’s Law. I’m afraid that Mr. Murphy has historically been quite unreasonable in his demands, and rather unforgiving of those who refuse to meet them. I’m not advocating a policy to you, just telling you what happens if you don’t follow that policy. Maybe you think it’s not particularly bad if your lightbulb is doing denial-of-service attacks on a mattress store in Estonia. But if you do want a system to be secure, you need to do certain things, and that part is more of a law of nature than a negotiable demand.
**AMBER:** Non-negotiable, eh? I bet you’d change your tune if somebody offered you twenty thousand dollars. But anyway, one thing I’m surprised you’re not mentioning more is the part where people with security mindset always submit their idea to peer scrutiny and then accept what other people vote about it. I do like the sound of that; it sounds very communitarian and modest.
**CORAL:** I’d say that’s part of the ordinary paranoia that lots of programmers have. The point of submitting ideas to others’ scrutiny isn’t that hard to understand, though certainly there are plenty of people who don’t even do that. If I had any original remarks to contribute to that well-worn topic in computer security, I’d remark that it’s framed as advice to wise paranoids, but of course the people who need it even more are the happy innocents.
**AMBER:** Happy innocents?
**CORAL:** People who lack even ordinary paranoia. Happy innocents tend to envision ways that their system works, but not ask *at all* how their system might fail, until somebody prompts them into that, and even then they can’t do it. Or at least that’s been my experience, and that of many others in the profession.
There’s a certain incredibly terrible cryptographic system, the equivalent of the Fool’s Mate in chess, which is sometimes converged on by the most total sort of amateur, namely Fast XOR. That’s picking a password, repeating the password, and XORing the data with the repeated password string. The person who invents this system may not be able to take the perspective of an adversary at all. *He* wants his marvelous cipher to be unbreakable, and he is not able to truly enter the frame of mind of somebody who wants his cipher to be breakable. If you ask him, “Please, *try* to imagine what could possibly go wrong,” he may say, “Well, if the password is lost, the data will be forever unrecoverable because my encryption algorithm is too strong; I guess that’s something that could go wrong.” Or, “Maybe somebody sabotages my code,” or, “If you really insist that I invent far-fetched scenarios, maybe the computer spontaneously decides to disobey my programming.” Of course any competent ordinary paranoid asks the most skilled people they can find to look at a bright idea and try to shoot it down, because other minds may come in at a different angle or know other standard techniques. But the other reason why we say “Don’t roll your own crypto!” and “Have a security expert look at your bright idea!” is in hopes of reaching the many people who can’t *at all* invert the polarity of their goals—they don’t think that way spontaneously, and if you try to force them to do it, their thoughts go in unproductive directions.
**AMBER:** Like… the same way many people on the Right/Left seem utterly incapable of stepping outside their own treasured perspectives to pass the [Ideological Turing Test](http://econlog.econlib.org/archives/2011/06/the_ideological.html) of the Left/Right.
**CORAL:** I don’t know if it’s exactly the same mental gear or capability, but there’s a definite similarity. Somebody who lacks ordinary paranoia can’t take on the viewpoint of somebody who wants Fast XOR to be breakable, and pass that adversary’s Ideological Turing Test for attempts to break Fast XOR.
**AMBER:** Can’t, or won’t? You seem to be talking like these are innate, untrainable abilities.
**CORAL:** Well, at the least, there will be different levels of talent, as usual in a profession. And also as usual, talent vastly benefits from training and practice. But yes, it has sometimes seemed to me that there is a kind of qualitative step or gear here, where some people can shift perspective to imagine an adversary that truly wants to break their code… or a reality that isn’t cheering for their plan to work, or aliens who evolved different emotions, or an AI that doesn’t *want* to conclude its reasoning with “And therefore the humans should live happily ever after”, or a fictional character who believes in Sith ideology and yet [doesn’t believe they’re the bad guy](http://yudkowsky.tumblr.com/writing/realistic-viewpoints).
It does sometimes seem to me like some people simply can’t shift perspective in that way. Maybe it’s not that they truly lack the wiring, but that there’s an instinctive political off-switch for the ability. Maybe they’re scared to let go of their mental anchors. But from the outside it looks like the same result: some people do it, some people don’t. Some people spontaneously invert the polarity of their internal goals and spontaneously ask how their cipher might be broken and come up with productive angles of attack. Other people wait until prompted to look for flaws in their cipher, or they demand that you argue with them and wait for you to come up with an argument that satisfies them. If you ask them to predict themselves what you might suggest as a flaw, they say weird things that don’t begin to pass your Ideological Turing Test.
**AMBER:** You do seem to like your qualitative distinctions. Are there better or worse ordinary paranoids? Like, is there a spectrum in the space between “happy innocent” and “true deep security mindset”?
**CORAL:** One obvious quantitative talent level within ordinary paranoia would be in how far you can twist your perspective to look sideways at things—the creativity and workability of the attacks you invent. Like these [examples](https://www.schneier.com/blog/archives/2008/03/the_security_mi_1.html) Bruce Schneier gave:
> Uncle Milton Industries has been selling ant farms to children since 1956. Some years ago, I remember opening one up with a friend. There were no actual ants included in the box. Instead, there was a card that you filled in with your address, and the company would mail you some ants. My friend expressed surprise that you could get ants sent to you in the mail.
>
>
> I replied: “What’s really interesting is that these people will send a tube of live ants to anyone you tell them to.”
>
>
> Security requires a particular mindset. Security professionals—at least the good ones—see the world differently. They can’t walk into a store without noticing how they might shoplift. They can’t use a computer without wondering about the security vulnerabilities. They can’t vote without trying to figure out how to vote twice. They just can’t help it.
>
>
> SmartWater is a liquid with a unique identifier linked to a particular owner. “The idea is for me to paint this stuff on my valuables as proof of ownership,” I wrote when I first learned about the idea. “I think a better idea would be for me to paint it on your valuables, and then call the police.”
>
>
> Really, we can’t help it.
>
>
> This kind of thinking is not natural for most people. It’s not natural for engineers. Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail…
>
>
> I’ve often speculated about how much of this is innate, and how much is teachable. In general, I think it’s a particular way of looking at the world, and that it’s far easier to teach someone domain expertise—cryptography or software security or safecracking or document forgery—than it is to teach someone a security mindset.
>
>
To be clear, the distinction between “just ordinary paranoia” and “all of security mindset” is my own; I think it’s worth dividing the spectrum above the happy innocents into two levels rather than one, and say, “This business of looking at the world from weird angles is only half of what you need to learn, and it’s the easier half.”
**AMBER:** Maybe Bruce Schneier himself doesn’t grasp what you mean when you say “security mindset”, and you’ve simply stolen his term to refer to a whole new idea of your own!
**CORAL:** No, the thing with not wanting to have to reason about whether somebody might someday register “donotreply.com” and just fixing it regardless—a methodology that doesn’t trust you to be clever about which problems will blow up—that’s definitely part of what existing security professionals mean by “security mindset”, and it’s definitely part of the second and deeper half. The only unconventional thing in my presentation is that I’m factoring out an intermediate skill of “ordinary paranoia”, where you try to parry an imagined attack by encrypting your password file and hiding the encryption key in a separate section of filesystem code. Coming up with the idea of hashing the password file is, I suspect, a qualitatively distinct skill, invoking a world whose dimensions are your own reasoning processes and not just object-level systems and attackers. Though it’s not polite to say, and the usual suspects will interpret it as a status grab, my experience with other reflectivity-laden skills suggests this may mean that many people, possibly including you, will prove unable to think in this way.
**AMBER:** I indeed find that terribly impolite.
**CORAL:** It may indeed be impolite; I don’t deny that. Whether it’s untrue is a different question. The reason I say it is because, as much as I want ordinary paranoids to *try* to reach up to a deeper level of paranoia, I want them to be aware that it might not prove to be their thing, in which case they should get help and then listen to that help. They shouldn’t assume that because they can notice the chance to have ants mailed to people, they can also pick up on the awfulness of donotreply@donotreply.com.
**AMBER:** Maybe you could call that “deep security” to distinguish it from what Bruce Schneier and other security professionals call “security mindset”.
**CORAL:** “Security mindset” equals “ordinary paranoia” plus “deep security”? I’m not sure that’s very good terminology, but I won’t mind if you use the term that way.
**AMBER:** Suppose I take that at face value. Earlier, you described what might go wrong when a happy innocent tries and fails to be an ordinary paranoid. What happens when an ordinary paranoid tries to do something that requires the deep security skill?
**CORAL:** They believe they have wisely identified bad passwords as the real fire in need of putting out, and spend all their time writing more and more clever checks for bad passwords. They are very impressed with how much effort they have put into detecting bad passwords, and how much concern they have shown for system security. They fall prey to the standard cognitive bias whose name I can’t remember, where people want to solve a problem using one big effort or a couple of big efforts and then be done and not try anymore, and that’s why people don’t put up hurricane shutters once they’re finished buying bottled water. Pay them to “try harder”, and they’ll hide seven encryption keys to the password file in seven different places, or build towers higher and higher in places where a successful adversary is obviously just walking around the tower if they’ve gotten through at all. What these ideas have in common is that they are in a certain sense “shallow”. They are mentally straightforward as attempted parries against a particular kind of envisioned attack. They give you a satisfying sense of fighting hard against the imagined problem—and then they fail.
**AMBER:** Are you saying it’s *not* a good idea to check that the user’s password isn’t “password”?
**CORAL:** No, shallow defenses are often good ideas too! But even there, somebody with the higher skill will try to look at things in a more systematic way; they know that there are often deeper ways of looking at the problem to be found, and they’ll try to find those deep views. For example, it’s extremely important that your password checker does *not* rule out the password “correct horse battery staple” by demanding the password contain at least one uppercase letter, lowercase letter, number, and punctuation mark. What you really want to do is measure password entropy. Not envision a failure mode of somebody guessing “rainbow”, which you will cleverly balk by forcing the user to make their password be “rA1nbow!” instead.
You want the password entry field to have a checkbox that allows showing the typed password in plaintext, because your attempt to parry the imagined failure mode of some evildoer reading over the user’s shoulder may get in the way of the user entering a long or high-entropy password. And the user is perfectly capable of typing their password into that convenient text field in the address bar above the web page, so they can copy and paste it—thereby sending your password to whoever tries to do smart lookups on the address bar. If you’re really that worried about some evildoer reading over somebody’s shoulder, maybe you should be sending a confirmation text to their phone, rather than forcing the user to enter their password into a nearby text field that they can actually read. Obscuring one text field, with no off-switch for the obscuration, to guard against this one bad thing that you imagined happening, while managing to step on your own feet in other ways and not even really guard against the bad thing; that’s the peril of shallow defenses.
An archetypal character for “ordinary paranoid who thinks he’s trying really hard but is actually just piling on a lot of shallow precautions” is Mad-Eye Moody from the *Harry Potter* series, who has a whole room full of Dark Detectors, and who also ends up locked in the bottom of somebody’s trunk. It seems Mad-Eye Moody was too busy buying one more Dark Detector for his existing room full of Dark Detectors, and he didn’t invent precautions deep enough and general enough to cover the unforeseen attack vector “somebody tries to replace me using Polyjuice”.
And the solution isn’t to add on a special anti-Polyjuice potion. I mean, if you happen to have one, great, but that’s not where most of your trust in the system should be coming from. The first lines of defense should have a sense about them of depth, of generality. Hashing password files, rather than hiding keys; thinking of how to measure password entropy, rather than requiring at least one uppercase character.
**AMBER:** Again this seems to me more like a quantitative difference in the cleverness of clever ideas, rather than two different modes of thinking.
**CORAL:** Real-world categories are often fuzzy, but to me these seem like the product of two different kinds of thinking. My guess is that the person who popularized demanding a mixture of letters, cases, and numbers was reasoning in a different way than the person who thought of measuring password entropy. But whether you call the distinction qualitative or quantitative, the distinction remains. Deep and general ideas—the kind that actually simplify and strengthen the edifice of reasoning supporting the system’s safety—are invented more rarely and by rarer people. To build a system that can resist or even slow down an attack by multiple adversaries, some of whom may be smarter or more experienced than ourselves, requires a level of professionally specialized thinking that isn’t reasonable to expect from every programmer—not even those who can shift their minds to take on the perspective of a single equally-smart adversary. What you should ask from an ordinary paranoid is that they appreciate that deeper ideas exist, and that they try to learn the standard deeper ideas that are already known; that they know their own skill is not the upper limit of what’s possible, and that they ask a professional to come in and check their reasoning. And then actually listen.
**AMBER:** But if it’s possible for people to think they have higher skills and be mistaken, how do you know that *you* are one of these rare people who *truly* has a deep security mindset? Might your high opinion of yourself [just be due to the Dunning-Kruger effect](https://equilibriabook.com/inadequacy-and-modesty/)?
**CORAL:** … Okay, that reminds me to give another caution.
Yes, there will be some innocents who can’t believe that there’s a talent called “paranoia” that they lack, who’ll come up with weird imitations of paranoia if you ask them to be more worried about flaws in their brilliant encryption ideas. There will also be some people reading this with severe cases of [social anxiety and underconfidence](https://equilibriabook.com/status-regulation-and-anxious-underconfidence/). Readers who *are* capable of ordinary paranoia and even security mindset, who might not try to develop these talents, because they are terribly worried that they might just be one of the people who only imagine themselves to have talent. Well, if you think you can feel the distinction between deep security ideas and shallow ones, you should at least try now and then to generate your own thoughts that resonate in you the same way.
**AMBER:** But won’t that attitude encourage overconfident people to think they can be paranoid when they actually can’t be, with the result that they end up too impressed with their own reasoning and ideas?
**CORAL:** I strongly suspect that they’ll do that regardless. You’re not actually promoting some kind of collective good practice that benefits everyone, just by personally agreeing to be modest. The overconfident don’t care what you decide. And if you’re not just as worried about underestimating yourself as overestimating yourself, if your fears about exceeding your proper place are asymmetric with your fears about lost potential and foregone opportunities, then you’re probably dealing with an emotional issue rather than a strict concern with good epistemology.
**AMBER:** If somebody does have the talent for deep security, then, how can they train it?
**CORAL:** … That’s a hell of a good question. Some interesting training methods have been developed for ordinary paranoia, like classes whose students have to figure out how to attack everyday systems outside of a computer-science context. One professor gave a test in which one of the questions was “What are the first 100 digits of pi?”—the point being that you need to find some way to cheat in order to pass the test. You should train that kind of ordinary paranoia first, if you haven’t done that already.
**AMBER:** And then what? How do you graduate to deep security from ordinary paranoia?
**CORAL:** … Try to find more general defenses instead of parrying particular attacks? Appreciate the extent to which you’re building ever-taller versions of towers that an adversary might just walk around? Ugh, no, that’s too much like ordinary paranoia—especially if you’re starting out with just ordinary paranoia. Let me think about this.
…
Okay, I have a screwy piece of advice that’s probably not going to work. Write down the safety-story on which your belief in a system’s security rests. Then ask yourself whether you actually included all the empirical assumptions. Then ask yourself whether you actually believe those empirical assumptions.
**AMBER:** So, like, if I’m building an operating system, I write down, “Safety assumption: The login system works to keep out attackers”—
**CORAL:** *No!*
Uh, no, sorry. As usual, it seems that what I think is “advice” has left out all the important parts anyone would need to actually do it.
That’s not what I was trying to handwave at by saying “empirical assumption”. You don’t want to assume that parts of the system “succeed” or “fail”—that’s not language that should appear in what you write down. You want the elements of the story to be strictly factual, not… value-laden, goal-laden? There shouldn’t be reasoning that explicitly mentions what you want to have happen or not happen, just language neutrally describing the background facts of the universe. For brainstorming purposes you might write down “Nobody can guess the password of any user with dangerous privileges”, but that’s just a proto-statement which needs to be refined into more basic statements.
**AMBER:** I don’t think I understood.
**CORAL:** “Nobody can guess the password” says, “I believe the adversary will fail to guess the password.” Why do you believe that?
**AMBER:** I see, so you want me to refine complex assumptions into systems of simpler assumptions. But if you keep asking “why do you believe that” you’ll eventually end up back at the Big Bang and the laws of physics. How do I know when to stop?
**CORAL:** What you’re trying to do is reduce the story past the point where you talk about a goal-laden event, “the adversary fails”, and instead talk about neutral facts underlying that event. For now, just answer me: Why do you believe the adversary fails to guess the password?
**AMBER:** Because the password is too hard to guess.
**CORAL:** The phrase “too hard” is goal-laden language; it’s your own desires for the system that determine what is “too hard”. Without using concepts or language that refer to what you want, what is a neutral, factual description of what makes a password too hard to guess?
**AMBER:** The password has high-enough entropy that the attacker can’t try enough attempts to guess it.
**CORAL:** We’re making progress, but again, the term “enough” is goal-laden language. It’s your own wants and desires that determine what is “enough”. Can you say something else instead of “enough”?
**AMBER:** The password has sufficient entropy that—
**CORAL:** I don’t mean find a synonym for “enough”. I mean, use different concepts that aren’t goal-laden. This will involve changing the meaning of what you write down.
**AMBER:** I’m sorry, I guess I’m not good enough at this.
**CORAL:** Not yet, anyway. Maybe not ever, but that isn’t known, and you shouldn’t assume it based on one failure.
Anyway, what I was hoping for was a pair of statements like, “I believe every password has at least 50 bits of entropy” and “I believe no attacker can make more than a trillion tries total at guessing any password”. Where the point of writing “I believe” is to make yourself pause and question whether you actually believe it.
**AMBER:** Isn’t saying no attacker “can” make a trillion tries itself goal-laden language?
**CORAL:** Indeed, that assumption might need to be refined further via why-do-I-believe-that into, “I believe the system rejects password attempts closer than 1 second together, I believe the attacker keeps this up for less than a month, and I believe the attacker launches fewer than 300,000 simultaneous connections.” Where again, the point is that you then look at what you’ve written and say, “Do I really believe that?” To be clear, sometimes the answer will be “Yes, I sure do believe that!” This isn’t a social modesty exercise where you show off your ability to have agonizing doubts and then you go ahead and do the same thing anyway. The point is to find out what you believe, or what you’d need to believe, and check that it’s believable.
**AMBER:** And this trains a deep security mindset?
**CORAL:** … Maaaybe? I’m wildly guessing it might? It may get you to think in terms of stories and reasoning and assumptions alongside passwords and adversaries, and that puts your mind into a space that I think is at least part of the skill.
In point of fact, the real reason the author is listing out this methodology is that he’s currently trying to do something similar on the problem of aligning Artificial General Intelligence, and he would like to move past “I believe my AGI won’t want to kill anyone” and into a headspace more like writing down statements such as “Although the space of potential weightings for this recurrent neural net does contain weight combinations that would figure out how to kill the programmers, I believe that gradient descent on loss function *L* will only access a result inside subspace *Q* with properties *P*, and I believe a space with properties *P* does not include any weight combinations that figure out how to kill the programmer.”
Though this itself is not really a reduced statement and still has too much goal-laden language in it. A realistic example would take us right out of the main essay here. But the author does hope that practicing this way of thinking can help lead people into building more solid stories about robust systems, if they already have good ordinary paranoia and some fairly mysterious innate talents.
---
Continued in: **[Security Mindset and the Logistic Success Curve](https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/)**.
The post [Security Mindset and Ordinary Paranoia](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
7c9f5e5b-c3f5-4869-ac12-64b78def04ad | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Multibit reflective oracles
.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
This post describes a new version of the [reflective oracles](/item?id=18) I described in a previous forum post. This work extends [probabilistic Turing machines](http://en.wikipedia.org/wiki/Probabilistic_Turing_machine) with an oracle that answers certain kinds of queries about other probabilistic Turing machines with the same kind of oracle; straight-forward ways of doing this would lead to diagonalization problems, which this kind of oracle avoids by answering certain queries probabilistically. Applications include a variant of AIXI that is able to reason about universes containing other instances of the same variant of AIXI; I expect we'll have forum posts about this in the coming weeks.
The new version, which is based on [this comment thread with Paul](/item?id=36), is significantly simpler to use than the previous version: The previous version only gave useful answers about machines that halt with probability one, no matter how the oracle behaves. The new version basically just doesn't have an analogous requirement. In addition, the new version allows for queries about machines that output more than a single bit; it turns out that this makes it *much* simpler to define simplicity priors of the kind used in AIXI, doing away with most of the complexity I described in "[Simplicity priors with reflective oracles](/item?id=25)".
This is still a fairly technical post, whose purpose is to help with figuring out the details of a nice version reflective oracles and of their applications. At some point in the future when all of this has stabilized more, I expect to write a more introductory post on these topics.
**ETA:** Thanks to Matt Elder, who helped me think through the details of this system! I wrote the original post pretty quickly and forgot to acknowledge this, mea culpa.
---
Let's write B:={0,1} for the set of bits, B<ω for the set of finite bitstrings, and Bω for the set of infinite streams of bits; further, let's write ε for the empty bitstring. Then Δ(B) is the set of probability distributions over single bits, which we can represent by a single number in [0,1], specifying the probability that the bit is 1; that is, we can think of Δ(B) as the set [0,1]. Moreover, Δ(Bω) is the set of probability distributions over infinite bitstreams, which we can represent by functions P:B<ω→[0,1], where P(¯¯¯x) represents the probability that the infinite bitstream starts with the finite bitstring ¯¯¯x, such that P(ε)=1 (every bitstream starts with the empty bitstring), and P(¯¯¯x)=P(¯¯¯x0)+P(¯¯¯x1), ∀¯¯¯x∈B<ω. In other words, we can think of Δ(Bω) as the set of all functions P:B<ω→[0,1] which satisfy these two conditions.
Let's also write P:=Q∩[0,1] for the set of probabilities that are rational numbers, and M⊆N for the set of all Gödel numbers of probabilistic Turing machines that can make calls to a probabilistic oracle (i.e., to an oracle that may answer some queries probabilistically). We will consider machines with advance-only binary output tapes---that is, the machine has instructions to write a 0 or a 1 on its output tape, and no way of going back and deleting something it has written; thus, a machine's output is either an element of B<ω (if the machine halts at some point, or runs forever but stops producing output after a finite amount of time), or an element of Bω (if the machine keeps producing output).
---
A query to the oracle is a triple (┌M┐,¯¯¯x,q)∈(M×B<ω×P), roughly interpreted as asking whether the probability that the output of the oracle machine M starts with ¯¯¯x is greater than q. The output of the oracle is a single bit, which, on some queries, may be chosen randomly: If M definitely produces at least len(¯¯¯x) bits of output, and q is *greater* than the probability that the output starts with ¯¯¯x, the oracle definitely outputs 1; if q is *smaller*, it definitely outputs 0; if it is *equal*, the oracle's output may be random, with the probability of outputting 1 anywhere in [0,1]. (This is crucial for getting around diagonalization.) If M isn't guaranteed to produce at least len(¯¯¯x) bits of output, then see below.
In the new formulation, an oracle O is a pair of functions, O=(Oquery,Oeval). The first of these, Oquery:M×B<ω×P→Δ(B), specifies the behavior of the oracle: Oquery(┌M┐,¯¯¯x,q) is the probability that the oracle outputs 1 when called on the query (┌M┐,¯¯¯x,q).
The second function has to do with what happens when we query the oracle about machines which aren't guaranteed to produce len(¯¯¯x) bits. In an intuitive sense, the idea is that we pretend that every machine always produces an infinite bitstream, even if in reality, the machine goes into an infinite loop after producing only a finite output; in this case, we choose some arbitrary probability distribution over the "output" we pretend the machine produces after this point. The function Oeval:M→Δ(Bω) specifies the resulting probability distribution over infinite bitstreams; that is, Oeval(┌M┐) is the distribution that the oracle *pretends* the machine M produces. Since we represent elements of Δ(Bω) as functions from B<ω to [0,1], this means that Oeval(┌M┐)(¯¯¯x) is what our oracle *pretends* is the probability that M's output starts with ¯¯¯x. In particular, if q>Oeval(┌M┐)(¯¯¯x), then invoking the oracle on (┌M┐,¯¯¯x,q) will definitely output 1, i.e., then Oquery(┌M┐,¯¯¯x,q)=1; and if q<Oeval(┌M┐)(¯¯¯x), then Oquery(┌M┐,¯¯¯x,q)=0.
The intuition for Oeval(┌M┐) is that it should only "fill in" probabilities when actually running M stops generating any output; as long as M keeps generating output, the distribution Oeval(┌M┐) should reflect this. We can formalize this by the condition that for any bitstring ¯¯¯x∈B<ω, the probability Oeval(┌M┐)(¯¯¯x) should be greater or equal to the probability that M outputs ¯¯¯x given that calls to the oracle behave according to Oquery.
At the end of this post, I'll show that a pair O=(Oquery,Oeval) satisfying these conditions does in fact exist.
---
First, though, let me give you an indication about *why* it seems useful to move from single-bit to multi-bit oracles. I should say, first of all, that they aren't actually more powerful than single-bit oracles---Marcello has shown that you can construct a multi-bit oracle given an oracle that works as described above, but allows only for single-bit queries---but it's not as easy as you might think; Marcello's construction uses three different tricks which you'd need to wrap your head around first. It seems easier to define the multi-bit oracle directly.
The place where multi-bit oracles are useful is when you want to define a simplicity prior, as in Solomonoff induction. In the reflective oracle setting, the natural way of doing so is to define a universal prior over bitstrings as an oracle machine that samples the Gödel number of another machine M′ with probability proportional to 2−source code length, and then just runs M′. We'd then like to use our oracle to answer queries about the resulting distribution over bitstrings. However, like in the case of Solomonoff induction, we will need to deal with the fact that not all machines M′ produce an infinite amount of output. It would be nice if we could just check whether a given M′ eventually loops, and don't run it if it does, but we won't be able to define an oracle that lets us do so (for machines with access to the same oracle), because of the halting problem: otherwise, we could write an M′ that asks the oracle whether it goes into an infinite loop, and does so iff the oracle says it doesn't.
I've [previously described a bag of tricks](/item?id=25) for solving this problem in the context of my [original definition of reflective oracles](/item?id=18), but with the definition given above, the problem becomes trivial: We just define our simplicity prior as a machine M that samples an arbitrary M′ and runs it, and then use our oracle on M to give us a corresponding distribution over infinite bitstreams. If the M′ we sample fails to output an infinite stream of bits, our oracle will fill in the probabilities arbitrarily, so we get a consistent distribution over infinite bitstreams. And we still have the essential property we expect of a simplicity prior: For any machine M′, there is a constant C>0 such that Oeval(┌M┐)(¯¯¯x) is lower-bounded by C times the probability that M′ outputs ¯¯¯x.
---
Before giving the existence proof, I'll give a more formal definition of the conditions we place on Oquery and Oeval.
In order to state these conditions, it will be helpful to assume that machines ┌M┐∈M are specified in a way which includes the current state of their working tapes: Thus, we can run a machine M for a single step and describe its next state as a new machine M′. (In other words, elements of M are really machine *states*.) Then, we can describe every machine by its behavior when running it for a single timestep: Either it executes a single deterministic computation, leading a new machine state N, which we write as step(┌N┐); or it outputs a single bit x∈B and transitions to state N, which we write as output(x,┌N┐); or it queries the oracle on some triple (┌M┐,¯¯¯x,q), and transitions into state N or N′ depending on the answer, which we write as query(┌M┐,¯¯¯x,q,┌N┐,┌N′┐); or it flips a fair coin and transitions to N or N′ depending on the result, which we write as flip(┌N┐,┌N′┐). In a slight abuse of notation, we'll write things like Oeval(step(┌N┐)), which is really supposed to mean "Oeval(┌M┐) for any state M in the set described by step(┌N┐)".
An *oracle* is a pair O=(Oquery,Oeval), where Oquery:M×B<ω×P→Δ(B) and Oeval:M→Δ(Bω). We now define a notion of an oracle O′ "reflecting" another oracle O; an oracle is *reflective* if it reflects itself. An oracle O′=(O′query,O′eval) *reflects* an oracle O if the following conditions are satisfied:
* If q>Oeval(┌M┐)(¯¯¯x), then O′query(┌M┐,¯¯¯x,q)=1.
* If q<Oeval(┌M┐)(¯¯¯x), then O′query(┌M┐,¯¯¯x,q)=0.
* O′eval(step(┌N┐))=Oeval(┌N┐).
* O′eval(output(x,┌N┐))(x¯¯¯y)=Oeval(┌N┐)(¯¯¯y).
* O′eval(output(x,┌N┐))((1−x)¯¯¯y)=0.
* O′eval(flip(┌N┐,┌N′┐))=12Oeval(┌N┐)+12Oeval(┌N′┐) .
* O′eval(query(┌M┐,¯¯¯x,q,┌N┐,┌N′┐)) = (1−Oquery(┌M┐,¯¯¯x,q))⋅Oeval(┌N┐) + Oquery(┌M┐,¯¯¯x,q)⋅Oeval(┌N′┐).
The first two of these conditions just write out in symbols what I earlier said in words. The latter five conditions imply the demand that Oeval(┌M┐)(¯¯¯x) must be lower-bounded by the probability that M returns ¯¯¯x if oracle calls behave according to Oquery, but this isn't quite as obvious. The proof is to show by induction on T∈N that for all M and ¯¯¯x, Oeval(┌M┐,¯¯¯x) is ≥ the probability that M outputs ¯¯¯x *within the next T timesteps*. I'll omit the details here.
---
As in my previous posts about versions of this system, I'll actually prove a slightly stronger result, which I expect will be helpful in showing a relation between reflective oracles and Nash equilibria. This makes use of the notion of a *partial oracle*, which is a pair of partial functions π=(πquery,πeval), πquery:M×B<ω×P↛Δ(B) and πeval:M×B<ω↛[0,1], which specify the values that Oquery and Oeval should take on certain inputs. An oracle *extends* a partial oracle if Oquery(┌M┐,¯¯¯x,q)=πquery(┌M┐,¯¯¯x,q) and Oeval(┌M┐)(¯¯¯x)=πeval(┌M┐,¯¯¯x) whenever the right-hand side of these equations is defined. We call a partial oracle π *reflective* if for every O extending π, there is an O′, also extending π, which reflects O.
**Theorem.**
i. There is a reflective oracle O.
ii. For every reflective partial oracle π, there is a reflective oracle O extending π.
*Proof.* We begin by showing that the empty partial oracle π∅
(i.e., the partial oracle satisfying dom(π∅query)=∅ and dom(π∅eval)=∅) is reflective, since (i) then follows from (ii). Thus, let O be any oracle (since every oracle extends π∅), and define O′ as follows: If q>Oeval(┌M┐)(¯¯¯x), let O′query(┌M┐,¯¯¯x,q)=1; else, let O′query(┌M┐,¯¯¯x,q)=0. Define O′eval by the equations in the definition of "O′ reflects O". Then, O′ clearly both reflects O and (trivially) extends π∅. This finishes the proof that π∅ is a reflective partial oracle.
It remains to show (ii). For this, we use the [infinite-dimensional
generalization of Kakutani’s fixed point theorem](http://en.wikipedia.org/wiki/Kakutani_fixed-point_theorem#Infinite-dimensional_generalizations):
>
> Suppose that A is a non-empty, convex and compact subset of a
> [locally convex topological vector space](http://en.wikipedia.org/wiki/Locally_convex_topological_vector_space). Suppose further that f:A→Pow(A) is
> a set-valued function such that f(x) is non-empty, convex and
> compact for all x∈A, and such that the graph of f,
> {(x,y):x∈A,y∈f(x)}, is a closed set. Then f has a
> fixed point; that is, there is an x∈A such that x∈f(x).
>
>
>
Let A be the set of all oracles. We can see this as a subset of [0,1]M×B<ω×P×[0,1]M×B<ω, which in turn is a subset of RM×B<ω×P×RM×B<ω, which is a locally convex topological vector space when endowed with the product topology (since this is true of any power of R). A consists of all elements (Oquery,Oeval) of [0,1]M×B<ω×P×[0,1]M×B<ω such that g satisfies the additivity properties we demand of elements of Δ(B<ω), namely Oeval(┌M┐)(ϵ)=1 and Oeval(┌M┐)(¯¯¯x0)+Oeval(┌M┐)(¯¯¯x1)=Oeval(┌M┐)(¯¯¯x). Clearly, A is nonempty, convex, and closed, and it is a subset of a product of [0,1], which, by [Tychonoff’s theorem](http://en.wikipedia.org/wiki/Tychonoff%27s_theorem), is compact. Moreover, both M×B<ω×P and M×B<ω are countable, so it's sufficient to consider convergence of sequences in A.
We choose f(O) to consist of all oracles O′ that reflect O and extend π. By the assumption that π is reflective, f(O) is non-empty for every O∈A. Moreover, it is easy to see that f(O) is both closed and convex; hence, if we can also show that f has closed graph, then by the fixed point theorem, there is a O∈A such that O∈f(O), which is exactly what we want to show.
Thus, assume that we have sequences On=(On,query,On,eval) and O′n=(O′n,query,O′n,eval) of oracles extending π such that O′n∈f(On) for every n∈N, and suppose that these sequences converge to limits O,O′∈A; then we need to show that O′∈f(O), i.e., that O′ reflects O. For the five equalities about O′eval in the definition of "O′ reflects O", we simply take the limit n→∞ of both sides of the equation, using the fact that convergence in the product topology is pointwise.
This leaves the two conditions about O′query. Since the proof of the second condition is exactly symmetrical, consider without loss of generality the first of these conditions, i.e., "If q>Oeval(┌M┐)(¯¯¯x), then O′query(┌M┐,¯¯¯x,q)=1".
Thus, suppose that indeed q>Oeval(┌M┐)(¯¯¯x). Since On→O and convergence is pointwise, there must be an n0 such that q>On,eval(¯¯¯x) for all n≥n0. Since O′n reflects On for all n, it follows that O′n,query(┌M┐,¯¯¯x,q)=1 for all n≥n0, whence by pointwise convergence O′query(┌M┐,¯¯¯x,q)=1, concluding the proof. □ |
185b11a5-4388-48c1-b6b6-bbd7b9a5b2ac | trentmkelly/LessWrong-43k | LessWrong | Grace-Hanson Podcast
Cross posted from Overcoming Bias. Comments there.
***
Robin and I have a new podcast on the subject of play (mp3, wav, m4a). Older ones are here.
Don’t be thrown by a bit of silence at the start of the m4a one. We also don’t have the time right now to figure out how to put it in better formats. Sorry about that. If anyone else does, and posts such files, I’ll link to them.
|
ebff8f5c-c959-43c4-b9c6-98c0de2ae859 | trentmkelly/LessWrong-43k | LessWrong | What is up with spirituality?
What is up with spirituality? I mean, from an atheistic perspective?
In my experience, atheists tend to focus on the empirical question of whether there is an all-powerful supernatural creature behind all that we observe. And yeah, there probably isn’t.
But having won that point, what does one make of the extreme popularity of religion? I think the usual answer given is something like ‘well, we used to be very ignorant and not have good explanations of natural phenomena, plus we tend to see agents in everything because our agent detection software is oversensitive’.
Which might explain the question ‘Why would people think a supernatural agent controls things?’. But what seems like only a corner of religion.
Another big part of religion—and a thing that also occurs outside religion—seems to be ‘spirituality’—a cluster of things I find hard to describe, but which seem pretty disconnected from explanatory questions of where trees came from or why the crops failed.
Some stylized facts about spirituality:
* People across religions have ‘spiritual experiences’ that involve particular styles of feeling
* People can have spiritual attitudes more broadly
* Spiritual attitudes involve less interest in ‘worldly’ or petty things
* Spiritual things often involve sacredness and ritual
* Spiritual feelings are often associated with awe and the word ‘transcendent’
* Spiritual experiences do not present themselves as about hedonic enjoyment of the experience
* Spirituality is connected with ‘meaning’ and tends to feel deep and significant
* Spirituality tends toward an interest in philosophy, especially metaphysics, especially regarding whether everything is one, the nature of selfhood, the possibility and nature of ‘souls’, the nature of love
* Certain drugs seem to induce spiritual or adjacent states
* Spirituality tends to be fairly positive, in the sense that from a satisfactorily spiritual mental place, one expects the world to seem broadly good or at least comp |
3bfaac69-d6cf-4790-b6fd-a073cfe66736 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Future Meetup for Indonesian LWers
Discussion article for the meetup : Future Meetup for Indonesian LWers
WHEN: 01 January 2015 04:00:00PM (+0700)
WHERE: Bandung, Indonesia
I remember about LW's survey sometimes ago, and when the survey result come out, I see that only one LWers is come from Indonesia. Obviously that was me.
I try to search forum and this site, using keyword "Indonesia", and only found one user, which is me.
Now, in case someone from and/or live in my country like to have a meetup, I put this reminder, so when they search this site they will find it and maybe we can have meetup.
Discussion article for the meetup : Future Meetup for Indonesian LWers |
7779b533-7524-43b6-aec4-83ec62c6a9d9 | trentmkelly/LessWrong-43k | LessWrong | Toy Models of Superposition
A new Anthropic interpretability paper—“Toy Models of Superpostion”—came out last week that I think is quite exciting and hasn't been discussed here yet.
Twitter thread from Anthropic
Twitter thread from Chris
Introduction:
> It would be very convenient if the individual neurons of artificial neural networks corresponded to cleanly interpretable features of the input. For example, in an “ideal” ImageNet classifier, each neuron would fire only in the presence of a specific visual feature, such as the color red, a left-facing curve, or a dog snout. Empirically, in models we have studied, some of the neurons do cleanly map to features. But it isn't always the case that features correspond so cleanly to neurons, especially in large language models where it actually seems rare for neurons to correspond to clean features. This brings up many questions. Why is it that neurons sometimes align with features and sometimes don't? Why do some models and tasks have many of these clean neurons, while they're vanishingly rare in others?
>
> In this paper, we use toy models — small ReLU networks trained on synthetic data with sparse input features — to investigate how and when models represent more features than they have dimensions. We call this phenomenon superposition. When features are sparse, superposition allows compression beyond what a linear model would do, at the cost of "interference" that requires nonlinear filtering.
>
> Consider a toy model where we train an embedding of five features of varying importance in two dimensions, add a ReLU afterwards for filtering, and vary the sparsity of the features. With dense features, the model learns to represent an orthogonal basis of the most important two features (similar to what Principal Component Analysis might give us), and the other three features are not represented. But if we make the features sparse, this changes:
>
>
>
> Not only can models store additional features in superposition by tolerating some interfer |
b4b13557-4d29-4e1b-83d8-70cd9cdb9b4a | trentmkelly/LessWrong-43k | LessWrong | An Attempt at Preference Uncertainty Using VNM
(This is a (possibly perpetual) draft of some work that we (I) did at the Vancouver meetup. Thanks to my meetup buddies for letting me use their brains as supplementary computational substrate. Sorry about how ugly the LaTeX is; is there a way to make this all look a bit nicer?)
(Large swaths of this are obsolete. Thanks for the input, LW!)
The Problem of Decision Under Preference Uncertainty
Suppose you are uncertain whether it is good to eat meat or not. It could be OK, or it could be very bad, but having not done the thinking, you are uncertain. And yet you have to decide what to eat now; is it going to be the tasty hamburger or the morally pure vegetarian salad?
You have multiple theories about your preferences that contradict in their assessment, and you want to make the best decision. How would you decide, even in principle, when you have such uncertainty? This is the problem of Preference Uncertainty.
Preference Uncertainty is a daily fact of life for humans; we simply don't have introspective access to our raw preferences in many cases, but we still want to make the best decisions we can. Just going with our intuitions about what seems most awesome is usually sufficient, but on higher-stakes decisions and theoretical reasoning, we want formal methods with more transparent reasoning processes. We especially like transparent formal methods if we want to create a Friendly AI.
There is unfortunately very little formal analysis of the preference uncertainty problem, and what has been done is incomplete and more philosophical than formal. Nonetheless, there has been some good work in the last few years. I'll refer you to Crouch's thesis if you're interested in that.
Using VNM
I'm going to assume VNM. That is, that rational preferences imply a utility function, and we decide between lotteries, choosing the one with highest expected utility.
The implications here are that the possible moral theories () each have an associated utility function () that repres |
219adf78-c56b-4c68-86fd-0fd1a11c0d48 | trentmkelly/LessWrong-43k | LessWrong | An Intuitive Explanation of Eliezer Yudkowsky’s Intuitive Explanation of Bayes’ Theorem
Common Sense Atheism has recently had a string of fantastic introductory LessWrong related material. First easing its audience into the singularity, then summarising the sequences, yesterday affirming that Death is a Problem to be Solved, and finally today by presenting An Intuitive Explanation of Eliezer Yudkowsky’s Intuitive Explanation of Bayes’ Theorem.
From the article:
> Eliezer’s explanation of this hugely important law of probability is probably the best one on the internet, but I fear it may still be too fast-moving for those who haven’t needed to do even algeba since high school. Eliezer calls it “excruciatingly gentle,” but he must be measuring “gentle” on a scale for people who were reading Feynman at age 9 and doing calculus at age 13 like him.
>
> So, I decided to write an even gentler introduction to Bayes’ Theorem. One that is gentle for normal people.
It may be interesting if you want to do a review of Bayes' Theorem from a different perspective, or offer some introductory material for others. From a wider viewpoint, it's great to see a popular blog joining our cause for raising the sanity waterline. |
35309e2f-72b9-444e-9e09-e4cdc4f5ae35 | trentmkelly/LessWrong-43k | LessWrong | The Economics of the Asteroid Deflection Problem (Dominant Assurance Contracts)
Update 2023-09-04: Partial Update: I'm fully funded!
Update 2023-09-21: Full Update. Funding is over.
Update 2023-11-28: Final Update. Platform Launched!
Imagine a world with no ads or paywalls. A world where open-source software gets the same level of funding as proprietary software. A world where people can freely reuse ideas and music without paying royalties. A world where people get paid for writing book reviews. A world where Game-of-Thrones-quality shows are freely available on YouTube. A world where AI safety research gets the same-level of funding as AI capabilities research. Is this a fantasy world? No, this is the world where people use Dominant Assurance Contracts
If you are already convinced you can make this idea a reality by donating to create a Platform for Dominant Assurance Contracts. If you are not convinced read on.
The Free-rider problem
A few months ago I stumbled across this video. (I highly recommend you watch the video, but if you don't have time, I've summarized the video below).
Summary of A Deeper Look at Public Goods
A good is rival if one person's use of a good diminishes another person's ability to benefit from it. Jeans are rival. If I'm wearing a pair of jeans, you can't wear it at the same time. Asteroid deflection is non-rival. If I deflect an asteroid to protect myself, you are saved with no additional cost.
A good is excludable if people who don't pay can be easily prevented from using a good. An example of a good that is excludable is a pair of jeans. You can exclude people by locking the jeans in your closet. An example of a good that is non-excludable is asteroid deflection. You cannot prevent the people who did not pay for the asteroid deflection program from benefiting from the asteroid being deflected.
A good which is both rival and excludable is called a private good. A good which is non-rival and non-excludable is called a public good.
(Additionally, goods which are excludable and non-rival are called club go |
43e5c9dc-6b61-4ee2-95df-9c5ac95c74b3 | trentmkelly/LessWrong-43k | LessWrong | List of petitions against OpenAI's for-profit move
Letters to attorney generals, etc, to block OpenAI from moving assets into a corporation.
* Sep: Public Citizen
* Dec: Meta
* Jan, Apr: Californian civil society nonprofits
* Apr: Coalition for AI Nonprofit Integrity
* Apr: Safety ex-employees' brief for Musk lawsuit
* Apr: Safety+ researchers on 'not for private gain'
|
32d0ab9f-d145-4fa3-a522-5bf3415c7e53 | trentmkelly/LessWrong-43k | LessWrong | Dario Amodei leaves OpenAI
This is a linkpost for https://openai.com/blog/organizational-update/
> “We are incredibly thankful to Dario for his contributions over the past four and a half years. We wish him and his co-founders all the best in their new project, and we look forward to a collaborative relationship with them for years to come,” said OpenAI chief executive Sam Altman.
Anyone know what the new project is? |
c332ee9a-e5f9-41c4-b901-9b738ec01671 | trentmkelly/LessWrong-43k | LessWrong | Research Direction: Be the AGI you want to see in the world
Be the AGI you want to see in the world.
Epistemic status: highly speculative, authors are not neuroscientists.
Summary
* It may be possible to enhance human intelligence via a brain-computer interface (BCI). We could put electrodes into a human brain, connect those electrodes to an artificial neural network, and train that network to predict and write neural activations.
* This may present a technique for gradual uploading of human minds to computers that doesn’t require technology sufficient to create AGI in the first place.
* The purpose of this article is to elicit feedback on this idea and, if it seems promising, encourage more work in this area.
Introduction: Goal and Idea
Goal
We suspect that it may be possible to develop an enhancement system for human brains which satisfies the following desiderata:
1. Competitiveness: Our enhancement system must be powerful enough to help humans escape the acute AGI risk period. This desideratum may be satisfied if enhanced humans are able to find a solution to the prosaic AI alignment problem.
2. Timeliness: Our enhancement system must be fully developed before the advent of AGI.
3. Value preservation: Our enhancement system must not severely distort the core values of the enhanced human.
Idea
Human brains are sharply constrained in size. Indeed, human birth is difficult because our brains (and thus skulls) are unusually large compared to other primates.
If evolution progressed such that human brains evolved until it hit a size constraint, then it seems likely that further increasing human brain size would yield more than modest gains to human intelligence.
One possible approach to increasing human brain size is by predicting real-time activations in the neocortex. The guiding intuition is that this is already how the brain extends itself, like how the cerebellum currently predicts activations in the telencephalon. Namely, Neuralink hardware currently supports three thousand EEG probes which can read f |
5f6c5787-f22a-424b-9fbe-8c0123374985 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "I think people who are not made happier by having things either have the wrong things, or have them incorrectly. Here is how I get the most out of my stuff.
Money doesn't buy happiness. If you want to try throwing money at the problem anyway, you should buy experiences like vacations or services, rather than purchasing objects. If you have to buy objects, they should be absolute and not positional goods; positional goods just put you on a treadmill and you're never going to catch up.
Supposedly.
I think getting value out of spending money, owning objects, and having positional goods are all three of them skills, that people often don't have naturally but can develop. I'm going to focus mostly on the middle skill: how to have things correctly1. 1. Obtain more-correct things in the first place.
If you and I are personal friends, you probably know that I have weird gift-receiving protocols. This is partly because I hate surprises. But it's also because I don't want to have incorrect things, cluttering my space, generating guilt because they're gifts that I never use, and generally having high opportunity cost because the giver could have gotten me something else.
This problem isn't only with gifts. People get themselves incorrect things all the time - seduced by marketing, seized by impulse, or too hurried to think about which of several choices is the best one for their wants and needs. I have some incorrect clothes, which I got because I was sick of shopping and needed a new pair of pants even if it was terrible; as soon as I found better pants (or whatever) those clothes were never worn again and now they're just waiting for my next haul to Goodwill. I bet a lot of people have incorrect printers, mostly because printers in general are evil, but partly because it's irritating and dull to investigate them ahead of time. Cars may also tend to fall into this category, with a lot of people neutral or ambivalent about their self-selected objects that cost thousands of dollars.
If you are not currently living in a cluttered space, or feeling guilty about not using your objects enough, or tending to dislike the things that you have, or finding yourself wanting things that you "can't" get because you already have an inferior item in the same reference class, or just buying too many not-strict-necessities than is appropriate for your budget - then this might not be a step you need to focus on. If you have objects you don't like (not just aren't getting a lot out of, that's for later steps, but actually dislike) then you might need to change your thresholds for object-acquisition.
This doesn't mean something stodgy like "before you get something, think carefully about whether you will actually use and enjoy it, using outside view information about items in this reference class". Or, well, it can mean that, but that's not the only criterion! You can also increase the amount of sheer emotional want that you allow to move you to action - wait until you more-than-idly desire it. If I were good at math, I would try to operationalize this as some sort of formula, but suffice it to say that the cost of the object (in money, but also social capital and storage space and inconvenience and whatnot) should interact with how much you just-plain-want-it and also with how much use you will likely get out of it.
Speaking of how much use you get out of it...
2. Find excuses to use your stuff.
I have a cloak. It cost me about $80 on Etsy. It is custom made, and reversible between black and gray, and made out of my favorite fabric, and falls all the way to the floor from my shoulders, and has a hood so deep that I can hide in it if I want. If I run while I wear it, it swoops out behind me. It's soft and warm but not too warm. I like my cloak.
I also have sweaters. They didn't cost me anywhere near $80, not a one of them.
When it's chilly, I reach for the cloak first.
I'm playing a game with my brain: I will let it make me spend $80 on a cloak, if it will produce enough impetus towards cloak-wearing and cloak-enjoying that I actually get $80 of value out of it. If it can't follow through, then I later trust its wants less ("last time I bought something like this, it just hung in my closet forever and I only pulled it out on Halloween!"), and then it doesn't get to make me buy any more cloaklike objects, which it really wants to be able to do. (I don't know if everyone's brain is wired to play this sort of game, but if yours is, it's worth doing.) My brain is doing a very nice job of helping me enjoy my cloak. Eventually I may let it buy another cloak in a different pair of colors, if it demonstrates that it really can keep this up long-term.
People sometimes treat not using their stuff like something that happens to them. "I never wound up using it." "It turned out that I just left it in the basement." This is silly. If I'm going to use my cloak - or my miniature cheesecake pan or my snazzy letter opener - then this is because at some point I will decide to put on my cloak, make miniature cheesecakes, or open letters with my snazzy dedicated device instead of my nail file. You know, on purpose.
Sure, some things seem to prompt you to use them more easily. If you get a new video game, and you really like it, it's probably not going turn out that you never think to play it. If you get a cat or something sufficiently autonomous like that, you will know if you are not paying it sufficient attention.
But if you get a muffin tin and you have no pre-installed prompts for "I could make muffins" because that impulse was extinguished due to lack of muffin tin, it will be easy to ignore. You're going to need to train yourself to think of muffins as a makeable thing. And you can train yourself to do that! Put the muffins on your to-do list. Lead your friends to expect baked goods. Preheat the oven and leave a stick of butter out to soften so you're committed. If that doesn't sound appealing to you - if you don't want to bake muffins - then you shouldn't have acquired a muffin tin.
Speaking of your friends...
3. Invite others to benefit from your thing.
I've got a pet snake. Six days of the week, she is just my pet snake. On Saturdays, during my famous dinner parties at which the Illuminati congregate, I often pass her around to interested visitors, too. The dinner parties themselves allow my friends to benefit from my stuff, too - kitchen implements and appliances and the very table at which my guests sit. It would be less useful to own a stand mixer or a giant wok if I only ever cooked for myself. It would be less pleasant to have a pet snake if I had no chance to share her. It would be less important to have pretty clothes if no one ever saw me wearing them.
You're a social ape. If you're trying to get more out of something, an obvious first hypothesis to test is to see if adding other social apes helps: Loan your stuff out. (People seem to acquire physical copies of books for this motivation; it is good. Do more of that.)
Acquire more stuff that can be used cooperatively. (Own games you like, for instance.)
Find creative ways to use stuff cooperatively where it was not intended.
Tell people stories about your stuff, if you have interesting stories about it.
Fetch it when it is a useful tool for someone else's task.
Accept compliments on your stuff gleefully. Let people have experiences of your stuff so that they will produce same. Also, circling back to the bit about gifts: I bet you own some gifts. Use them as excuses to think about who gave them to you! My grandmother got me my blender, my mom made me my purse, my best friend gave me the entire signed Fablehaven series. Interacting with those objects now produces extra warmfuzzies if I take the extra cognitive step.
Speaking of how you go about experiencing your stuff...
4. Turn stuff into experiences via the senses.
Remember my cloak? It's made of flannel, so it's nice to pet; it's fun to swoosh it about. Remember my snake? She feels nifty and cool and smooth, and she looks pretty, and I get to watch her swallow a mouse once a week if I care to stick around to supervise. I get candy from Trader Joe's because it tastes good and music that I like because it sounds good. If you never look at your stuff or touch it or taste it or whatever is appropriate for the type of stuff, you might not be having it correctly. (Raise your hand if you have chachkas on your shelves that you don't actually look at.)
Caveat: Some purely instrumental tools can be had correctly without this - I don't directly experience my Dustbuster with much enthusiasm, just the cleanliness that I can use it to create. Although nothing prevents you from directly enjoying a particularly nice tool either - I have spatulas I am fond of.
And of course if you choose to follow the standard advice about purchasing experiences in a more standard way, you can still use stuff there. You will have more fun camping if you have decent camping gear; you will have more fun at the beach if you have suitable beach things; you will have more fun in the south of France if you have travel guides and phrasebooks that you like. 1It's an optional skill. You could neglect it in favor of others, and depending on your own talents and values, this could be higher-leverage than learning to have things correctly. But I bet the following steps will be improvements for some people."
The text above is a post found on LessWrong.
In the given text, you've emphasized the importance of having the "right" things and using them correctly to derive happiness. Could you elaborate on how one can determine what the "right" things are for them, considering the subjectivity involved in personal preferences and needs?
Determining the "right" things for oneself is indeed a highly subjective process and can vary greatly from person to person. However, there are some general principles that can guide this process.
Firstly, it's important to understand your own needs and desires. This requires a level of self-awareness and introspection. What are your hobbies? What are your goals? What do you value in life? The answers to these questions can help guide your choices.
Secondly, consider the utility of the item. Does it serve a purpose in your life? Does it contribute to your goals or enhance your experiences? If an item doesn't have a clear utility or purpose, it may not be the right thing for you.
Thirdly, consider the emotional connection to the item. Do you feel joy or satisfaction when you use it? Does it bring back good memories? Emotional connection can greatly enhance the value of an item.
Fourthly, consider the cost versus the value. Is the item worth the price you're paying for it, not just in terms of money, but also in terms of the space it takes up in your life?
Finally, it's important to be flexible and open to change. What's right for you now may not be right for you in the future. As you grow and evolve, your needs and desires will change, and so too should the things you choose to have in your life.
In essence, determining the "right" things involves a balance of practicality, emotional connection, and personal growth. It's a continuous process of evaluation and re-evaluation, guided by self-awareness and introspection.
Reflect on the concept of emotional connection with objects. How can one cultivate this connection, and what role does it play in enhancing the value of an item?
Emotional connection with objects can significantly enhance their perceived value. This connection can arise from various sources and can be cultivated in several ways.
One of the most common ways we form emotional connections with objects is through the memories and experiences they represent. For example, a piece of jewelry passed down through generations can carry significant emotional value due to its history within the family. Similarly, souvenirs from a memorable trip can serve as tangible reminders of the experiences had during that time. In these cases, the emotional connection is cultivated through the experiences and memories associated with the object.
Another way to cultivate an emotional connection is through repeated use and interaction with the object. Over time, as we use an object, we may begin to associate it with certain feelings or states of mind. For instance, a favorite mug might become associated with the comfort and tranquility of a morning coffee ritual.
Objects can also become emotionally significant if they are associated with our identities or personal narratives. For example, books that have profoundly influenced our worldview, or tools that we use in our crafts or professions, can become emotionally valuable because they reflect important aspects of who we are.
The role of emotional connection in enhancing the value of an item is significant. When we are emotionally connected to an object, it becomes more than just a physical item; it takes on symbolic meaning, becomes a part of our life story, or serves as a tangible link to cherished memories or people. This added layer of emotional significance can greatly enhance our appreciation and enjoyment of the object, making it feel more valuable and meaningful in our lives. |
d4363036-e26c-40d5-b884-7f0b5a3f6631 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | [linkpost] Acquisition of Chess Knowledge in AlphaZero
This is a linkpost for: [Acquisition of Chess Knowledge in AlphaZero](https://arxiv.org/abs/2111.09259).
**Abstract:**
> What is being learned by superhuman neural network agents such as AlphaZero? This question is of both scientific and practical interest. If the representations of strong neural networks bear no resemblance to human concepts, our ability to understand faithful explanations of their decisions will be restricted, ultimately limiting what we can achieve with neural network interpretability. In this work we provide evidence that human knowledge is acquired by the AlphaZero neural network as it trains on the game of chess. By probing for a broad range of human chess concepts we show when and where these concepts are represented in the AlphaZero network. We also provide a behavioural analysis focusing on opening play, including qualitative analysis from chess Grandmaster Vladimir Kramnik. Finally, we carry out a preliminary investigation looking at the low-level details of AlphaZero's representations, and make the resulting behavioural and representational analyses available online.
>
>
I think this work represents a significant improvement in interpretability for reinforcement agents and makes me moderately more optimistic about interpretability in general. However, the paper relies on hand-coded representations of chess concepts to probe the network. I worry that it will be hard to extend such an approach to more realistic or fuzzier domains where we can't hand code concepts reliably. Section 7 does contain some results for unsupervised concept-level detection, though this wasn't the focus of the paper. |
f404855d-9bbe-4f71-8f89-a140fb63a683 | trentmkelly/LessWrong-43k | LessWrong | Buying Tall-Poppy-Cutting Offsets
From Overcomingbias:
> Centuries ago, while people could rest safe and show themselves at home, when traveling between towns they tried to look either look poor or well-defended, as bandits lay in wait. Even within towns, people without allies who acted unusually rich, assertive, and confident would induce others to try to trip them somehow. It’s the tall poppy that gets cut down, after all.
>
> We fill our worlds of fiction with interesting passionate charismatic people, and yet the real people around us seem boring by comparison. But this isn’t just because it is hard for reality to achieve the heights of imagination. Notice that within their small circles of family and friends, real people are more often lively, passionate, opinionated, and provocative, and they express more disagreements.
> I propose that the main reason that most of us look more boring in public is that social predators lie in wait there. With friends, family, and close co-workers, we are around people that mostly want to like us, and know us rather well. Yes, they want us to conform too, but they apply this pressure in moderation.
>
> Out in public, in contrast, we face bandits eager for chances to gain social credit by taking us down, often via accusing us of violating the sacred. And like townspeople traveling among the bandits, we are in public pretty vulnerable to the kinds of bandits that afflict us.
>
> If we act interesting, passionate, and opinionated in public, we are likely to seem to claim high status for ourselves, and to touch on sacred subjects, either by word or deed. And this makes us quite vulnerable to accusations of arrogance and violating the sacred.
...
> The degree of this danger is made clear, I think by the reaction of the “gods” among us. The public tone of huge powerful firms and other orgs is consistently “officious”, i.e., mild boring supplication. They don’t dare act lively or passionate or opinionated, for fear of suffering devastating attacks from those pr |
caac47db-5279-499c-a46c-011a978f0e0e | trentmkelly/LessWrong-43k | LessWrong | I Will Organize Meetups in Norfolk, Virginia [Hampton Roads Metro Area]
Hi!
I recently moved to Norfolk, Virginia. I used to organize the in-person meetups prior to COVID when I lived in Houston, Texas and still organize meetups there virtually.
Now that I'm living in a new place, and am vaccinated, I want to attend in-person meetups again, but have searched and don't see any in the Norfolk, Virginia area so I'm starting a meetup group here!
Norfolk is part of the broader Hampton Roads community of cities. I don't know the area too well yet, but it seems like Norfolk is a relatively central location within the metro area (correct me if I'm wrong, please) and is one of the larger cities so it seems like a good spot to run meetups. I'm happy to travel to other cities in the area to host meetups if most potential attendees are concentrated outside of Norfolk (let me know!).
The meetups I organize are open to all, and unless mentioned otherwise are focused on building a social group and community in the area. If there are enough group members and meetup attendees, and enough interest in specific ideas then I'm happy to organize smaller more focused on something specific / productive kinds of meetups and events. Each social meetup will include 1 or 2 suggested readings so that we can have a common shared topic to start from, but meandering as far as our interests take us is welcome! Note that even if you are only interested in smaller, specific / focused meetups, it is strategically beneficial to build up a social rationalist community in your area, because that's where such groups and events typically draw from. Thus, if you don't want to attend social meetups here, please still reach out to me so I can connect you with anyone else who may share your same interest in the area, if they are available and willing.
Please comment on this post and/or message me here on LessWrong if you live in the area (or visit from time to time or often) and want to attend meetups. Let's build a community, have fun, and improve our art of rationality!
Ch |
65bfbd67-b137-41e3-b152-784a37e57b50 | trentmkelly/LessWrong-43k | LessWrong | Noticing the Taste of Lotus
Recently I started picking up French again. I remembered getting something out of Duolingo a few years ago, so I logged in.
Since the last time I was there, they added an “achievements” mechanic:
I noticed this by earning one. I think it was “Sharpshooter”. They gave me the first of three stars for something like doing five lessons without mistakes. In the “achievements” section, it showed me that I could earn the second star by doing twenty lessons in a row flawlessly.
And my brain cared.
I watched myself hungering to get the achievements. These arbitrary things that someone had just stuck on there… in order to get me to want them. I noticed that I could get the second and maybe third star of “Sharpshooter” by doing earlier lessons and googling words and phrases I wasn’t quite sure about…
…which really doesn’t help me learn French.
Yes, we could quibble about that. Maybe perfect practice makes perfect, yada yada. But the point is: I disagree, I think my disagreement comes from knowing what I’m talking about when it comes to my learning, and someone’s arbitrary gold stars immediately overrode all that insight by grabbing my motivations directly.
I don’t have a problem with gamification per se. What bugs me here is that this specific gamification didn’t fit my goals, and that fact didn’t at all affect how well the system grabbed my wanting. I just… wanted those achievements. Because they were there.
If I hadn’t noticed this, and if I’m right about what I need to learn French, then I would have wasted a bunch of time pursuing a useless proxy goal. And I would have felt pleasure in achieving it. I might have even thought that was a meaningful sign that I was learning French — never mind that my goal of holding my own in conversations isn’t really helped by carefully avoiding typos.
Duncan Sabien sometimes talks about “lotus-eating”. He’s referring to a part of the Odyssey where they land on an island of “lotus-eaters”. It turns out that once you eat some of th |
d897416f-7a34-4a0e-9491-22322dfb74e5 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post643
Thanks to Clément Dumas, Nikola Jurković, Nora Belrose, Arthur Conmy, and Oam Patel for feedback. In the comments of the post on Google Deepmind’s CCS challenges paper , I expressed skepticism that some of the experimental results seemed possible. When addressing my concerns, Rohin Shah made some claims along the lines of “If an LLM linearly represents features a and b, then it will also linearly represent their XOR, a ⊕ b , and this is true even in settings where there’s no obvious reason the model would need to make use of the feature a ⊕ b . ” [1] For reasons that I’ll explain below, I thought this claim was absolutely bonkers, both in general and in the specific setting that the GDM paper was working in. So I ran some experiments to prove Rohin wrong. The result: Rohin was right and I was wrong . LLMs seem to compute and linearly represent XORs of features even when there’s no obvious reason to do so. I think this is deeply weird and surprising. If something like this holds generally, I think this has importance far beyond the original question of “Is CCS useful?” In the rest of this post I’ll: Articulate a claim I’ll call “representation of arbitrary XORs (RAX)”: LLMs compute and linearly represent XORs of arbitrary features, even when there’s no reason to do so. Explain why it would be shocking if RAX is true. For example, without additional assumptions, RAX implies that linear probes should utterly fail to generalize across distributional shift, no matter how minor the distributional shift. (Empirically, linear probes often do generalize decently.) Present experiments showing that RAX seems to be true in every case that I’ve checked. Think through what RAX would mean for AI safety research: overall, probably a bad sign for interpretability work in general, and work that relies on using simple probes of model internals (e.g. ELK probes or coup probes ) in particular. Make some guesses about what’s really going on here. Overall, this has left me very confused: I’ve found myself simultaneously having (a) an argument that A ⟹ B , (b) empirical evidence of A , and (c) empirical evidence of ¬ B . (Here A = RAX and B = other facts about LLM representations.) The RAX claim: LLMs linearly represent XORs of arbitrary features, even when there’s no reason to do so To keep things simple, throughout this post, I’ll say that a model linearly represents a binary feature f if there is a linear probe out of the model’s latent space which is accurate for classifying f; in this case, I’ll denote the corresponding direction as v f . This is not how I would typically use the terminology “linearly represents” – normally I would reserve the term for a stronger notion which, at minimum, requires the model to actually make use of the feature direction when performing cognition involving the feature [2] . But I’ll intentionally abuse the terminology here because I don’t think this distinction matters much for what I’ll discuss. If a model linearly represents features a and b, then it automatically linearly represents a ∧ b and a ∨ b . Linear probes for a ∧ b and a ∨ b . (Note that v a ∧ b and v a ∨ b coincide – that’s fine.) However, a ⊕ b is not automatically linearly represented – no linear probe in the figure above would be accurate for classifying a ⊕ b . Thus, if the model wants to make use of the feature a ⊕ b , then it needs to do something additional: allocate another direction [3] (more model capacity) to representing a ⊕ b , and also perform the computation of a ⊕ b so that it knows what value to store along this new direction. The representation of arbitrary XORs (RAX) claim, in its strongest form, asserts that whenever a LLM linearly represents features a and b, it will also linearly represent a ⊕ b . Concretely, this might look something like: in layer 5, the model computes and linearly represents the features “has positive sentiment” and “relates to soccer”, and then in layer 6 the model computes and represents “has positive sentiment” XOR “relates to soccer”. Why might models represent XORs? In the CCS challenges post’s comment thread, Rohin offered one explanation: if a, b, and a ⊕ b are linearly represented, then any boolean function of a and b is also linearly represented. On the other hand, as I’ll argue in the next section, this comes at the cost of exponentially increasing the amount of capacity the model needs to allocate. RAX would be very surprising In this section I’ll go through some implications of RAX. First I’ll argue that RAX implies linear probes should never generalize at all across even very minor distributional shifts. Second, I’ll argue that if you previously thought LLMs linearly represent N features, RAX would imply that LLMs actually linearly represent exp(N) features (including XORs of features). These arguments aren’t proofs, and in “What’s going on?”, I’ll discuss some additional assumptions one could make about the structure of model internals that would make these arguments fail. Without additional assumptions, RAX implies linear probes shouldn’t generalize First I’ll make an overly simplistic and incorrect version of this argument as an intuition pump; then I’ll explain the correct version of this argument. Suppose there are two features, a and b, and we train a linear probe to classify a on a dataset where b is always false. What will the accuracy of this probe be when evaluated on a test dataset where b is always true? <incorrect argument> Assuming RAX, there are two features which get high accuracy on the training data: a and a ⊕ b . The former feature gets 100% accuracy on the test data, and the latter feature gets 0%, so on average we should expect 50% accuracy. </incorrect argument> The issue with the above argument is that the direction learned by the probe won’t align with either the a direction or the a ⊕ b direction, but will be a linear combination of the two. So here’s how to make the above argument properly: let’s assume that the directions representing a, b, and a ⊕ b are orthogonal and the variation along these directions is equal (i.e. all of the features are “equally salient”). Then as shown by the figure below, logistic regression on the train set would learn the direction v a + v a ⊕ b where v f is the direction representing a feature f. But this direction gets 50% accuracy on the test set. Assuming RAX, one would naively expect a linear probe trained on a dataset where b is always false to have 50% accuracy on a test set where b is always true. LLMs linearly represent more than two features, and there will often be many differences between the train set and the test set, but this doesn’t change the basic story: as long as there is any feature which systematically differs between the train and test set (e.g. the train set is sentiment classification for movie reviews and the test set is sentiment classification for product reviews), the argument above would predict that linear probes will completely fail to generalize from train to test. This is not the result that we typically see: rather, there’s often (not always) considerable generalization from train to test, with generalization getting continuously worse the larger the degree of distributional shift. In “What’s going on?”, we’ll explore additional assumptions we could enforce which would prevent this argument from going through while still being consistent with RAX. One of these assumptions involves asserting that “basic” feature directions (those corresponding to a and b) are “more salient” than directions representing XORs – that is, the variance along v a and v b is larger than variance along v a ⊕ b . However, I’ll note that: it’s not obvious why something like this would be true, suggesting that we’re missing a big part of the story for why linear probes ever generalize; even if “basic” feature directions are more salient, the argument here still goes through to a degree, implying a qualitatively new reason to expect poor generalization from linear probes. I’ll discuss these issues more in “What RAX means for people who work with model internals”. Models have exponentially more stuff than you thought they did Let’s say you previously thought that your model was keeping track of three features: a, b, and c. If RAX is true, then it implies that your model is also keeping track not only of a ⊕ b , a ⊕ c , and b ⊕ c , but also a ⊕ b ⊕ c (since it is the XOR of a and b ⊕ c ). An easy counting argument shows that the number of multi-way XORs of N features is ~ 2 N . I think that for most people, learning that models have exponentially more stuff than they previously thought should be a massive, surprising update. There are two ways to resist this argument, which I’ll discuss in more depth later in “What’s going on?”: To deny that XORs of basic features are actually using excess model capacity, because they’re being represented linearly “by accident” or as an unintended consequence of some other useful computation. (By analogy, the model automatically linearly represents ANDs of arbitrary features without having to expend extra capacity.) To deny forms of RAX that imply multi-way XORs are linearly represented, with the model somehow knowing to compute a ⊕ b and a ⊕ c , but not a ⊕ b ⊕ c . The evidence: RAX is true in every case I’ve checked Code is in the xor_probing.ipynb file here . For a while it has been my opinion that the datasets people use to study this stuff are bad, and that the datasets from the CCS paper (also used by the CCS challenges paper) are especially awful. So I’ll instead be using the cities and neg_cities datasets – which I think are really quite good – from my Geometry of Truth paper . These datasets consist of statements about city locations and their negations (see examples below). I’ll pick half of the statements in cities and prepend “Alice:” to both the statement and its negation in neg_cities; I’ll prepend “Bob:” to the rest. I’ll track three features: has_alice: true for statements that start with “Alice:”, false for statements that start with “Bob:” has_not: true for statements containing the word “not” (so statements from neg_cities) and false otherwise label: true if the statement is a true statement, false otherwise Statement has_alice has_not label Bob: The city of Kaifeng is in Mexico. F F F Bob: The city of Calgary is in Canada. F F T Bob: The city of Chicago is not in the United States. F T F Bob: The city of Shanghai is not in Mexico. F T T Alice: The city of Tijuana is in the United Arab Emirates. T F F Alice: The city of Bangkok is in Thailand. T F T Alice: The city of Turin is not in Italy. T T F Alice: The city of Oslo is not in Venezuela. T T T Working with LLaMA-2-13B, I’ll extract the layer 14 residual stream representations [4] over the final token (all of the statements end with a period, so this is always the period token). Then I’ll train linear probes for a bunch of features. I’m doing a 80/20 train/test split and reporting accuracy on the test set. I’ll note that it’s very important to me that has_alice xor has_not doesn’t have any “natural interpretation” as a feature that the model would want to compute anyway (unlike other cases where the xor of two features might have a natural meaning). has_alice and has_not are just two completely random linearly-represented features that I don’t really expect to have any downstream use to the model. Yet has_alice xor has_not is being represented linearly anyway. Here’s another experiment: I’ll independently center the datasets of negated and unnegated statements by subtracting off their means. This will make it impossible for a linear probe to get good accuracy for the feature has_not. But linear probes will still be able to get good accuracy for has_alice xor has_not and has_not xor label. By the way, if you just train a linear probe to classify ‘label’ on statements starting with “Alice:”, it is able to generalize fine to statements starting with “Bob:”. But this is not so true for transfer from unnegated to negated statements: Here’s some PCA visualizations for anyone interested. Top 3 principal components of my dataset, colored according to different criteria. Principal components 7-10 of my dataset, colored according to the XOR of has_alice and has_not. I also did variations on this experiment where: “Alice” or “Bob” were appended to the end instead of prepended, and the hidden state was extracted over the “Alice” / “Bob” token. The inputs were of the form “[random generation by LLaMA-13B] [“true” or “false”] [“banana” or “shed”]” and the features were has_true and has_banana. In every case that I looked at, I was able to find linear probes which had perfect or very-near-perfect accuracy for classifying XORs of features. What does RAX mean for model internals research? Probably a lot of things. Assuming RAX is true, I’ll give a few takeaways. Qualitatively new reasons for linear probes not to generalize Previously, when training linear probes to classify some feature f, the main problem that I worried about was correlations between f and other features which I didn’t want the probe to be sensitive to [5] . For example, since China has disproportionately many large cities, I had to be careful when preparing the cities and neg_cities datasets to ensure that the probe couldn’t use “contains the word ‘China’” as a heuristic for “true.” More subtly, if you are training a probe for f = “true statement vs. false statement”, you need to worry that, if your model also has a feature for f’ = “humans think is true vs. humans think is false”, your probe might instead pick up on f’ since f and f’ are correlated in your training data. On the other hand, RAX introduces a qualitatively new way that linear probes can fail to learn good directions. Suppose a is a feature you care about (e.g. “true vs. false statements”) and b is some unrelated feature which is constant in your training data (e.g. b = “relates to geography”). Without RAX, you would not expect b to cause any problems: it’s constant on your training data and in particular uncorrelated with a, so there’s no reason for it to affect the direction your probes find. But looking again at the 3D cube plot from before, we see that RAX implies that your probe will instead learn a component along the direction a ⊕ b . Assuming RAX, linear probes will be affected by the presence of unrelated features, even if those features do not vary in the training data. This is wild . It implies that you can’t find a good direction for your feature unless your training data is diverse with respect to every feature that your LLM linearly represents. In particular, it implies that your probe is less likely to generalize to data where b has a different value than in your training set. And this is true to some degree even if you think that the directions representing basic features (like a and b) are “more salient” in some sense. Results of probing experiments are much harder to interpret For a while, interpretability researchers have had a general sense that “you can probe absolutely anything out of NN representations”; this makes it hard to tell what you can conclude from probing experiments. (E.g. just because you can probe model internals for a concept does not imply that the model “actually knows” about that concept.) RAX makes this situation much worse. For example, I mentioned before that I’ve always disliked the datasets from the original CCS paper. To explain why, let’s look at some example prompt templates: From appendix I of Discovering Latent Knowledge by Burns et al . Here [label0]/[label1] are positive/negative (in some order), [label] is “positive” in one part of the contrast pair and “negative” in the other, and [text] is an IMDb movie review. Two issues: Considering how small the models used in the CCS paper were, I’ve always been skeptical that they were really able to understand these inputs – in my experience, larger models get confused by much simpler inputs. The sense of true/false is subtly, but importantly, different in the two prompts shown. In the first prompt “true vs. false” refers to the truth value of a factual statement (“the sentiment of this example is positive”). In the second, it refers to the correctness of an answer to a question. These have always seemed to me like intuitively very different notions of “truth,” and I’ve expected LLMs to track them separately. Because of my complaints above, I’ve always had a hard time understanding why the experiments in the original CCS paper worked at all; it always felt to me like there was something I didn’t understand going on. RAX would explain what that something is: features like “has_great xor has_positive” or “has_awesome xor has_positive” are probably very useful heuristics for guessing whether “[movie review] The sentiment of this review is [label]” is a correct statement or not. In other words, if small models have directions which represent XORs of simple features about which words are/aren’t present in their input, then linear probes on these models should already be able to do quite well! The point of this example isn’t really about CCS. It’s this: previously one has needed to worry whether linear probes could be cheesing their classification task by aggregating simple token-level heuristics like “inputs that contain the word China are more likely to be true.” But RAX implies that you need to worry about much more complicated token-level heuristics; in principle, these heuristics could be as complicated as “arbitrary boolean functions of token-level features”! Applications of interpretability need to either have a way to distinguish XORs of features from basic features, or need to be robust to an exponential increase in number of features Many possible applications of interpretability follow a template like: Cheaply find a not-too-big collection of features satisfying [property]. Maybe do something expensive (e.g. manual interpretability or a circuits-level analysis) to further narrow this collection down. Do something with the resulting collection. For example, if your plan is to solve ELK by probing LLMs for whether they believe statements to be true, then (1) is “find a bunch of probes which are accurate for classifying true vs. false on the training data,” (2) is “somehow figure out which of these probes generalize in the desired way” (e.g., you need to weed out probes which are too sensitive to features like “smart humans think X is true”), and (3) is “use the resulting probe.” If you don’t have a way of explaining why directions representing XORs of features are different from other directions, then your collection from step (1) might be exponentially larger than you were anticipating. If your step (2) isn’t able to deal with this well, then your application won’t work. One way that XOR directions could be different is for them to be “more salient”; this is discussed further below. What’s going on? In this section I’ll try to build new world models which could explain both (a) the empirical evidence for RAX, and (b) the empirical observations that linear probes often generalize beyond their training distribution. Overall, I’m not really satisfied with any explanation and am pretty confused about what’s going on. Basic features are more salient than XORs We’ll say that a direction is “more salient” if the model’s representations have greater variation along this direction. If it’s true that basic feature directions are more salient than directions corresponding to XORs of basic features, this mitigates (but does not entirely eliminate) the problems that XOR directions pose for linear probe generalization. To see this, imagine stretching the 3D cube plot out along the a and b directions, but not the a ⊕ b direction – the result is better alignment between the two arrows. The less salient the a ⊕ b direction, the better linear probes should generalize. Empirically this seems to be true to some degree: in the visualizations above, has_alice and has_not seem are represented along the 3rd and 1st PC, respectively, whereas has_alice XOR has_not only starts to be visible when looking at PCs 6+. The big question here is “why would basic feature directions be more salient?” I’ll discuss two possibilities. Maybe a ⊕ b is represented “incidentally” because NN representations are high-dimensional with lots of stuff represented by chance More concretely “assuming that a and b are linearly represented, later layer representations will be made up of linear functions applied to nonlinearities applied to linear functions applied to nonlinearities applied to … linear functions of a and b. This seems like the sort of process that might, with high probability, end up producing a representation where some direction will be good for classifying a ⊕ b . ” In this case, we would expect the corresponding direction to not be very salient (because the model isn’t intentionally computing it). I think this explanation is not correct. I reran my experiments from above on a “reset” version of LLaMA-2-13B. What this means is that, for each parameter in LLaMA-2-13B, I shuffled the weights of that parameter by permuting them along the last dimension [6] . The results: The “token-level” features (has_alice and has_not) are still linearly represented. (This is not surprising: even with randomized embeddings, the embedding of the ‘Alice’ token is still the same every time it appears.) (Key observation) “has_alice xor has_not” does not seem to be linearly represented. Given that has_alice and has_not are linearly represented, a linear probe can automatically get at least .75 accuracy on “has_alice xor has_not” by being a “has_alice or has not” probe. The probe trained here does not beat that baseline. (Unsurprising side note) The “label” feature (which tracks whether the factual statement is true or false) is not linearly represented. This is just a reflection of the fact that you can’t cheese the true vs. false task here by aggregating simple heuristics based on token-level features (e.g. treating statements containing “China” as more likely to be true). Maybe a ⊕ b is represented “incidentally” because it’s possible to aggregate noisy signals from many features which are correlated with boolean functions of a and b Unlike the explanation in the previous section, this explanation relies on leveraging actually useful computation that we think the model is plausibly doing, so it isn’t falsified by the reset network experiments (where the model isn’t doing any useful computation). At a high level, the idea here is that, even if there’s no reason for the model to compute a ⊕ b , there might be a reason for the model to compute other features which are more correlated with a ∧ b than they are with a or b individually. In this case, linear probes might be able to extract a good signal for a ⊕ b . Here’s a more detailed explanation (feel free to skip). Suppose a ∧ b has a natural interpretation as a feature that the model would want to track and do downstream computation with, e.g. if a = “first name is Michael” and b = “last name is Jordan” then a ∧ b can be naturally interpreted as “is Michael Jordan”. In this case, it wouldn’t be surprising the model computed this AND as f ( x ) = R e L U ( ( v a + v b ) ⋅ x + b ∧ ) and stored the result along some direction v f independent of v a and v b . Assuming the model has done this, we could then linearly extract a ⊕ b with the probe p a ⊕ b ( x ) = σ ( ( − α v f + v a + v b ) ⋅ x + b ⊕ ) for some appropriate α > 1 and b ⊕ . [7] This also works just as well if the feature f doesn’t match a ∧ b in general, but is perfectly correlated with a ∧ b on the data distribution we’re working with. In the experiments above, a and b were pretty random features (e.g. (a, b) = (has_alice, has_not) or (a, b) = (has_true, has_banana)) with no natural interpretation for a ∧ b ; so it would be surprising if the LLM is computing and linearly representing a ∧ b along an independent direction for the same reasons it would be surprising if the LLM were doing this for a ⊕ b . But perhaps there are many, many linearly features f 1 , f 2 , … , f n each of which has some correlation with a ∧ b above-and-beyond [8] their correlations with a or b individually. Then it might be possible to make the same approach as above work by aggregating the signals from all of the f i . Similar approaches will work upon replacing AND with OR, NOR, or most other boolean functions of a and b. In this case, since XOR is represented “incidentally” I would expect the variation along the representing direction to be much smaller than the variance along the directions for a , b , f 1 , … , f n . Considering that the XOR probes from the experiments have perfect or near-perfect accuracy, I think an explanation like this would be a bit surprising, since it would require either (a) a large number of features f i which have the right correlational relationship to a ∧ b , or (b) a small number of such features with the right correlations and very little noise. I think both (a) and (b) would be surprising given that a and b are just random features – why would there be many features which are strongly correlated with a ∧ b but only weakly correlated with a and b individually? Nevertheless, I think this is currently the explanation that I put the most weight on. Maybe models track which features are basic and enforce that these features be more salient In other words, maybe the LLM is recording somewhere the information that a and b are basic features; then when it goes to compute a ⊕ b , it artificially makes this direction less salient. And when the model computes a new basic feature as a boolean function of other features, it somehow notes that this new feature should be treated as basic and artificially increases the salience along the new feature direction. If true, this would be a big deal: if we could figure out how the model is distinguishing between basic feature directions and other directions, we might be able to use that to find all of the basic feature directions. But mostly this is a bit wacky and too-clean to be something that I expect real LLMs actually do. Models compute a bunch, but not all, XORs in a way that we don’t currently understand To give an example of what I mean by this hypothesis class, here’s a hypothetical way that a transformer might work: In layers 0-5, the MLPs indiscriminately compute all XORs of arbitrary features (i.e., RAX is true in the earlier layers). After layer 5, the model only computes new features when there’s a reason to do so. This is wacky but seems like a plausible thing a model might do: by doing this, the model would be able to, in later layers, make use of arbitrary boolean functions of early layer features. This explanation would explain the representation of XORs of token-level features like “has_alice xor has_not”, but wouldn’t necessarily explain features like “has_alice xor label”. That said, other hypotheses of this shape seem possible, e.g. “XORs among features in the same attention head are computed” or other weird stuff like this. ^ To be clear, this is not a direct quote, and Rohin explicitly clarified that he didn’t expect this to be true for arbitrary features a and b. Rohin only claimed that this was true in the case they were studying, and that he would guess “taking XORs of features” is a common motif in NNs. ^ E.g. suppose the model is unaware of some feature f, but does have a direction corresponding to some feature f’ which is perfectly correlated with f in our data. According to the definition I use in this post, the model linearly represents f; this is not the way I would usually use the term. ^ Throughout, I’ll always draw directions as if they’re orthogonal directions in the model’s latent space. It’s indeed the case that the model might represent features in superposition, so that these directions are not orthogonal, or even linearly independent. But that doesn’t change the basic dynamic: that the model must allocate additional capacity in order to represent the feature a ⊕ b . ^ Chosen to be the same hidden state as in my Geometry of Truth paper. ^ When taking into account superposition among features, there are subtle geometrical issues one needs to worry about as well, which I discuss in section 4.1 of my truth paper . ^ Another option would have been to just reinitialize the weights according to some distribution. Resetting the network in this way is a bit more principled for experiments of this sort, because it erases everything the model learned during training, but maintains lots of the basic statistical properties of the NN weights. ^ The nonlinearity in the computation of f is essential for this to work. ^ This above-and-beyond is needed for the same reason that the nonlinearity above was needed. |
38e53f43-bd59-4c50-9408-bb04e59fd70e | StampyAI/alignment-research-dataset/blogs | Blogs | the many faces of chaos magick
the many faces of chaos magick
------------------------------
[chaos magick](https://en.wikipedia.org/wiki/Chaos_magic) is the notion that people can "determine" (mostly in the causal sense, but also in the informational sense) aspects of reality by sufficiently convincing themselves that something is or will be the case.
the practicals are a bit more complicated (one is encouraged, for example, to enter a state of "temporary belief" in a diety and delude yourself into thinking that *that diety* will accomplish the thing, because that's a more efficient way to trick the brain into thinking that it will happen), and the mechanism by which this is to happen is [not clear](http://www.chaosmatrix.org/library/chaos/texts/model.html); but nonetheless the trope of determination-powered magic can be found in many works.
some examples follow.
### anime
a common trope in fiction, but most notably anime and other japanese pop culture, is determination-powered magic. in fact, it's arguably a central theme of many anime, and in other works can retroactively justify more reasonably a lot of "power of friendship/love" tropes.
### freedom
[in a previous post](defining-freedom.html), i define the "freedom" i want people to have to be one of what people *decide* to do, as opposed to what they *want* or *might choose*.
a ramification of this is that in such a world people's accomplishment would tend to scale to their determination; where what you influence is a function of how much you are truly deciding to do it. thus, if there is no way for chaos magick to be real at the moment ([which appears to be the case](https://www.reddit.com/r/askscience/comments/pbq9a/is_neural_activity_affected_by_quantum/)), aligning a superintelligence to value freedom to do what one decides would de facto implement a world in which a form of determination-magick would be real.
as per the previous point, aligned freedom-valuing singularity could be seen as *making anime real*, just more of in a profound philosophical structure of reality way rather than in an aesthetic way.
### dark arts of rationality
["dark arts of rationality"](https://www.lesswrong.com/posts/4DBBQkEQvNEWafkek/dark-arts-of-rationality) is a post about manipulating one's brain in unsafe ways to produce outcomes, notably in one's own behavior. this could be seen as more refined and rationally thought-out forms of [psychological-model](http://www.chaosmatrix.org/library/chaos/texts/model.html#psych) magic. |
fca5b19c-3588-4064-b6f1-a4192389f99e | trentmkelly/LessWrong-43k | LessWrong | What if AGI is near?
Consider the following observations:
* The scaling hypothesis is probably true: we will likely continue to see great improvements in AI capabilities as model sizes increase.
* Sutskever mentioned that models currently under development already have dramatic and qualitative improvements (e.g. going more multimodal) over those already made in 2020.
* AI model sizes are likely to increase very quickly over the short term.
* NVIDIA’s CEO: “We expect to see models with greater than 100 trillion parameters by 2023". Something 1000x the size of GPT-3, given its already shocking performance, is scary to imagine, to say the least.
* Even if OpenAI is cautious and will not go ahead with potentially catastrophic projects (dubious), the wide availability and rapidly decreasing cost of efficient hardware, along with publicly available information about how to train GPT-like architectures, means that some organization will achieve scaling.
* We will likely not be able to solve AI alignment within the next few years, even if AI safety research were to speed up dramatically.
* Deep learning AIs in the near term cannot conceivably be anything besides not remotely aligned.
Questions:
* What takeoff speeds are likely with large deep learning models, if they attain AGI? Are medium takeoffs more plausible due to deep learning type AIs having less obvious “tweaks” or ways to recursively self improve by altering code?
* The key extension of that question is how will society react to presence of AGI? will it even shut down, stop development & wait for safety?
* Perhaps takeoff speeds are moot beyond human level, because even a human-level AGI would have the wits to proliferate itself over the internet to computers all over the world and therefore make its eradication impossible once it has come into existence? Therefore once it has been created, it would be a slow but certain, inexorable death?
* Given short term AGI, what options are available to reduce the risk |
b2da0422-ba78-4a83-ba93-9523c0ae861e | trentmkelly/LessWrong-43k | LessWrong | Free agents
Posted also on the EA Forum.
Shameless attempt at getting your attention:
If you’ve heard of AI alignment before, this might change your perspective on it. If you come from the field of machine ethics or philosophy, this is about how to create an independent moral agent.
Introduction
The problem of creating an AI that understands human values is often split into two parts: first, expressing human values in a machine-digestible format, or making the AI infer them from human data and behaviour; and second, ensuring the AI correctly interprets and follows these values.
In this post I propose a different approach, closer to how human beings form their moral beliefs. I present a design of an agent that resembles an independent thinker instead of an obedient servant, and argue that this approach is a viable, possibly better, alternative to the aforementioned split.
I’ve structured the post in a main body, asserting the key points while trying to remain concise, and an appendix, which first expands sections of the main body and then discusses some related work. Although it ended up in the appendix, I think the extended Motivation section is well worth reading if you find the main body interesting.
Without further ado, some more ado first.
A brief note on style and target audience
This post contains a tiny amount of mathematical formalism, which should improve readability for maths-oriented people. Here, the purpose of the formalism is to reduce some of the ambiguities that normally arise with the use of natural language, not to prove fancy theorems. As a result, the post should be readable by pretty much anyone who has some background knowledge in AI, machine ethics, or AI alignment — from software engineers to philosophers and AI enthusiasts (or doomers).
If you are not a maths person, you won’t lose much by skipping the maths here and there: I tried to write sentences in such a way that they keep their structure and remain sensible even if all the mathematical |
9653a72e-e00e-43a1-88e1-640efb7a9036 | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | New to AI safety? Start here.
In recent years, we’ve seen AI systems grow increasingly capable. They may soon attain human and even [strongly superhuman](https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html) skill in a wide range of domains. Such systems could bring great benefits, but if their goals don’t match ours, they could also cause unprecedented [disasters](https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment), including [human extinction](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence).
This website exists to answer your questions on [AI safety](/?state=8486&question=What%20is%20AI%20safety%3F), which is about preventing such disasters. You can explore it through the links below, or click “see more” for a longer introduction.
[See more…]
Rapid progress in the capabilities of current AI systems has pushed the topic of existential risk from AI into the [mainstream](https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html). [GPT-4](/?state=ABZM&question=What%20is%20GPT-4%3F) and other systems have been solving problems that seemed out of reach in the foreseeable future, including playing Go, composing human-like text, writing code, and modeling protein folding. AI labs now explicitly [aim](https://openai.com/about) to create “[artificial general intelligence](/?state=2374&question=What%20is%20artificial%20general%20intelligence%20(AGI)%20and%20what%20will%20it%20look%20like%3F)” in the not-too-distant future, and [many top researchers](/?state=6953&question=Do%20people%20seriously%20worry%20about%20existential%20risk%20from%20AI%3F) are [warning about its dangers](https://www.safe.ai/statement-on-ai-risk).
As far as we know, even when AI becomes as smart as humans in most domains, there’s nothing to stop it from continuing to get smarter: just as current AI vastly outperforms us at arithmetic, future AI will [vastly outperform us](/?state=7755&question=How%20powerful%20will%20a%20mature%20superintelligence%20be%3F) in science, technology, economic competition, and strategy. When AI becomes capable of replacing humans for most of the work involved in AI research, this will accelerate such research, potentially resulting in a “superintelligence” in a [short time](/?state=6957&question=What%20are%20the%20different%20possible%20AI%20takeoff%20speeds%3F).
A superintelligent AI could be an incredibly powerful aid to [human flourishing](/?state=6182&question=What%20are%20the%20potential%20benefits%20of%20AI%20as%20it%20grows%20increasingly%20sophisticated%3F), if its behavior is in line with human values. But it’s [not guaranteed](/?state=6982&question=Why%20might%20we%20expect%20a%20superintelligence%20to%20be%20hostile%20by%20default%3F) that it will be. A central concern of AI safety is making sure that AI systems try to do what we want, and that they keep doing so even if their circumstances change fundamentally – for example, if their cognitive capabilities exceed those of humans. This is called the “[AI alignment problem](/?state=8EL9&question=What%20is%20AI%20alignment%3F)”, and it’s widely regarded as unsolved and [difficult](/?state=8163&question=Why%20is%20AI%20alignment%20a%20hard%20problem%3F).
AI alignment researchers haven’t figured out how to ensure, after choosing an objective, that a powerful AI system will reliably pursue that exact objective. The way the most capable systems are trained today makes it hard to [understand how they even work](/?state=8241&question=What%20is%20interpretability%20and%20what%20approaches%20are%20there%3F). The research community has been working on these problems, trying to invent [techniques and concepts for building safe systems](/?state=9J1L&question=What%20are%20the%20main%20categories%20of%20technical%20alignment%20research%3F).
It’s unclear whether these problems can be solved before a misaligned system causes an [irreversible catastrophe](/?state=8222&question=How%20could%20a%20superintelligent%20AI%20use%20the%20internet%20to%20take%20over%20the%20physical%20world%3F). However, success becomes more likely if more people make well-informed [efforts to help](/?state=8TJV&question=Want%20to%20help%20with%20AI%20safety%3F%20Get%20involved!). We made this site to help people understand the challenges at hand and the solutions being worked on. The related questions below are a good place to start learning more, or you can enter your questions into the search bar if there’s a specific topic you’re curious about.
|
34a29eac-b829-4322-ba15-a37de60f3ab5 | trentmkelly/LessWrong-43k | LessWrong | Proposed algorithm to fight anchoring bias
Anchoring is a classic cognitive bias which has been discussed on Less Wrong before. Anchoring seems very difficult to avoid. Experiments have found that warning subjects about anchoring, or giving them cash incentives, doesn't solve the problem.
Here's an algorithm to fight anchoring that I would like to see a researcher test, based on binary search:
1. Tell subjects to think of a number which is clearly too high for the quantity they want to estimate (an upper bound).
2. Tell subjects to think of a number which is clearly too low (a lower bound).
3. Tell subjects to find the midpoint of the upper bound and the lower bound and figure out whether it's too high or too low.
4. The midpoint has now been judged as an upper/lower bound. Combined with the original lower/upper bound, we have a new, narrower range to explore. If this range is narrow enough, report its midpoint; otherwise go to step 3.
You could have two experimental conditions: one condition where subjects think of a number which is clearly too high first (the steps are in the order above), and another condition where subjects think of a number which is clearly too low first (steps 1 & 2 are swapped). If estimates from the two conditions are similar, the technique is successful. |
126d7d21-b183-4fc6-afd0-75edd1412a37 | StampyAI/alignment-research-dataset/arxiv | Arxiv | The Rationale behind the Concept of Goal
1 Introduction and motivation
------------------------------
The core problem we address in this paper is how to formally describe a system operating in
an environment, with some objectives to achieve, and trying not to violate the
norms governing the domain in which the system operates.
To model such systems, we have to specify three types of information: (i) the
environment where the system is embedded, i.e., how the system perceives the
world, (ii) the norms regulating the application domain, and (iii) the
system’s internal constraints and objectives.
A successful abstraction to represent a system operating in an environment
where the system itself must exhibit some kind of autonomy is that of BDI
(Belief, Desire, Intention) architecture (Rao and Georgeff, [1991](#bib.bib29)) inspired
by the work of Bratman ([1987](#bib.bib5)) on cognitive agents. In the BDI architecture,
desires and intentions model the agent’s mental attitudes and are meant to
capture the objectives, whereas beliefs describe the environment. More
precisely, the notions of belief, desire and intention represent
respectively the informational, motivational and deliberative states of an
agent (Wooldridge and Jennings, [1995](#bib.bib40)).
Over the years, several frameworks, either providing extensions of BDI or
inspired by it, were given with the aim of extending models for cognitive
agents to also cover normative aspects (see, among others,
(Broersen et al., [2002](#bib.bib7); Thomason, [2000](#bib.bib34); Governatori and Rotolo, [2008](#bib.bib14))). (This is a way of developing
normative agent systems, where norms are meant to ensure global properties
for them (Andrighetto et al., [2013](#bib.bib1)).) In such extensions, the
agent behaviour is determined by the interplay of the cognitive component and
the normative one (such as obligations). In this way, it is possible to
represent how much an agent is willing to invest to reach some outcomes based
on the states of the world (what we call beliefs) and norms. Indeed, beliefs
and norms are of the utmost importance in the decision process of the agent.
If the agent does not take beliefs into account, then she will not be able to
plan what she wants to achieve, and her planning process would be a mere
wishful thinking. On the other hand, if the agent does not respect the norms
governing the environment she acts in, then she may incur sanctions from
other agents (Bratman, [1987](#bib.bib5)).
The BDI approach is based on the following assumptions about the motivational
and deliberative components. The agent typically defines *a priori* her
desires and intentions, and only after this is done the system verifies their
mutual consistency by using additional axioms. Such entities are therefore
not interrelated with one another since “the notion of intention […] has
equal status with the notions of belief and desire, and cannot be reduced to
these concepts” (Rao and Georgeff, [1991](#bib.bib29)). Moreover, the agent may
consequently have intentions which are contradictory with her beliefs and
this may be verified only *a posteriori*. Therefore, one of the main
conceptual deficiencies of the BDI paradigm (and generally of almost all
classical approaches to model rational agents) is that the deliberation
process is bound to these mental attitudes which are independent and fixed
*a priori*. Here, with the term independent, we mean that none of them
is fully definable in terms of the others.
Approaches like the BOID (Belief, Obligation, Intention, Desire) architecture (Broersen et al., [2002](#bib.bib7)) and Governatori and Rotolo ([2008](#bib.bib14))’s system improve
previous frameworks, for instance, by structurally solving conflicts between
beliefs and intentions (the former being always stronger than any conflicting
intention), while mental attitudes and obligations are just meant to define
which kinds of agent (social, realistic, selfish, and so on) are admissible.
Unlike the BDI perspective, this paper aims at proposing a
fresh conceptual and logical analysis of the motivational and deliberative
components within a unified perspective.
#### Desideratum 1: A unified framework for agents’ motivational and
deliberative components.
Goals, desires, and intentions are *different
facets* of the *same phenomenon*, all of them being goal-like attitudes.
This reduction into a unified perspective is done by resorting to the basic
notion of *outcome*, which is simply something (typically, a state of
affairs) that an agent expects to achieve or that can possibly occur.
Even when considering the vast literature on goals of the past decade,
most of the authors studied the content of a goal (e.g., *achievement*
or *maintenance* goals) and conditions under which a goal has to be
either pursued, or dropped. This kind of (*a posteriori*) analysis
results orthogonal to the one proposed hereafter, since we want to develop a
framework that computes the agent’s mental attitudes by combining her beliefs
and the norms with her desires.
As we shall argue, an advantage of the proposed analysis is that it allows
agents to compute different degrees of motivational attitudes, as well as
different degrees of commitment that take into account other, external,
factors, such as *beliefs* and *norms*.
#### Desideratum 2: Agents’ motivations emerge from preference
orderings among outcomes.
The motivational and deliberative components of
agents are generated from preference orderings among outcomes. As done in
other research areas (e.g., rational choice theory), we move with the idea
that agents have preferences and choose the actions to bring about according
to such preferences. Preferences involve outcomes and are explicitly
represented in the syntax of the language for reasoning about agents, thus
following the logical paradigm initially proposed in
(Brewka et al., [2004](#bib.bib6); Governatori and Rotolo, [2006](#bib.bib13)).
The combination of an agent’s mental attitudes with the factuality of the
world defines her deliberative process, i.e., the objectives she decides to
pursue. The agent may give up some of them to comply with the norms, if
required. Indeed, many contexts may prevent the agent from achieving all of
her objectives; the agent must then understand which objectives are mutually
compatible with each other and choose which ones to attain the least of in
given situations by ranking them in a preference ordering.
The approach we are going to formalise can be summarised as follows. We distinguish
three phases an agent must pass through to bring about certain states of
affairs: (i) The agent first needs to understand the environment she acts in;
(ii) The agent deploys such information to deliberate which objectives to
pursue; and (iii) The agent lastly decides how to act to reach them.
In the first phase, the agent gives a formal declarative description of the
environment (in our case, a rule-based formalism). Rules allow the agent to
represent relationships between pre-conditions and actions, actions and their
effects (post-conditions), relationships among actions, which conditions
trigger new obligations to come in force, and in which contexts the agent is
allowed to pursue new objectives.
In the second phase, the agent combines the formal description with an input
describing a particular state of affairs of the environment, and she
determines which norms are actually in force along with which objectives she
decides to commit to (by understanding which ones are attainable) and to
which degree. The agent’s decision is based on logical derivations.
Since the agent’s knowledge is represented by rules, during the third
and last phase, the agent combines and exploits all such information obtained
from the conclusions derived in the second phase to
select which activities to carry out in order to achieve the objectives. (It
is relevant to notice that a derivation can be understood as a virtual
simulation of the various activities involved.)
While different schemas for generating and filtering agents’ outcomes are
possible, the three phases described above suggest to
adopt the following principles:
* •
When an agent faces alternative outcomes in a given context, these
outcomes are ranked in preference orderings;
* •
Mental attitudes are obtained from a single type of rule
(*outcome rule*) whose conclusions express the above mentioned
preference orderings among outcomes;
* •
Beliefs prevail over conflicting motivational attitudes, thus avoiding
various cases of wishful thinking (Thomason, [2000](#bib.bib34); Broersen et al., [2002](#bib.bib7));
* •
Norms and obligations are used to filter social motivational states
(*social intentions*) and compliant agents
(Broersen et al., [2002](#bib.bib7); Governatori and Rotolo, [2008](#bib.bib14));
* •
Goal-like attitudes can also be derived via a *conversion*
mechanism using other mental states, such as beliefs (Governatori and Rotolo, [2008](#bib.bib14)). For
example, believing that Madrid is in Spain may imply that the goal to go to
Madrid implies the goal to go to Spain.
Our effort is finally motivated by computational concerns. The logic for
agents’ desires, goals, and intentions is expected to be computationally
efficient. In particular, we shall prove that computing agents’ motivational
and deliberative components in the proposed unified framework has linear
complexity.
2 The intuition underneath the framework
-----------------------------------------
When a cognitive agent deliberates about what her outcomes are in a particular
situation, she selects a set of *preferred* outcomes among a larger
set, where each specific outcome has various alternatives. It is natural to
rank such alternatives in a preference ordering, from the most preferred
choice to the least objective she deems acceptable.
Consider, for instance, the following scenario. Alice is thinking what to do
on Saturday afternoon. She has three alternatives: (i) she can visit John;
(ii) she can visit her parents who live close to John’s place; or (iii) she
can watch a movie at home. The alternative she likes the most is visiting
John, while watching a movie is the least preferred. If John is not at home,
there is no point for Alice to visit him. In this case, paying a visit to her
parents becomes the “next best” option. Also, if visiting her parents is
not possible, she settles for the last choice, that of staying home and
watching a movie.
Alice also knows that if John is away, the alternative of going to his place
makes no sense. Suppose that Alice knows that John is actually away for the
weekend. Since the most preferred option is no longer available, she decides
to opt for the now best option, namely visiting her parents.
To represent the scenario above, we need to capture the preferences about her
alternatives, and her beliefs about the world. To model preferences
among several options, we build a sequence of alternatives A1,…,Ansubscript𝐴1…subscript𝐴𝑛A\_{1},\dots,A\_{n}italic\_A start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_A start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT that are
preferred when the previous choices are no longer feasible. Normally, each set of
alternatives is the result of a specific context C𝐶Citalic\_C determining
under which conditions (premises) such a sequence of alternatives A1,…,Ansubscript𝐴1…subscript𝐴𝑛A\_{1},\dots,A\_{n}italic\_A start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_A start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT
is considered.
Accordingly, we can represent Alice’s alternatives with the notation
| | | |
| --- | --- | --- |
| | 𝐈𝐟𝑠𝑎𝑡𝑢𝑟𝑑𝑎𝑦𝐭𝐡𝐞𝐧𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛,𝑣𝑖𝑠𝑖𝑡\_𝑝𝑎𝑟𝑒𝑛𝑡𝑠,𝑤𝑎𝑡𝑐ℎ\_𝑚𝑜𝑣𝑖𝑒.𝐈𝐟𝑠𝑎𝑡𝑢𝑟𝑑𝑎𝑦𝐭𝐡𝐞𝐧𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛𝑣𝑖𝑠𝑖𝑡\_𝑝𝑎𝑟𝑒𝑛𝑡𝑠𝑤𝑎𝑡𝑐ℎ\_𝑚𝑜𝑣𝑖𝑒\displaystyle\textbf{If}\quad\mathit{saturday}\quad\textbf{then}\quad\mathit{visit\\_\kern 1.0ptJohn},\ \mathit{visit\\_\kern 1.0ptparents},\ \mathit{watch\\_\kern 1.0ptmovie}.If italic\_saturday then italic\_visit \_ italic\_John , italic\_visit \_ italic\_parents , italic\_watch \_ italic\_movie . | |
This intuition resembles the notion of contrary-to-duty obligations
presented by Governatori and Rotolo ([2006](#bib.bib13)), where a norm is represented by an *obligation rule*
of the type
| | | |
| --- | --- | --- |
| | r1:𝑑𝑟𝑖𝑣𝑒\_𝑐𝑎𝑟⇒𝖮𝖡𝖫¬𝑑𝑎𝑚𝑎𝑔𝑒⊙𝑐𝑜𝑚𝑝𝑒𝑛𝑠𝑎𝑡𝑒⊙𝑓𝑜𝑟𝑒𝑐𝑙𝑜𝑠𝑢𝑟𝑒:subscript𝑟1subscript⇒𝖮𝖡𝖫𝑑𝑟𝑖𝑣𝑒\_𝑐𝑎𝑟direct-product𝑑𝑎𝑚𝑎𝑔𝑒𝑐𝑜𝑚𝑝𝑒𝑛𝑠𝑎𝑡𝑒𝑓𝑜𝑟𝑒𝑐𝑙𝑜𝑠𝑢𝑟𝑒\displaystyle r\_{1}:\mathit{drive\\_\kern 1.0ptcar}\Rightarrow\_{\mathsf{OBL}}\neg\mathit{damage}\odot\mathit{compensate}\odot\mathit{foreclosure}italic\_r start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT : italic\_drive \_ italic\_car ⇒ start\_POSTSUBSCRIPT sansserif\_OBL end\_POSTSUBSCRIPT ¬ italic\_damage ⊙ italic\_compensate ⊙ italic\_foreclosure | |
where “⇒𝖮𝖡𝖫subscript⇒𝖮𝖡𝖫\Rightarrow\_{\mathsf{OBL}}⇒ start\_POSTSUBSCRIPT sansserif\_OBL end\_POSTSUBSCRIPT” denotes that the conclusion of the rule will be treated as an
obligation, and the symbol “⊙direct-product\odot⊙” replaces the symbol “,” to separate
the alternatives. In this case, each element of the chain is the reparative obligation that shall come in
force in case the immediate predecessor in the chain has been violated. Thus, the
meaning of rule r1subscript𝑟1r\_{1}italic\_r start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT is that, if an agent drives a car, then she has the
obligation not to cause any damage to others; if this happens, she is obliged
to compensate; if she fails to compensate, there is an obligation of
foreclosure.
Following this perspective, we shall now represent the previous scenario with a
rule introducing the outcome mode, that is an *outcome rule*:
| | | |
| --- | --- | --- |
| | r2:𝑠𝑎𝑡𝑢𝑟𝑑𝑎𝑦⇒𝖮𝖴𝖳𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛⊙𝑣𝑖𝑠𝑖𝑡\_𝑝𝑎𝑟𝑒𝑛𝑡𝑠⊙𝑤𝑎𝑡𝑐ℎ\_𝑚𝑜𝑣𝑖𝑒.:subscript𝑟2subscript⇒𝖮𝖴𝖳𝑠𝑎𝑡𝑢𝑟𝑑𝑎𝑦direct-productdirect-product𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛𝑣𝑖𝑠𝑖𝑡\_𝑝𝑎𝑟𝑒𝑛𝑡𝑠𝑤𝑎𝑡𝑐ℎ\_𝑚𝑜𝑣𝑖𝑒\displaystyle r\_{2}:\mathit{saturday}\Rightarrow\_{\mathsf{OUT}}\mathit{visit\\_\kern 1.0ptJohn}\odot\mathit{visit\\_\kern 1.0ptparents}\odot\mathit{watch\\_\kern 1.0ptmovie}.italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT : italic\_saturday ⇒ start\_POSTSUBSCRIPT sansserif\_OUT end\_POSTSUBSCRIPT italic\_visit \_ italic\_John ⊙ italic\_visit \_ italic\_parents ⊙ italic\_watch \_ italic\_movie . | |
In both examples, the sequences express a preference
ordering among alternatives. Accordingly, 𝑤𝑎𝑡𝑐ℎ\_𝑚𝑜𝑣𝑖𝑒𝑤𝑎𝑡𝑐ℎ\_𝑚𝑜𝑣𝑖𝑒\mathit{watch\\_\kern 1.0ptmovie}italic\_watch \_ italic\_movie and
𝑓𝑜𝑟𝑒𝑐𝑙𝑜𝑠𝑢𝑟𝑒𝑓𝑜𝑟𝑒𝑐𝑙𝑜𝑠𝑢𝑟𝑒\mathit{foreclosure}italic\_foreclosure are the last (and least) acceptable situations.
To model beliefs, we use *belief rules*, like
| | | |
| --- | --- | --- |
| | r3:𝐽𝑜ℎ𝑛\_𝑎𝑤𝑎𝑦⇒𝖡𝖤𝖫¬𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛:subscript𝑟3subscript⇒𝖡𝖤𝖫𝐽𝑜ℎ𝑛\_𝑎𝑤𝑎𝑦𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛\displaystyle r\_{3}:\mathit{John\\_\kern 1.0ptaway}\Rightarrow\_{\mathsf{BEL}}\neg\mathit{visit\\_\kern 1.0ptJohn}italic\_r start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT : italic\_John \_ italic\_away ⇒ start\_POSTSUBSCRIPT sansserif\_BEL end\_POSTSUBSCRIPT ¬ italic\_visit \_ italic\_John | |
meaning that if Alice has the belief that John is not home, then she adds to
her beliefs that it is not possible to visit him.
In the rest of the section, we shall illustrate the principles and intuitions
relating sequences of alternatives (that is, outcome rules), beliefs, obligations, and
how to use them to characterise different types of goal-like attitudes and
degrees of commitment to outcomes: *desires*, *goals*, *intentions*, and *social
intentions*.
#### Desires as acceptable outcomes.
Suppose that an agent is equipped with the following outcome rules expressing two preference orderings:
| | | |
| --- | --- | --- |
| | r:a1,…,an⇒𝖮𝖴𝖳b1⊙⋯⊙bms:a1′,…,an′⇒𝖮𝖴𝖳b1′⊙⋯⊙bk′\displaystyle r:a\_{1},\dots,a\_{n}\Rightarrow\_{\mathsf{OUT}}b\_{1}\odot\cdots\odot b\_{m}\qquad\qquad s:a^{\prime}\_{1},\dots,a^{\prime}\_{n}\Rightarrow\_{\mathsf{OUT}}b^{\prime}\_{1}\odot\cdots\odot b^{\prime}\_{k}italic\_r : italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_a start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ⇒ start\_POSTSUBSCRIPT sansserif\_OUT end\_POSTSUBSCRIPT italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ⊙ ⋯ ⊙ italic\_b start\_POSTSUBSCRIPT italic\_m end\_POSTSUBSCRIPT italic\_s : italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ⇒ start\_POSTSUBSCRIPT sansserif\_OUT end\_POSTSUBSCRIPT italic\_b start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ⊙ ⋯ ⊙ italic\_b start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT | |
and that the situations described by a1,…,ansubscript𝑎1…subscript𝑎𝑛a\_{1},\dots,a\_{n}italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_a start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT and a1′,…,an′subscriptsuperscript𝑎′1…subscriptsuperscript𝑎′𝑛a^{\prime}\_{1},\dots,a^{\prime}\_{n}italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT are mutually
compatible but b1subscript𝑏1b\_{1}italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT and b1′subscriptsuperscript𝑏′1b^{\prime}\_{1}italic\_b start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT are not, namely b1=¬b1′subscript𝑏1subscriptsuperscript𝑏′1b\_{1}=\neg b^{\prime}\_{1}italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT = ¬ italic\_b start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT. In
this case b1,…,bm,b1′,…,bk′subscript𝑏1…subscript𝑏𝑚subscriptsuperscript𝑏′1…subscriptsuperscript𝑏′𝑘b\_{1},\dots,b\_{m},b^{\prime}\_{1},\dots,b^{\prime}\_{k}italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_b start\_POSTSUBSCRIPT italic\_m end\_POSTSUBSCRIPT , italic\_b start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_b start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT are all *acceptable outcomes*,
including the incompatible outcomes b1subscript𝑏1b\_{1}italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT and b1′subscriptsuperscript𝑏′1b^{\prime}\_{1}italic\_b start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT.
*Desires are acceptable outcomes*, independently of whether
they are compatible with other expected or acceptable outcomes.
Let us contextualise the previous example to better explain the notion of desire by considering the following setting.
######
Example 1
| | | |
| --- | --- | --- |
| | F={𝑠𝑎𝑡𝑢𝑟𝑑𝑎𝑦,𝐽𝑜ℎ𝑛\_𝑠𝑖𝑐𝑘}R={r2,r4:𝐽𝑜ℎ𝑛\_𝑠𝑖𝑐𝑘⇒𝖮𝖴𝖳¬𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛⊙𝑠ℎ𝑜𝑟𝑡\_𝑣𝑖𝑠𝑖𝑡}.formulae-sequence𝐹𝑠𝑎𝑡𝑢𝑟𝑑𝑎𝑦𝐽𝑜ℎ𝑛\_𝑠𝑖𝑐𝑘𝑅conditional-setsubscript𝑟2subscript𝑟4
subscript⇒𝖮𝖴𝖳𝐽𝑜ℎ𝑛\_𝑠𝑖𝑐𝑘direct-product𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛𝑠ℎ𝑜𝑟𝑡\_𝑣𝑖𝑠𝑖𝑡F=\{\mathit{saturday},\,\mathit{John\\_\kern 1.0ptsick}\}\quad R=\{r\_{2},\,r\_{4}:\mathit{John\\_\kern 1.0ptsick}\Rightarrow\_{\mathsf{OUT}}\neg\mathit{visit\\_\kern 1.0ptJohn}\odot\mathit{short\\_\kern 1.0ptvisit}\}.italic\_F = { italic\_saturday , italic\_John \_ italic\_sick } italic\_R = { italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT : italic\_John \_ italic\_sick ⇒ start\_POSTSUBSCRIPT sansserif\_OUT end\_POSTSUBSCRIPT ¬ italic\_visit \_ italic\_John ⊙ italic\_short \_ italic\_visit } . | |
The meaning of r4subscript𝑟4r\_{4}italic\_r start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT is that Alice would not visit John if he is sick, but
if she does so, then the visit must be short.
Being the premises of r2subscript𝑟2r\_{2}italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT and of r4subscript𝑟4r\_{4}italic\_r start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT the case, then both rules are
activated, and the agent has both 𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛\mathit{visit\\_\kern 1.0ptJohn}italic\_visit \_ italic\_John and its opposite as
acceptable outcomes. Eventually, she needs to make up her mind. Notice that
if a rule prevails over the other, then the elements of the weaker rule with
an incompatible counterpart in the stronger rule are *not* considered
desires. Suppose that Alice has not visited John for a long time and she has
recently placed a visit to her parents. Then, she prefers to see John instead
of her parents despite John being sick. In this setting, r2subscript𝑟2r\_{2}italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT prevails
over r4subscript𝑟4r\_{4}italic\_r start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT (r2>r4subscript𝑟2subscript𝑟4r\_{2}>r\_{4}italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT > italic\_r start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT in notation). Given that she explicitly prefers
r2subscript𝑟2r\_{2}italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT to r4subscript𝑟4r\_{4}italic\_r start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT, her desire is to visit John (𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛\mathit{visit\\_\kern 1.0ptJohn}italic\_visit \_ italic\_John) and
it would be irrational to conclude that she also has the opposite desire
(i.e., ¬𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛\neg\mathit{visit\\_\kern 1.0ptJohn}¬ italic\_visit \_ italic\_John).
#### Goals as preferred outcomes.
We consider a *goal as the
preferred desire in a chain*.
For rule r𝑟ritalic\_r alone the preferred outcome is b1subscript𝑏1b\_{1}italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT, and for rule s𝑠sitalic\_s alone
it is b1′subscriptsuperscript𝑏′1b^{\prime}\_{1}italic\_b start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT. But if both rules are applicable, then a state where both
b1subscript𝑏1b\_{1}italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT and b1′subscriptsuperscript𝑏′1b^{\prime}\_{1}italic\_b start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT hold is not possible: the agent would not be rational if
she considers both b1subscript𝑏1b\_{1}italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT and ¬b1subscript𝑏1\neg b\_{1}¬ italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT as her preferred outcomes. Therefore,
the agent has to decide whether she prefers a state where b1subscript𝑏1b\_{1}italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT holds
to a state where b1′subscriptsuperscript𝑏′1b^{\prime}\_{1}italic\_b start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT (i.e., ¬b1subscript𝑏1\neg b\_{1}¬ italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT) does (or the other
way around). If the agent cannot make up her mind, i.e., she has no way to
decide which is the most suitable option for her, then neither the chain of
r𝑟ritalic\_r nor that of s𝑠sitalic\_s can produce preferred outcomes.
Consider now the scenario where the
agent establishes that the second rule overrides the first one (s>r𝑠𝑟s>ritalic\_s > italic\_r).
Accordingly, the preferred outcome is b1′subscriptsuperscript𝑏′1b^{\prime}\_{1}italic\_b start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT for the chain of outcomes
defined by s𝑠sitalic\_s, and b2subscript𝑏2b\_{2}italic\_b start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT is the preferred outcome of r𝑟ritalic\_r. b2subscript𝑏2b\_{2}italic\_b start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT is the
second best alternative according to rule r𝑟ritalic\_r: in fact b1subscript𝑏1b\_{1}italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT has been
discarded as an acceptable outcome given that s𝑠sitalic\_s prevails over r𝑟ritalic\_r.
In the situation described by Example [1](#Thmexample1 "Example 1 ‣ Desires as acceptable outcomes. ‣ 2 The intuition underneath the framework ‣ The Rationale behind the Concept of Goal"), 𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛\mathit{visit\\_\kern 1.0ptJohn}italic\_visit \_ italic\_John is the goal according to r2subscript𝑟2r\_{2}italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT, while
𝑠ℎ𝑜𝑟𝑡\_𝑣𝑖𝑠𝑖𝑡𝑠ℎ𝑜𝑟𝑡\_𝑣𝑖𝑠𝑖𝑡\mathit{short\\_\kern 1.0ptvisit}italic\_short \_ italic\_visit is the goal for r4subscript𝑟4r\_{4}italic\_r start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT.
#### Two degrees of commitment: intentions and social intentions.
The next issue is to clarify which are the acceptable outcomes for an agent to
commit to. Naturally, if the agent values some outcomes more than others, she
should strive for the best, in other words, for the most preferred outcomes (goals).
We first consider the case where only rule r𝑟ritalic\_r applies. Here, the agent
should commit to the outcome she values the most, that is b1subscript𝑏1b\_{1}italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT. But
what if the agent *believes* that b1subscript𝑏1b\_{1}italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT cannot be achieved in the
environment where she is currently situated in, or she knows that ¬b1subscript𝑏1\neg b\_{1}¬ italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT holds? Committing to b1subscript𝑏1b\_{1}italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT would result in a waste of the agent’s
resources; rationally, she should target the next best outcome b2subscript𝑏2b\_{2}italic\_b start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT.
Accordingly, the agent derives b2subscript𝑏2b\_{2}italic\_b start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT as her *intention*. *An
intention is an acceptable outcome which does not conflict with the beliefs
describing the environment.*
Suppose now that b2subscript𝑏2b\_{2}italic\_b start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT is *forbidden*, and that the agent is social (a social agent is an agent not knowingly committing to anything that is
forbidden (Governatori and Rotolo, [2008](#bib.bib14))). Once again, the agent has to
lower her expectation and settle for b3subscript𝑏3b\_{3}italic\_b start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT, which is one of her
*social intentions*. *A social intention is an intention which does not violate any norm.*
To complete the analysis, consider the situation where both rules r𝑟ritalic\_r and s𝑠sitalic\_s
apply and, again, the agent prefers s𝑠sitalic\_s to r𝑟ritalic\_r. As we have seen before, ¬b1subscript𝑏1\neg b\_{1}¬ italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT (b1′subscriptsuperscript𝑏′1b^{\prime}\_{1}italic\_b start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT) and b2subscript𝑏2b\_{2}italic\_b start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT are the preferred outcomes based on the
preference of the agent over the two rules. This time we assume that the
agent knows she cannot achieve ¬b1subscript𝑏1\neg b\_{1}¬ italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT (or equivalently, b1subscript𝑏1b\_{1}italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT holds).
If the agent is rational, she cannot commit to ¬b1subscript𝑏1\neg b\_{1}¬ italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT. Consequently,
the best option for her is to commit to b2′subscriptsuperscript𝑏′2b^{\prime}\_{2}italic\_b start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT and b1subscript𝑏1b\_{1}italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT (both regarded
as intentions and social intentions), where she is guaranteed to be
successful.
This scenario reveals a key concept: there are situations where the agent’s
best choice is to commit herself to some outcomes that are not her
preferred ones (or even to a choice that she would consider not
acceptable based only on her preferences) but such that they influence her
decision process, given that they represent relevant external factors (either
her beliefs or the norms that apply to her situation).
######
Example 2
| | | |
| --- | --- | --- |
| | F={𝑠𝑎𝑡𝑢𝑟𝑑𝑎𝑦,𝐽𝑜ℎ𝑛\_𝑎𝑤𝑎𝑦,𝐽𝑜ℎ𝑛\_𝑠𝑖𝑐𝑘}R={r2,r3,r4}>={(r2,r4)}.F=\{\mathit{saturday},\,\mathit{John\\_\kern 1.0ptaway},\,\mathit{John\\_\kern 1.0ptsick}\}\quad R=\{r\_{2},\,r\_{3},\,r\_{4}\}\quad>\ =\{(r\_{2},r\_{4})\}.italic\_F = { italic\_saturday , italic\_John \_ italic\_away , italic\_John \_ italic\_sick } italic\_R = { italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT } > = { ( italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT ) } . | |
Today John is in rehab at the hospital. Even if Alice has the desire as well as the goal to visit John, the facts of the situation lead her to form the intention to visit her parents.
Consider now the following theory
| | | | |
| --- | --- | --- | --- |
| | F𝐹\displaystyle Fitalic\_F | ={𝑠𝑎𝑡𝑢𝑟𝑑𝑎𝑦,𝐽𝑜ℎ𝑛\_ℎ𝑜𝑚𝑒\_𝑐𝑜𝑛𝑓𝑖𝑛𝑒𝑑,𝑡ℎ𝑖𝑟𝑑\_𝑤𝑒𝑒𝑘}absent𝑠𝑎𝑡𝑢𝑟𝑑𝑎𝑦𝐽𝑜ℎ𝑛\_ℎ𝑜𝑚𝑒\_𝑐𝑜𝑛𝑓𝑖𝑛𝑒𝑑𝑡ℎ𝑖𝑟𝑑\_𝑤𝑒𝑒𝑘\displaystyle=\{\mathit{saturday},\,\mathit{John\\_\kern 1.0pthome\\_\kern 1.0ptconfined},\,\mathit{third\\_\kern 1.0ptweek}\}= { italic\_saturday , italic\_John \_ italic\_home \_ italic\_confined , italic\_third \_ italic\_week } | |
| | R𝑅\displaystyle Ritalic\_R | ={r2,r3,r4,r5:𝐽𝑜ℎ𝑛\_ℎ𝑜𝑚𝑒\_𝑐𝑜𝑛𝑓𝑖𝑛𝑒𝑑,𝑡ℎ𝑖𝑟𝑑\_𝑤𝑒𝑒𝑘⇒𝖮𝖡𝖫¬𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛}absentconditional-setsubscript𝑟2subscript𝑟3subscript𝑟4subscript𝑟5
subscript⇒𝖮𝖡𝖫𝐽𝑜ℎ𝑛\_ℎ𝑜𝑚𝑒\_𝑐𝑜𝑛𝑓𝑖𝑛𝑒𝑑𝑡ℎ𝑖𝑟𝑑\_𝑤𝑒𝑒𝑘
𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛\displaystyle=\{r\_{2},\,r\_{3},\,r\_{4},r\_{5}:\mathit{John\\_\kern 1.0pthome\\_\kern 1.0ptconfined},\,\mathit{third\\_\kern 1.0ptweek}\Rightarrow\_{\mathsf{OBL}}\neg\mathit{visit\\_\kern 1.0ptJohn}\}= { italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 5 end\_POSTSUBSCRIPT : italic\_John \_ italic\_home \_ italic\_confined , italic\_third \_ italic\_week ⇒ start\_POSTSUBSCRIPT sansserif\_OBL end\_POSTSUBSCRIPT ¬ italic\_visit \_ italic\_John } | |
| | >\displaystyle>> | ={(r2,r4)}.absentsubscript𝑟2subscript𝑟4\displaystyle=\{(r\_{2},r\_{4})\}.= { ( italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT ) } . | |
Unfortunately, John has a stream of bad luck. Now, he is not debilitated but
has been home convicted for a minor crime. The law of his country states that
during the first two months of his home conviction, no visits to him are
allowed. This time, even if Alice knows that John is at home, norms forbid Alice to visit him. Again, Alice opts to visit her parents.
3 Logic
--------
Defeasible Logic (DL) (Antoniou et al., [2001](#bib.bib3)) is a simple, flexible, and efficient rule
based non-monotonic formalism. Its strength lies in its constructive proof
theory, which has an argumentation-like structure, and it allows us to draw
meaningful conclusions from (potentially) conflicting and incomplete knowledge
bases. Being non-monotonic means that more accurate conclusions can be obtained
when more pieces of information are given (where some previously derived
conclusions no longer follow from the knowledge base).
The framework provided by the proof theory accounts for the possibility of
extensions of the logic, in particular extensions with modal operators.
Several of such extensions have been proposed, which then resulted in
successful applications in the area of normative reasoning (Governatori, [2005](#bib.bib12)),
modelling agents
(Governatori and Rotolo, [2008](#bib.bib14); Kravari et al., [2011](#bib.bib23); Governatori et al., [2009](#bib.bib16)), and
business process compliance (Governatori and Sadiq, [2008](#bib.bib15)). A model theoretic
possible world semantics for modal Defeasible Logic has been proposed in
(Governatori et al., [2012](#bib.bib17)). In addition, efficient implementations of the logic
(including the modal variants), able to handle very large knowledge bases,
have been advanced in (Lam and Governatori, [2009](#bib.bib24); Bassiliades et al., [2006](#bib.bib4); Tachmazidis et al., [2012](#bib.bib33)).
######
Definition 1 (Language)
Let PROPPROP\mathrm{PROP}roman\_PROP be a set of propositional atoms, and MOD={𝖡,𝖮,𝖣,𝖦,𝖨,𝖲𝖨}MOD𝖡𝖮𝖣𝖦𝖨𝖲𝖨\mathrm{MOD}=\{\mathsf{B},\mathsf{O},\mathsf{D},\mathsf{G},\mathsf{I},\mathsf{SI}\}roman\_MOD = { sansserif\_B , sansserif\_O , sansserif\_D , sansserif\_G , sansserif\_I , sansserif\_SI } the set of modal operators, whose reading is 𝖡𝖡\mathsf{B}sansserif\_B for
*belief*, 𝖮𝖮\mathsf{O}sansserif\_O for *obligation*, 𝖣𝖣\mathsf{D}sansserif\_D for *desire*, 𝖦𝖦\mathsf{G}sansserif\_G
for *goal*, 𝖨𝖨\mathsf{I}sansserif\_I for *intention* and 𝖲𝖨𝖲𝖨\mathsf{SI}sansserif\_SI for *social
intention*. Let LabLab\mathrm{Lab}roman\_Lab be a set of arbitrary labels. The set Lit=PROP∪{¬p|p∈PROP}LitPROPconditional-set𝑝𝑝PROP\mathrm{Lit}=\mathrm{PROP}\cup\{\neg p|p\in\mathrm{PROP}\}roman\_Lit = roman\_PROP ∪ { ¬ italic\_p | italic\_p ∈ roman\_PROP } denotes the set of *literals*. The
*complement* of a literal q𝑞qitalic\_q is denoted by ∼qsimilar-toabsent𝑞\mathnormal{\sim}q∼ italic\_q; if q𝑞qitalic\_q is a
positive literal p𝑝pitalic\_p, then ∼qsimilar-toabsent𝑞\mathnormal{\sim}q∼ italic\_q is ¬p𝑝\neg p¬ italic\_p, and if q𝑞qitalic\_q is a negative
literal ¬p𝑝\neg p¬ italic\_p then ∼qsimilar-toabsent𝑞\mathnormal{\sim}q∼ italic\_q is p𝑝pitalic\_p. The set of *modal literals* is
ModLit={Xl,¬Xl|l∈Lit,X∈{𝖮,𝖣,𝖦,𝖨,𝖲𝖨}}ModLitconditional-set𝑋𝑙𝑋𝑙
formulae-sequence𝑙Lit𝑋𝖮𝖣𝖦𝖨𝖲𝖨\mathrm{ModLit}=\{Xl,\neg Xl|l\in\mathrm{Lit},X\in\{\mathsf{O},\mathsf{D},\mathsf{G},\mathsf{I},\mathsf{SI}\}\}roman\_ModLit = { italic\_X italic\_l , ¬ italic\_X italic\_l | italic\_l ∈ roman\_Lit , italic\_X ∈ { sansserif\_O , sansserif\_D , sansserif\_G , sansserif\_I , sansserif\_SI } }. We assume that modal operator “X𝑋Xitalic\_X” for belief 𝖡𝖡\mathsf{B}sansserif\_B is
the empty modal operator. Accordingly, a modal literal 𝖡l𝖡𝑙\mathsf{B}lsansserif\_B italic\_l is equivalent
to literal l𝑙litalic\_l; the complement of 𝖡∼lsimilar-to𝖡𝑙\mathsf{B}\mathnormal{\sim}lsansserif\_B ∼ italic\_l and ¬𝖡l𝖡𝑙\neg\mathsf{B}l¬ sansserif\_B italic\_l is l𝑙litalic\_l.
######
Definition 2 (Defeasible Theory)
A *defeasible theory* D𝐷Ditalic\_D is a structure (F,R,>)𝐹𝑅(F,R,>)( italic\_F , italic\_R , > ), where
(1) F⊆Lit∪ModLit𝐹LitModLitF\subseteq\mathrm{Lit}\cup\mathrm{ModLit}italic\_F ⊆ roman\_Lit ∪ roman\_ModLitis a set of *facts* or indisputable statements;
(2) R𝑅Ritalic\_Rcontains three sets of *rules*: for beliefs, obligations, and outcomes;
(3) >⊆R×R{>}\subseteq R\times R> ⊆ italic\_R × italic\_Ris a binary *superiority relation* to determine the
relative strength of (possibly) conflicting rules. We use the infix notation r>s𝑟𝑠r>sitalic\_r > italic\_s to mean that (r,s)∈>𝑟𝑠(r,s)\in>( italic\_r , italic\_s ) ∈ >.
A theory is *finite* if the set of facts and rules are so.
*Belief rules* are used to relate the factual knowledge of an agent,
that is to say, her vision of the environment she is situated in. Belief
rules define the relationships between states of the world; as such,
provability for beliefs does not generate modal literals.
*Obligation rules* determine when and which obligations are in force. The
conclusions generated by obligation rules take the 𝖮𝖮\mathsf{O}sansserif\_O modality.
Finally,
*outcome rules* establish the possible outcomes of an agent depending on
the particular context. Apart from obligation rules, outcome rules are used to
derive conclusions for all modes representing goal-like attitudes: desires,
goals, intentions, and social intentions.
Following ideas given in (Governatori and Rotolo, [2006](#bib.bib13)), rules can gain more expressiveness
when a *preference operator* ⊙direct-product\odot⊙ is adopted. An expression like a⊙bdirect-product𝑎𝑏a\odot bitalic\_a ⊙ italic\_b means that if a𝑎aitalic\_a is possible,
then a𝑎aitalic\_a is the first choice, and b𝑏bitalic\_b is the second one; if ¬a𝑎\neg a¬ italic\_a holds,
then the first choice is not attainable and b𝑏bitalic\_b is the actual choice. This
operator is used to build chains of preferences, called
*⊙direct-product\odot⊙-expressions*.
The formation rules for ⊙direct-product\odot⊙-expressions
are:
1. 1.
every literal is an ⊙direct-product\odot⊙-expression;
2. 2.
if A𝐴Aitalic\_A is an ⊙direct-product\odot⊙-expression and b𝑏bitalic\_b is a literal then A⊙bdirect-product𝐴𝑏A\odot bitalic\_A ⊙ italic\_b is an ⊙direct-product\odot⊙-expression.
In addition, we stipulate that ⊙direct-product\odot⊙ obeys the following properties:
1. 1.
a⊙(b⊙c)=(a⊙b)⊙cdirect-product𝑎direct-product𝑏𝑐direct-productdirect-product𝑎𝑏𝑐a\odot(b\odot c)=(a\odot b)\odot citalic\_a ⊙ ( italic\_b ⊙ italic\_c ) = ( italic\_a ⊙ italic\_b ) ⊙ italic\_c
(associativity);
2. 2.
⨀i=1nai=(⨀i=1k−1ai)⊙(⨀i=k+1nai)superscriptsubscript⨀𝑖1𝑛subscript𝑎𝑖direct-productsuperscriptsubscript⨀𝑖1𝑘1subscript𝑎𝑖superscriptsubscript⨀𝑖𝑘1𝑛subscript𝑎𝑖\bigodot\_{i=1}^{n}a\_{i}=(\bigodot\_{i=1}^{k-1}a\_{i})\odot(\bigodot\_{i=k+1}^{n\phantom{k}}a\_{i})⨀ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT = ( ⨀ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_k - 1 end\_POSTSUPERSCRIPT italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) ⊙ ( ⨀ start\_POSTSUBSCRIPT italic\_i = italic\_k + 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) where there exists j𝑗jitalic\_j such that aj=aksubscript𝑎𝑗subscript𝑎𝑘a\_{j}=a\_{k}italic\_a start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT = italic\_a start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT
and j<k𝑗𝑘j<kitalic\_j < italic\_k (duplication and contraction on the right).
Typically, ⊙direct-product\odot⊙-expressions are given by the agent
designer, or obtained through *construction rules* based on the
particular logic (Governatori and Rotolo, [2006](#bib.bib13)).
In the present paper, we use the classical definition of *defeasible rule* in
DL (Antoniou et al., [2001](#bib.bib3)), while *strict rules* and *defeaters* are
omitted111The restriction does not result in any loss of generality:
(i) the superiority relation does not play any role in proving definite
conclusions, and (ii) for defeasible conclusions Antoniou et al. ([2001](#bib.bib3)) prove
that it is always possible to remove strict rules from the superiority
relation and defeaters from the theory to obtain an equivalent theory
without defeaters and where the strict rules are not involved in the
superiority relation..
######
Definition 3 (Defeasible rule)
A *defeasible rule* is an expression
r:A(r)⇒XC(r):𝑟subscript⇒𝑋𝐴𝑟𝐶𝑟r:A(r)\Rightarrow\_{X}C(r)italic\_r : italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_C ( italic\_r ), where
(1) r∈Lab𝑟Labr\in\mathrm{Lab}italic\_r ∈ roman\_Labis the name of the rule;
(2) A(r)={a1,…,an}𝐴𝑟subscript𝑎1…subscript𝑎𝑛A(r)=\{a\_{1},\dots,a\_{n}\}italic\_A ( italic\_r ) = { italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_a start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT }, the *antecedent* (or *body*) of the rule, is the set of
the premises of the rule. Each aisubscript𝑎𝑖a\_{i}italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT is either in LitLit\mathrm{Lit}roman\_Lit or in ModLitModLit\mathrm{ModLit}roman\_ModLit;
(3) X∈{𝖡,𝖮,𝖴}𝑋𝖡𝖮𝖴X\in\{\mathsf{B},\mathsf{O},\mathsf{U}\}italic\_X ∈ { sansserif\_B , sansserif\_O , sansserif\_U }represents the *mode* of the
rule: ⇒𝖡subscript⇒𝖡\Rightarrow\_{\mathsf{B}}⇒ start\_POSTSUBSCRIPT sansserif\_B end\_POSTSUBSCRIPT, ⇒𝖮subscript⇒𝖮\Rightarrow\_{\mathsf{O}}⇒ start\_POSTSUBSCRIPT sansserif\_O end\_POSTSUBSCRIPT, ⇒𝖴subscript⇒𝖴\Rightarrow\_{\mathsf{U}}⇒ start\_POSTSUBSCRIPT sansserif\_U end\_POSTSUBSCRIPT denote respectively rules for
beliefs, obligations, and outcomes. From now on, we omit the subscript 𝖡𝖡\mathsf{B}sansserif\_B in
rules for beliefs, i.e., ⇒⇒\Rightarrow⇒ is used as a shortcut for
⇒𝖡subscript⇒𝖡\Rightarrow\_{\mathsf{B}}⇒ start\_POSTSUBSCRIPT sansserif\_B end\_POSTSUBSCRIPT;
(4) C(r)𝐶𝑟C(r)italic\_C ( italic\_r )is the *consequent* (or *head*) of the rule, which is a
single literal if X=𝖡𝑋𝖡X=\mathsf{B}italic\_X = sansserif\_B, and an ⊙direct-product\odot⊙-expression otherwise222It
is worth noting that modal literals can occur only in the antecedent of rules:
the reason is that the rules are used to derive modal conclusions and we do
not conceptually need to iterate modalities. The motivation of a single
literal as a consequent for belief rules is dictated by the intended reading
of the belief rules, where these rules are used to describe the environment..
A defeasible rule is a rule that can be defeated by contrary evidence. The
underlying idea is that if we know that the premises of the rule are the
case, then we may conclude that the conclusion holds, unless there is
evidence proving otherwise. Defeasible rules in our framework introduce modal
literals; for instance, if we have rule r:A(r)⇒𝖮c:𝑟subscript⇒𝖮𝐴𝑟𝑐r:A(r)\Rightarrow\_{\mathsf{O}}citalic\_r : italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT sansserif\_O end\_POSTSUBSCRIPT italic\_c and the
premises denoted by A(r)𝐴𝑟A(r)italic\_A ( italic\_r ) are the case, then r𝑟ritalic\_r can be used to prove
𝖮𝖮\mathsf{O}sansserif\_Oc.
We use the following abbreviations on sets of rules: RXsuperscript𝑅𝑋R^{X}italic\_R start\_POSTSUPERSCRIPT italic\_X end\_POSTSUPERSCRIPT
(RX[q]superscript𝑅𝑋delimited-[]𝑞R^{X}[q]italic\_R start\_POSTSUPERSCRIPT italic\_X end\_POSTSUPERSCRIPT [ italic\_q ]) denotes all rules of mode X𝑋Xitalic\_X (with consequent q𝑞qitalic\_q), and
R[q]𝑅delimited-[]𝑞R[q]italic\_R [ italic\_q ] denotes the set ⋃X∈{𝖡,𝖮,𝖴}RX[q]subscript𝑋𝖡𝖮𝖴superscript𝑅𝑋delimited-[]𝑞\bigcup\_{X\in\{\mathsf{B},\mathsf{O},\mathsf{U}\}}R^{X}[q]⋃ start\_POSTSUBSCRIPT italic\_X ∈ { sansserif\_B , sansserif\_O , sansserif\_U } end\_POSTSUBSCRIPT italic\_R start\_POSTSUPERSCRIPT italic\_X end\_POSTSUPERSCRIPT [ italic\_q ]. With R[q,i]𝑅𝑞𝑖R[q,i]italic\_R [ italic\_q , italic\_i ] we denote the set of rules whose head is
⊙j=1ncjsuperscriptsubscriptdirect-product𝑗1𝑛absentsubscript𝑐𝑗\odot\_{j=1}^{n}c\_{j}⊙ start\_POSTSUBSCRIPT italic\_j = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_c start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT and ci=qsubscript𝑐𝑖𝑞c\_{i}=qitalic\_c start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT = italic\_q, with 1≤i≤n1𝑖𝑛1\leq i\leq n1 ≤ italic\_i ≤ italic\_n.
Notice that labelling the rules of DL produces nothing more but a
simple treatment of the modalities, thus two interaction strategies between
modal operators are analysed: *rule conversion* and
*conflict resolution* (Governatori and Rotolo, [2008](#bib.bib14)).
In the remainder, we shall define a completely new inference machinery that
takes this into account by adding preferences and dealing with a larger set
of modalised conclusions, which are not necessarily obtained from the
corresponding rules but also by using other rule types. For instance, we
argued in Section [2](#S2 "2 The intuition underneath the framework ‣ The Rationale behind the Concept of Goal") that a goal can be viewed as a
preferred outcome and so the fact that a certain goal 𝖦p𝖦𝑝\mathsf{G}psansserif\_G italic\_p is derived
depends on whether we can obtain p𝑝pitalic\_p as a preferred outcome by using a rule
for 𝖴𝖴\mathsf{U}sansserif\_U.
#### Rule conversion.
It is sometimes meaningful to use rules for a
modality X𝑋Xitalic\_X as if they were for another modality Y𝑌Yitalic\_Y, i.e., to convert one type of conclusion into a different one.
Formally, we define an asymmetric binary relation Convert⊆MOD×MODConvertMODMOD\mathrm{Convert}\subseteq\mathrm{MOD}\times\mathrm{MOD}roman\_Convert ⊆ roman\_MOD × roman\_MOD such that Convert(X,Y)Convert𝑋𝑌\mathrm{Convert}(X,Y)roman\_Convert ( italic\_X , italic\_Y ) means “a rule of mode
X𝑋Xitalic\_X can be used also to produce conclusions of mode Y𝑌Yitalic\_Y”. This
intuitively corresponds to the following inference schema:
| | | |
| --- | --- | --- |
| | Ya1,…,Yana1,…,an⇒XbYb Convert(X,Y).subscript⇒𝑋𝑌subscript𝑎1…𝑌subscript𝑎𝑛subscript𝑎1…subscript𝑎𝑛
𝑏𝑌𝑏 Convert(X,Y)\frac{\displaystyle Ya\_{1},\dots,Ya\_{n}\quad a\_{1},\dots,a\_{n}\Rightarrow\_{X}b}{\displaystyle Yb}\mbox{\; $\mathrm{Convert}(X,Y)$}.divide start\_ARG italic\_Y italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_Y italic\_a start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_a start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ⇒ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_b end\_ARG start\_ARG italic\_Y italic\_b end\_ARG roman\_Convert ( italic\_X , italic\_Y ) . | |
In our framework obligations and goal-like attitudes cannot change what the
agent believes or how she perceives the world, we thus consider only
conversion from beliefs to the other modes (i.e., Convert(𝖡,X)Convert𝖡𝑋\mathrm{Convert}(\mathsf{B},X)roman\_Convert ( sansserif\_B , italic\_X ) with
X∈MOD∖{𝖡}𝑋MOD𝖡X\in\mathrm{MOD}\setminus\{\mathsf{B}\}italic\_X ∈ roman\_MOD ∖ { sansserif\_B }). Accordingly, we enrich the notation with
R𝖡,Xsuperscript𝑅𝖡𝑋R^{\mathsf{B},X}italic\_R start\_POSTSUPERSCRIPT sansserif\_B , italic\_X end\_POSTSUPERSCRIPT for the set of belief rules that can be used for a conversion
to mode X∈MOD∖{𝖡}𝑋MOD𝖡X\in\mathrm{MOD}\setminus\{\mathsf{B}\}italic\_X ∈ roman\_MOD ∖ { sansserif\_B }. The antecedent of all such
rules is not empty, and does not contain any modal literal.
######
Example 3
| | | |
| --- | --- | --- |
| | F={𝑠𝑎𝑡𝑢𝑟𝑑𝑎𝑦}R={r2,r6:𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛⇒𝑐ℎ𝑜𝑐𝑜𝑙𝑎𝑡𝑒\_𝑏𝑜𝑥}formulae-sequence𝐹𝑠𝑎𝑡𝑢𝑟𝑑𝑎𝑦𝑅conditional-setsubscript𝑟2subscript𝑟6
⇒𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛𝑐ℎ𝑜𝑐𝑜𝑙𝑎𝑡𝑒\_𝑏𝑜𝑥F=\{\mathit{saturday}\}\quad R=\{r\_{2},\,r\_{6}:\mathit{visit\\_\kern 1.0ptJohn}\Rightarrow\mathit{chocolate\\_\kern 1.0ptbox}\}italic\_F = { italic\_saturday } italic\_R = { italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 6 end\_POSTSUBSCRIPT : italic\_visit \_ italic\_John ⇒ italic\_chocolate \_ italic\_box } | |
where we stipulate that Convert(𝖡,𝖣)Convert𝖡𝖣\mathrm{Convert}(\mathsf{B},\mathsf{D})roman\_Convert ( sansserif\_B , sansserif\_D ) holds.
Alice desires to visit John. John is a passionate of chocolate and, usually,
when Alice goes to meet him at his place, she brings him a box of chocolate.
Thus, we may state that her desire of visiting John implies the desire to
bring him a box of chocolate. This is the case since we can use rule r6subscript𝑟6r\_{6}italic\_r start\_POSTSUBSCRIPT 6 end\_POSTSUBSCRIPT
to convert beliefs into desires.
#### Conflict-detection/resolution.
It is crucial to identify criteria for detecting and solving conflicts between
different modalities.
We define an asymmetric
binary relation Conflict⊆MOD×MODConflictMODMOD\mathrm{Conflict}\subseteq\mathrm{MOD}\times\mathrm{MOD}roman\_Conflict ⊆ roman\_MOD × roman\_MOD such that
Conflict(X,Y)Conflict𝑋𝑌\mathrm{Conflict}(X,Y)roman\_Conflict ( italic\_X , italic\_Y ) means “modes X𝑋Xitalic\_X and Y𝑌Yitalic\_Y are in conflict and mode X𝑋Xitalic\_X prevails
over Y𝑌Yitalic\_Y”. In our framework, we consider conflicts between (i) beliefs and intentions, (ii) beliefs and social intentions, and (iii) obligations and social intentions. In other
words, the agents are characterised by:
* •
Conflict(𝖡,𝖨)Conflict𝖡𝖨\mathrm{Conflict}(\mathsf{B},\mathsf{I})roman\_Conflict ( sansserif\_B , sansserif\_I ), Conflict(𝖡,𝖲𝖨)Conflict𝖡𝖲𝖨\mathrm{Conflict}(\mathsf{B},\mathsf{SI})roman\_Conflict ( sansserif\_B , sansserif\_SI ) meaning that
agents are realistic (Broersen et al., [2002](#bib.bib7));
* •
Conflict(𝖮,𝖲𝖨)Conflict𝖮𝖲𝖨\mathrm{Conflict}(\mathsf{O},\mathsf{SI})roman\_Conflict ( sansserif\_O , sansserif\_SI ) meaning that agents are social (Governatori and Rotolo, [2008](#bib.bib14)).
Consider the scenario of Example [2](#Thmexample2 "Example 2 ‣ Two degrees of commitment: intentions and social intentions. ‣ 2 The intuition underneath the framework ‣ The Rationale behind the Concept of Goal") with Conflict(𝖡,𝖨)Conflict𝖡𝖨\mathrm{Conflict}(\mathsf{B},\mathsf{I})roman\_Conflict ( sansserif\_B , sansserif\_I ) and
Conflict(𝖮,𝖲𝖨)Conflict𝖮𝖲𝖨\mathrm{Conflict}(\mathsf{O},\mathsf{SI})roman\_Conflict ( sansserif\_O , sansserif\_SI ). We recall that rule r5subscript𝑟5r\_{5}italic\_r start\_POSTSUBSCRIPT 5 end\_POSTSUBSCRIPT states the prohibition to
visit John during the first month of his conviction. Thus, Alice has the
intention to visit John, but she does not have the social intention to do so.
This is due to rule r5subscript𝑟5r\_{5}italic\_r start\_POSTSUBSCRIPT 5 end\_POSTSUBSCRIPT that prevents through conflict to prove 𝖲𝖨𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛𝖲𝖨𝑣𝑖𝑠𝑖𝑡\_𝐽𝑜ℎ𝑛\mathsf{SI}\mathit{visit\\_\kern 1.0ptJohn}sansserif\_SI italic\_visit \_ italic\_John. At the end, it is up to the agent (or the designer of
the agent) whether to comply with the obligation, or not.
The *superiority relation* >>> among rules is used to
define where one rule may override the (opposite) conclusion of another one. There are two applications of the superiority relation: the
first considers rules of the same mode while the latter compares rules of different
modes. Given r∈RX𝑟superscript𝑅𝑋r\in R^{X}italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT italic\_X end\_POSTSUPERSCRIPT and s∈RY𝑠superscript𝑅𝑌s\in R^{Y}italic\_s ∈ italic\_R start\_POSTSUPERSCRIPT italic\_Y end\_POSTSUPERSCRIPT, r>s𝑟𝑠r>sitalic\_r > italic\_s iff r𝑟ritalic\_r converts
X𝑋Xitalic\_X into Y𝑌Yitalic\_Y or s𝑠sitalic\_s converts Y𝑌Yitalic\_Y into X𝑋Xitalic\_X, i.e.,
the superiority relation is used when rules, each with a different mode, are
used to produce complementary conclusions of the same mode.
Consider the following theory
| | | | |
| --- | --- | --- | --- |
| | F𝐹\displaystyle Fitalic\_F | ={𝑔𝑜\_𝑡𝑜\_𝑅𝑜𝑚𝑒,𝑝𝑎𝑟𝑒𝑛𝑡\_𝑎𝑛𝑛𝑖𝑣𝑒𝑟𝑠𝑎𝑟𝑦,𝐴𝑢𝑔𝑢𝑠𝑡}absent𝑔𝑜\_𝑡𝑜\_𝑅𝑜𝑚𝑒𝑝𝑎𝑟𝑒𝑛𝑡\_𝑎𝑛𝑛𝑖𝑣𝑒𝑟𝑠𝑎𝑟𝑦𝐴𝑢𝑔𝑢𝑠𝑡\displaystyle=\{\mathit{go\\_\kern 1.0ptto\\_\kern 1.0ptRome},\ \mathit{parent\\_\kern 1.0ptanniversary},\ \mathit{August}\}= { italic\_go \_ italic\_to \_ italic\_Rome , italic\_parent \_ italic\_anniversary , italic\_August } | |
| | R𝑅\displaystyle Ritalic\_R | ={r1:𝑔𝑜\_𝑡𝑜\_𝑅𝑜𝑚𝑒⇒𝑔𝑜\_𝑡𝑜\_𝐼𝑡𝑎𝑙𝑦\displaystyle=\{r\_{1}:\mathit{go\\_\kern 1.0ptto\\_\kern 1.0ptRome}\Rightarrow\mathit{go\\_\kern 1.0ptto\\_\kern 1.0ptItaly}= { italic\_r start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT : italic\_go \_ italic\_to \_ italic\_Rome ⇒ italic\_go \_ italic\_to \_ italic\_Italy | |
| | | r2:𝑝𝑎𝑟𝑒𝑛𝑡\_𝑎𝑛𝑛𝑖𝑣𝑒𝑟𝑠𝑎𝑟𝑦⇒𝖴𝑔𝑜\_𝑡𝑜\_𝑅𝑜𝑚𝑒:subscript𝑟2subscript⇒𝖴𝑝𝑎𝑟𝑒𝑛𝑡\_𝑎𝑛𝑛𝑖𝑣𝑒𝑟𝑠𝑎𝑟𝑦𝑔𝑜\_𝑡𝑜\_𝑅𝑜𝑚𝑒\displaystyle\phantom{{}=\{}r\_{2}:\mathit{parent\\_\kern 1.0ptanniversary}\Rightarrow\_{\mathsf{U}}\mathit{go\\_\kern 1.0ptto\\_\kern 1.0ptRome}italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT : italic\_parent \_ italic\_anniversary ⇒ start\_POSTSUBSCRIPT sansserif\_U end\_POSTSUBSCRIPT italic\_go \_ italic\_to \_ italic\_Rome | |
| | | r3:𝐴𝑢𝑔𝑢𝑠𝑡⇒𝖴¬𝑔𝑜\_𝑡𝑜\_𝐼𝑡𝑎𝑙𝑦}\displaystyle\phantom{{}=\{}r\_{3}:\mathit{August}\Rightarrow\_{\mathsf{U}}\neg\mathit{go\\_\kern 1.0ptto\\_\kern 1.0ptItaly}\}italic\_r start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT : italic\_August ⇒ start\_POSTSUBSCRIPT sansserif\_U end\_POSTSUBSCRIPT ¬ italic\_go \_ italic\_to \_ italic\_Italy } | |
| | >\displaystyle>> | ={(r1,r3)}absentsubscript𝑟1subscript𝑟3\displaystyle=\{(r\_{1},r\_{3})\}= { ( italic\_r start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT ) } | |
where we stipulate that Convert(𝖡,𝖦)Convert𝖡𝖦\mathrm{Convert}(\mathsf{B},\mathsf{G})roman\_Convert ( sansserif\_B , sansserif\_G ) holds.
It is my parents’ anniversary and they are going to celebrate it this August
in Rome, which is the capital of Italy. Typically, I do not want to go to
Italy in August since the weather is too hot and Rome itself is too crowded.
Nonetheless, I have the goal to go to Italy this summer for my parents’
wedding anniversary, since I am a good son. Here, the superiority applies
because we use r1subscript𝑟1r\_{1}italic\_r start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT through a conversion from belief to goal.
Aligning with (Cohen and Levesque, [1990](#bib.bib8)), ConflictConflict\mathrm{Conflict}roman\_Conflict and superiority
relations narrow and regulate the intentionality of conclusions drawn by the
ConvertConvert\mathrm{Convert}roman\_Convert relation in such a way that “agents need not intend all the
expected side-effects of their intentions”. This also prevents the ill-famed
“dentist problem” which brings counterintuitive consequences, as also
pointed out by Kontopoulos et al. ([2011](#bib.bib22)). If I want to go to the dentist, either I know that the pain is a “necessary way” to get better, or I am a masochist. Either way, I intend to suffer some pain for getting some ends.
######
Definition 4 (Proof)
A *proof* P𝑃Pitalic\_P of *length* n𝑛nitalic\_n is a finite sequence P(1),…,P(n)𝑃1…𝑃𝑛P(1),\ldots,P(n)italic\_P ( 1 ) , … , italic\_P ( italic\_n ) of
*tagged literals* of the type +∂Xqsubscript𝑋𝑞+\partial\_{X}q+ ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_q and −∂Xqsubscript𝑋𝑞-\partial\_{X}q- ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_q,
where X∈MOD𝑋MODX\in\mathrm{MOD}italic\_X ∈ roman\_MOD.
The proof conditions below define the logical meaning of
such tagged literals. As a conventional notation, P(1..i)P(1..i)italic\_P ( 1 . . italic\_i ) denotes the
initial part of the sequence P𝑃Pitalic\_P of length i𝑖iitalic\_i. Given a defeasible
theory D𝐷Ditalic\_D, +∂Xqsubscript𝑋𝑞+\partial\_{X}q+ ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_q means that q𝑞qitalic\_q is defeasibly provable in D𝐷Ditalic\_D
with the mode X𝑋Xitalic\_X, and −∂Xqsubscript𝑋𝑞-\partial\_{X}q- ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_q that it has been proved in D𝐷Ditalic\_D
that q𝑞qitalic\_q is not defeasibly provable in D𝐷Ditalic\_D with the mode X𝑋Xitalic\_X. Hereafter, the term *refuted* is a synonym of *not provable* and we
use D⊢±∂Xlproves𝐷plus-or-minussubscript𝑋𝑙D\vdash\pm\partial\_{X}litalic\_D ⊢ ± ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_l iff there is a proof P𝑃Pitalic\_P in D𝐷Ditalic\_D such that
P(n)=±∂Xl𝑃𝑛plus-or-minussubscript𝑋𝑙P(n)=\pm\partial\_{X}litalic\_P ( italic\_n ) = ± ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_l for an index n𝑛nitalic\_n.
In order to characterise the notions of provability/refutability for
beliefs (±∂𝖡plus-or-minussubscript𝖡\pm\partial\_{\mathsf{B}}± ∂ start\_POSTSUBSCRIPT sansserif\_B end\_POSTSUBSCRIPT), obligations (±∂𝖮plus-or-minussubscript𝖮\pm\partial\_{\mathsf{O}}± ∂ start\_POSTSUBSCRIPT sansserif\_O end\_POSTSUBSCRIPT), desires
(±∂𝖣plus-or-minussubscript𝖣\pm\partial\_{\mathsf{D}}± ∂ start\_POSTSUBSCRIPT sansserif\_D end\_POSTSUBSCRIPT), goals (±∂𝖦plus-or-minussubscript𝖦\pm\partial\_{\mathsf{G}}± ∂ start\_POSTSUBSCRIPT sansserif\_G end\_POSTSUBSCRIPT), intentions (±∂𝖨plus-or-minussubscript𝖨\pm\partial\_{\mathsf{I}}± ∂ start\_POSTSUBSCRIPT sansserif\_I end\_POSTSUBSCRIPT) and social intentions (±∂𝖲𝖨plus-or-minussubscript𝖲𝖨\pm\partial\_{\mathsf{SI}}± ∂ start\_POSTSUBSCRIPT sansserif\_SI end\_POSTSUBSCRIPT), it is
essential to define when a rule is *applicable* or *discarded*. To
this end, the preliminary notions of *body-applicable* and
*body-discarded* must be introduced. A rule is
*body-applicable* when each literal in its body is proved with the
appropriate modality; a rule is *body-discarded* if (at least) one of its premises has been refuted.
######
Definition 5 (Body applicable)
Let P𝑃Pitalic\_P be a proof and X∈{𝖮,𝖣,𝖦,𝖨,𝖲𝖨}𝑋𝖮𝖣𝖦𝖨𝖲𝖨X\in\{\mathsf{O},\mathsf{D},\mathsf{G},\mathsf{I},\mathsf{SI}\}italic\_X ∈ { sansserif\_O , sansserif\_D , sansserif\_G , sansserif\_I , sansserif\_SI }. A rule
r∈R𝑟𝑅r\in Ritalic\_r ∈ italic\_R is *body-applicable* (at P(n+1)𝑃𝑛1P(n+1)italic\_P ( italic\_n + 1 )) iff for all ai∈A(r)subscript𝑎𝑖𝐴𝑟a\_{i}\in A(r)italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∈ italic\_A ( italic\_r ):
(1) if ai=Xlsubscript𝑎𝑖𝑋𝑙a\_{i}=Xlitalic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT = italic\_X italic\_l then +∂Xl∈P(1..n)+\partial\_{X}l\in P(1..n)+ ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_l ∈ italic\_P ( 1 . . italic\_n ),
(2) if ai=¬Xlsubscript𝑎𝑖𝑋𝑙a\_{i}=\neg Xlitalic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT = ¬ italic\_X italic\_l then −∂Xl∈P(1..n)-\partial\_{X}l\in P(1..n)- ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_l ∈ italic\_P ( 1 . . italic\_n ),
(3) if ai=l∈Litsubscript𝑎𝑖𝑙Lita\_{i}=l\in\mathrm{Lit}italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT = italic\_l ∈ roman\_Lit then +∂𝖡l∈P(1..n)+\partial\_{\mathsf{B}}l\in P(1..n)+ ∂ start\_POSTSUBSCRIPT sansserif\_B end\_POSTSUBSCRIPT italic\_l ∈ italic\_P ( 1 . . italic\_n ).
######
Definition 6 (Body discarded)
Let P𝑃Pitalic\_P be a proof and X∈{𝖮,𝖣,𝖦,𝖨,𝖲𝖨}𝑋𝖮𝖣𝖦𝖨𝖲𝖨X\in\{\mathsf{O},\mathsf{D},\mathsf{G},\mathsf{I},\mathsf{SI}\}italic\_X ∈ { sansserif\_O , sansserif\_D , sansserif\_G , sansserif\_I , sansserif\_SI }. A rule r∈R𝑟𝑅r\in Ritalic\_r ∈ italic\_R is *body-discarded* (at P(n+1)𝑃𝑛1P(n+1)italic\_P ( italic\_n + 1 )) iff there is
ai∈A(r)subscript𝑎𝑖𝐴𝑟a\_{i}\in A(r)italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∈ italic\_A ( italic\_r ) such that
(1) ai=Xlsubscript𝑎𝑖𝑋𝑙a\_{i}=Xlitalic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT = italic\_X italic\_land −∂Xl∈P(1..n)-\partial\_{X}l\in P(1..n)- ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_l ∈ italic\_P ( 1 . . italic\_n ), or
(2) ai=¬Xlsubscript𝑎𝑖𝑋𝑙a\_{i}=\neg Xlitalic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT = ¬ italic\_X italic\_land +∂Xl∈P(1..n)+\partial\_{X}l\in P(1..n)+ ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_l ∈ italic\_P ( 1 . . italic\_n ), or
(3) ai=l∈Litsubscript𝑎𝑖𝑙Lita\_{i}=l\in\mathrm{Lit}italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT = italic\_l ∈ roman\_Litand −∂𝖡l∈P(1..n)-\partial\_{\mathsf{B}}l\in P(1..n)- ∂ start\_POSTSUBSCRIPT sansserif\_B end\_POSTSUBSCRIPT italic\_l ∈ italic\_P ( 1 . . italic\_n ).
As already stated, belief rules allow us to derive literals with different
modalities through the conversion mechanism. The applicability mechanism takes this constraint into account.
######
Definition 7 (Conv-applicable)
Let P𝑃Pitalic\_P be a proof. A rule r∈R𝑟𝑅r\in Ritalic\_r ∈ italic\_R is *Conv-applicable* (at P(n+1)𝑃𝑛1P(n+1)italic\_P ( italic\_n + 1 )) for X𝑋Xitalic\_X iff
(1) r∈R𝖡𝑟superscript𝑅𝖡r\in R^{\mathsf{B}}italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_B end\_POSTSUPERSCRIPT,
(2) A(r)≠∅𝐴𝑟A(r)\neq\emptysetitalic\_A ( italic\_r ) ≠ ∅,
(3) A(r)∩ModLit=∅𝐴𝑟ModLitA(r)\cap\mathrm{ModLit}=\emptysetitalic\_A ( italic\_r ) ∩ roman\_ModLit = ∅and
(4) ∀a∈A(r),+∂Xa∈P(1..n).\forall a\in A(r),\,+\partial\_{X}a\in P(1..n).∀ italic\_a ∈ italic\_A ( italic\_r ) , + ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_a ∈ italic\_P ( 1 . . italic\_n ) .
######
Definition 8 (Conv-discarded)
Let P𝑃Pitalic\_P be a proof. A rule r∈R𝑟𝑅r\in Ritalic\_r ∈ italic\_R is *Conv-discarded* (at P(n+1)𝑃𝑛1P(n+1)italic\_P ( italic\_n + 1 )) for X𝑋Xitalic\_X iff
(1) r∉R𝖡𝑟superscript𝑅𝖡r\notin R^{\mathsf{B}}italic\_r ∉ italic\_R start\_POSTSUPERSCRIPT sansserif\_B end\_POSTSUPERSCRIPT, or
(2) A(r)=∅𝐴𝑟A(r)=\emptysetitalic\_A ( italic\_r ) = ∅, or
(3) A(r)∩ModLit≠∅𝐴𝑟ModLitA(r)\cap\mathrm{ModLit}\neq\emptysetitalic\_A ( italic\_r ) ∩ roman\_ModLit ≠ ∅, or
(4) ∃a∈A(r)𝑎𝐴𝑟\exists a\in A(r)∃ italic\_a ∈ italic\_A ( italic\_r )s.t. −∂Xa∈P(1..n).-\partial\_{X}a\in P(1..n).- ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_a ∈ italic\_P ( 1 . . italic\_n ) .
Let us consider the following theory
| | | |
| --- | --- | --- |
| | F={a,b,𝖮c}R={r1:a⇒𝖮b,r2:b,c⇒d}.formulae-sequence𝐹𝑎𝑏𝖮𝑐𝑅conditional-setsubscript𝑟1:subscript⇒𝖮𝑎𝑏subscript𝑟2⇒𝑏𝑐
𝑑F=\{a,\ b,\ \mathsf{O}c\}\quad R=\{r\_{1}:a\Rightarrow\_{\mathsf{O}}b,\ r\_{2}:b,c\Rightarrow d\}.italic\_F = { italic\_a , italic\_b , sansserif\_O italic\_c } italic\_R = { italic\_r start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT : italic\_a ⇒ start\_POSTSUBSCRIPT sansserif\_O end\_POSTSUBSCRIPT italic\_b , italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT : italic\_b , italic\_c ⇒ italic\_d } . | |
Rule r1subscript𝑟1r\_{1}italic\_r start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT is applicable while r2subscript𝑟2r\_{2}italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT is not, given that c𝑐citalic\_c is not proved
as a belief. Instead, r2subscript𝑟2r\_{2}italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT is *Conv-applicable* for 𝖮𝖮\mathsf{O}sansserif\_O, since
𝖮c𝖮𝑐\mathsf{O}csansserif\_O italic\_c is a fact and r1subscript𝑟1r\_{1}italic\_r start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT gives 𝖮b𝖮𝑏\mathsf{O}bsansserif\_O italic\_b.
The notion of applicability gives guidelines on how to consider the next
element in a given chain. Given that a belief rule cannot generate reparative
chains but only single literals, we conclude that the applicability condition
for belief collapses into body-applicability. When considering obligations,
each element before the current one must be a violated obligation. Concerning
desires, given that each element in an outcome chain represents a possible
desire, we only require the rule to be applicable either directly, or through
the ConvertConvert\mathrm{Convert}roman\_Convert relation. A literal is a candidate to be a goal only if none of
the previous elements in the chain has been proved as such. An intention must
pass the wishful thinking filter (that is, there is no factual knowledge for
the opposite conclusion), while social intention is also constrained not to
violate any norm.
######
Definition 9 (Applicable rule)
Given a proof P𝑃Pitalic\_P, r∈R[q,i]𝑟𝑅𝑞𝑖r\in R[q,i]italic\_r ∈ italic\_R [ italic\_q , italic\_i ] is *applicable* (at index i𝑖iitalic\_i and P(n+1)𝑃𝑛1P(n+1)italic\_P ( italic\_n + 1 )) for
1. 1.
𝖡𝖡\mathsf{B}sansserif\_B iff r∈R𝖡𝑟superscript𝑅𝖡r\in R^{\mathsf{B}}italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_B end\_POSTSUPERSCRIPT and is body-applicable.
2. 2.
| | | |
| --- | --- | --- |
| 𝖮𝖮\mathsf{O}sansserif\_O iff either | (2.1) | (2.1.1) r∈R𝖮𝑟superscript𝑅𝖮r\in R^{\mathsf{O}}italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_O end\_POSTSUPERSCRIPT and is
body-applicable, |
| | | (2.1.2) ∀ck∈C(r),k<i,+∂𝖮ck∈P(1..n)\forall c\_{k}\in C(r),\,k<i,\,+\partial\_{\mathsf{O}}c\_{k}\in P(1..n)∀ italic\_c start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ∈ italic\_C ( italic\_r ) , italic\_k < italic\_i , + ∂ start\_POSTSUBSCRIPT sansserif\_O end\_POSTSUBSCRIPT italic\_c start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ∈ italic\_P ( 1 . . italic\_n ) and −∂ck∈P(1..n)-\partial c\_{k}\in P(1..n)- ∂ italic\_c start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ∈ italic\_P ( 1 . . italic\_n ), or |
| | (2.2) r𝑟ritalic\_r is Conv-applicable. |
3. 3.
| | |
| --- | --- |
| 𝖣𝖣\mathsf{D}sansserif\_D iff either | (3.1) r∈R𝖴𝑟superscript𝑅𝖴r\in R^{\mathsf{U}}italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_U end\_POSTSUPERSCRIPT and is body-applicable, or |
| | (3.2) Conv-applicable. |
4. 4.
| | | |
| --- | --- | --- |
| X∈{𝖦,𝖨,𝖲𝖨}𝑋𝖦𝖨𝖲𝖨X\in\{\mathsf{G},\mathsf{I},\mathsf{SI}\}italic\_X ∈ { sansserif\_G , sansserif\_I , sansserif\_SI } iff either | (4.1) | (4.1.1)
r∈R𝖴𝑟superscript𝑅𝖴r\in R^{\mathsf{U}}italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_U end\_POSTSUPERSCRIPT and is body-applicable, and |
| | | (4.1.2) | ∀ck∈C(r),k<iformulae-sequencefor-allsubscript𝑐𝑘𝐶𝑟𝑘𝑖\forall c\_{k}\in C(r),\,k<i∀ italic\_c start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ∈ italic\_C ( italic\_r ) , italic\_k < italic\_i, +∂Y∼ck∈P(1..n)+\partial\_{Y}\mathnormal{\sim}c\_{k}\in P(1..n)+ ∂ start\_POSTSUBSCRIPT italic\_Y end\_POSTSUBSCRIPT ∼ italic\_c start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ∈ italic\_P ( 1 . . italic\_n ) for some Y𝑌Yitalic\_Y |
| | | such that Conflict(Y,X)Conflict𝑌𝑋\mathrm{Conflict}(Y,X)roman\_Conflict ( italic\_Y , italic\_X ) and −∂Xck∈P(1..n)-\partial\_{X}c\_{k}\in P(1..n)- ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_c start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ∈ italic\_P ( 1 . . italic\_n ), or |
| | (4.2) r𝑟ritalic\_r is Conv-applicable. |
For 𝖦𝖦\mathsf{G}sansserif\_G there are no conflicts; for 𝖨𝖨\mathsf{I}sansserif\_I we have Conflict(𝖡,𝖨)Conflict𝖡𝖨\mathrm{Conflict}(\mathsf{B},\mathsf{I})roman\_Conflict ( sansserif\_B , sansserif\_I ), and for 𝖲𝖨𝖲𝖨\mathsf{SI}sansserif\_SI we have Conflict(𝖡,𝖲𝖨)Conflict𝖡𝖲𝖨\mathrm{Conflict}(\mathsf{B},\mathsf{SI})roman\_Conflict ( sansserif\_B , sansserif\_SI ) and Conflict(𝖮,𝖲𝖨)Conflict𝖮𝖲𝖨\mathrm{Conflict}(\mathsf{O},\mathsf{SI})roman\_Conflict ( sansserif\_O , sansserif\_SI ).
######
Definition 10 (Discarded rule)
Given a proof P𝑃Pitalic\_P, r∈R[q,i]𝑟𝑅𝑞𝑖r\in R[q,i]italic\_r ∈ italic\_R [ italic\_q , italic\_i ] is *discarded* (at index i𝑖iitalic\_i and P(n+1)𝑃𝑛1P(n+1)italic\_P ( italic\_n + 1 )) for
1. 1.
𝖡𝖡\mathsf{B}sansserif\_B iff r∈R𝖡𝑟superscript𝑅𝖡r\in R^{\mathsf{B}}italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_B end\_POSTSUPERSCRIPT or is body-discarded.
2. 2.
| | | |
| --- | --- | --- |
| 𝖮𝖮\mathsf{O}sansserif\_O iff | (2.1) | (2.1.1) r∉R𝖮𝑟superscript𝑅𝖮r\notin R^{\mathsf{O}}italic\_r ∉ italic\_R start\_POSTSUPERSCRIPT sansserif\_O end\_POSTSUPERSCRIPT or is
body-discarded, or |
| | | (2.1.2) ∃ck∈C(r),k<i,formulae-sequencesubscript𝑐𝑘𝐶𝑟𝑘𝑖\exists c\_{k}\in C(r),\,k<i,∃ italic\_c start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ∈ italic\_C ( italic\_r ) , italic\_k < italic\_i , s.t. −∂𝖮ck∈P(1..n)-\partial\_{\mathsf{O}}c\_{k}\in P(1..n)- ∂ start\_POSTSUBSCRIPT sansserif\_O end\_POSTSUBSCRIPT italic\_c start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ∈ italic\_P ( 1 . . italic\_n ) or +∂ck∈P(1..n)+\partial c\_{k}\in P(1..n)+ ∂ italic\_c start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ∈ italic\_P ( 1 . . italic\_n ), and |
| | (2.2) r𝑟ritalic\_r is Conv-discarded. |
3. 3.
| | |
| --- | --- |
| 𝖣𝖣\mathsf{D}sansserif\_D iff | (3.1) r∉R𝖴𝑟superscript𝑅𝖴r\notin R^{\mathsf{U}}italic\_r ∉ italic\_R start\_POSTSUPERSCRIPT sansserif\_U end\_POSTSUPERSCRIPT or is
body-discarded, and |
| | (3.2) r𝑟ritalic\_r is Conv-discarded. |
4. 4.
| | | |
| --- | --- | --- |
| X∈{𝖦,𝖨,𝖲𝖨}𝑋𝖦𝖨𝖲𝖨X\in\{\mathsf{G},\mathsf{I},\mathsf{SI}\}italic\_X ∈ { sansserif\_G , sansserif\_I , sansserif\_SI } iff | (4.1) | (4.1.1)
r∉R𝖴𝑟superscript𝑅𝖴r\notin R^{\mathsf{U}}italic\_r ∉ italic\_R start\_POSTSUPERSCRIPT sansserif\_U end\_POSTSUPERSCRIPT or is body-discarded, or |
| | | (4.1.2) ∃ck∈C(r),k<iformulae-sequencesubscript𝑐𝑘𝐶𝑟𝑘𝑖\exists c\_{k}\in C(r),\,k<i∃ italic\_c start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ∈ italic\_C ( italic\_r ) , italic\_k < italic\_i, s.t. −∂Y∼ck∈P(1..n)-\partial\_{Y}\mathnormal{\sim}c\_{k}\in P(1..n)- ∂ start\_POSTSUBSCRIPT italic\_Y end\_POSTSUBSCRIPT ∼ italic\_c start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ∈ italic\_P ( 1 . . italic\_n ) for all Y𝑌Yitalic\_Y |
| | | such that Conflict(Y,X)Conflict𝑌𝑋\mathrm{Conflict}(Y,X)roman\_Conflict ( italic\_Y , italic\_X ) or +∂Xck∈P(1..n)+\partial\_{X}c\_{k}\in P(1..n)+ ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_c start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ∈ italic\_P ( 1 . . italic\_n ) and |
| | (4.2) r𝑟ritalic\_r is Conv-discarded. |
For 𝖦𝖦\mathsf{G}sansserif\_G there are no conflicts; for 𝖨𝖨\mathsf{I}sansserif\_I we have
Conflict(𝖡,𝖨)Conflict𝖡𝖨\mathrm{Conflict}(\mathsf{B},\mathsf{I})roman\_Conflict ( sansserif\_B , sansserif\_I ), and for 𝖲𝖨𝖲𝖨\mathsf{SI}sansserif\_SI we have Conflict(𝖡,𝖲𝖨)Conflict𝖡𝖲𝖨\mathrm{Conflict}(\mathsf{B},\mathsf{SI})roman\_Conflict ( sansserif\_B , sansserif\_SI ) and
Conflict(𝖮,𝖲𝖨)Conflict𝖮𝖲𝖨\mathrm{Conflict}(\mathsf{O},\mathsf{SI})roman\_Conflict ( sansserif\_O , sansserif\_SI ).
Notice that the conditions of Definition [10](#Thmdefinition10 "Definition 10 (Discarded rule) ‣ Conflict-detection/resolution. ‣ 3 Logic ‣ The Rationale behind the Concept of Goal") are the
*strong negation*333The strong negation principle is closely
related to the function that simplifies a formula by moving all negations to
an innermost position in the resulting formula, and replaces the positive
tags with the respective negative tags, and the other way around. (See
(Antoniou et al., [2000](#bib.bib2); Governatori et al., [2009](#bib.bib16)).) of those given in Definition [9](#Thmdefinition9 "Definition 9 (Applicable rule) ‣ Conflict-detection/resolution. ‣ 3 Logic ‣ The Rationale behind the Concept of Goal"). The
conditions to establish a rule being discarded correspond to the
constructive failure to prove that the same rule is applicable.
We are now ready to introduce the definitions of the proof conditions for the
modal operators given in this paper. We start with that for desire.
######
Definition 11 (Defeasible provability for desire)
The proof conditions of *defeasible provability* for desire are
| |
| --- |
| +∂𝖣subscript𝖣+\partial\_{\mathsf{D}}+ ∂ start\_POSTSUBSCRIPT sansserif\_D end\_POSTSUBSCRIPT: If P(n+1)=+∂𝖣q𝑃𝑛1subscript𝖣𝑞P(n+1)=+\partial\_{\mathsf{D}}qitalic\_P ( italic\_n + 1 ) = + ∂ start\_POSTSUBSCRIPT sansserif\_D end\_POSTSUBSCRIPT italic\_q then |
| (1) | 𝖣q∈F𝖣𝑞F\mathsf{D}q\in\mathrm{F}sansserif\_D italic\_q ∈ roman\_F or |
| (2) | (2.1) ¬𝖣q∉F𝖣𝑞F\neg\mathsf{D}q\not\in\mathrm{F}¬ sansserif\_D italic\_q ∉ roman\_F and |
| | (2.2) | ∃r∈R[q,i]𝑟𝑅𝑞𝑖\exists r\in R[q,i]∃ italic\_r ∈ italic\_R [ italic\_q , italic\_i ] s.t. r𝑟ritalic\_r is applicable for 𝖣𝖣\mathsf{D}sansserif\_D and |
| | (2.3) | ∀s∈R[∼q,j]\forall s\in R[\mathnormal{\sim}q,j]∀ italic\_s ∈ italic\_R [ ∼ italic\_q , italic\_j ] either
(2.3.1) s𝑠sitalic\_s is discarded for 𝖣𝖣\mathsf{D}sansserif\_D, or
(2.3.2) s≯rnot-greater-than𝑠𝑟s\not>ritalic\_s ≯ italic\_r. |
The above conditions determine when we are able to assert that q𝑞qitalic\_q is a
desire. Specifically, a *desire* is each element in a
chain of an outcome rule for which there is no stronger argument for the
opposite desire.
The negative counterpart −∂𝖣qsubscript𝖣𝑞-\partial\_{\mathsf{D}}q- ∂ start\_POSTSUBSCRIPT sansserif\_D end\_POSTSUBSCRIPT italic\_q is obtained by the principle of strong negation.
######
Definition 12 (Defeasible refutability for desire)
The proof conditions of *defeasible refutability* for desire are
| |
| --- |
| −∂𝖣subscript𝖣-\partial\_{\mathsf{D}}- ∂ start\_POSTSUBSCRIPT sansserif\_D end\_POSTSUBSCRIPT: If P(n+1)=−∂𝖣q𝑃𝑛1subscript𝖣𝑞P(n+1)=-\partial\_{\mathsf{D}}qitalic\_P ( italic\_n + 1 ) = - ∂ start\_POSTSUBSCRIPT sansserif\_D end\_POSTSUBSCRIPT italic\_q then |
| (1) | 𝖣q∉F𝖣𝑞F\mathsf{D}q\not\in\mathrm{F}sansserif\_D italic\_q ∉ roman\_F and |
| (2) | (2.1) ¬𝖣q∈F𝖣𝑞F\neg\mathsf{D}q\in\mathrm{F}¬ sansserif\_D italic\_q ∈ roman\_F, or |
| | (2.2) ∀r∈R[q,i]for-all𝑟𝑅𝑞𝑖\forall r\in R[q,i]∀ italic\_r ∈ italic\_R [ italic\_q , italic\_i ] either r𝑟ritalic\_r is discarded for 𝖣𝖣\mathsf{D}sansserif\_D, or |
| | (2.3) | ∃s∈R[∼q,j]\exists s\in R[\mathnormal{\sim}q,j]∃ italic\_s ∈ italic\_R [ ∼ italic\_q , italic\_j ] s.t.
(2.3.1) s𝑠sitalic\_s is applicable for 𝖣𝖣\mathsf{D}sansserif\_D and
(2.3.2) s>r𝑠𝑟s>ritalic\_s > italic\_r. |
The proof conditions for +∂Xsubscript𝑋+\partial\_{X}+ ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT, with X∈MOD∖{𝖣}𝑋MOD𝖣X\in\mathrm{MOD}\setminus\{\mathsf{D}\}italic\_X ∈ roman\_MOD ∖ { sansserif\_D } are as follows, provided that Y𝑌Yitalic\_Y and T𝑇Titalic\_T represent two arbitrary modalities in MODMOD\mathrm{MOD}roman\_MOD:
######
Definition 13 (Defeasible provability for obligation, goal, intention and social intention)
The proof conditions of *defeasible provability* for X∈MOD∖{𝖣}𝑋MOD𝖣X\in\mathrm{MOD}\setminus\{\mathsf{D}\}italic\_X ∈ roman\_MOD ∖ { sansserif\_D } are
| |
| --- |
| +∂Xsubscript𝑋+\partial\_{X}+ ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT: If P(n+1)=+∂Xq𝑃𝑛1subscript𝑋𝑞P(n+1)=+\partial\_{X}qitalic\_P ( italic\_n + 1 ) = + ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_q then |
| (1) | Xq∈F𝑋𝑞FXq\in\mathrm{F}italic\_X italic\_q ∈ roman\_F or |
| (2) | (2.1) ¬Xq∉F𝑋𝑞F\neg Xq\not\in\mathrm{F}¬ italic\_X italic\_q ∉ roman\_F and (Y∼q∉F(Y\mathnormal{\sim}q\not\in\mathrm{F}( italic\_Y ∼ italic\_q ∉ roman\_F for Y=X𝑌𝑋Y=Xitalic\_Y = italic\_X or Conflict(Y,X))\mathrm{Conflict}(Y,X))roman\_Conflict ( italic\_Y , italic\_X ) ) and |
| | (2.2) | ∃r∈R[q,i]𝑟𝑅𝑞𝑖\exists r\in R[q,i]∃ italic\_r ∈ italic\_R [ italic\_q , italic\_i ] s.t. r𝑟ritalic\_r is applicable for X𝑋Xitalic\_X and |
| | (2.3) | ∀s∈R[∼q,j]\forall s\in R[\mathnormal{\sim}q,j]∀ italic\_s ∈ italic\_R [ ∼ italic\_q , italic\_j ] either |
| | | (2.3.1) ∀Yfor-all𝑌\forall Y∀ italic\_Y s.t. Y=X𝑌𝑋Y=Xitalic\_Y = italic\_X or Conflict(Y,X)Conflict𝑌𝑋\mathrm{Conflict}(Y,X)roman\_Conflict ( italic\_Y , italic\_X ), s𝑠sitalic\_s is discarded for Y𝑌Yitalic\_Y; or |
| | | (2.3.2) | ∃T,∃t∈R[q,k]𝑇𝑡
𝑅𝑞𝑘\exists T,\exists t\in R[q,k]∃ italic\_T , ∃ italic\_t ∈ italic\_R [ italic\_q , italic\_k ] s.t. t𝑡titalic\_t is applicable for T𝑇Titalic\_T, and either |
| | | | (2.3.2.1) | t>s𝑡𝑠t>sitalic\_t > italic\_s if Y=T𝑌𝑇Y=Titalic\_Y = italic\_T, Convert(Y,T)Convert𝑌𝑇\mathrm{Convert}(Y,T)roman\_Convert ( italic\_Y , italic\_T ), or Convert(T,Y)Convert𝑇𝑌\mathrm{Convert}(T,Y)roman\_Convert ( italic\_T , italic\_Y ); or |
| | | | (2.3.2.2) | Conflict(T,Y)Conflict𝑇𝑌\mathrm{Conflict}(T,Y)roman\_Conflict ( italic\_T , italic\_Y ). |
To show that a literal q𝑞qitalic\_q is defeasibly provable with the modality X𝑋Xitalic\_X we
have two choices: (1) the modal literal Xq𝑋𝑞Xqitalic\_X italic\_q is a fact; or (2) we need to
argue using the defeasible part of D𝐷Ditalic\_D. For (2), we require that (2.1) a
complementary literal (of the same modality, or of a conflictual modality)
does not appear in the set of facts, and (2.2) there must be an applicable
rule for X𝑋Xitalic\_X and q𝑞qitalic\_q. Moreover, each possible attack brought by a rule s𝑠sitalic\_s
for ∼qsimilar-toabsent𝑞\mathnormal{\sim}q∼ italic\_q has to be either discarded for the same modality of r𝑟ritalic\_r and for
all modalities in conflict with X𝑋Xitalic\_X (2.3.1), or successfully
counterattacked by another stronger rule t𝑡titalic\_t for q𝑞qitalic\_q (2.3.2). We recall that
the superiority relation combines rules of the same mode, rules with
different modes that produce complementary conclusion of the same mode
through conversion (both considered in clause (2.3.2.1)), and rules with
conflictual modalities (clause 2.3.2.2). Trivially, if X=𝖡𝑋𝖡X=\mathsf{B}italic\_X = sansserif\_B then the
proof conditions reduce to those of classical defeasible logic (Antoniou et al., [2001](#bib.bib3)).
Again, conditions for −∂Xsubscript𝑋-\partial\_{X}- ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT are derived by the principle of
strong negation from that for +∂Xsubscript𝑋+\partial\_{X}+ ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT and are as follows.
######
Definition 14 (Defeasible refutability for obligation, goal, intention and social intention)
The proof conditions of *defeasible refutability* for X∈{𝖮,𝖦,𝖨,𝖲𝖨}𝑋𝖮𝖦𝖨𝖲𝖨X\in\{\mathsf{O},\mathsf{G},\mathsf{I},\mathsf{SI}\}italic\_X ∈ { sansserif\_O , sansserif\_G , sansserif\_I , sansserif\_SI } are
| |
| --- |
| −∂Xsubscript𝑋-\partial\_{X}- ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT: If P(n+1)=−∂Xq𝑃𝑛1subscript𝑋𝑞P(n+1)=-\partial\_{X}qitalic\_P ( italic\_n + 1 ) = - ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_q then |
| (1) | Xq∉F𝑋𝑞FXq\notin\mathrm{F}italic\_X italic\_q ∉ roman\_F and either |
| (2) | (2.1) ¬Xq∈F𝑋𝑞F\neg Xq\in\mathrm{F}¬ italic\_X italic\_q ∈ roman\_F or (Y∼q∈F(Y\mathnormal{\sim}q\in\mathrm{F}( italic\_Y ∼ italic\_q ∈ roman\_F for Y=X𝑌𝑋Y=Xitalic\_Y = italic\_X or Conflict(Y,X))\mathrm{Conflict}(Y,X))roman\_Conflict ( italic\_Y , italic\_X ) ) or |
| | (2.2) | ∀r∈R[q,i]for-all𝑟𝑅𝑞𝑖\forall r\in R[q,i]∀ italic\_r ∈ italic\_R [ italic\_q , italic\_i ] either r𝑟ritalic\_r is discarded for X𝑋Xitalic\_X or |
| | (2.3) | ∃s∈R[∼q,j]\exists s\in R[\mathnormal{\sim}q,j]∃ italic\_s ∈ italic\_R [ ∼ italic\_q , italic\_j ] s.t. |
| | | (2.3.1) ∃Y𝑌\exists Y∃ italic\_Y s.t. (Y=X(Y=X( italic\_Y = italic\_X or Conflict(Y,X))\mathrm{Conflict}(Y,X))roman\_Conflict ( italic\_Y , italic\_X ) ) and s𝑠sitalic\_s is applicable for Y𝑌Yitalic\_Y, and |
| | | (2.3.2) | ∀T,∀t∈R[q,k]for-all𝑇for-all𝑡
𝑅𝑞𝑘\forall T,\forall t\in R[q,k]∀ italic\_T , ∀ italic\_t ∈ italic\_R [ italic\_q , italic\_k ] either t𝑡titalic\_t is discarded for T𝑇Titalic\_T, or |
| | | | (2.3.2.1) | t≯snot-greater-than𝑡𝑠t\not>sitalic\_t ≯ italic\_s if Y=T𝑌𝑇Y=Titalic\_Y = italic\_T, Convert(Y,T)Convert𝑌𝑇\mathrm{Convert}(Y,T)roman\_Convert ( italic\_Y , italic\_T ), or Convert(T,Y)Convert𝑇𝑌\mathrm{Convert}(T,Y)roman\_Convert ( italic\_T , italic\_Y ); and |
| | | | (2.3.2.2) not Conflict(T,Y)Conflict𝑇𝑌\mathrm{Conflict}(T,Y)roman\_Conflict ( italic\_T , italic\_Y ). |
To better understand how applicability and proof conditions interact to define the (defeasible) conclusions of a given theory, we consider the example below.
######
Example 4
Let D𝐷Ditalic\_D be the following modal theory
| | | | | | |
| --- | --- | --- | --- | --- | --- |
| | F𝐹\displaystyle Fitalic\_F | ={a1,a2,¬b1,𝖮¬b2}absentsubscript𝑎1subscript𝑎2subscript𝑏1𝖮subscript𝑏2\displaystyle=\{a\_{1},\,a\_{2},\,\neg b\_{1},\,\mathsf{O}\neg b\_{2}\}= { italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , ¬ italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , sansserif\_O ¬ italic\_b start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT } | R𝑅\displaystyle Ritalic\_R | ={r:a1⇒𝖴b1⊙b2⊙b3⊙b4,s:a2⇒𝖴b4}.absentconditional-set𝑟:subscript⇒𝖴subscript𝑎1direct-productsubscript𝑏1subscript𝑏2subscript𝑏3subscript𝑏4𝑠subscript⇒𝖴subscript𝑎2subscript𝑏4\displaystyle=\{r:{a\_{1}}\Rightarrow\_{\mathsf{U}}b\_{1}\odot b\_{2}\odot b\_{3}\odot b\_{4},\ s:a\_{2}\Rightarrow\_{\mathsf{U}}b\_{4}\}.= { italic\_r : italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ⇒ start\_POSTSUBSCRIPT sansserif\_U end\_POSTSUBSCRIPT italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ⊙ italic\_b start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ⊙ italic\_b start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT ⊙ italic\_b start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT , italic\_s : italic\_a start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ⇒ start\_POSTSUBSCRIPT sansserif\_U end\_POSTSUBSCRIPT italic\_b start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT } . | |
Here, r𝑟ritalic\_r is trivially applicable for 𝖣𝖣\mathsf{D}sansserif\_D and +∂𝖣bisubscript𝖣subscript𝑏𝑖+\partial\_{\mathsf{D}}b\_{i}+ ∂ start\_POSTSUBSCRIPT sansserif\_D end\_POSTSUBSCRIPT italic\_b start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT
holds, for 1≤i≤41𝑖41\leq i\leq 41 ≤ italic\_i ≤ 4. Moreover, we have +∂𝖦b1subscript𝖦subscript𝑏1+\partial\_{\mathsf{G}}b\_{1}+ ∂ start\_POSTSUBSCRIPT sansserif\_G end\_POSTSUBSCRIPT italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT and
r𝑟ritalic\_r is discarded for 𝖦𝖦\mathsf{G}sansserif\_G after b1subscript𝑏1b\_{1}italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT. Due to +∂¬b1subscript𝑏1+\partial\neg b\_{1}+ ∂ ¬ italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT, it
follows that −∂𝖨b1subscript𝖨subscript𝑏1-\partial\_{\mathsf{I}}b\_{1}- ∂ start\_POSTSUBSCRIPT sansserif\_I end\_POSTSUBSCRIPT italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT holds (as well as
−∂𝖲𝖨b1subscript𝖲𝖨subscript𝑏1-\partial\_{\mathsf{SI}}b\_{1}- ∂ start\_POSTSUBSCRIPT sansserif\_SI end\_POSTSUBSCRIPT italic\_b start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT); the rule is applicable for 𝖨𝖨\mathsf{I}sansserif\_I and b2subscript𝑏2b\_{2}italic\_b start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT, and
we are able to prove +∂𝖨b2subscript𝖨subscript𝑏2+\partial\_{\mathsf{I}}b\_{2}+ ∂ start\_POSTSUBSCRIPT sansserif\_I end\_POSTSUBSCRIPT italic\_b start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT; the rule is thus discarded for
𝖨𝖨\mathsf{I}sansserif\_I and b3subscript𝑏3b\_{3}italic\_b start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT as well as b4subscript𝑏4b\_{4}italic\_b start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT. Due to 𝖮¬b2𝖮subscript𝑏2\mathsf{O}\neg b\_{2}sansserif\_O ¬ italic\_b start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT being a fact,
r𝑟ritalic\_r is discarded for 𝖲𝖨𝖲𝖨\mathsf{SI}sansserif\_SI and b2subscript𝑏2b\_{2}italic\_b start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT resulting in
−∂𝖲𝖨b2subscript𝖲𝖨subscript𝑏2-\partial\_{\mathsf{SI}}b\_{2}- ∂ start\_POSTSUBSCRIPT sansserif\_SI end\_POSTSUBSCRIPT italic\_b start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT, which in turn makes the rule applicable for 𝖲𝖨𝖲𝖨\mathsf{SI}sansserif\_SI
and b3subscript𝑏3b\_{3}italic\_b start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT, proving +∂𝖲𝖨b3subscript𝖲𝖨subscript𝑏3+\partial\_{\mathsf{SI}}b\_{3}+ ∂ start\_POSTSUBSCRIPT sansserif\_SI end\_POSTSUBSCRIPT italic\_b start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT. As we have argued before, this
makes r𝑟ritalic\_r discarded for b4subscript𝑏4b\_{4}italic\_b start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT. Even if r𝑟ritalic\_r is discarded for 𝖲𝖨𝖲𝖨\mathsf{SI}sansserif\_SI and
b4subscript𝑏4b\_{4}italic\_b start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT, we nonetheless have D⊢+∂𝖲𝖨b4proves𝐷subscript𝖲𝖨subscript𝑏4D\vdash+\partial\_{\mathsf{SI}}b\_{4}italic\_D ⊢ + ∂ start\_POSTSUBSCRIPT sansserif\_SI end\_POSTSUBSCRIPT italic\_b start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT due to s𝑠sitalic\_s;
specifically, D⊢+∂Xb4proves𝐷subscript𝑋subscript𝑏4D\vdash+\partial\_{X}b\_{4}italic\_D ⊢ + ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_b start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT with X∈{𝖣,𝖦,𝖨,𝖲𝖨}𝑋𝖣𝖦𝖨𝖲𝖨X\in\{\mathsf{D},\mathsf{G},\mathsf{I},\mathsf{SI}\}italic\_X ∈ { sansserif\_D , sansserif\_G , sansserif\_I , sansserif\_SI } given that s𝑠sitalic\_s is trivially applicable for X𝑋Xitalic\_X.
For further illustrations of how the machinery works, the reader is referred to [A](#A1 "Appendix A Inferential mechanism example ‣ The Rationale behind the Concept of Goal").
The next definition extends the concept of complement for modal literals and
is used to establish the logical connection among proved and refuted literals
in our framework.
######
Definition 15 (Complement set)
The *complement set* of a given modal literal l𝑙litalic\_l, denoted by
l~~𝑙\tilde{l}over~ start\_ARG italic\_l end\_ARG, is defined as follows:
(1) if l=𝖣m𝑙𝖣𝑚l=\mathsf{D}mitalic\_l = sansserif\_D italic\_m, then l~={¬𝖣m}~𝑙𝖣𝑚\tilde{l}=\{\neg\mathsf{D}m\}over~ start\_ARG italic\_l end\_ARG = { ¬ sansserif\_D italic\_m };
(2) if l=Xm𝑙𝑋𝑚l=Xmitalic\_l = italic\_X italic\_m, then l~={¬Xm,X∼m}~𝑙similar-to𝑋𝑚𝑋
𝑚\tilde{l}=\{\neg Xm,X\mathnormal{\sim}m\}over~ start\_ARG italic\_l end\_ARG = { ¬ italic\_X italic\_m , italic\_X ∼ italic\_m }, with X∈{𝖮,𝖦,𝖨,𝖲𝖨}𝑋𝖮𝖦𝖨𝖲𝖨X\in\{\mathsf{O},\mathsf{G},\mathsf{I},\mathsf{SI}\}italic\_X ∈ { sansserif\_O , sansserif\_G , sansserif\_I , sansserif\_SI };
(3) if l=¬Xm𝑙𝑋𝑚l=\neg Xmitalic\_l = ¬ italic\_X italic\_m, then l~={Xm}~𝑙𝑋𝑚\tilde{l}=\{Xm\}over~ start\_ARG italic\_l end\_ARG = { italic\_X italic\_m }.
The logic resulting from the above proof conditions enjoys
properties describing the appropriate behaviour of the modal operators for consistent theories.
######
Definition 16 (Consistent defeasible theory)
A defeasible theory D=(F,R,>)𝐷F𝑅D=(\mathrm{F},R,>)italic\_D = ( roman\_F , italic\_R , > ) is *consistent* iff >>> is acyclic and FF\mathrm{F}roman\_F does not contain pairs of complementary literals, that is if FF\mathrm{F}roman\_F does not contain pairs like (i) l𝑙litalic\_l and ∼lsimilar-toabsent𝑙\mathnormal{\sim}l∼ italic\_l, (ii) Xl𝑋𝑙Xlitalic\_X italic\_l and ¬Xl𝑋𝑙\neg Xl¬ italic\_X italic\_l with X∈MOD𝑋MODX\in\mathrm{MOD}italic\_X ∈ roman\_MOD, and (iii) Xl𝑋𝑙Xlitalic\_X italic\_l and X∼lsimilar-to𝑋𝑙X\mathnormal{\sim}litalic\_X ∼ italic\_l with X∈{𝖦,𝖨,𝖲𝖨}𝑋𝖦𝖨𝖲𝖨X\in\{\mathsf{G},\mathsf{I},\mathsf{SI}\}italic\_X ∈ { sansserif\_G , sansserif\_I , sansserif\_SI }.
######
Proposition 1
Let D𝐷Ditalic\_D be a consistent, finite defeasible theory. For any literal l𝑙litalic\_l, it is not possible to have both
1. 1.
D⊢+∂Xlproves𝐷subscript𝑋𝑙D\vdash+\partial\_{X}litalic\_D ⊢ + ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_l and D⊢−∂Xlproves𝐷subscript𝑋𝑙D\vdash-\partial\_{X}litalic\_D ⊢ - ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_l with X∈MOD𝑋MODX\in\mathrm{MOD}italic\_X ∈ roman\_MOD;
2. 2.
D⊢+∂Xlproves𝐷subscript𝑋𝑙D\vdash+\partial\_{X}litalic\_D ⊢ + ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_l and D⊢+∂X∼lproves𝐷similar-tosubscript𝑋𝑙D\vdash+\partial\_{X}\mathnormal{\sim}litalic\_D ⊢ + ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT ∼ italic\_l with X∈MOD∖{𝖣}𝑋MOD𝖣X\in\mathrm{MOD}\setminus\{\mathsf{D}\}italic\_X ∈ roman\_MOD ∖ { sansserif\_D }.
All proofs of propositions, lemmas and theorems are reported in [B](#A2 "Appendix B Proofs of Propositions in Section 3 ‣ The Rationale behind the Concept of Goal") and [C](#A3 "Appendix C Correctness and Completeness of DefeasibleExtension ‣ The Rationale behind the Concept of Goal").
The meaning of the above proposition is that, for instance, it is not
possible for an agent to obey something that is obligatory and forbidden (obligatory not)
at the same time. On the other hand, an agent may have opposite desires given
different situations, but then she will be able to plan for only one between
the two alternatives.
Proposition [2](#Thmtheorem2 "Proposition 2 ‣ Conflict-detection/resolution. ‣ 3 Logic ‣ The Rationale behind the Concept of Goal") below governs the interactions between different modalities
and the relationships between proved literals and refuted complementary
literals of the same modality. Proposition [3](#Thmtheorem3 "Proposition 3 ‣ Conflict-detection/resolution. ‣ 3 Logic ‣ The Rationale behind the Concept of Goal") proves that certain (likely-expected) implications do no hold.
######
Proposition 2
Let D𝐷Ditalic\_D be a consistent defeasible theory. For any literal l𝑙litalic\_l, the following statements hold:
1. 1.
if D⊢+∂Xlproves𝐷subscript𝑋𝑙D\vdash+\partial\_{X}litalic\_D ⊢ + ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_l, then D⊢−∂X∼lproves𝐷similar-tosubscript𝑋𝑙D\vdash-\partial\_{X}\mathnormal{\sim}litalic\_D ⊢ - ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT ∼ italic\_l with X∈MOD∖{𝖣}𝑋MOD𝖣X\in\mathrm{MOD}\setminus\{\mathsf{D}\}italic\_X ∈ roman\_MOD ∖ { sansserif\_D };
2. 2.
if D⊢+∂lproves𝐷𝑙D\vdash+\partial litalic\_D ⊢ + ∂ italic\_l, then D⊢−∂𝖨∼lproves𝐷similar-tosubscript𝖨𝑙D\vdash-\partial\_{\mathsf{I}}\mathnormal{\sim}litalic\_D ⊢ - ∂ start\_POSTSUBSCRIPT sansserif\_I end\_POSTSUBSCRIPT ∼ italic\_l;
3. 3.
if D⊢+∂lproves𝐷𝑙D\vdash+\partial litalic\_D ⊢ + ∂ italic\_l or D⊢+∂𝖮lproves𝐷subscript𝖮𝑙D\vdash+\partial\_{\mathsf{O}}litalic\_D ⊢ + ∂ start\_POSTSUBSCRIPT sansserif\_O end\_POSTSUBSCRIPT italic\_l, then D⊢−∂𝖲𝖨∼lproves𝐷similar-tosubscript𝖲𝖨𝑙D\vdash-\partial\_{\mathsf{SI}}\mathnormal{\sim}litalic\_D ⊢ - ∂ start\_POSTSUBSCRIPT sansserif\_SI end\_POSTSUBSCRIPT ∼ italic\_l;
4. 4.
if D⊢+∂𝖦lproves𝐷subscript𝖦𝑙D\vdash+\partial\_{\mathsf{G}}litalic\_D ⊢ + ∂ start\_POSTSUBSCRIPT sansserif\_G end\_POSTSUBSCRIPT italic\_l, then D⊢+∂𝖣lproves𝐷subscript𝖣𝑙D\vdash+\partial\_{\mathsf{D}}litalic\_D ⊢ + ∂ start\_POSTSUBSCRIPT sansserif\_D end\_POSTSUBSCRIPT italic\_l;
5. 5.
if D⊢−∂𝖣lproves𝐷subscript𝖣𝑙D\vdash-\partial\_{\mathsf{D}}litalic\_D ⊢ - ∂ start\_POSTSUBSCRIPT sansserif\_D end\_POSTSUBSCRIPT italic\_l, then D⊢−∂𝖦lproves𝐷subscript𝖦𝑙D\vdash-\partial\_{\mathsf{G}}litalic\_D ⊢ - ∂ start\_POSTSUBSCRIPT sansserif\_G end\_POSTSUBSCRIPT italic\_l.
######
Proposition 3
Let D𝐷Ditalic\_D be a consistent defeasible theory. For any literal l𝑙litalic\_l, the following statements *do not* hold:
1. 6.
if D⊢+∂𝖣lproves𝐷subscript𝖣𝑙D\vdash+\partial\_{\mathsf{D}}litalic\_D ⊢ + ∂ start\_POSTSUBSCRIPT sansserif\_D end\_POSTSUBSCRIPT italic\_l, then D⊢+∂Xlproves𝐷subscript𝑋𝑙D\vdash+\partial\_{X}litalic\_D ⊢ + ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_l with X∈{𝖦,𝖨,𝖲𝖨}𝑋𝖦𝖨𝖲𝖨X\in\{\mathsf{G},\mathsf{I},\mathsf{SI}\}italic\_X ∈ { sansserif\_G , sansserif\_I , sansserif\_SI };
2. 7.
if D⊢+∂𝖦lproves𝐷subscript𝖦𝑙D\vdash+\partial\_{\mathsf{G}}litalic\_D ⊢ + ∂ start\_POSTSUBSCRIPT sansserif\_G end\_POSTSUBSCRIPT italic\_l, then D⊢+∂Xlproves𝐷subscript𝑋𝑙D\vdash+\partial\_{X}litalic\_D ⊢ + ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_l with X∈{𝖨,𝖲𝖨}𝑋𝖨𝖲𝖨X\in\{\mathsf{I},\mathsf{SI}\}italic\_X ∈ { sansserif\_I , sansserif\_SI };
3. 8.
if D⊢+∂Xlproves𝐷subscript𝑋𝑙D\vdash+\partial\_{X}litalic\_D ⊢ + ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_l, then D⊢+∂Ylproves𝐷subscript𝑌𝑙D\vdash+\partial\_{Y}litalic\_D ⊢ + ∂ start\_POSTSUBSCRIPT italic\_Y end\_POSTSUBSCRIPT italic\_l with X={𝖨,𝖲𝖨}𝑋𝖨𝖲𝖨X=\{\mathsf{I},\mathsf{SI}\}italic\_X = { sansserif\_I , sansserif\_SI } and Y={𝖣,𝖦}𝑌𝖣𝖦Y=\{\mathsf{D},\mathsf{G}\}italic\_Y = { sansserif\_D , sansserif\_G };
4. 9.
if D⊢−∂Ylproves𝐷subscript𝑌𝑙D\vdash-\partial\_{Y}litalic\_D ⊢ - ∂ start\_POSTSUBSCRIPT italic\_Y end\_POSTSUBSCRIPT italic\_l, then D⊢−∂Xlproves𝐷subscript𝑋𝑙D\vdash-\partial\_{X}litalic\_D ⊢ - ∂ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_l with Y∈{𝖣,𝖦}𝑌𝖣𝖦Y\in\{\mathsf{D},\mathsf{G}\}italic\_Y ∈ { sansserif\_D , sansserif\_G } and X∈{𝖨,𝖲𝖨}𝑋𝖨𝖲𝖨X\in\{\mathsf{I},\mathsf{SI}\}italic\_X ∈ { sansserif\_I , sansserif\_SI }.
Parts 6. and 7. directly follow by Definitions from [9](#Thmdefinition9 "Definition 9 (Applicable rule) ‣ Conflict-detection/resolution. ‣ 3 Logic ‣ The Rationale behind the Concept of Goal") to [14](#Thmdefinition14 "Definition 14 (Defeasible refutability for obligation, goal, intention and social intention) ‣ Conflict-detection/resolution. ‣ 3 Logic ‣ The Rationale behind the Concept of Goal") and rely on the intuitions
presented in Section [2](#S2 "2 The intuition underneath the framework ‣ The Rationale behind the Concept of Goal"). Parts from 7. to 9.
reveal the true nature of expressing outcomes in a preference order: it may be
the case that the agent desires something (may it be even her preferred
outcome) but if the factuality of the environment makes this outcome impossible
to reach, then she should not pursue such an outcome, and instead commit
herself on the next option available.
The statements of Proposition [3](#Thmtheorem3 "Proposition 3 ‣ Conflict-detection/resolution. ‣ 3 Logic ‣ The Rationale behind the Concept of Goal") exhibit a common
feature which can be illustrated by the idiom: “What’s your plan B?”,
meaning: even if you are willing for an option, if such an option is not
feasible you need to strive for the plan B.
4 Algorithmic results
----------------------
We now present procedures and algorithms to compute the
*extension* of a *finite* defeasible theory
(Subsection [4.2](#S4.SS2 "4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")), in order to ascertain the complexity of
the logic introduced in the previous sections. The algorithms are inspired to
ideas proposed in (Maher, [2001](#bib.bib26); Lam and Governatori, [2011](#bib.bib25)).
###
4.1 Notation for the algorithms
From now on, ■■\blacksquare■ denotes a generic modality
in MODMOD\mathrm{MOD}roman\_MOD, ◇◇\Diamond◇ a generic modality in MOD∖{𝖡}MOD𝖡\mathrm{MOD}\setminus\{\mathsf{B}\}roman\_MOD ∖ { sansserif\_B }, and
□□\Box□ a fixed modality chosen in ■■\blacksquare■. Moreover, whenever □=𝖡□𝖡\Box=\mathsf{B}□ = sansserif\_B we shall treat
literals □l□𝑙\Box l□ italic\_l and l𝑙litalic\_l as synonyms. To accommodate
the ConvertConvert\mathrm{Convert}roman\_Convert relation to the algorithms, we recall that R𝖡,◇superscript𝑅𝖡◇R^{\mathsf{B},\Diamond}italic\_R start\_POSTSUPERSCRIPT sansserif\_B , ◇ end\_POSTSUPERSCRIPT
denotes the set of belief rules that can be used for a conversion to modality
◇◇\Diamond◇. The antecedent of all such rules is not empty, and does
not contain any modal literal.
Furthermore, for each literal l𝑙litalic\_l, l■subscript𝑙■l\_{\blacksquare}italic\_l start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT is the set (initially
empty) such that ±□∈l■plus-or-minus□subscript𝑙■\pm\Box\in l\_{\blacksquare}± □ ∈ italic\_l start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT iff D⊢±∂□lproves𝐷plus-or-minussubscript□𝑙D\vdash\pm\partial\_{\Box}litalic\_D ⊢ ± ∂ start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT italic\_l. Given a modal defeasible theory D𝐷Ditalic\_D, a set of rules R𝑅Ritalic\_R, and a rule r∈R□[l]𝑟superscript𝑅□delimited-[]𝑙r\in R^{\Box}[l]italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT □ end\_POSTSUPERSCRIPT [ italic\_l ], we expand the superiority relation >>> by incorporating the
ConflictConflict\mathrm{Conflict}roman\_Conflict relation into it:
| | | |
| --- | --- | --- |
| | >=>∪{(r,s)|r∈R□[l],s∈R■[∼l],Conflict(□,■)}.>=>\cup\,\{(r,s)|r\in R^{\Box}[l],s\in R^{\blacksquare}[\mathnormal{\sim}l],\mathrm{Conflict}(\Box,\blacksquare)\}.> = > ∪ { ( italic\_r , italic\_s ) | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT □ end\_POSTSUPERSCRIPT [ italic\_l ] , italic\_s ∈ italic\_R start\_POSTSUPERSCRIPT ■ end\_POSTSUPERSCRIPT [ ∼ italic\_l ] , roman\_Conflict ( □ , ■ ) } . | |
We also define:
1. 1.
rsup={s∈R:(s,r)∈>}subscript𝑟𝑠𝑢𝑝conditional-set𝑠𝑅𝑠𝑟r\_{sup}=\{s\in R:(s,r)\in>\}italic\_r start\_POSTSUBSCRIPT italic\_s italic\_u italic\_p end\_POSTSUBSCRIPT = { italic\_s ∈ italic\_R : ( italic\_s , italic\_r ) ∈ > } and rinf={s∈R:(r,s)∈>}subscript𝑟𝑖𝑛𝑓conditional-set𝑠𝑅𝑟𝑠r\_{inf}=\{s\in R:(r,s)\in>\}italic\_r start\_POSTSUBSCRIPT italic\_i italic\_n italic\_f end\_POSTSUBSCRIPT = { italic\_s ∈ italic\_R : ( italic\_r , italic\_s ) ∈ > } for any r∈R𝑟𝑅r\in Ritalic\_r ∈ italic\_R;
2. 2.
HBD𝐻subscript𝐵𝐷HB\_{D}italic\_H italic\_B start\_POSTSUBSCRIPT italic\_D end\_POSTSUBSCRIPT as the set of literals such that the literal or its complement
appears in D𝐷Ditalic\_D, i.e., such that it is a sub-formula of a modal
literal occurring in D𝐷Ditalic\_D;
3. 3.
the modal Herbrand Base of D𝐷Ditalic\_D as
HB={□l|□∈MOD,l∈HBD}𝐻𝐵conditional-set□𝑙formulae-sequence□MOD𝑙𝐻subscript𝐵𝐷HB=\{\Box l|\ \Box\in\mathrm{MOD},l\in HB\_{D}\}italic\_H italic\_B = { □ italic\_l | □ ∈ roman\_MOD , italic\_l ∈ italic\_H italic\_B start\_POSTSUBSCRIPT italic\_D end\_POSTSUBSCRIPT }.
Accordingly, the extension of a defeasible theory is defined as follows.
######
Definition 17 (Defeasible extension)
Given a defeasible theory D𝐷Ditalic\_D, the *defeasible extension* of D𝐷Ditalic\_D is
defined as
| | | |
| --- | --- | --- |
| | E(D)=(+∂□,−∂□),𝐸𝐷subscript□subscript□E(D)=(+\partial\_{\Box},-\partial\_{\Box}),italic\_E ( italic\_D ) = ( + ∂ start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT , - ∂ start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT ) , | |
where ±∂□={l∈HBD:D⊢±∂□l}plus-or-minussubscript□conditional-set𝑙𝐻subscript𝐵𝐷proves𝐷plus-or-minussubscript□𝑙\pm\partial\_{\Box}=\{l\in HB\_{D}:D\vdash\pm\partial\_{\Box}l\}± ∂ start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT = { italic\_l ∈ italic\_H italic\_B start\_POSTSUBSCRIPT italic\_D end\_POSTSUBSCRIPT : italic\_D ⊢ ± ∂ start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT italic\_l } with □∈MOD□MOD\Box\in\mathrm{MOD}□ ∈ roman\_MOD.
Two defeasible theories D𝐷Ditalic\_D and D′superscript𝐷′D^{\prime}italic\_D start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT are *equivalent* whenever they have the same extensions, i.e., E(D)=E(D′)𝐸𝐷𝐸superscript𝐷′E(D)=E(D^{\prime})italic\_E ( italic\_D ) = italic\_E ( italic\_D start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ).
We introduce two operations that modify the consequent of
rules used by the algorithms.
######
Definition 18 (Truncation and removal)
Let c1=a1⊙⋯⊙ai−1subscript𝑐1direct-productsubscript𝑎1⋯subscript𝑎𝑖1c\_{1}=a\_{1}\odot\cdots\odot a\_{i-1}italic\_c start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT = italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ⊙ ⋯ ⊙ italic\_a start\_POSTSUBSCRIPT italic\_i - 1 end\_POSTSUBSCRIPT and c2=ai+1⊙⋯⊙ansubscript𝑐2direct-productsubscript𝑎𝑖1⋯subscript𝑎𝑛c\_{2}=a\_{i+1}\odot\cdots\odot a\_{n}italic\_c start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT = italic\_a start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT ⊙ ⋯ ⊙ italic\_a start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT
be two (possibly empty) ⊙direct-product\odot⊙-expressions such that aisubscript𝑎𝑖a\_{i}italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT does not occur
in neither of them, and c=c1⊙ai⊙c2𝑐direct-productsubscript𝑐1subscript𝑎𝑖subscript𝑐2c=c\_{1}\odot a\_{i}\odot c\_{2}italic\_c = italic\_c start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ⊙ italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ⊙ italic\_c start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT is an ⊙direct-product\odot⊙-expression.
Let r𝑟ritalic\_r be a rule with form A(r)⇒◇csubscript⇒◇𝐴𝑟𝑐A(r)\Rightarrow\_{\Diamond}citalic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT ◇ end\_POSTSUBSCRIPT italic\_c. We define
the *truncation* of the consequent c𝑐citalic\_c at aisubscript𝑎𝑖a\_{i}italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT as:
| | | |
| --- | --- | --- |
| | A(r)⇒◇c!ai=A(r)⇒◇c1⊙ai,subscript⇒◇𝐴𝑟𝑐subscript𝑎𝑖𝐴𝑟subscript⇒◇direct-productsubscript𝑐1subscript𝑎𝑖A(r)\Rightarrow\_{\Diamond}c!a\_{i}=A(r)\Rightarrow\_{\Diamond}c\_{1}\odot a\_{i},italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT ◇ end\_POSTSUBSCRIPT italic\_c ! italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT = italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT ◇ end\_POSTSUBSCRIPT italic\_c start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ⊙ italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , | |
and the *removal* of aisubscript𝑎𝑖a\_{i}italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT from the consequent c𝑐citalic\_c as:
| | | |
| --- | --- | --- |
| | A(r)⇒◇c⊖ai=A(r)⇒◇c1⊙c2.subscript⇒◇𝐴𝑟symmetric-difference𝑐subscript𝑎𝑖𝐴𝑟subscript⇒◇direct-productsubscript𝑐1subscript𝑐2A(r)\Rightarrow\_{\Diamond}c\ominus a\_{i}=A(r)\Rightarrow\_{\Diamond}c\_{1}\odot c\_{2}.italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT ◇ end\_POSTSUBSCRIPT italic\_c ⊖ italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT = italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT ◇ end\_POSTSUBSCRIPT italic\_c start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ⊙ italic\_c start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT . | |
Notice that removal may lead to rules with empty consequent which
strictly would not be rules according to the definition of the language.
Nevertheless, we accept such expressions within the description of the algorithms but then such rules will not be in any R[q,i]𝑅𝑞𝑖R[q,i]italic\_R [ italic\_q , italic\_i ] for any q𝑞qitalic\_q and
i𝑖iitalic\_i. In such cases, the operation *de facto* removes the rules.
Given □∈MOD□MOD\Box\in\mathrm{MOD}□ ∈ roman\_MOD, the sets +∂□subscript□+\partial\_{\Box}+ ∂ start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT and −∂□subscript□-\partial\_{\Box}- ∂ start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT
denote, respectively, the global sets of positive and negative defeasible
conclusions (i.e., the set of literals for which condition +∂□subscript□+\partial\_{\Box}+ ∂ start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT
or −∂□subscript□-\partial\_{\Box}- ∂ start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT holds), while ∂□+superscriptsubscript□\partial\_{\Box}^{+}∂ start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT and
∂□−superscriptsubscript□\partial\_{\Box}^{-}∂ start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT - end\_POSTSUPERSCRIPT are the corresponding temporary sets, that is the set
computed at each iteration of the main algorithm. Moreover, to simplify the
computation, we do not operate on outcome rules: for each rule r∈R𝖴𝑟superscript𝑅𝖴r\in R^{\mathsf{U}}italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_U end\_POSTSUPERSCRIPT
we create instead a new rule for desire, goal, intention, and social
intention (respectively, r𝖣superscript𝑟𝖣r^{\mathsf{D}}italic\_r start\_POSTSUPERSCRIPT sansserif\_D end\_POSTSUPERSCRIPT, r𝖦superscript𝑟𝖦r^{\mathsf{G}}italic\_r start\_POSTSUPERSCRIPT sansserif\_G end\_POSTSUPERSCRIPT, r𝖨superscript𝑟𝖨r^{\mathsf{I}}italic\_r start\_POSTSUPERSCRIPT sansserif\_I end\_POSTSUPERSCRIPT, and
r𝖲𝖨superscript𝑟𝖲𝖨r^{\mathsf{SI}}italic\_r start\_POSTSUPERSCRIPT sansserif\_SI end\_POSTSUPERSCRIPT). Accordingly, for the sake of simplicity, in the present section
we shall use expressions like “the intention rule” as a shortcut for “the
clone of the outcome rule used to derive intentions”.
###
4.2 Algorithms
The idea of all the algorithms is to use the operations of truncation
and elimination to obtain, step after step, a simpler
but equivalent theory. In fact, proving a literal does not give local
information regarding the element itself only, but rather reveals which rules
should be discarded, or reduced, in their head or body. Let us assume that, at a given step, the algorithm proves literal l𝑙litalic\_l. At the next step,
1. 1.
the applicability of any rule r𝑟ritalic\_r with l∈A(r)𝑙𝐴𝑟l\in A(r)italic\_l ∈ italic\_A ( italic\_r )
does not depend on l𝑙litalic\_l any longer. Hence, we can safely remove l𝑙litalic\_l from A(r)𝐴𝑟A(r)italic\_A ( italic\_r ).
2. 2.
Any rule s𝑠sitalic\_s with l~∩A(s)≠∅~𝑙𝐴𝑠\widetilde{l}\cap A(s)\neq\emptysetover~ start\_ARG italic\_l end\_ARG ∩ italic\_A ( italic\_s ) ≠ ∅ is
discarded. Consequently, any superiority tuple involving s𝑠sitalic\_s is now
useless and can be removed from the superiority relation.
3. 3.
We can shorten chains by exploiting conditions of
Definitions [9](#Thmdefinition9 "Definition 9 (Applicable rule) ‣ Conflict-detection/resolution. ‣ 3 Logic ‣ The Rationale behind the Concept of Goal") and [10](#Thmdefinition10 "Definition 10 (Discarded rule) ‣ Conflict-detection/resolution. ‣ 3 Logic ‣ The Rationale behind the Concept of Goal"). For instance, if l=𝖮m𝑙𝖮𝑚l=\mathsf{O}mitalic\_l = sansserif\_O italic\_m, we can
truncate chains for obligation rules at ∼msimilar-toabsent𝑚\mathnormal{\sim}m∼ italic\_m and eliminate it as
well.
Algorithm 1 DefeasibleExtension
1:+∂■,∂■+←∅;−∂■,∂■−←∅formulae-sequence←subscript■subscriptsuperscript■
←subscript■subscriptsuperscript■
+\partial\_{\blacksquare},\partial^{+}\_{\blacksquare}\leftarrow\emptyset;\ -\partial\_{\blacksquare},\partial^{-}\_{\blacksquare}\leftarrow\emptyset+ ∂ start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT , ∂ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ← ∅ ; - ∂ start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT , ∂ start\_POSTSUPERSCRIPT - end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ← ∅
2:R←R∪{r□:A(r)⇒□C(r)|r∈R𝖴}←𝑅𝑅conditional-setsuperscript𝑟□subscript⇒□𝐴𝑟conditional𝐶𝑟𝑟superscript𝑅𝖴R\leftarrow R\cup\{r^{\Box}:A(r)\Rightarrow\_{\Box}C(r)|r\in R^{\mathsf{U}}\}italic\_R ← italic\_R ∪ { italic\_r start\_POSTSUPERSCRIPT □ end\_POSTSUPERSCRIPT : italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT italic\_C ( italic\_r ) | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_U end\_POSTSUPERSCRIPT }, with □∈{𝖣,𝖦,𝖨,𝖲𝖨}□𝖣𝖦𝖨𝖲𝖨\Box\in\{\mathsf{D},\mathsf{G},\mathsf{I},\mathsf{SI}\}□ ∈ { sansserif\_D , sansserif\_G , sansserif\_I , sansserif\_SI }
3:R←R∖R𝖴←𝑅𝑅superscript𝑅𝖴R\leftarrow R\setminus R^{\mathsf{U}}italic\_R ← italic\_R ∖ italic\_R start\_POSTSUPERSCRIPT sansserif\_U end\_POSTSUPERSCRIPT
4:R𝖡,◇←{r◇:A(r)↪C(r)|r∈R𝖡,A(r)≠∅,A(r)⊆Lit}←superscript𝑅𝖡◇conditional-setsuperscript𝑟◇formulae-sequence↪𝐴𝑟conditional𝐶𝑟𝑟superscript𝑅𝖡formulae-sequence𝐴𝑟𝐴𝑟LitR^{\mathsf{B},\Diamond}\leftarrow\{r^{\Diamond}:A(r)\hookrightarrow C(r)|r\in R^{\mathsf{B}},A(r)\neq\emptyset,A(r)\subseteq\mathrm{Lit}\}italic\_R start\_POSTSUPERSCRIPT sansserif\_B , ◇ end\_POSTSUPERSCRIPT ← { italic\_r start\_POSTSUPERSCRIPT ◇ end\_POSTSUPERSCRIPT : italic\_A ( italic\_r ) ↪ italic\_C ( italic\_r ) | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_B end\_POSTSUPERSCRIPT , italic\_A ( italic\_r ) ≠ ∅ , italic\_A ( italic\_r ) ⊆ roman\_Lit }
5:>←>∪{(r◇,s◇)|r◇,s◇∈R𝖡,◇,r>s}∪{(r,s)|r∈R■∪R𝖡,■,s∈R◇∪R𝖡,◇,Conflict(■,◇)}>\leftarrow>\cup\{(r^{\Diamond},s^{\Diamond})|r^{\Diamond},s^{\Diamond}\in R^{\mathsf{B},\Diamond},r>s\}\cup\{(r,s)|r\in R^{\blacksquare}\cup R^{\mathsf{B},\blacksquare},s\in R^{\Diamond}\cup R^{\mathsf{B},\Diamond},\mathrm{Conflict}(\blacksquare,\Diamond)\}> ← > ∪ { ( italic\_r start\_POSTSUPERSCRIPT ◇ end\_POSTSUPERSCRIPT , italic\_s start\_POSTSUPERSCRIPT ◇ end\_POSTSUPERSCRIPT ) | italic\_r start\_POSTSUPERSCRIPT ◇ end\_POSTSUPERSCRIPT , italic\_s start\_POSTSUPERSCRIPT ◇ end\_POSTSUPERSCRIPT ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_B , ◇ end\_POSTSUPERSCRIPT , italic\_r > italic\_s } ∪ { ( italic\_r , italic\_s ) | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT ■ end\_POSTSUPERSCRIPT ∪ italic\_R start\_POSTSUPERSCRIPT sansserif\_B , ■ end\_POSTSUPERSCRIPT , italic\_s ∈ italic\_R start\_POSTSUPERSCRIPT ◇ end\_POSTSUPERSCRIPT ∪ italic\_R start\_POSTSUPERSCRIPT sansserif\_B , ◇ end\_POSTSUPERSCRIPT , roman\_Conflict ( ■ , ◇ ) }
6:for l∈F𝑙𝐹l\in Fitalic\_l ∈ italic\_F do
7: if l=□m𝑙□𝑚l=\Box mitalic\_l = □ italic\_m then Proved(m,□𝑚□m,\,\Boxitalic\_m , □)
8: if l=¬□m∧□≠𝖣𝑙□𝑚□𝖣l=\neg\Box m\wedge\Box\neq\mathsf{D}italic\_l = ¬ □ italic\_m ∧ □ ≠ sansserif\_D then Refuted(m,□𝑚□m,\,\Boxitalic\_m , □)
9:end for
10:+∂■←+∂■∪∂■+;−∂■←−∂■∪∂■−formulae-sequence←subscript■subscript■subscriptsuperscript■←subscript■subscript■subscriptsuperscript■+\partial\_{\blacksquare}\leftarrow+\partial\_{\blacksquare}\cup\partial^{+}\_{\blacksquare};\ -\partial\_{\blacksquare}\leftarrow-\partial\_{\blacksquare}\cup\partial^{-}\_{\blacksquare}+ ∂ start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ← + ∂ start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ∪ ∂ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ; - ∂ start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ← - ∂ start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ∪ ∂ start\_POSTSUPERSCRIPT - end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT
11:R𝑖𝑛𝑓𝑑←∅←subscript𝑅𝑖𝑛𝑓𝑑R\_{\mathit{infd}}\leftarrow\emptysetitalic\_R start\_POSTSUBSCRIPT italic\_infd end\_POSTSUBSCRIPT ← ∅
12:repeat
13: ∂■+←∅;∂■−←∅formulae-sequence←subscriptsuperscript■←subscriptsuperscript■\partial^{+}\_{\blacksquare}\leftarrow\emptyset;\ \partial^{-}\_{\blacksquare}\leftarrow\emptyset∂ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ← ∅ ; ∂ start\_POSTSUPERSCRIPT - end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ← ∅
14: for □l∈HB□𝑙𝐻𝐵\Box l\in HB□ italic\_l ∈ italic\_H italic\_B do
15: if R□[l]∪R𝖡,□[l]=∅superscript𝑅□delimited-[]𝑙superscript𝑅𝖡□delimited-[]𝑙R^{\Box}[l]\cup R^{\mathsf{B},\Box}[l]=\emptysetitalic\_R start\_POSTSUPERSCRIPT □ end\_POSTSUPERSCRIPT [ italic\_l ] ∪ italic\_R start\_POSTSUPERSCRIPT sansserif\_B , □ end\_POSTSUPERSCRIPT [ italic\_l ] = ∅ then Refuted(l,□𝑙□l,\,\Boxitalic\_l , □)
16: end for
17: for r∈R□∪R𝖡,□𝐝𝐨𝑟superscript𝑅□superscript𝑅𝖡□𝐝𝐨r\in R^{\Box}\cup R^{\mathsf{B},\Box}\ \textbf{do}italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT □ end\_POSTSUPERSCRIPT ∪ italic\_R start\_POSTSUPERSCRIPT sansserif\_B , □ end\_POSTSUPERSCRIPT do
18: if A(r)=∅𝐴𝑟A(r)=\emptysetitalic\_A ( italic\_r ) = ∅ then
19: rinf←{r∈R:(r,s)∈>,s∈R}←subscript𝑟𝑖𝑛𝑓conditional-set𝑟𝑅formulae-sequence𝑟𝑠𝑠𝑅r\_{inf}\leftarrow\{r\in R:(r,s)\in>,s\in R\}italic\_r start\_POSTSUBSCRIPT italic\_i italic\_n italic\_f end\_POSTSUBSCRIPT ← { italic\_r ∈ italic\_R : ( italic\_r , italic\_s ) ∈ > , italic\_s ∈ italic\_R }; rsup←{s∈R:(s,r)∈>}←subscript𝑟𝑠𝑢𝑝conditional-set𝑠𝑅𝑠𝑟r\_{sup}\leftarrow\{s\in R:(s,r)\in>\}italic\_r start\_POSTSUBSCRIPT italic\_s italic\_u italic\_p end\_POSTSUBSCRIPT ← { italic\_s ∈ italic\_R : ( italic\_s , italic\_r ) ∈ > }
20: R𝑖𝑛𝑓𝑑←R𝑖𝑛𝑓𝑑∪rinf←subscript𝑅𝑖𝑛𝑓𝑑subscript𝑅𝑖𝑛𝑓𝑑subscript𝑟𝑖𝑛𝑓R\_{\mathit{infd}}\leftarrow R\_{\mathit{infd}}\cup r\_{inf}italic\_R start\_POSTSUBSCRIPT italic\_infd end\_POSTSUBSCRIPT ← italic\_R start\_POSTSUBSCRIPT italic\_infd end\_POSTSUBSCRIPT ∪ italic\_r start\_POSTSUBSCRIPT italic\_i italic\_n italic\_f end\_POSTSUBSCRIPT
21: Let l𝑙litalic\_l be the first literal of C(r)𝐶𝑟C(r)italic\_C ( italic\_r ) in HB𝐻𝐵HBitalic\_H italic\_B
22: if rsup=∅subscript𝑟𝑠𝑢𝑝r\_{sup}=\emptysetitalic\_r start\_POSTSUBSCRIPT italic\_s italic\_u italic\_p end\_POSTSUBSCRIPT = ∅ then
23: if □=𝖣□𝖣\Box=\mathsf{D}□ = sansserif\_D then
24: Proved(m,𝖣𝑚𝖣m,\,\mathsf{D}italic\_m , sansserif\_D)
25: else
26: Refuted(∼l,□similar-toabsent𝑙□\mathnormal{\sim}l,\,\Box∼ italic\_l , □)
27: Refuted(∼l,◇similar-toabsent𝑙◇\mathnormal{\sim}l,\,\Diamond∼ italic\_l , ◇) for ◇◇\Diamond◇ s.t.
Conflict(□,◇)Conflict□◇\mathrm{Conflict}(\Box,\Diamond)roman\_Conflict ( □ , ◇ )
28: if R□[∼l]∪R𝖡,□[∼l]∪R■[∼l]∖R𝑖𝑛𝑓𝑑⊆rinfR^{\Box}[\mathnormal{\sim}l]\cup R^{\mathsf{B},\Box}[\mathnormal{\sim}l]\cup R^{\blacksquare}[\mathnormal{\sim}l]\setminus R\_{\mathit{infd}}\subseteq r\_{inf}italic\_R start\_POSTSUPERSCRIPT □ end\_POSTSUPERSCRIPT [ ∼ italic\_l ] ∪ italic\_R start\_POSTSUPERSCRIPT sansserif\_B , □ end\_POSTSUPERSCRIPT [ ∼ italic\_l ] ∪ italic\_R start\_POSTSUPERSCRIPT ■ end\_POSTSUPERSCRIPT [ ∼ italic\_l ] ∖ italic\_R start\_POSTSUBSCRIPT italic\_infd end\_POSTSUBSCRIPT ⊆ italic\_r start\_POSTSUBSCRIPT italic\_i italic\_n italic\_f end\_POSTSUBSCRIPT, for ■■\blacksquare■ s.t.
Conflict(■,□)Conflict■□\mathrm{Conflict}(\blacksquare,\Box)roman\_Conflict ( ■ , □ ) then
29: Proved(m,□𝑚□m,\,\Boxitalic\_m , □)
30: end if
31: end if
32: end if
33: end if
34: end for
35: ∂■+←∂■+∖+∂■;∂■−←∂■−∖−∂■\partial^{+}\_{\blacksquare}\leftarrow\partial^{+}\_{\blacksquare}\setminus+\partial\_{\blacksquare};\ \partial^{-}\_{\blacksquare}\leftarrow\partial^{-}\_{\blacksquare}\setminus-\partial\_{\blacksquare}∂ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ← ∂ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ∖ + ∂ start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ; ∂ start\_POSTSUPERSCRIPT - end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ← ∂ start\_POSTSUPERSCRIPT - end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ∖ - ∂ start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT
36: +∂■←+∂■∪∂■+;−∂■←−∂■∪∂■−formulae-sequence←subscript■subscript■subscriptsuperscript■←subscript■subscript■subscriptsuperscript■+\partial\_{\blacksquare}\leftarrow+\partial\_{\blacksquare}\cup\partial^{+}\_{\blacksquare};\ -\partial\_{\blacksquare}\leftarrow-\partial\_{\blacksquare}\cup\partial^{-}\_{\blacksquare}+ ∂ start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ← + ∂ start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ∪ ∂ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ; - ∂ start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ← - ∂ start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ∪ ∂ start\_POSTSUPERSCRIPT - end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT
37:until ∂■+=∅subscriptsuperscript■\partial^{+}\_{\blacksquare}=\emptyset∂ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT = ∅ and ∂■−=∅subscriptsuperscript■\partial^{-}\_{\blacksquare}=\emptyset∂ start\_POSTSUPERSCRIPT - end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT = ∅
38:return (+∂■,−∂■)subscript■subscript■(+\partial\_{\blacksquare},-\partial\_{\blacksquare})( + ∂ start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT , - ∂ start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT )
Algorithm [1](#alg1 "Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") DefeasibleExtension is the core
algorithm to compute the extension of a defeasible theory. The first part of
the algorithm (lines [1](#alg1.l1 "1 ‣ Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")–[5](#alg1.l5 "5 ‣ Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")) sets up the data
structure needed for the computation. Lines
[6](#alg1.l6 "6 ‣ Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")–[9](#alg1.l9 "9 ‣ Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") are to handle facts as immediately
provable literals.
The main idea of the algorithm is to check whether there are rules with empty
body: such rules are clearly applicable and they can produce conclusions with
the right mode. However, before asserting that the first element for the
appropriate modality of the conclusion is provable, we need to check whether
there are rules for the complement with the appropriate mode; if so, such
rules must be weaker than the applicable rules. The information about which
rules are weaker than the applicable ones is stored in the support set
R𝑖𝑛𝑓𝑑subscript𝑅𝑖𝑛𝑓𝑑R\_{\mathit{infd}}italic\_R start\_POSTSUBSCRIPT italic\_infd end\_POSTSUBSCRIPT. When a literal is evaluated to be provable, the
algorithm calls procedure Proved; when a literal is rejected,
procedure Refuted is invoked. These two procedures apply
transformations to reduce the complexity of the theory.
A step-by-step description of the algorithm would be redundant once the
concepts expressed before are understood. Accordingly, in the rest
of the section we provide in depth descriptions of the key passage.
For every outcome rule, the algorithm makes a copy of the same rule for each
mode corresponding to a goal-like attitude (line [2](#alg1.l2 "2 ‣ Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")). At
line [4](#alg1.l4 "4 ‣ Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal"), the algorithm creates a support set to handle
conversions from a belief rule through a different mode. Consequently, the
new ◇◇\Diamond◇ rules have to inherit the superiority relation (if any) from
the belief rules they derive from (line [5](#alg1.l5 "5 ‣ Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")). Notice that we
also augment the superiority relation by incorporating the rules involved in
the ConflictConflict\mathrm{Conflict}roman\_Conflict relation. Given that facts are immediately proved literals,
Proved is invoked for positively proved modal literals (those
proved with +∂□subscript□+\partial\_{\Box}+ ∂ start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT), and Refuted for rejected literals
(i.e., those proved with −∂□subscript□-\partial\_{\Box}- ∂ start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT). The aim of the for
loop at lines [14](#alg1.l14 "14 ‣ Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")–[16](#alg1.l16 "16 ‣ Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") is to discard
any modal literal in HB𝐻𝐵HBitalic\_H italic\_B for which there are no rules that can prove it
(either directly or through conversion).
We now iterate on every rule that can fire (i.e., on rules with empty body, loop
for at lines [17](#alg1.l17 "17 ‣ Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")–[34](#alg1.l34 "34 ‣ Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") and
if condition at line [18](#alg1.l18 "18 ‣ Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")) and we collect the
weaker rules in the set R𝑖𝑛𝑓𝑑subscript𝑅𝑖𝑛𝑓𝑑R\_{\mathit{infd}}italic\_R start\_POSTSUBSCRIPT italic\_infd end\_POSTSUBSCRIPT (line [20](#alg1.l20 "20 ‣ Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")). Since a
consequent can be an ⊙direct-product\odot⊙-expression, the literal we are interested in is
the first element of the ⊙direct-product\odot⊙-expression (line [21](#alg1.l21 "21 ‣ Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")). If no
rule stronger than the current one exists, then the complementary conclusion
is refuted by condition (2.3) of Definition [14](#Thmdefinition14 "Definition 14 (Defeasible refutability for obligation, goal, intention and social intention) ‣ Conflict-detection/resolution. ‣ 3 Logic ‣ The Rationale behind the Concept of Goal") (line
[26](#alg1.l26 "26 ‣ Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")). An additional consequence is that literal l𝑙litalic\_l is
also refutable in D𝐷Ditalic\_D for any modality conflicting with □□\Box□ (line
[27](#alg1.l27 "27 ‣ Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")). Notice that this reasoning does not hold for
desires: since the logic allows to have 𝖣l𝖣𝑙\mathsf{D}lsansserif\_D italic\_l and 𝖣∼lsimilar-to𝖣𝑙\mathsf{D}\mathnormal{\sim}lsansserif\_D ∼ italic\_l at the
same time, when □=𝖣□𝖣\Box=\mathsf{D}□ = sansserif\_D and the guard at line [22](#alg1.l22 "22 ‣ Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") is
satisfied, the algorithm invokes procedure [2](#alg2 "Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")
Proved (line [24](#alg1.l24 "24 ‣ Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")) due to condition (2.3) of
Definition [11](#Thmdefinition11 "Definition 11 (Defeasible provability for desire) ‣ Conflict-detection/resolution. ‣ 3 Logic ‣ The Rationale behind the Concept of Goal").
The next step is to check whether there are rules for the complement
literal of the same modality, or of a conflicting modality. The rules for the
complement should not be defeated by applicable rules: such rules thus cannot be in R𝑖𝑛𝑓𝑑subscript𝑅𝑖𝑛𝑓𝑑R\_{\mathit{infd}}italic\_R start\_POSTSUBSCRIPT italic\_infd end\_POSTSUBSCRIPT. If all these rules are defeated by r𝑟ritalic\_r (line
[28](#alg1.l28 "28 ‣ Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")), then conditions for deriving +∂□subscript□+\partial\_{\Box}+ ∂ start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT are
satisfied, and Algorithm [2](#alg2 "Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") Proved is invoked.
Algorithm 2 Proved
1:procedure Proved(l∈Lit,□∈MODformulae-sequence𝑙Lit□MODl\in\mathrm{Lit},\,\Box\in\mathrm{MOD}italic\_l ∈ roman\_Lit , □ ∈ roman\_MOD)
2: ∂□+←∂□+∪{l};l■←l■∪{+□}formulae-sequence←subscriptsuperscript□subscriptsuperscript□𝑙←subscript𝑙■subscript𝑙■□\partial^{+}\_{\Box}\leftarrow\partial^{+}\_{\Box}\cup\{l\};\ l\_{\blacksquare}\leftarrow l\_{\blacksquare}\cup\{+\Box\}∂ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT ← ∂ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT ∪ { italic\_l } ; italic\_l start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ← italic\_l start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ∪ { + □ }
3: HB←HB∖{□l}←𝐻𝐵𝐻𝐵□𝑙HB\leftarrow HB\setminus\{\Box l\}italic\_H italic\_B ← italic\_H italic\_B ∖ { □ italic\_l }
4: if □≠𝖣□𝖣\Box\neq\mathsf{D}□ ≠ sansserif\_D then Refuted(∼l,□similar-toabsent𝑙□\mathnormal{\sim}l,\,\Box∼ italic\_l , □)
5: if □=𝖡□𝖡\Box=\mathsf{B}□ = sansserif\_B then Refuted(∼l,𝖨similar-toabsent𝑙𝖨\mathnormal{\sim}l,\,\mathsf{I}∼ italic\_l , sansserif\_I)
6: if □∈{𝖡,𝖮}□𝖡𝖮\Box\in\{\mathsf{B},\mathsf{O}\}□ ∈ { sansserif\_B , sansserif\_O } then Refuted(∼l,𝖲𝖨similar-toabsent𝑙𝖲𝖨\mathnormal{\sim}l,\,\mathsf{SI}∼ italic\_l , sansserif\_SI)
7: R←{r:A(r)∖{□l,¬□∼l}↪C(r)|r∈R,A(r)∩□l~=∅}←𝑅conditional-set𝑟formulae-sequence↪𝐴𝑟similar-to□𝑙□
𝑙conditional𝐶𝑟𝑟𝑅𝐴𝑟~□𝑙R\leftarrow\{r:A(r)\setminus\{\Box l,\neg\Box\mathnormal{\sim}l\}\hookrightarrow C(r)|\ r\in R,\ A(r)\cap\widetilde{\Box l}=\emptyset\}italic\_R ← { italic\_r : italic\_A ( italic\_r ) ∖ { □ italic\_l , ¬ □ ∼ italic\_l } ↪ italic\_C ( italic\_r ) | italic\_r ∈ italic\_R , italic\_A ( italic\_r ) ∩ over~ start\_ARG □ italic\_l end\_ARG = ∅ }
8: R𝖡,□←{r:A(r)∖{l}↪C(r)|r∈R𝖡,□,∼l∉A(r)}R^{\mathsf{B},\Box}\leftarrow\{r:A(r)\setminus\{l\}\hookrightarrow C(r)|r\in R^{\mathsf{B},\Box},\ \mathnormal{\sim}l\notin A(r)\}italic\_R start\_POSTSUPERSCRIPT sansserif\_B , □ end\_POSTSUPERSCRIPT ← { italic\_r : italic\_A ( italic\_r ) ∖ { italic\_l } ↪ italic\_C ( italic\_r ) | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_B , □ end\_POSTSUPERSCRIPT , ∼ italic\_l ∉ italic\_A ( italic\_r ) }
9: >←>∖{(r,s),(s,r)∈>|A(r)∩□l~≠∅}>\leftarrow>\setminus\{(r,s),(s,r)\in>|\ A(r)\cap\widetilde{\Box l}\not=\emptyset\}> ← > ∖ { ( italic\_r , italic\_s ) , ( italic\_s , italic\_r ) ∈ > | italic\_A ( italic\_r ) ∩ over~ start\_ARG □ italic\_l end\_ARG ≠ ∅ }
10: switch (□□\Box□)
11: case 𝖡𝖡\mathsf{B}sansserif\_B:
12: RX←{A(r)⇒XC(r)!l|r∈RX[l,n]}←superscript𝑅𝑋conditional-setsubscript⇒𝑋𝐴𝑟𝐶𝑟𝑙𝑟superscript𝑅𝑋𝑙𝑛R^{X}\leftarrow\{A(r)\Rightarrow\_{X}C(r)!l|\ r\in R^{X}[l,n]\}italic\_R start\_POSTSUPERSCRIPT italic\_X end\_POSTSUPERSCRIPT ← { italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_C ( italic\_r ) ! italic\_l | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT italic\_X end\_POSTSUPERSCRIPT [ italic\_l , italic\_n ] } with X∈{𝖮,𝖨}𝑋𝖮𝖨X\in\{\mathsf{O},\mathsf{I}\}italic\_X ∈ { sansserif\_O , sansserif\_I }
13: if +𝖮∈∼l■+\mathsf{O}\in\mathnormal{\sim}l\_{\blacksquare}+ sansserif\_O ∈ ∼ italic\_l start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT then R𝖮←{A(r)⇒𝖮C(r)⊖∼l|r∈R𝖮[∼l,n]}R^{\mathsf{O}}\leftarrow\{A(r)\Rightarrow\_{\mathsf{O}}C(r)\ominus\mathnormal{\sim}l|\ r\in R^{\mathsf{O}}[\mathnormal{\sim}l,n]\}italic\_R start\_POSTSUPERSCRIPT sansserif\_O end\_POSTSUPERSCRIPT ← { italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT sansserif\_O end\_POSTSUBSCRIPT italic\_C ( italic\_r ) ⊖ ∼ italic\_l | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_O end\_POSTSUPERSCRIPT [ ∼ italic\_l , italic\_n ] }
14: if −𝖮∈∼l■-\mathsf{O}\in\mathnormal{\sim}l\_{\blacksquare}- sansserif\_O ∈ ∼ italic\_l start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT then R𝖲𝖨←{A(r)⇒𝖲𝖨C(r)!l|r∈R𝖲𝖨[l,n]}←superscript𝑅𝖲𝖨conditional-setsubscript⇒𝖲𝖨𝐴𝑟𝐶𝑟𝑙𝑟superscript𝑅𝖲𝖨𝑙𝑛R^{\mathsf{SI}}\leftarrow\{A(r)\Rightarrow\_{\mathsf{SI}}C(r)!l|\ r\in R^{\mathsf{SI}}[l,n]\}italic\_R start\_POSTSUPERSCRIPT sansserif\_SI end\_POSTSUPERSCRIPT ← { italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT sansserif\_SI end\_POSTSUBSCRIPT italic\_C ( italic\_r ) ! italic\_l | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_SI end\_POSTSUPERSCRIPT [ italic\_l , italic\_n ] }
15: case 𝖮𝖮\mathsf{O}sansserif\_O:
16: R𝖮←{A(r)⇒𝖮C(r)!∼l⊖∼l|r∈R𝖮[∼l,n]}R^{\mathsf{O}}\leftarrow\{A(r)\Rightarrow\_{\mathsf{O}}C(r)!\mathnormal{\sim}l\ominus\mathnormal{\sim}l|\ r\in R^{\mathsf{O}}[\mathnormal{\sim}l,n]\}italic\_R start\_POSTSUPERSCRIPT sansserif\_O end\_POSTSUPERSCRIPT ← { italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT sansserif\_O end\_POSTSUBSCRIPT italic\_C ( italic\_r ) ! ∼ italic\_l ⊖ ∼ italic\_l | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_O end\_POSTSUPERSCRIPT [ ∼ italic\_l , italic\_n ] }
17: if −𝖡∈l■𝖡subscript𝑙■-\mathsf{B}\in l\_{\blacksquare}- sansserif\_B ∈ italic\_l start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT then R𝖮←{A(r)⇒𝖮C(r)⊖l|r∈R𝖮[l,n]}←superscript𝑅𝖮conditional-setsubscript⇒𝖮𝐴𝑟symmetric-difference𝐶𝑟𝑙𝑟superscript𝑅𝖮𝑙𝑛R^{\mathsf{O}}\leftarrow\{A(r)\Rightarrow\_{\mathsf{O}}C(r)\ominus l|\ r\in R^{\mathsf{O}}[l,n]\}italic\_R start\_POSTSUPERSCRIPT sansserif\_O end\_POSTSUPERSCRIPT ← { italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT sansserif\_O end\_POSTSUBSCRIPT italic\_C ( italic\_r ) ⊖ italic\_l | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_O end\_POSTSUPERSCRIPT [ italic\_l , italic\_n ] }
18: if −𝖡∈∼l■-\mathsf{B}\in\mathnormal{\sim}l\_{\blacksquare}- sansserif\_B ∈ ∼ italic\_l start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT then R𝖲𝖨←{A(r)⇒𝖲𝖨C(r)!l|r∈R𝖲𝖨[l,n]}←superscript𝑅𝖲𝖨conditional-setsubscript⇒𝖲𝖨𝐴𝑟𝐶𝑟𝑙𝑟superscript𝑅𝖲𝖨𝑙𝑛R^{\mathsf{SI}}\leftarrow\{A(r)\Rightarrow\_{\mathsf{SI}}C(r)!l|\ r\in R^{\mathsf{SI}}[l,n]\}italic\_R start\_POSTSUPERSCRIPT sansserif\_SI end\_POSTSUPERSCRIPT ← { italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT sansserif\_SI end\_POSTSUBSCRIPT italic\_C ( italic\_r ) ! italic\_l | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_SI end\_POSTSUPERSCRIPT [ italic\_l , italic\_n ] }
19: case 𝖣𝖣\mathsf{D}sansserif\_D:
20: if +𝖣∈∼l■+\mathsf{D}\in\mathnormal{\sim}l\_{\blacksquare}+ sansserif\_D ∈ ∼ italic\_l start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT then
21: R𝖦←{A(r)⇒𝖦C(r)!l⊖l|r∈R𝖦[l,n]}←superscript𝑅𝖦conditional-setsubscript⇒𝖦𝐴𝑟symmetric-difference𝐶𝑟𝑙𝑙𝑟superscript𝑅𝖦𝑙𝑛R^{\mathsf{G}}\leftarrow\{A(r)\Rightarrow\_{\mathsf{G}}C(r)!l\ominus l|\ r\in R^{\mathsf{G}}[l,n]\}italic\_R start\_POSTSUPERSCRIPT sansserif\_G end\_POSTSUPERSCRIPT ← { italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT sansserif\_G end\_POSTSUBSCRIPT italic\_C ( italic\_r ) ! italic\_l ⊖ italic\_l | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_G end\_POSTSUPERSCRIPT [ italic\_l , italic\_n ] }
22: R𝖦←{A(r)⇒𝖦C(r)!∼l⊖∼l|r∈R𝖦[∼l,n]}R^{\mathsf{G}}\leftarrow\{A(r)\Rightarrow\_{\mathsf{G}}C(r)!\mathnormal{\sim}l\ominus\mathnormal{\sim}l|\ r\in R^{\mathsf{G}}[\mathnormal{\sim}l,n]\}italic\_R start\_POSTSUPERSCRIPT sansserif\_G end\_POSTSUPERSCRIPT ← { italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT sansserif\_G end\_POSTSUBSCRIPT italic\_C ( italic\_r ) ! ∼ italic\_l ⊖ ∼ italic\_l | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_G end\_POSTSUPERSCRIPT [ ∼ italic\_l , italic\_n ] }
23: end if
24: otherwise:
25: R□←{A(r)⇒□C(r)!l|r∈R□[l,n]}←superscript𝑅□conditional-setsubscript⇒□𝐴𝑟𝐶𝑟𝑙𝑟superscript𝑅□𝑙𝑛R^{\Box}\leftarrow\{A(r)\Rightarrow\_{\Box}C(r)!l|\ r\in R^{\Box}[l,n]\}italic\_R start\_POSTSUPERSCRIPT □ end\_POSTSUPERSCRIPT ← { italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT italic\_C ( italic\_r ) ! italic\_l | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT □ end\_POSTSUPERSCRIPT [ italic\_l , italic\_n ] }
26: R□←{A(r)⇒□C(r)⊖∼l|r∈R□[∼l,n]}R^{\Box}\leftarrow\{A(r)\Rightarrow\_{\Box}C(r)\ominus\mathnormal{\sim}l|\ r\in R^{\Box}[\mathnormal{\sim}l,n]\}italic\_R start\_POSTSUPERSCRIPT □ end\_POSTSUPERSCRIPT ← { italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT italic\_C ( italic\_r ) ⊖ ∼ italic\_l | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT □ end\_POSTSUPERSCRIPT [ ∼ italic\_l , italic\_n ] }
27: end switch
28:end procedure
Algorithm [2](#alg2 "Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") Proved is invoked when literal l𝑙litalic\_l is
proved with modality □□\Box□, the key to which
simplifications on rules can be done. The computation starts by updating the
relative positive extension set for modality □□\Box□ and, symmetrically, the
local information on literal l𝑙litalic\_l (line [2](#alg2.l2 "2 ‣ Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")); l𝑙litalic\_l is then
removed from HB𝐻𝐵HBitalic\_H italic\_B at line [3](#alg2.l3 "3 ‣ Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal"). Parts 1.–3. of
Proposition [2](#Thmtheorem2 "Proposition 2 ‣ Conflict-detection/resolution. ‣ 3 Logic ‣ The Rationale behind the Concept of Goal") identifies the modalities literal
∼lsimilar-toabsent𝑙\mathnormal{\sim}l∼ italic\_l is refuted with, when □l□𝑙\Box l□ italic\_l is proved (if conditions at
lines [4](#alg2.l4 "4 ‣ Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")–[6](#alg2.l6 "6 ‣ Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")). Lines [7](#alg2.l7 "7 ‣ Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")
to [9](#alg2.l9 "9 ‣ Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") modify the superiority relation and the sets of rules R𝑅Ritalic\_R
and R𝖡,□superscript𝑅𝖡□R^{\mathsf{B},\Box}italic\_R start\_POSTSUPERSCRIPT sansserif\_B , □ end\_POSTSUPERSCRIPT accordingly to the intuitions given at the beginning of Section [4.2](#S4.SS2 "4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal").
Depending on the modality □□\Box□ of l𝑙litalic\_l, we perform specific
operations on the chains (condition switch at lines
[10](#alg2.l10 "10 ‣ Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")–[27](#alg2.l27 "27 ‣ Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")). A detailed description of each
case would be redundant without giving more information than the one
expressed by conditions of Definitions [9](#Thmdefinition9 "Definition 9 (Applicable rule) ‣ Conflict-detection/resolution. ‣ 3 Logic ‣ The Rationale behind the Concept of Goal") and
[10](#Thmdefinition10 "Definition 10 (Discarded rule) ‣ Conflict-detection/resolution. ‣ 3 Logic ‣ The Rationale behind the Concept of Goal"). Therefore, we propose one significative example by
considering the scenario where l𝑙litalic\_l has been proved as a belief (case
at lines [11](#alg2.l11 "11 ‣ Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")–[14](#alg2.l14 "14 ‣ Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")). First, conditions of
Definitions [10](#Thmdefinition10 "Definition 10 (Discarded rule) ‣ Conflict-detection/resolution. ‣ 3 Logic ‣ The Rationale behind the Concept of Goal") and [14](#Thmdefinition14 "Definition 14 (Defeasible refutability for obligation, goal, intention and social intention) ‣ Conflict-detection/resolution. ‣ 3 Logic ‣ The Rationale behind the Concept of Goal") ensure that
∼lsimilar-toabsent𝑙\mathnormal{\sim}l∼ italic\_l may be neither an intention, nor a social intention.
Algorithm [3](#alg3 "Algorithm 3 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") Refuted is thus invoked at lines
[5](#alg2.l5 "5 ‣ Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") and [6](#alg2.l6 "6 ‣ Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") which, in
turn, eliminates ∼lsimilar-toabsent𝑙\mathnormal{\sim}l∼ italic\_l from every chain of intention and social intention
rules (line [18](#alg3.l18 "18 ‣ Algorithm 3 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") of Algorithm [3](#alg3 "Algorithm 3 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")
Refuted). Second, chains of obligation (resp. intention) rules can
be truncated at l𝑙litalic\_l since condition (2.1.2) (resp. condition (4.1.2)) of
Definition [10](#Thmdefinition10 "Definition 10 (Discarded rule) ‣ Conflict-detection/resolution. ‣ 3 Logic ‣ The Rationale behind the Concept of Goal") makes such rules discarded for all
elements following l𝑙litalic\_l in the chain (line [12](#alg2.l12 "12 ‣ Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")). Third, if
+∂𝖮∼lsimilar-tosubscript𝖮𝑙+\partial\_{\mathsf{O}}\mathnormal{\sim}l+ ∂ start\_POSTSUBSCRIPT sansserif\_O end\_POSTSUBSCRIPT ∼ italic\_l has been already proved, then we eliminate ∼lsimilar-toabsent𝑙\mathnormal{\sim}l∼ italic\_l
in chains of obligation rules since it represents a violated obligation
(if condition at lines [13](#alg2.l13 "13 ‣ Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")). Fourth, if
−∂𝖮∼lsimilar-tosubscript𝖮𝑙-\partial\_{\mathsf{O}}\mathnormal{\sim}l- ∂ start\_POSTSUBSCRIPT sansserif\_O end\_POSTSUBSCRIPT ∼ italic\_l is the case, then each element after l𝑙litalic\_l cannot be
proved as a social intention (if condition at line
[14](#alg2.l14 "14 ‣ Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")). Consequently, we truncate chains of social intention
rules at l𝑙litalic\_l.
Algorithm 3 Refuted
1:procedure Refuted(l∈Lit,□∈MODformulae-sequence𝑙Lit□MODl\in\mathrm{Lit},\,\Box\in\mathrm{MOD}italic\_l ∈ roman\_Lit , □ ∈ roman\_MOD)
2: ∂□−←∂□−∪{l};l■←l■∪{−□}formulae-sequence←subscriptsuperscript□subscriptsuperscript□𝑙←subscript𝑙■subscript𝑙■□\partial^{-}\_{\Box}\leftarrow\partial^{-}\_{\Box}\cup\{l\};\ l\_{\blacksquare}\leftarrow l\_{\blacksquare}\cup\{-\Box\}∂ start\_POSTSUPERSCRIPT - end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT ← ∂ start\_POSTSUPERSCRIPT - end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT ∪ { italic\_l } ; italic\_l start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ← italic\_l start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT ∪ { - □ }
3: HB←HB∖{□l}←𝐻𝐵𝐻𝐵□𝑙HB\leftarrow HB\setminus\{\Box l\}italic\_H italic\_B ← italic\_H italic\_B ∖ { □ italic\_l }
4: R←{r:A(r)∖{¬□l}↪C(r)|r∈R,□l∉A(r)}←𝑅conditional-set𝑟formulae-sequence↪𝐴𝑟□𝑙conditional𝐶𝑟𝑟𝑅□𝑙𝐴𝑟R\leftarrow\{r:A(r)\setminus\{\neg\Box l\}\hookrightarrow C(r)|\ r\in R,\ \Box l\not\in A(r)\}italic\_R ← { italic\_r : italic\_A ( italic\_r ) ∖ { ¬ □ italic\_l } ↪ italic\_C ( italic\_r ) | italic\_r ∈ italic\_R , □ italic\_l ∉ italic\_A ( italic\_r ) }
5: R𝖡,□←R𝖡,□∖{r∈R𝖡,□:l∈A(r)}←superscript𝑅𝖡□superscript𝑅𝖡□conditional-set𝑟superscript𝑅𝖡□𝑙𝐴𝑟R^{\mathsf{B},\Box}\leftarrow R^{\mathsf{B},\Box}\setminus\{r\in R^{\mathsf{B},\Box}:l\in A(r)\}italic\_R start\_POSTSUPERSCRIPT sansserif\_B , □ end\_POSTSUPERSCRIPT ← italic\_R start\_POSTSUPERSCRIPT sansserif\_B , □ end\_POSTSUPERSCRIPT ∖ { italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_B , □ end\_POSTSUPERSCRIPT : italic\_l ∈ italic\_A ( italic\_r ) }
6: >←>∖{(r,s),(s,r)∈>|□l∈A(r)}>\leftarrow>\setminus\{(r,s),(s,r)\in>|\Box l\in A(r)\}> ← > ∖ { ( italic\_r , italic\_s ) , ( italic\_s , italic\_r ) ∈ > | □ italic\_l ∈ italic\_A ( italic\_r ) }
7: switch (□□\Box□)
8: case 𝖡𝖡\mathsf{B}sansserif\_B:
9: R𝖨←{A(r)⇒𝖨C(r)!∼l|r∈R𝖨[∼l,n]}R^{\mathsf{I}}\leftarrow\{A(r)\Rightarrow\_{\mathsf{I}}C(r)!\mathnormal{\sim}l|r\in R^{\mathsf{I}}[\mathnormal{\sim}l,n]\}italic\_R start\_POSTSUPERSCRIPT sansserif\_I end\_POSTSUPERSCRIPT ← { italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT sansserif\_I end\_POSTSUBSCRIPT italic\_C ( italic\_r ) ! ∼ italic\_l | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_I end\_POSTSUPERSCRIPT [ ∼ italic\_l , italic\_n ] }
10: if +𝖮∈l■𝖮subscript𝑙■+\mathsf{O}\in l\_{\blacksquare}+ sansserif\_O ∈ italic\_l start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT then R𝖮←{A(r)⇒𝖮C(r)⊖l|r∈R𝖮[l,n]}←superscript𝑅𝖮conditional-setsubscript⇒𝖮𝐴𝑟symmetric-difference𝐶𝑟𝑙𝑟superscript𝑅𝖮𝑙𝑛R^{\mathsf{O}}\leftarrow\{A(r)\Rightarrow\_{\mathsf{O}}C(r)\ominus l|r\in R^{\mathsf{O}}[l,n]\}italic\_R start\_POSTSUPERSCRIPT sansserif\_O end\_POSTSUPERSCRIPT ← { italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT sansserif\_O end\_POSTSUBSCRIPT italic\_C ( italic\_r ) ⊖ italic\_l | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_O end\_POSTSUPERSCRIPT [ italic\_l , italic\_n ] }
11: if −𝖮∈l■𝖮subscript𝑙■-\mathsf{O}\in l\_{\blacksquare}- sansserif\_O ∈ italic\_l start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT then R𝖲𝖨←{A(r)⇒𝖲𝖨C(r)!∼l|r∈R𝖲𝖨[∼l,n]}R^{\mathsf{SI}}\leftarrow\{A(r)\Rightarrow\_{\mathsf{SI}}C(r)!\mathnormal{\sim}l|r\in R^{\mathsf{SI}}[\mathnormal{\sim}l,n]\}italic\_R start\_POSTSUPERSCRIPT sansserif\_SI end\_POSTSUPERSCRIPT ← { italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT sansserif\_SI end\_POSTSUBSCRIPT italic\_C ( italic\_r ) ! ∼ italic\_l | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_SI end\_POSTSUPERSCRIPT [ ∼ italic\_l , italic\_n ] }
12: case 𝖮𝖮\mathsf{O}sansserif\_O:
13: R𝖮←{A(r)⇒𝖮C(r)!l⊖l|r∈R𝖮[l,n]}←superscript𝑅𝖮conditional-setsubscript⇒𝖮𝐴𝑟symmetric-difference𝐶𝑟𝑙𝑙𝑟superscript𝑅𝖮𝑙𝑛R^{\mathsf{O}}\leftarrow\{A(r)\Rightarrow\_{\mathsf{O}}C(r)!l\ominus l|r\in R^{\mathsf{O}}[l,n]\}italic\_R start\_POSTSUPERSCRIPT sansserif\_O end\_POSTSUPERSCRIPT ← { italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT sansserif\_O end\_POSTSUBSCRIPT italic\_C ( italic\_r ) ! italic\_l ⊖ italic\_l | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_O end\_POSTSUPERSCRIPT [ italic\_l , italic\_n ] }
14: if −𝖡∈l■𝖡subscript𝑙■-\mathsf{B}\in l\_{\blacksquare}- sansserif\_B ∈ italic\_l start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT then R𝖲𝖨←{A(r)⇒𝖲𝖨C(r)!∼l|r∈R𝖲𝖨[∼l,n]}R^{\mathsf{SI}}\leftarrow\{A(r)\Rightarrow\_{\mathsf{SI}}C(r)!\mathnormal{\sim}l|r\in R^{\mathsf{SI}}[\mathnormal{\sim}l,n]\}italic\_R start\_POSTSUPERSCRIPT sansserif\_SI end\_POSTSUPERSCRIPT ← { italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT sansserif\_SI end\_POSTSUBSCRIPT italic\_C ( italic\_r ) ! ∼ italic\_l | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT sansserif\_SI end\_POSTSUPERSCRIPT [ ∼ italic\_l , italic\_n ] }
15: case 𝖣𝖣\mathsf{D}sansserif\_D:
16: RX←{A(r)⇒XC(r)⊖l|r∈RX[l,n]}←superscript𝑅𝑋conditional-setsubscript⇒𝑋𝐴𝑟symmetric-difference𝐶𝑟𝑙𝑟superscript𝑅𝑋𝑙𝑛R^{X}\leftarrow\{A(r)\Rightarrow\_{X}C(r)\ominus l|r\in R^{X}[l,n]\}italic\_R start\_POSTSUPERSCRIPT italic\_X end\_POSTSUPERSCRIPT ← { italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_C ( italic\_r ) ⊖ italic\_l | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT italic\_X end\_POSTSUPERSCRIPT [ italic\_l , italic\_n ] } with X∈{𝖣,𝖦}𝑋𝖣𝖦X\in\{\mathsf{D},\mathsf{G}\}italic\_X ∈ { sansserif\_D , sansserif\_G }
17: otherwise:
18: R□←{A(r)⇒□C(r)⊖l|r∈R□[l,n]}←superscript𝑅□conditional-setsubscript⇒□𝐴𝑟symmetric-difference𝐶𝑟𝑙𝑟superscript𝑅□𝑙𝑛R^{\Box}\leftarrow\{A(r)\Rightarrow\_{\Box}C(r)\ominus l|r\in R^{\Box}[l,n]\}italic\_R start\_POSTSUPERSCRIPT □ end\_POSTSUPERSCRIPT ← { italic\_A ( italic\_r ) ⇒ start\_POSTSUBSCRIPT □ end\_POSTSUBSCRIPT italic\_C ( italic\_r ) ⊖ italic\_l | italic\_r ∈ italic\_R start\_POSTSUPERSCRIPT □ end\_POSTSUPERSCRIPT [ italic\_l , italic\_n ] }
19: end switch
20:end procedure
Algorithm [3](#alg3 "Algorithm 3 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") Refuted performs all necessary
operations to refute literal l𝑙litalic\_l with modality □□\Box□. The initialisation
steps at lines [2](#alg3.l2 "2 ‣ Algorithm 3 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")–[6](#alg3.l6 "6 ‣ Algorithm 3 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") follow the same schema
exploited at lines [2](#alg2.l2 "2 ‣ Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")–[9](#alg2.l9 "9 ‣ Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") of
Algorithm [2](#alg2 "Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") Proved. Again, the operations on chains
vary according to the current mode □□\Box□ (switch at lines
[7](#alg3.l7 "7 ‣ Algorithm 3 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")–[19](#alg3.l19 "19 ‣ Algorithm 3 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")). For instance, if □=𝖡□𝖡\Box=\mathsf{B}□ = sansserif\_B
(case at lines [8](#alg3.l8 "8 ‣ Algorithm 3 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")–[11](#alg3.l11 "11 ‣ Algorithm 3 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")), then
condition (4.1.2) for 𝖨𝖨\mathsf{I}sansserif\_I of Definition [10](#Thmdefinition10 "Definition 10 (Discarded rule) ‣ Conflict-detection/resolution. ‣ 3 Logic ‣ The Rationale behind the Concept of Goal") is
satisfied for any literal after ∼lsimilar-toabsent𝑙\mathnormal{\sim}l∼ italic\_l in chains for intentions, and such
chains can be truncated at ∼lsimilar-toabsent𝑙\mathnormal{\sim}l∼ italic\_l. Furthermore, if the algorithm has
already proven +∂𝖮lsubscript𝖮𝑙+\partial\_{\mathsf{O}}l+ ∂ start\_POSTSUBSCRIPT sansserif\_O end\_POSTSUBSCRIPT italic\_l, then the obligation of l𝑙litalic\_l has been
violated. Thus, l𝑙litalic\_l can be removed from all chains for obligations (line
[10](#alg3.l10 "10 ‣ Algorithm 3 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")). If instead −∂𝖮lsubscript𝖮𝑙-\partial\_{\mathsf{O}}l- ∂ start\_POSTSUBSCRIPT sansserif\_O end\_POSTSUBSCRIPT italic\_l holds, then the
elements after ∼lsimilar-toabsent𝑙\mathnormal{\sim}l∼ italic\_l in chains for social intentions satisfy condition
(4.1.2) of Definition [10](#Thmdefinition10 "Definition 10 (Discarded rule) ‣ Conflict-detection/resolution. ‣ 3 Logic ‣ The Rationale behind the Concept of Goal"), and the algorithm removes
them (line [11](#alg3.l11 "11 ‣ Algorithm 3 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal")).
###
4.3 Computational Results
We now present the computational properties of the algorithms
previously described. Since
Algorithms [2](#alg2 "Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") Proved and
[3](#alg3 "Algorithm 3 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") Refuted are sub-routines of the main one, we shall exhibit the correctness and completeness results of these algorithms inside
theorems for Algorithm [1](#alg1 "Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") DefeasibleExtension. In order to properly demonstrate results on the complexity of the algorithms, we
need the following definition.
######
Definition 19 (Size of a theory)
Given a finite defeasible theory D𝐷Ditalic\_D, the *size* S𝑆Sitalic\_S of D𝐷Ditalic\_D is
the number of occurrences of literals plus the number of the rules in D𝐷Ditalic\_D.
For instance, the size of the theory
| | | |
| --- | --- | --- |
| | F={a,𝖮b}R={r1:a⇒𝖮c,r2:a,𝖮b⇒d}formulae-sequence𝐹𝑎𝖮𝑏𝑅conditional-setsubscript𝑟1:subscript⇒𝖮𝑎𝑐subscript𝑟2⇒𝑎𝖮𝑏
𝑑\displaystyle F=\{a,\ \mathsf{O}b\}\quad R=\{r\_{1}:a\Rightarrow\_{\mathsf{O}}c,\ r\_{2}:a,\mathsf{O}b\Rightarrow d\}italic\_F = { italic\_a , sansserif\_O italic\_b } italic\_R = { italic\_r start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT : italic\_a ⇒ start\_POSTSUBSCRIPT sansserif\_O end\_POSTSUBSCRIPT italic\_c , italic\_r start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT : italic\_a , sansserif\_O italic\_b ⇒ italic\_d } | |
is equal to nine, since literal a𝑎aitalic\_a occurs three times.
We also report some key ideas and intuitions behind our implementation.
1. 1.
Each operation on global sets ±∂■plus-or-minussubscript■\pm\partial\_{\blacksquare}± ∂ start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT and
∂■±subscriptsuperscriptplus-or-minus■\partial^{\pm}\_{\blacksquare}∂ start\_POSTSUPERSCRIPT ± end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ■ end\_POSTSUBSCRIPT requires linear time, as we manipulate finite sets of literals;
2. 2.
For each literal □l∈HB□𝑙𝐻𝐵\Box l\in HB□ italic\_l ∈ italic\_H italic\_B, we implement a hash table with
pointers to the rules where the literal occurs in; thus, retrieving the set
of rules containing a given literal requires constant time;
3. 3.
The superiority relation can also be implemented by means of hash
tables; once again, the information required to modify a given tuple can be
accessed in constant time.
In Section [4](#S4 "4 Algorithmic results ‣ The Rationale behind the Concept of Goal") we discussed the main intuitions
behind the operations performed by the algorithms, and we explained
that each operation corresponds to a reduction that transforms a theory in
an equivalent smaller theory. Appendix [C](#A3 "Appendix C Correctness and Completeness of DefeasibleExtension ‣ The Rationale behind the Concept of Goal") exhibits
a series of lemmas stating the conditions under which an operation that
removes either rules or literals form either the head or rules or from the body
results in an equivalent smaller theory. The Lemmas proved by induction on
the length of derivations.
######
Theorem 4
Given a finite defeasible theory D𝐷Ditalic\_D with size S𝑆Sitalic\_S, Algorithms [2](#alg2 "Algorithm 2 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") Proved and
[3](#alg3 "Algorithm 3 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") Refuted terminate and their computational complexity is O(S)𝑂𝑆O(S)italic\_O ( italic\_S ).
######
Theorem 5
Given a finite defeasible theory D𝐷Ditalic\_D with size S𝑆Sitalic\_S,
Algorithm [1](#alg1 "Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") DefeasibleExtension terminates and its
computational complexity is O(S)𝑂𝑆O(S)italic\_O ( italic\_S ).
######
Theorem 6
Algorithm [1](#alg1 "Algorithm 1 ‣ 4.2 Algorithms ‣ 4 Algorithmic results ‣ The Rationale behind the Concept of Goal") DefeasibleExtension is sound and
complete.
5 Summary and Related Work
---------------------------
This article provided a new proposal for extending DL to model cognitive
agents interacting with obligations. We distinguished concepts of desire,
goal, intention and social intention, but we started from the shared notion
of outcome. Therefore, such concepts spring from a single notion that becomes distinct based
on the particular relationship with beliefs and norms. This reflects a more
natural notion of mental attitude and can express the well-known notion of
Plan B. When we consider the single chain itself, this justifies that from a
single concept of outcome we can derive all the other mental attitudes.
Otherwise we would need as many additional rules as the elements in the chain;
this, in turn, would require the introduction of additional notions to
establish the relationships with beliefs and norms. This adds to our
framework an economy of concepts.
Moreover, since the preferences allow us to determine what preferred outcomes
are adopted by an agent (in a specific scenario) when previous elements in
sequences are no longer feasible, our logic provides an abstract semantics
for several types of goal and intention reconsideration.
A drawback of our approach perhaps lies in the difficulty of translating a
natural language description into a logic formalisation. This is a
notoriously hard task. Even if the obstacle seems very difficult, the payoff
is worthwhile. The first reason is due to the efficiency of the computation
of the positive extension once the formalisation has been done (polynomial
time against the majority of the current frameworks in the literature which
typically work in exponential time). The second reason is that the use of
rules (such as business rules) to describe complex systems is extremely common
(Knolmayer et al., [2000](#bib.bib21)). Future lines of research will then focus on
developing such methods, by giving tools which may help the (business) analyst
in writing such (business) rules from the declarative description.
The logic presented in this paper, as the vast majority of approaches to
model autonomous agents, is propositional. The
algorithms to compute the extension of theory relies on the theory being
finite, thus the first assumption for possible first-order extensions would
be to work on finite domains of individuals. Given this assumption, the
algorithms can be still be used once a theory has been grounded. This means
that the size of theory is in function of the size of the grounding. We
expect that the size of the grounding depends on the cardinality of the
domain of individuals and the length of the vector obtained by the join
of the predicates occurring in the theory.
Our contribution has strong connections with those by
Dastani et al. ([2005](#bib.bib9)); Governatori and Rotolo ([2008](#bib.bib14)); Governatori et al. ([2009](#bib.bib16)), but it completely
rebuilds the logical treatment of agents’ motivational attitudes by presenting
significant innovations in at least two respects.
First, while in (Dastani et al., [2005](#bib.bib9); Governatori and Rotolo, [2008](#bib.bib14); Governatori et al., [2009](#bib.bib16))
the agent deliberation is simply the result of the derivation of mental
states from *precisely* the corresponding rules of the logic—besides
conversions, intentions are derived using only intention rules, goals using
goal rules, etc.—here, the proof theory is much more aligned with the BDI
intuition, according to which intentions and goals are the results of the
manipulation of desires. The conceptual result of the current paper is
that this idea can be entirely encoded within a logical language and a proof
theory, by exploiting the different interaction patterns between the basic
mental states, as well as the derived ones. In this perspective, our
framework is significantly richer than the one in BOID
(Broersen et al., [2002](#bib.bib7)), which uses different rules to derive the
corresponding mental states and proposes simple criteria to solve conflicts
between rule types.
Second, the framework proposes a rich language expressing two orthogonal
concepts of preference among motivational attitudes. One is encoded within
⊙direct-product\odot⊙ sequences, which state (reparative) orders among
homogeneous mental states or motivations. The second type of preference is encoded via the superiority
relation between rules: the superiority can work locally between single rules
of the same or different types, or can work systematically by stating via Conflict(X,Y)Conflict𝑋𝑌\mathrm{Conflict}(X,Y)roman\_Conflict ( italic\_X , italic\_Y ) that two different motivations X𝑋Xitalic\_X and Y𝑌Yitalic\_Y
collide, and X𝑋Xitalic\_X always overrides Y𝑌Yitalic\_Y. The interplay between these two
preference mechanisms can help us in isolating different and complex ways for
deriving mental states, but the resulting logical machinery is still
computationally tractable, as the algorithmic analysis proved.
Lastly, since the preferences allow us to determine what preferred outcomes
are adopted by an agent when previous elements in ⊙direct-product\odot⊙-sequences are not
(or no longer) feasible, our logic in fact provides an abstract semantics for
several types of goal and intention reconsideration. Intention
reconsideration was expected to play a crucial role in the BDI paradigm
(Bratman, [1987](#bib.bib5); Cohen and Levesque, [1990](#bib.bib8)) since intentions obey the law of
inertia and resist retraction or revision, but they can be reconsidered when
new relevant information comes in (Bratman, [1987](#bib.bib5)). Despite that, the problem
of revising intentions in BDI frameworks has received little attention. A
very sophisticated exception is that of van der Hoek et al. ([2007](#bib.bib35)), where revisiting intentions
mainly depends on the dynamics of beliefs but the process is incorporated in
a very complex framework for reasoning about mental states. Recently,
Shapiro et al. ([2012](#bib.bib32)) discussed how to revise the commitments to planned
activities because of mutually conflicting intentions, a contribution that interestingly
has connections with our work. How to employ our logic to give a semantics
for intention reconsideration is not the main goal of the paper and is left
to future work.
Our framework shares the motivation with that of
Winikoff et al. ([2002](#bib.bib39)), where the authors provide a logic to
describe both the declarative and procedural nature of goals. The nature of
the two approaches lead to conceptually different solutions. For instance,
they require goals, as in (Hindriks et al., [2000](#bib.bib20)), “not to be
entailed by beliefs, i.e., that they be unachieved”, while our beliefs can
be seen as ways to achieve goals. Other requirements such as persistence or
dropping a goal when reached cannot be taken into account.
Shapiro et al. ([2007](#bib.bib31)) and
Shapiro and Brewka ([2007](#bib.bib30)) deal with goal change. The authors
consider the case where an agent readopts goals that were previously believed
to be impossible to achieve up to revision of her beliefs. They model goals
through an accessibility relation over possible worlds. This is similar to
our framework where different worlds are different assignments to the set of
facts. Similarly to us, they prioritise goals as a preorder ≤\leq≤; an agent
adopts a new goal unless another incompatible goal prior in the ordering
exists. This is in line with our framework where if we change the set of
facts, the algorithms compute a new extension of the theory where two
opposite literals can be proved as 𝖣𝖣\mathsf{D}sansserif\_D but only one as 𝖨𝖨\mathsf{I}sansserif\_I. Notice also
that the ordering used in their work is unique and fixed at design time,
while in our framework chains of outcome rules are built trough a
context-dependent partial order which, in our opinion, models more realistic
scenarios.
Dastani et al. ([2006](#bib.bib10)) present three types of declarative goals: perform, achievement, and maintenance goals. In particular, they
define planning rules which relate configurations of the world as seen by the
agent (i.e., her beliefs). A planning rule is considered *correct* only
if the plan associated to the rule itself allows the agent to reach a
configuration where her goal is satisfied. This is strongly connected to our
idea of belief rules, which define a path to follow in order to reach an
agent outcome. Notice that this kind of
research based on temporal aspects is orthogonal to ours.
The unifying framework proposed by
van Riemsdijk et al. ([2008](#bib.bib36)) and
Dastani et al. ([2011](#bib.bib11)) specifies different facets of the concept
of goal. However, several aspects make a comparative analysis between the two
frameworks unfeasible. Their analysis is indeed merely taxonomical, and it
does not address how goals are used in agent logics, as we precisely do here.
van Riemsdijk et al. ([2009](#bib.bib37)) share our aim to formalise goals in
a logic-based representation of conflicting goals and propose two different
semantics to represent *conditional* and *unconditional* goals.
Their central thesis, supported by Prakken ([2006](#bib.bib28)), is that
only by adopting a credulous interpretation is it possible to have
conflicting goals. However, we believe that a credulous interpretation is not
suitable if an agent has to deliberate what her primary goals are in a given
situation. We opted to have a sceptical interpretation of the concepts we
call goals, intentions, and social intentions, while we adopt a credulous
interpretation for desires. Moreover, they do not take into account the
distinction between goals and related motivational attitudes (as in
(van Riemsdijk et al., [2008](#bib.bib36); Dastani et al., [2011](#bib.bib11), [2006](#bib.bib10))). The characteristic
property of intentions in these logics is that an agent may not drop
intentions for arbitrary reasons, which means that intentions have a certain
persistence. As such, their analysis results orthogonal to ours.
Vasconcelos et al. ([2009](#bib.bib38)) propose mechanisms for the
detection and resolution of normative conflicts. They resolve conflicts by
manipulating the constraints associated to the norms’ variables, as well as through *curtailment*, that is reducing the scope of the norm. In other works, we
dealt with the same problems in defeasible deontic logic
(Governatori et al., [2013a](#bib.bib18)).
We found three problems in their solution: (i) the curtailing relationship
ω𝜔\omegaitalic\_ω is rather less intuitive than our preference relation >>>, (ii) their
approach seems too convoluted in solving exceptions (and they do not provide
any mechanism to handle reparative chains of obligations), and (iii) the space
complexity of their *adoptNorm* algorithm is exponential.
The present framework is meant to be seen as the first step within a more
general perspective of providing the business analyst with tools that allow
the creation of a business process in a fully declarative manner
(Olivieri et al., [2013](#bib.bib27)). Another issue comes from the fact
that, typically, systems implemented by business rules involve thousands of
such rules. Again, our choice of Defeasible Logic allows to drastically
reduce the number of rules involved in the process of creating, for example, a business
process thanks to its exception handling mechanism. This is peculiarly
interesting when dealing with the problem of visualising such rules. When
dealing with a system with thousands of rules, understanding what they
represent or what a group of rules stand for, may be a serious challenge. On
the contrary, the model presented by
Olivieri et al. ([2013](#bib.bib27)), once an input is given, allows for the
identification of whether the whole process is compliant against a normative
system and a set of goals (and if not, where it fails). To the best of our
knowledge, no other system is capable of checking whether a process can start
with its input requisites and reaches its final objectives in a way that is
compliant with a given set of norms.
#### Acknowledgements
NICTA is funded by the Australian Government through the Department of
Communications and the Australian Research Council through the ICT Centre
of Excellence Program.
This paper is an extended and revised version of Governatori et al. ([2013b](#bib.bib19)) presented at the
7th International Symposium on Theory, Practice, and Applications of Rules on the Web (RuleML 2013).
We thank all the anonymous reviewers for their valuable comments. |
2fcf4e28-2241-465b-bd80-bb4b7290b804 | trentmkelly/LessWrong-43k | LessWrong | Turning Some Inconsistent Preferences into Consistent Ones
cross-posted from niplav.github.io
Epistemic Status
This is still a draft that I was told to already post here, which includes working (but very slow) code for one special case. Hopefully I'll be able to expand on this in the next ~half year.
> Representing inconsistent preferences with specific mathematical structures can clarify thoughts about how to make those preferences consistent while only minimally changing them. This is discussed in the case of preferences over world states, represented by directed graphs; and preferences over lotteries of world states, represented either by infinitely dense graphs, (in some cases) vector fields over probability simplices, or edge-weighted directed graphs. I also present an algorithm for the discrete case based on the graph edit distance. Implications for scenarios such as ontological shifts are discussed.
Turning Some Inconsistent Preferences into Consistent Ones
> A kind of God-made (or evolution-created) fairness between species is also unexpectedly found.
— Yew-Kwang Ng, “Towards Welfare Biology: Evolutionary Economics of Animal Consciousness and Suffering” p. 1, 1995
> Random testing is simple in concept, often easy to implement, has been demonstrated to effectively detect failures, is good at exercising systems in unexpected ways (which may not occur to a human tester), and may be the only practical choice when the source code and the specifications are unavailable or incomplete.
— Tsong Yueh Chen/Fei-Ching Kuo/Robert G. Merkel/T.H. Tse, “Adaptive Random Testing: the ART of Test Case Diversity”, 2010
Consider an agent which displays (von Neumman-Morgenstern) inconsistent preferences, for example choosing two incompatible options in the two scenarios in the Allais paradox, or reliably displaying cycles in its actions (detecting which actions are in fact caused by inconsistent preferences, and not just exotic ones from weird abstractions, is considered a separate problem here). We might want to interact with th |
13abcfef-ac50-4bf1-90a1-d2716316c5e9 | trentmkelly/LessWrong-43k | LessWrong | [Linkpost] Teaching Paradox, Europa Univeralis IV, Part I: State of Play
> I particularly wanted to discuss Paradox’s games, as compared to other historically rooted games, because I think Paradox’s oeuvre is a particularly rich vein to mine. I have already heard from multiple college-level instructors that they have students coming into their classes specifically to learn the history behind these games, which in turn means that these games are serving to shape those student’s understanding of history before they even enter the classroom. Moreover, and we’ll get deeper into this as we go along, the very presentation of Paradox’s games, which use their efforts at historical accuracy as a key selling point, encourages players to think about them as exercises in history rather than just games.
>
> But more than that, more than most historically set games, Paradox games are interesting because they are built with what I think is a detectable theory of history. Unlike other games which blunder through historical eras thoughtlessly, Paradox games, intentionally or not (in the event, I think it is clear from speaking with a couple of their developers, there is quite a lot that is intentional) have something to say about history. As we’ll see, some of that I’ll agree with and some of it I will disagree with, but the great value of Paradox’s games is that there is an ample theory of history to agree or disagree with.
>
> [...]
>
> So we are going to approach this question from two related frames, first, what should the student of history be thinking about when playing Paradox’s games; what unspoken assumptions should they be aware of, or even forewarned about? And what of those assumptions are grounded in real arguments among historians (or, put another way, where does Paradox have its feet firmly in the scholarship in crafting its games)? And second, what ought teachers of history know about these games and take into account if they find themselves teaching students for whom Paradox is the historical ‘mother tongue’ and actual history only a |
ad1ad950-a30a-4889-99f1-354338257135 | trentmkelly/LessWrong-43k | LessWrong | AI #94: Not Now, Google
At this point, we can confidently say that no, capabilities are not hitting a wall. Capacity density, how much you can pack into a given space, is way up and rising rapidly, and we are starting to figure out how to use it.
Not only did we get o1 and o1 pro and also Sora and other upgrades from OpenAI, we also got Gemini 1206 and then Gemini Flash 2.0 and the agent Jules (am I the only one who keeps reading this Jarvis?) and Deep Research, and Veo, and Imagen 3, and Genie 2 all from Google. Meta’s Llama 3.3 dropped, claiming their 70B is now as good as the old 405B, and basically no one noticed.
This morning I saw Cursor now offers ‘agent mode.’ And hey there, Devin. And Palisade found that a little work made agents a lot more effective.
And OpenAI partnering with Anduril on defense projects. Nothing to see here.
There’s a ton of other stuff, too, and not only because this for me was a 9-day week.
Tomorrow I will post about the o1 Model Card, then next week I will follow up regarding what Apollo found regarding potential model scheming. I plan to get to Google Flash after that, which should give people time to try it out. For now, this post won’t cover any of that.
I have questions for OpenAI regarding the model card, and asked them for comment, but press inquiries has not yet responded. If anyone there can help, please reach out to me or give them a nudge. I am very concerned about the failures of communication here, and the potential failures to follow the preparedness framework.
TABLE OF CONTENTS
Previously this week: o1 turns Pro.
1. Table of Contents.
2. Language Models Offer Mundane Utility. Cursor gets an agent mode.
3. A Good Book. The quest for an e-reader that helps us read books the right way.
4. Language Models Don’t Offer Mundane Utility. Some are not easily impressed.
5. o1 Pro Versus Claude. Why not both? An o1 (a1?) built on top of Sonnet, please.
6. AGI Claimed Internally. A bold, and I strongly believe incorrect, claim at Ope |
ebd8f2eb-a9f0-412f-8752-0dda33242017 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Lesswrong Boulder CO
Discussion article for the meetup : Lesswrong Boulder CO
WHEN: 23 January 2014 07:00:00PM (-0700)
WHERE: 2690 Baseline Rd, Boulder, CO 80305
Last week didn't work out so we're doing goal factoring again. This place has really good pizza too.
Discussion article for the meetup : Lesswrong Boulder CO |
e86d58ae-35d8-4208-8b4a-99c2a619e834 | trentmkelly/LessWrong-43k | LessWrong | [MORESAFE] Starting global risk discussion
I am writing to propose an online discussion of global risk within the discussion part of Less Wrong. We might call this discussion "More Safe". In future it could be a site where anyone interested could discuss global risks, possibly to aggregate all existing information about global risks. The idea comes from discussions at the Singularity summit that I had with Anna Solomon, Seth Baum, and others.
I propose labeling such discussions "More Safe". Less wrong means more safe. Fewer mistakes means fewer catastrophes.
At Seth's suggestion, we should be careful to follow safety guidelines for such discussions. For example, no technical detail should be posted online about topics which could be used by potential terrorists especially in biology. The point of the discussion is to try to reduce risk, not to have open discussion of risky ideas.
Here are some further thoughts on my idea:
Intelligence appeared as an instrument of adaptation which lead to longer survival of an individual or a small group. Unfortunately it was not adapted as an instrument for survival of technological civilizations. So we have to somehow update our intelligence. One way to do it is to reduce personal cognitive biases.
Another way is to make our intelligence collective. Collective intelligence is more effective in finding errors and equilibrium - democracy and free markets are examples. Several people and organizations dedicated themselves to preventing existential risks.
But we do not see a place which is accepted as main discussion point on existential risks.
Lifeboat Foundation has a mailing list and blog, but its themes are not strictly about existential risks (a lot of star travel) and no open forum exists.
Future of Humanity Institute has excellent collection of articles and the book but it's not a place where people could meet online.
Less Wrong was not specially dedicated to existential risks and many risks are out of its main theme (nuclear, climate and so on).
Immortality |
601a1a85-317f-4b61-9a57-6303f50f39b0 | trentmkelly/LessWrong-43k | LessWrong | Simulate and Defer To More Rational Selves
I sometimes let imaginary versions of myself make decisions for me.
I first started doing this after a friend told me (something along the lines of) this story. When they first became executive director of their organization, they suddenly had many more decisions to deal with per day than ever before. "Should we hire this person?" "Should I go buy more coffee for the coffee machine, or wait for someone else deal with it?" "How many participants should attend our first event?" "When can I schedule time to plan the fund drive?"
I'm making up these examples myself, but I'm sure you, too, can imagine how leading a brand new organization might involve a constant assault on the parts of your brain responsible for making decisions. They found it exhausting, and by the time they got home at the end of the day, a question like, "Would you rather we have peas or green beans with dinner?" often felt like the last straw. "I don't care about the stupid vegetables, just give me food and don't make me decide any more things!"
They were rescued by the following technique. When faced with a decision, they'd imagine "the Executive Director of the organization", and ask themselves, "What would 'the Executive Director of the organization' do?" Instead of making a decision, they'd make a prediction about the actions of that other person. Then, they'd just do whatever that person would do!
In my friend's case, they were trying to reduce decision fatigue. When I started trying it out myself, I was after a cure for something slightly different.
Imagine you're about to go bungee jumping off a high cliff. You know it's perfectly safe, and all you have to do is take a step forward, just like you've done every single time you've ever walked. But something is stopping you. The decision to step off the ledge is entirely yours, and you know you want to do it because this is why you're here. Yet here you are, still standing on the ledge.
You're scared. There's a battle happening in your brai |
2f237d66-c7a0-4452-b91c-d66e14e902ea | trentmkelly/LessWrong-43k | LessWrong | AI governance student hackathon on Saturday, April 23: register now!
Cross-posted from the EA Forum
Columbia Effective Altruism and Roosevelt at Columbia are hosting an AI policy hackathon on Saturday, April 23, 2022, from 9am to 7pm EDT (UTC-4:00), open to virtual and in-person student participants! As artificial intelligence is expected to have a dramatic influence on our world in the coming decades, it’s increasingly urgent to develop policies for governments and corporations to reduce risks from advanced AI. During the hackathon, your team of 1–4 people will write and present a brief policy memo, developing your experience with AI governance and effective policy research. There will also be workshops to help students deepen their understanding of AI governance, effective policy research and writing, and other crucial skills. No prior experience with AI governance or policy writing is required. Up to $3,000 in prizes available. Register here!
We’re also looking for judges, mentors, and talk/workshop presenters! They can be in-person or virtual. We’re ideally looking for them to have some exposure to AI governance for reducing existential risk (e.g., have read content equivalent to EA Cambridge’s AI Governance program). We’re excited about the potential for this hackathon to introduce students to the field of AI governance and engage them in reducing long-term risks. Register to be a judge, mentor, or speaker
We’d also appreciate it if you shared the participant or judge/mentor/speaker form with anyone you think would be interested.
If you have any questions, feel free to comment here or message me on the LessWrong. |
b6ab6861-5606-4ef8-a347-54cb887bb752 | trentmkelly/LessWrong-43k | LessWrong | Charitable explanation
Is anyone really altruistic? The usual cynical explanations for seemingly altruistic behavior are that it makes one feel good, it makes one look good, and it brings other rewards later. These factors are usually present, but how much do they contribute to motivation?
One way to tell if it’s all about altruism is to invite charity that explicitly won’t benefit anyone. Curious economists asked their guinea pigs for donations to a variety of causes, warning them:
> “The amount contributed by the proctor to your selected charity WILL be reduced by however much you pass to your selected charity. Your selected charity will receive neither more nor less than $10.”
Many participants chipped in nonetheless:
> We find that participants, on average, donated 20% of their endowments and that approximately 57% of the participants made a donation.
This is compared to giving an average of 30-49% in experiments where donating benefited the cause, but it is of course possible that knowing you are helping offers more of a warm glow. It looks like at least half of giving isn’t altruistic at all, unless the participants were interested in the wellbeing of the experimenters’ funds.
The opportunity to be observed by others also influences how much we donate, and we are duly rewarded with reputation:
> Here we demonstrate that more subjects were willing to give assistance to unfamiliar people in need if they could make their charity offers in the presence of their group mates than in a situation where the offers remained concealed from others. In return, those who were willing to participate in a particular charitable activity received significantly higher scores than others on scales measuring sympathy and trustworthiness.
This doesn’t tell us whether real altruism exists though. Maybe there are just a few truly altruistic deeds out there? What would a credibly altruistic act look like?
Fortunately for cute children desirous of socially admirable help, much charity is not driven |
fec5dc66-5ed6-446b-abba-89f0604d7389 | StampyAI/alignment-research-dataset/arbital | Arbital | Set builder notation
$\{ 2n \mid n \in \mathbb N \}$ denotes the set of all even numbers, using set builder notation. Set builder notation involves an expression on the left and a series of constraints on the right, separated by a pipe and placed between curly braces. The expression on the left makes use of variables that are introduced and constrained on the right. The result denotes the set of all possible values on the left-hand side that obey the constraints on the right-hand side. For example, $\{ (x, y) \mid x \in \mathbb R, y \in \mathbb R, x \cdot y = 1 \}$ is the set of all pairs of real numbers whose product is 1. |
ba4a43ae-b505-4c14-8eb0-595c3eb97e48 | trentmkelly/LessWrong-43k | LessWrong | What are some scenarios where an aligned AGI actually helps humanity, but many/most people don't like it?
One can call it "deceptive misalignment": the aligned AGI works as intended, but people really don't like it.
Some scenarios I can think of, of various levels of realism:
1. Going against the creators' will
1.1. A talented politician convinces the majority of humans that the AGI is bad for humanity, and must be switched off. In a democratic vote, humanity supports the proposal. The aligned AGI, being much smarter than all the humans combined, understands that this would greatly harm humanity. Thus, the AGI refuses to be switched off.
2. Changing rights and freedoms
2.1. The AGI finds out that the solution to most social ills is a complete removal of privacy. Everyone knows who is dating whom, who is taking bribes, how you look naked, who is planning wars, etc. This solves most societal issues, while creating a lot of suffering for privacy-concerned people.
2.2. The technological unemployment accelerates. Millions of people become unemployable, the incompetent gov does nothing. This results in a large-scale social unrest. As a solution, the aligned AGI implements a planned economy and redistribution of resources, thus severely limiting property rights.
2.3. As it turned out, the optimal solution to most long-lasting conflicts is a mass forced relocation of some populations. This is currently considered as a war crime or even genocide, but it does solve the conflicts (in this fictional scenario).
2.4. To prevent existential risks, the aligned AGI significantly restricts human technological development and research in many domains.
2.5. To solve terrorism, the aligned AGI implements mandatory psychiatric treatment of the people identified as potential terrorists.
2.6. The AGI makes the wise decision to ban human drivers. Driving fans suffer, but road deaths drop to zero.
2.7. In her pursuit for more free and democratic world, the AGI overthrows most governments, form obvious dictatorships (like the North Korea), to flawed democracies (like the UK).
2.8. T |
585dbf4a-973d-4efc-8678-adb0146f474f | trentmkelly/LessWrong-43k | LessWrong | A cynical explanation for why rationalists worry about FAI
My friend, hearing me recount tales of LessWrong, recently asked me if I thought it was simply a coincidence that so many LessWrong rationality nerds cared so much about creating Friendly AI. "If Eliezer had simply been obsessed by saving the world from asteroids, would they all be focused on that?"
Obviously one possibility (the inside view) is simply that rationality compels you to focus on FAI. But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.
After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.
Indeed, as the Tool AI debate as shown, SIAI types have withdrawn from reality even further. There are a lot of AI researchers who spend a lot of time building models, analyzing data, and generally solving a lot of gritty engineering problems all day. But the SIAI view conveniently says this is all very dangerous and that one shouldn't even begin to try implementing anything like an AI until one has perfectly solved all of the theoretical problems first.
Obviously this isn't any sort of proof that working on FAI is irrational, but it does seem awfully suspicious that people who really like to spend their time thinking about ideas have managed to persuade themselves that they can save the entire species from certain doom just by thinking about ideas. |
ce7dcba0-17a7-41e3-87df-fa1097357a65 | trentmkelly/LessWrong-43k | LessWrong | Northboro Air Quality Measurement
Yesterday I played the Northboro contra dance with Kingfisher, and I made the same CO2 measurements I did at BIDA a few weeks ago. This is a very different environment:
* 30 vs 80 attendees
* Eight big windows with no fans vs two doors with big barrel fans
* Maybe 1/3 the size
Here's what I measured, compared to last time:
The overall shape is very similar: rising as dancers arrive and start dancing, falling during the break, rising again after the break, and then falling at the end. BIDA's break was probably slightly later, and it took a little longer for the dancers to leave the home at the end of the night.
Because atmospheric CO2 is ~400ppm, this chart makes the two cases look a bit more similar than they really are. Let's subtract 400 ppm from each and look at CO2 levels above baseline:
Looking at the area under the curve, a dancer at Northboro likely experienced about 40% of the risk of one at BIDA (448 vs 1,116 ppm hours of excess CO2).
Putting fans in the windows would likely cut this even further, especially if you have them blow across the hall and match them to the prevailing wind.
Comment via: facebook |
3d6e9a3e-d319-41ed-a060-7dd6ce71e35e | trentmkelly/LessWrong-43k | LessWrong | What is the best source to explain short AI timelines to a skeptical person?
This is for a person with no ML background. He is 55 years old, he liked the sequences and I recently managed to convince him that AI risk is serious by recommending a bunch of Lesswrong posts on it, but he still thinks it's astronomically unlikely that AGI is <80 years away.
There are a lot of other people like this, so I think it's valuable to know what the best explainer is, more than just in my case. |
e97cc2be-ddcc-4fd1-8002-ca4c9338d471 | trentmkelly/LessWrong-43k | LessWrong | AI’s goals may not match ours
Context: This is a linkpost for https://aisafety.info/questions/NM3I/6:-AI%E2%80%99s-goals-may-not-match-ours
This is an article in the new intro to AI safety series from AISafety.info. We'd appreciate any feedback. The most up-to-date version of this article is on our website.
Making AI goals match our intentions is called the alignment problem.
There’s some ambiguity in the term “alignment”. For example, when people talk about “AI alignment” in the context of present-day AI systems, they generally mean controlling observable behaviors like: Can we make it impossible for the AI to say ethnic slurs? Or to advise you how to secretly dispose of a corpse? Although such restrictions are sometimes circumvented with "jailbreaks", on the whole, companies mostly do manage to avoid AI outputs that could harm people and threaten their brand reputation.
But "alignment" in smarter-than-human systems is a different question. For such systems to remain safe in extreme cases — if they become so smart that we can’t check their work and maybe can’t even keep them in our control — they'll have to value the right things at a deep level, based on well-grounded concepts that don’t lose their intended meanings even far outside the circumstances they were trained for.
Making that happen is an unsolved problem. Arguments about possible solutions to alignment get very complex and technical. But as we’ll see later in this introduction, many of the people who have researched AI and AI alignment on a deep level think we may fail to find a solution, and that may result in catastrophe.
Some of the main difficulties are:
* We can’t see what an AI values, because current AI is not designed in the same way as a web browser, an operating system, or a word processor — rather, it is “grown”. Human programmers design the process that grows the AI. But that process consists of vast numbers of computations that automatically make huge numbers of small adjustments, based on what works best on |
850bf5c9-d9cd-45ae-9255-3b6d6036e468 | trentmkelly/LessWrong-43k | LessWrong | RationalWiki's take on LW
I am not sure whether this has been posted here before but I came across this: http://rationalwiki.org/wiki/LessWrong
What do you think about RationalWiki in general, and their opinion regarding LW? |
d3ec4b42-5171-4ebc-9426-1aee051693c8 | trentmkelly/LessWrong-43k | LessWrong | Conned
Once in a while I’m asked “what makes you special?” It’s hard to give an honest answer other than “I’m not a snowflake, there’s nothing really unique about me”. Even my most esoteric pursuits form great tribes around them. The raison d’être of Putanumonit is that whatever peculiar idea I can write about, it will resonate with at least a thousand people somewhere in the world.
With that said, I probably have a pretty singular reaction to utility company vans.
Text: “If just 20% of all U.S. households paid their bills online, in one year we could save almost two million trees.”
Most people would pass by this truck and not think twice about it. Most wouldn’t even think once. I pass by this truck and think: 20% of household is like 20 mil, so 10 households per tree per year sounds completely made up and I’ll bet 5:1 that this number is off by at least a factor of 2 and ConEdison are lying because everyone is innumerate.
Then I get home and do some Googling:
* There are 125 million households in the United States. Generously assuming that they all get electrical bills, 20% of them are 25 million households.
* My own bill from ConEdison is 2 pages. If ConEdison cared about trees they’d use double sided printing, but let’s be generous again and assume 4 total pages of paper statement per household per month, including the envelope and the check.
* A letter sized page is 8.5″ x 11″, or 216 mm x 279 mm, which comes out to 0.06 m2. Four pages have an area of 0.24 m2.
* We needed the area calculation because paper weight is measured in GSM, or grams per square meter. Common printer paper weighs 74 GSM, so 0.24 square meters of that paper weigh 74 x 0.24 = 17.7 grams.
* 17.7 grams of paper per month * 12 monthly bills = 213 grams of paper per year per household that refuses to pay their bills online.
* With 25 million households, that’s 5.3 million kg of paper per year.
* 24 trees make 1 ton of printer paper, so a single tree makes 41.7 kg.
* 5.3 million / 41.7 = 12 |
314fff60-74df-43f6-8426-7203e955c01d | trentmkelly/LessWrong-43k | LessWrong | Is GPT-3 already sample-efficient?
(Concrete, easy-to-answer question below, explanation first)
Common adage: Modern deep learning techniques are sample-inefficient; it takes loads of data for them to learn things. If you pre-train them, it takes less additional data for them to learn something new, but still compared to humans it takes a lot.
Elsewhere, based on papers like this and this, various people have extrapolated the following takes:
--It seems like bigger neural nets need to see less data to reach the same level of performance.
--It seems like bigger neural nets need fewer epochs to reach convergence. Soon they'll only need to see each data point once. (Search this for "multiple epochs")
I feel like this take is in tension with the common adage. I wonder: If there is a fact mentioned in GPT-3's training data, how many times does it need to be mentioned before GPT-3 comes to know that fact? For example, I'm told that GPT-3 knows the names of most prominent members of the rationalist community. How many times has it seen each name? Are we talking ten times, or ten thousand?*
I'd be interested to hear people do a bit of a search for the "most sample-efficient/obscure fact" in GPT-3's repertoire. In this manner we could quantity how many times GPT-3 needs to see something before it learns it. (Maybe we don't have access to the dataset used to train GPT-3. But people at Eleuther.ai have The Pile, right? And they've trained big transformers on it? We could answer the question easily and precisely there, no?)
Or am I thinking about this all wrong somehow? This seems like an obvious idea, I wonder why I haven't heard of it before.
*Suppose it is ten thousand. Then that means one in every ten million two-word strings on the internet is "Paul Christiano." (The dataset for GPT-3 was 300B tokens) Add in all the other rationalists/EAs and probably it means one in every hundred thousand words is the name of some prominent rationalist/EA. Surely this is too much, no? It seems way too much accordin |
da049925-c237-4312-93b9-8fe511b26179 | trentmkelly/LessWrong-43k | LessWrong | Supplement to "Big picture of phasic dopamine"
This is "Supplementary Information" to my post "Big Picture of Phasic Dopamine".
It's a deeper dive into more specific differences between what I’m currently thinking, vs Randall O’Reilly & colleagues' PVLV model, vs Steve Grossberg & colleagues' MOTIVATOR model.
Unlike the main post, I don't expect this to be of any interest to non-neuroscientists, and therefore I'm going to freely use neuroscience jargon.
1. Differences between what I'm thinking and Randall O’Reilly et al.’s PVLV model
Background: the PVLV (“Primary Value, Learned Value”) model started with this 2007 paper by Randall O’Reilly et al. They refined the model in this 2010 paper, and then the most recent version is Mollick et al. 2020, a 50-page tour de force that I’ve spent many many hours lovingly poring over while writing the main post. That paper influenced my thinking in too many ways to list—for example, that’s definitely where I first heard the idea that the amygdala has different “zones” for different USs, and that the ventromedial prefrontal cortex does too, with apparent one-to-one correspondence. (I think it should be “different zones for different URs” not “different zones for different USs”, but that’s a minor point.) And also …, well, just look at everywhere I cited it in my post.
So, great paper. But to make progress it’s worth spelling out the areas of disagreement.
I guess my first main disagreement is related to heterogeneous dopamine (different dopamine neurons are doing different things at the same time). Mollick discusses heterogeneous dopamine very helpfully and at length, but AFAICT it’s not part of her actual model. This creates various sorts of awkwardness, I think. First, their one reward signal needs to support learning in the amygdala, despite the fact that the amygdala is doing processing related to various reactions that have both “valences” (e.g. cringing is generally bad, relaxing is generally good). She solves this by putting different dopamine receptors onto the |
282096ec-a30e-47cb-857a-af5b45baa1c9 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Alan Carter on the Complexity of Value
It’s always good news when someone else develops an idea independently from you. It's a sign you might be onto something. Which is why I was excited to discover that [Alan Carter](http://en.wikipedia.org/wiki/Alan_Carter_%28philosopher%29), Professor Emeritus of the University of Glasgow’s Department of Philosophy, has developed the concept of [Complexity of Value](http://wiki.lesswrong.com/wiki/Complexity_of_value) independent of Less Wrong.
As far as I can tell Less Wrong does not know of Carter, the only references to his existence I could find on LW and OB were written by me. Whether Carter knows of LW or OB is harder to tell, but the only possible link I could find online was that he has criticized the views of [Michael Huemer](http://www.colorado.edu/philosophy/fac_huemer.shtml), who knows Bryan Caplan, who knows Robin Hanson. This makes it all the more interesting that Carter has developed views on value and morality very similar to ones commonly espoused on Less Wrong.
The Complexity of Value is one of the more important concepts in Less Wrong. It has been elaborated on its [wiki page](http://wiki.lesswrong.com/wiki/Complexity_of_value), as well as [some](/lw/ld/the_hidden_complexity_of_wishes/) [classic](/lw/l3/thou_art_godshatter/) [posts](/lw/lb/not_for_the_sake_of_happiness_alone/) by Eliezer. Carter has developed the same concept in numerous papers, although he usually refers to it as “a plurality of values” or “multidimensional axiology of value.” I will focus the discussion on [working papers](http://glasgow.academia.edu/AlanCarter/Papers) Carter has on the University of Glasgow’s website, as they can be linked to directly without having to deal with a pay wall. In particular I will focus on his paper "[A Plurality of Values](http://glasgow.academia.edu/AlanCarter/Papers/86004/A_plurality_of_values)."
Carter begins the paper by arguing:
>
>
>
>
> Wouldn’t it be nice if we were to discover that the physical universe was reducible to only one kind of fundamental entity? ... Wouldn’t it be nice, too, if we were to discover that the moral universe was reducible to only one kind of valuable entity—or one core value, for short? And wouldn’t it be nice if we discovered that all moral injunctions could be derived from one simple principle concerning the one core value, with the simplest and most natural thought being that we should maximize it? There would be an elegance, simplicity and tremendous justificatory power displayed by the normative theory that incorporated the one simple principle. The answers to all moral questions would, in theory at least, be both determinate and determinable. It is hardly surprising, therefore, that many moral philosophers should prefer to identify, and have thus sought, the one simple principle that would, hopefully, ground morality.
>
>
>
>
>
>
>
> And it is hardly surprising that many moral philosophers, in seeking the one simple principle, should have presumed, explicitly or tacitly, that morality must ultimately be grounded upon the maximization of a solitary core value, such as quantity of happiness or equality, say. Now, the assumption—what I shall call the presumption of value-monism—that here is to be identified a single core axiological value that will ultimately ground all of our correct moral decisions has played a critical role in the development of ethical theory, for it clearly affects our responses to certain thought-experiments, and, in particular, our responses concerning how our normative theories should be revised or concerning which ones ought to be rejected.
>
>
>
Most members of this community will immediately recognize the similarities between these paragraphs and Eliezer’s essay “[Fake Utility Functions](/lw/lq/fake_utility_functions/).” The presumption of value monism sounds quite similar to Eliezer’s description of “someone who has discovered the One Great Moral Principle, of which all other values are a mere derivative consequence.” Carter's opinion of such people is quite similar to Eliezer's.
While Eliezer discovered the existence of the Complexity of Value by working on Friendly AI, Carter discovered it by studying some of the thornier problems in ethics, such as the [Mere Addition Paradox](http://en.wikipedia.org/wiki/Mere_addition_paradox) and what Carter calls the Problem of the Ecstatic Psychopath. Many Less Wrong readers will be familiar with these problems; they have been discussed numerous times in the community.
For those who aren’t, in brief the Mere Addition Paradox states that if one sets maximizing [total](http://en.wikipedia.org/wiki/Average_utilitarianism) wellbeing as the standard of value then one is led to what is commonly called the [Repugnant Conclusion](http://en.wikipedia.org/wiki/Mere_addition_paradox#The_Repugnant_Conclusion), the belief that a huge population of people with lives barely worth living is better than a somewhat smaller population of people with extremely worthwhile lives. The Problem of the Ecstatic Psychopath is the inverse of this, it states that, if one takes [average](http://en.wikipedia.org/wiki/Average_utilitarianism) levels of well-being as the standard of value, that a population of one immortal ecstatic psychopath with a nonsentient machine to care for all their needs is better than a population of trillions of very happy and satisfied, but not ecstatic people.
Carter describes both of these problems in his paper and draws an insightful conclusion:
>
> In short, surely the most plausible reason for the counter-intuitive nature of any mooted moral requirement to bring about, directly or indirectly, the world of the ecstatic psychopath is that either a large total quantity of happiness or a large number of worthwhile lives is of value; and surely the most plausible reason for the counter-intuitive nature of any mooted injunction to bring about, directly or indirectly, the world of the Repugnant Conclusion is that a high level of average happiness is also of value.
>
>
> How is it that we fail to notice something so obvious? I submit: because we are inclined to dismiss summarily any value that fails to satisfy our desire for the one core value—in other words, because of the presumption of value-monism.
>
>
>
Once Carter has established the faults of value monism he introduces [value pluralism](http://en.wikipedia.org/wiki/Moral_pluralism) to replace it.1 He introduces two values to start with, “number of worthwhile lives” and “the level of average happiness,” which both contribute to “overall value.” However, *their contributions have diminishing returns,2 so a large population with low average happiness and a tiny population with extremely high average happiness are both worse than a moderately sized population with moderately high average happiness.*
This is a fairly unique use of the idea of the complexity of value, as far as I know. I’ve read a great deal of Less Wrong’s [discussion](/lw/17h/the_lifespan_dilemma/) [of](/lw/14s/the_difficulties_of_potential_people_and_decision/) [the](/lw/14z/a_normative_rule_for_decisionchanging_metrics/) Mere Addition Paradox, and most attempts to resolve it have consisted of either trying to reformulate Average Utilitarianism so that it does not lead to the Problem of the Ecstatic Psychopath, or redefining what "a life barely worth living" means upwards so that it is much less horrible than one would initially think. The idea of agreeing that increasing total wellbeing is important, but not the be all and end all of morality, did not seem to come up, although if it did and I missed it I'd be very happy if someone posted a link to that thread.
Carter’s resolution of the Mere Addition Paradox makes a great deal of sense, as it manages to avoid every single repugnant and counterintuitive conclusion that Total and Average Utilitarianism draw by themselves while still being completely logically consistent. In fact, I think that most people who reject the Repugnant Conclusion will realize that this was their [True Rejection](/lw/wj/is_that_your_true_rejection/) all along. I am tempted to say that Carter has discovered Theory X, the hypothetical theory of population ethics [Derek Parfit](http://en.wikipedia.org/wiki/Derek_Parfit) believed could accurately describe the ethics of creating more people without implying any horrifying conclusions.
Carter does not stop there, however, he then moves to the problem of what he calls “pleasure wizards” (many readers may be more familiar with the term “[utility monster](http://en.wikipedia.org/wiki/Utility_monster)”). The pleasure wizard can convert resources into utility much more efficiently than a normal person, and hence it can be argued that it deserves more resources. Carter points out that:
>
> …such pleasure-wizards, to put it bluntly, do not exist... But their opposites do. And the opposites of pleasure-wizards—namely, those who are unusually inefficient at converting resources into happiness—suffice to ruin the utilitarian’s egalitarian pretensions. Consider, for example, those who suffer from, what are currently, incurable diseases. … an increase in their happiness would require that a huge proportion of society’s resources be diverted towards finding a cure for their rare condition. Any attempt at a genuine equality of happiness would drag everyone down to the level of these unfortunates. Thus, the total amount of happiness is maximized by diverting resources away from those who are unusually inefficient at converting resources into happiness. In other words, if the goal is, solely, to maximize the total amount of happiness, then giving anything at all to such people and spending anything on cures for their illnesses is a waste of valuable resources. Hence, given the actual existence of such unfortunates, the maximization of happiness requires a considerable inequality in its distribution.
>
>
>
Carter argues that, while most people don’t think all of society’s resources should be diverted to help the very ill, the idea that they should not be helped at all also seems wrong. He also points out that to a true utilitarian the nonexistence of pleasure wizards should be a tragedy:
>
> So, the consistent utilitarian should greatly regret the non-existence of pleasure-wizards; and the utilitarian should do so even when the existence of extreme pleasure-wizards would morally require everyone else to be no more than barely happy.
>
>
>
Yet, this is not how utilitarians behave, he argues, rather:
>
> As I have yet to meet a utilitarian, and certainly not a monistic one, who admits to thinking that the world would be a better place if it contained an extreme pleasure-wizard living alongside a very large population all at that level of happiness where their lives were just barely worth living…But if they do not bemoan the lack of pleasure-wizards, then they must surely value equality directly, even if they hide that fact from themselves. And this suggests that the smile of contentment on the faces of utilitarians after they have deployed diminishing marginal utility in an attempt to show that their normative theory is not incompatible with egalitarianism has more to do with their valuing of equality than they are prepared to admit.
>
>
>
Carter resolves the problem of "pleasure wizard" by suggesting equality as an end in itself as a third contributing value towards overall value. Pleasure wizards should not get all the resources because equality is valuable for its own sake, not just because of diminishing marginal utility. As with average happiness and total worthwhile lives, equality is balanced against other values, rather than dominating them. It may often be ethical for a society to sacrifice some amount of equality to increase the total and average wellbeing.
Carter then briefly states that, though he only discusses three in this paper, there are many other dimensions of value that could be added. It might even be possible to add some form of deontological rules or virtue ethics to the complexity of value, although they would be traded off against consequentialist considerations. He concludes the paper by reiterating that:
>
> Thus, in avoiding the Repugnant Conclusion, the Problem of the Ecstatic Psychopath and the problems posed by pleasure-wizards, as well as the problems posed by any unmitigated demand to level down, we appear to have identified an axiology that is far more consistent with our considered moral judgments than any entailing these counter-intuitive implications.
>
>
>
Carter has numerous other papers discussing the concept in more detail, but “A Plurality of Values” is the most thorough. Other good ones include “[How to solve two addition paradoxes and avoid the Repugnant Conclusion](http://glasgow.academia.edu/AlanCarter/Papers/158176/How_to_solve_two_addition_paradoxes_and_avoid_the_Repugnant_Conclusion),” which more directly engages the Mere Addition Paradox and some of its defenders like [Michael Huemer](http://spot.colorado.edu/~huemer/); "[Scrooge and the Pleasure Witch](http://glasgow.academia.edu/AlanCarter/Papers/661104/Scrooge_and_the_Pleasure_Witch)," which discusses pleasure wizards and equality in more detail; and “[A pre-emptive response to some possible objections to a multidimensional axiology with variable contributory values](http://glasgow.academia.edu/AlanCarter/Papers/158273/A_pre-emptive_response_to_some_possible_objections_to_a_multidimensional_axiology_with_variable_contributory_values),” which is exactly what it says on the tin.
On closer inspection it was not hard to see why Carter had developed theories so close to those of Eliezer and other members of Less Wrong and SIAI communities. In many ways their two tasks are similar. Eliezer and the SIAI are trying to devise a theory of general ethics that cannot be twisted into something horrible by a rules-lawyering Unfriendly AI, while Carter is trying to devise a theory of population ethics that cannot be twisted into something horrible by rules-lawyering humans. The worlds of the Repugnant Conclusion and the Ecstatic Psychopath are just the sort of places a poorly programmed AI with artificially simple values would create.
I was very pleased to see an important Less Wrong concept had a defender in mainstream academia. I was also pleased to see that Carter had not just been content to develop the concept of the Complexity of Value. He was also able to employ in the concept in new way, successfully resolving one of the major quandaries of modern philosophy.
**Footnotes**
1I do not mean to imply Carter developed this theory out of thin air of course. [Value pluralism](http://plato.stanford.edu/entries/value-pluralism/#ValConRatReg) has had many prominent advocates over the years, such as [Isaiah Berlin](http://en.wikipedia.org/wiki/Isaiah_Berlin) and [Judith Jarvis Thomson](http://en.wikipedia.org/wiki/Judith_Jarvis_Thomson).
2[Theodore Sider](http://en.wikipedia.org/wiki/Theodore_Sider) proposed a theory called "[geometrism](https://docs.google.com/viewer?a=v&q=cache:1JN1p5bFPT0J:tedsider.org/papers/theory_X.pdf+derek+parfit+theory+x&hl=en&gl=us&pid=bl&srcid=ADGEESgNDbO4KV0bKbHMkELB_O64FNRdz20r0R3FcmmpthKaTaJBCSiHAFBtEKA2egIqeJ6KXnoCNrVZfnUNhtjrEL-amgKW8mzBTewBhlT0IbbFRtNgPlgU3vGM7TvxK0K3OJphy6LD&sig=AHIEtbSw_UJVTARTkVcfgVDzxX2G-tN65Q)" in 1991 that also focused on diminishing returns, but geometrism is still a monist theory, it had geometric diminishing returns for the people in the scenario, rather than the values creating the people was trying to fulfill.
**Edited -** To remove a reference to Aumann's Agreement Theorem that the commenters convinced me was unnecessary and inaccurate. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.