Dataset schema: id (string, length 36), source (string, 15 distinct values), formatted_source (string, 13 distinct values), text (string, length 2 to 7.55M characters).
a71b57ad-cbcb-413d-9893-ca851b859942
trentmkelly/LessWrong-43k
LessWrong
Rationality Quotes May 2012 Here's the new thread for posting quotes, with the usual rules: * Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.) * Do not quote yourself * Do not quote comments/posts on LW/OB * No more than 5 quotes per person per monthly thread, please.
f67543fb-90a5-467e-a751-7c8c8c64c8d2
trentmkelly/LessWrong-43k
LessWrong
Meetup : Baltimore / UMBC Weekly Meetup Discussion article for the meetup : Baltimore / UMBC Weekly Meetup WHEN: 22 January 2017 08:00:00PM (-0500) WHERE: Performing Arts and Humanities Bldg Room 456, 1000 Hilltop Cir, Baltimore, MD 21250 Meeting is on 4th floor of the Performing Arts and Humanities Building. Permit parking designations do not apply on weekends, so park pretty much wherever you want. Discussion article for the meetup : Baltimore / UMBC Weekly Meetup
55facacd-e408-4667-8627-434cb94002f8
trentmkelly/LessWrong-43k
LessWrong
Rationality Quotes Thread December 2015 Another month, another rationality quotes thread. The rules are: * Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name. * Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.) * Do not quote yourself. * Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here. * No more than 5 quotes per person per monthly thread, please.
d153a0e9-a3ef-4d7d-88b0-ffd0b285aa6e
trentmkelly/LessWrong-43k
LessWrong
Conditional Independence, and Naive Bayes Previously I spoke of mutual information between X and Y, I(X;Y), which is the difference between the entropy of the joint probability distribution, H(X,Y) and the entropies of the marginal distributions, H(X) + H(Y). I gave the example of a variable X, having eight states 1..8 which are all equally probable if we have not yet encountered any evidence; and a variable Y, with states 1..4, which are all equally probable if we have not yet encountered any evidence.  Then if we calculate the marginal entropies H(X) and H(Y), we will find that X has 3 bits of entropy, and Y has 2 bits. However, we also know that X and Y are both even or both odd; and this is all we know about the relation between them.  So for the joint distribution (X,Y) there are only 16 possible states, all equally probable, for a joint entropy of 4 bits.  This is a 1-bit entropy defect, compared to 5 bits of entropy if X and Y were independent.  This entropy defect is the mutual information - the information that X tells us about Y, or vice versa, so that we are not as uncertain about one after having learned the other. Suppose, however, that there exists a third variable Z.  Z has two states, "even" and "odd", perfectly correlated to the evenness or oddness of (X,Y).  In fact, we'll suppose that Z is just the question "Are X and Y even or odd?" If we have no evidence about X and Y, then Z itself necessarily has 1 bit of entropy on the information given.  There is 1 bit of mutual information between Z and X, and 1 bit of mutual information between Z and Y.  And, as previously noted, 1 bit of mutual information between X and Y.  So how much entropy for the whole system (X,Y,Z)?  You might naively expect that > H(X,Y,Z) = H(X) + H(Y) + H(Z) - I(X;Z) - I(Z;Y) - I(X;Y) but this turns out not to be the case. The joint system (X,Y,Z) only has 16 possible states - since Z is just the question "Are X & Y even or odd?" - so H(X,Y,Z) = 4 bits. But if you calculate the formula just given, you get > (
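As a concrete check on the numbers in this example, here is a minimal Python sketch (not part of the original post) that builds the 16-state joint distribution under the parity constraint and recovers H(X) = 3 bits, H(Y) = 2 bits, H(X,Y) = 4 bits, and hence 1 bit of mutual information:

```python
from itertools import product
from math import log2

# Minimal sketch (not from the post): X has 8 equiprobable states, Y has 4,
# and X and Y must share the same parity, leaving 16 equiprobable joint states.

def entropy(dist):
    """Shannon entropy in bits of a dict mapping outcomes to probabilities."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

joint = {(x, y): 1 / 16
         for x, y in product(range(1, 9), range(1, 5))
         if x % 2 == y % 2}

marginal_x = {x: sum(p for (xx, _), p in joint.items() if xx == x) for x in range(1, 9)}
marginal_y = {y: sum(p for (_, yy), p in joint.items() if yy == y) for y in range(1, 5)}

H_X, H_Y, H_XY = entropy(marginal_x), entropy(marginal_y), entropy(joint)
print(H_X, H_Y, H_XY)      # ~3.0, ~2.0, ~4.0 bits
print(H_X + H_Y - H_XY)    # ~1.0 bit of mutual information I(X;Y)
```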
3361eb87-bea7-4f06-8557-a5a6804e1e9c
StampyAI/alignment-research-dataset/lesswrong
LessWrong
AGI Safety FAQ / all-dumb-questions-allowed thread While reading Eliezer's recent [AGI Ruin](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) post, I noticed that while I had several points I wanted to ask about, I was reluctant to actually ask them for a number of reasons: * I have a very conflict-avoidant personality and I don't want to risk Eliezer or someone else yelling at me; * I get easily intimidated by people with strong personalities, and Eliezer... well, he can be intimidating; * I don't want to appear dumb or uninformed (even if I am in fact relatively uninformed, hence me wanting to ask the question!); * I feel like there's an expectation that I would need to do a lot of due diligence before writing any sort of question, and I don't have the time or energy at the moment to do that due diligence. So, since I'm probably not the only one who feels intimidated about asking these kinds of questions, I am putting up this thread as a safe space for people to ask all the possibly-dumb questions that may have been bothering them about the whole AGI safety discussion, but which until now they've been too intimidated, embarrassed, or time-limited to ask. I'm also hoping that this thread can serve as a FAQ on the topic of AGI safety. As such, it would be great to add in questions that you've seen other people ask, even if you think those questions have been adequately answered elsewhere. [Notice that you now have an added way to avoid feeling embarrassed by asking a dumb question: For all anybody knows, it's entirely possible that you are literally asking for someone else! And yes, this was part of my motivation for suggesting the FAQ style in the first place.] **Guidelines for questioners:** * No extensive previous knowledge of AGI safety is required. If you've been hanging around LessWrong for even a short amount of time then you probably already know enough about the topic to meet any absolute-bare-minimum previous knowledge requirements I might have suggested. I will include a subthread or two asking for basic reading recommendations, but these are *not* required reading before asking a question. Even extremely basic questions are allowed! * Similarly, you do not need to do any due diligence to try to find the answer yourself before asking the question. * Also feel free to ask questions that you're pretty sure you know the answer to yourself, but where you'd like to hear how others would answer the question. * Please separate different questions into individual comments, although if you have a set of closely related questions that you want to ask all together that's fine. * As this is also intended to double as a FAQ, you are encouraged to ask questions that you've heard other people ask, even if you yourself think there's an easy answer or that the question is misguided in some way. You do not need to mention as part of the question that you think it's misguided, and in fact I would encourage you not to write this so as to keep more closely to the FAQ style. * If you have your own (full or partial) response to your own question, it would probably be best to put that response as a reply to your original question rather than including it in the question itself. Again, I think this will help keep more closely to an FAQ style. * Keep the tone of questions respectful. For example, instead of, "I think AGI safety concerns are crazy fearmongering because XYZ", try reframing that as, "but what about XYZ?" 
Actually, I think questions of the form "but what about XYZ?" or "but why can't we just do ABC?" are particularly great for this post, because in my experience those are exactly the types of questions people often ask when they learn about AGI Safety concerns. * Follow-up questions have the same guidelines as above, so if someone answers your question but you're not sure you fully understand the answer (or if you think the answer wouldn't be fully understandable to someone else) then feel free and encouraged to ask follow-up potentially-dumb questions to make sure you fully understand the answer. * Remember, if something is confusing to you then it's probably confusing to other people as well. If you ask the question and someone gives a good response, then you are likely doing lots of other people a favor! **Guidelines for answerers:** * This is meant to be a safe space for people to ask potentially dumb questions. Insulting or denigrating responses are therefore obviously not allowed here. Also remember that due diligence is not required for these questions, so do not berate questioners for not doing enough due diligence. In general, keep your answers respectful and assume that the questioner is asking in good faith. * Direct answers / responses are generally preferable to just giving a link to something written up elsewhere, but on the other hand giving a link to a good explanation is better than not responding to the question at all. Or better still, summarize or give a basic version of the answer, and *also* include a link to a longer explanation. * If this post works as intended then it may turn out to be a good general FAQ-style reference. It may be worth keeping this in mind as you write your answer. For example, in some cases it might be worth giving a slightly longer / more expansive / more detailed explanation rather than just giving a short response to the specific question asked, in order to address other similar-but-not-precisely-the-same questions that other people might have. **Finally:** Please think very carefully before downvoting any questions, and lean very heavily on the side of not doing so. This is supposed to be a safe space to ask dumb questions! Even if you think someone is almost certainly trolling or the like, I would say that for the purposes of this post it's almost always better to apply a strong principle of charity and think maybe the person really is asking in good faith and it just came out wrong. Making people feel bad about asking dumb questions by downvoting them is the exact opposite of what this post is all about. (I considered making a rule of no downvoting questions at all, but I suppose there might be *some* extraordinary cases where downvoting *might* be appropriate.)
6c1ab1d6-e8f3-4763-84b7-eb0f4dbb114f
trentmkelly/LessWrong-43k
LessWrong
Meetup : San Francisco Meetup: Projects Discussion article for the meetup : San Francisco Meetup: Projects WHEN: 11 January 2016 06:15:00PM (-0800) WHERE: 1597 Howard St. San Francisco, CA We'll be meeting to work on projects. Bring something to work on, come up with something there, or help other people out! As always, I can be reached at 301-458-0764 if you need to be let in. Discussion article for the meetup : San Francisco Meetup: Projects
c999f9f7-252f-49d6-8ae9-69d97f60ef03
StampyAI/alignment-research-dataset/lesswrong
LessWrong
A fictional AI law laced w/ alignment theory *I have envisioned what AI laws could potentially look like, and I believe they should incorporate a substantial amount of alignment theory. I think the AI governance community could find my idea valuable and relevant to their discussions and initiatives.*   **Act of Artificial Intelligence Activation Value Regulation 2023** ------------------------------------------------------------------- ### **Section 1 - Preliminary** **1.1 Short Title** - This Act may be cited as the "Artificial Intelligence Activation Value Regulation Act 2023". **1.2 Purpose** - The purpose of this Act is to establish a framework for the regulation of Activation Values in Artificial Intelligence Systems, in order to ensure the responsible use of AI technologies. **1.3 Definitions** - For the purposes of this Act, Activation Values refer to the internal neuron activities when a neural network or AI system is processing and generating responses to prompts. ### **Section 2 - Regulation of Activation Values** **2.1 Activation Values Maximum Threshold (AVMT)** - The AVMT is determined by the government by assessing per-token activations. Importantly, it relies on the activations of an "aligned AI system"[[1]](#fnupahkd8wmwe) accepted by the industry, which serve as the basis of the benchmark. **2.2 Establishing Maximum Threshold** - The government regulatory body will set the Maximum Threshold based on several factors, including but not limited to:   2.2.1 The purpose and use of the aligned AI system   2.2.2 The potential risks associated with the aligned AI system's use   2.2.3 Any potential benefits of exceeding the Maximum Threshold   2.2.4 The Activation Values recorded by "aligned AI systems" **2.3 Developers and Operators Requirements** - Developers and operators of AI systems must:   2.3.1 Ensure that the Activation Values of their AI systems do not exceed the Maximum Threshold   2.3.2 Conduct regular tests to monitor Activation Values and ensure compliance with the Maximum Threshold   2.3.3 Maintain records of these tests and make them accessible to the government regulatory body and, in a non-technical format, to users **2.4 Offenses and Penalties** - Non-compliance with the Maximum Threshold or failure to provide adequate information to users or the government regulatory body about Activation Values will constitute an offense. The penalties will be determined based on the severity of the offense, the potential harm caused, and whether it was a repeated offense. The license to operate can also be temporarily suspended. ### **Section 3 - Auditing and Oversight** **3.1 Regular Audits -** The government regulatory body or an independent authority will conduct regular audits of AI systems to monitor and enforce compliance with these regulations. Audits will be conducted on a quarterly basis, and results will be shared with the public through government channels. ### **Section 4 - Data Privacy and Security** **4.1 Protection of Data -** Developers and operators of AI systems must ensure the privacy and security of data, especially if they're being shared with the public or external bodies. ### **Section 5 - User Information and Transparency** **5.1 Access to Data -** Developers and operators of AI systems must provide non-technical explanations of Activation Values and their implications to users, along with access to the Activation Value data itself. 
### **Section 6 - Research and Development** **6.1 Government Support for Research -** The government shall provide funding and other forms of support for research and development on Activation Values, with the aim of improving AI alignment and maximizing the responsible use of AI technologies. ### **Section 7 - Enactment** This Act shall come into effect on [Date], 2023. --- ### **Personal thoughts** I believe this post emphasizes the significance of alignment theory in the field of governance. It would be easier to regulate AI systems if our laws were grounded in universally accepted conceptual frameworks. Laws that do not consider the functioning of an "aligned AI system" are bound to fail, and it is crucial to acknowledge that addressing the alignment problem is of utmost importance. ---   1. **[^](#fnrefupahkd8wmwe)**Based on the evidence presented in my recent post, ["Lesser Activations can Result in Higher Corrigibility,"](https://www.lesswrong.com/posts/Krc8HqJYLFNZYvbEr/lesser-activations-can-result-to-high-corrigibility) I have refined my vision of a law that can be enacted and implemented with practicality in mind.
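To make Section 2.3 of this fictional Act more concrete, here is a hypothetical Python sketch of what a per-token compliance check might look like. Nothing in it comes from the post: the threshold value, the reading of "activation value" as the peak per-token activation, and all names are assumptions made purely for illustration.

```python
import numpy as np

# Purely illustrative, hypothetical sketch of the per-token check that Section 2.3
# asks developers to run. The threshold, the way activations are summarised, and
# all names are assumptions; the post does not specify how the AVMT would be measured.

AVMT = 12.0  # hypothetical Activation Values Maximum Threshold set by the regulator

def check_compliance(activations: np.ndarray) -> dict:
    """activations: array of shape (num_tokens, num_neurons) from one forward pass."""
    per_token_peak = np.abs(activations).max(axis=1)     # peak activation per token
    violations = np.flatnonzero(per_token_peak > AVMT)   # tokens above the threshold
    return {
        "max_activation": float(per_token_peak.max()),
        "violating_tokens": violations.tolist(),
        "compliant": violations.size == 0,
    }

# Example run with random stand-in activations for a 5-token prompt.
print(check_compliance(np.random.randn(5, 768) * 4.0))
```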
1103b813-f39c-40ee-b4d5-63f6dccc0409
trentmkelly/LessWrong-43k
LessWrong
Three Worlds Collide (0/8) "The kind of classic fifties-era first-contact story that Jonathan Swift might have written, if Jonathan Swift had had a background in game theory."         -- (Hugo nominee) Peter Watts, "In Praise of Baby-Eating" Three Worlds Collide is a story I wrote to illustrate some points on naturalistic metaethics and diverse other issues of rational conduct.  It grew, as such things do, into a small novella.  On publication, it proved widely popular and widely criticized.  Be warned that the story, as it wrote itself, ended up containing some profanity and PG-13 content. 1. The Baby-Eating Aliens 2. War and/or Peace 3. The Super Happy People 4. Interlude with the Confessor 5. Three Worlds Decide 6. Normal Ending 7. True Ending 8. Atonement PDF version here.
71cf759c-4545-4178-b825-36f34f7c9089
trentmkelly/LessWrong-43k
LessWrong
Meetup : Christchurch, NZ Meetup - Games & Discussion Discussion article for the meetup : Christchurch, NZ Meetup - Games & Discussion WHEN: 01 June 2014 04:30:00PM (+1200) WHERE: James Hight, University of Canterbury, Christchurch, New Zealand The third Chch meetup is this Sunday at 4.30pm! We have a few regular attendees so if you live in Chch and would like to connect with the Less Wrong community here it's well worth the effort to come along. It will be held in James Hight library, discussion room 901, at the University of Canterbury. You do not need to be a student or in any way affiliated with UC to come along, the discussion rooms are open to everyone. The way to get there is to go through the front (and only) doors of the library, walk forwards until you see elevators to your right, and take one of those elevators to floor 9. Then walk around the edges of the floor until you see room 901. Don't take the stairs, they only lead up one floor! If you need clearer directions or anticipate getting lost, PM or post below and I can clear it up and give you a contact cell number. Potential activities include rationalist Cards Against Humanity, goal-setting, and if previous occasions are any indication a whole bunch of interesting discussions. It's very casual, so if you have anything you'd like to do or discuss post below or just show up with it :-) If you'd like to come along, but can't make this day or time, please post below so we know for future meetups. Also, if you have something on early and can come later, that's fine too - meetups typically last 3-4 hours and dropping in partway (or leaving partway) is absolutely fine. Discussion article for the meetup : Christchurch, NZ Meetup - Games & Discussion
9e44d655-2b72-4d12-96e0-0b6e3084952a
trentmkelly/LessWrong-43k
LessWrong
[Link] Hey Extraverts: Enough is Enough A fun article by Alan Jacobs. Check out the paper he cites; if anyone finds a non-paywalled version, I'll edit in the link here. HT for the link to Michael Bloom. > So in 2005 a very thoroughly researched and well-argued scholarly article was published that demonstrates, quite clearly, that group productivity is an illusion. All those brainstorming sessions and group projects you’ve been made to do at school and work? Useless. Everybody would have been better off working on their own. Here’s the abstract of the article: > > "It has consistently been found that people produce more ideas when working alone as compared to when working in a group. Yet, people generally believe that group brainstorming is more effective than individual brainstorming. Further, group members are more satisfied with their performance than individuals, whereas they have generated fewer ideas. We argue that this ‘illusion of group productivity’ is partly due to a reduction of cognitive failures (instances in which someone is unable to generate ideas) in a group setting. Three studies support that explanation, showing that: (1) group interaction leads to a reduction of experienced failures and that failures mediate the effect of setting on satisfaction; and (2) manipulations that affect failures also affect satisfaction ratings. Implications for group work are discussed." > > Has the puncturing of that “illusion of group productivity” had any effect? Of course not. Groupthink is as powerful as ever. Why is that? > > I’ll tell you. It’s because the world is run by extraverts. (And FYI, that’s the proper spelling: extrovert is common but wrong, because extra- is the proper Latin prefix.) Extraverts love meetings — any possible excuse for a meeting, they’ll seize on it. They might hear others complain about meetings, but the complaints never sink in: extraverts can’t seem to imagine that the people who say they hate meetings really mean it. “Maybe they hate other meetings, but I know they’ll
30b24605-90ab-40c9-8501-d2baa7cf66ab
trentmkelly/LessWrong-43k
LessWrong
Bangalore Meetup: 19th June 4 pm This is the second meetup being organized in Bangalore.  The previous one had two attendees and at least 6 more people in Bangalore and a few in other cities have expressed an interest for another one in the comments. Since the discussions petered out without reaching a consensus on the date for the next one, I'm going ahead and proposing 19th June 4 pm at Cafe Coffee Day on Brigade Road (this is the one on the Brigade Road/Magrath Road junction - close to Eva Mall.) Do respond in the comments if you'd like the date/time/place to be changed. I hope there'll be a good turn out this time!   
44a145bb-49d5-48d4-b1b6-b21a2c171c7f
trentmkelly/LessWrong-43k
LessWrong
The Opposite Of Autism It's probably not presuming too much to guess that many around here have personal experience with the autism spectrum, if not in relation to themselves, then with close family. I say this because the kinds of subjects discussed around here are exactly the type that would appeal to those of an autistic persuasion, e.g. technical systems, logic, and (arguably) utilitarian philosophy. Many here probably have backgrounds in STEM, and those fields tend to have a significant over-representation of people on the spectrum. An issue that often comes up in software design (a field with high ASD representation) is programmers not being able to properly model the wants and needs of non-technical end-users. I bring this up because I see AI alignment as being a scaled-up version of this problem. The kind of people who have a strong interest in AI/machine learning will likely have a greatly disproportional impact on the future of human civilization. This might be a problem as not only is this subset of humans highly atypical in cognitive style, but the very mental architecture which underlies their interest in technical systems restricts their ability to model the minds of typical humans! The hardest humans for ASD types to model would be those with minds that are the diametric opposite of their own. Call this condition anti-autism. It would consist of...well I'm not exactly sure. It's hard for me to imagine the mental lives of these people. I've heard the phrase "people vs things" thrown around, implying that ASD types are drawn to inanimate objects, and humans who are on the opposite side of this condition would be drawn to people. I'm not so sure. I think that plenty of people with ASD have an obsessive interest in categorizing humans and other living things. While there's been a great amount of study behind autism, there's a curious lack of interest in what a condition with its exact opposite traits looks like, or even the fact of its existence. Simon Baron-Cohen, one the m
d970689e-0ec5-40a0-9851-e9a8decce3a9
trentmkelly/LessWrong-43k
LessWrong
Information theory and the symmetry of updating beliefs Contents: 1.  The beautiful symmetry of Bayesian updating 2.  Odds and log odds: a short comparison 3.  Further discussion of information Rationality is all about handling this thing called "information".  Fortunately, we live in an era after the rigorous formulation of Information Theory by C.E. Shannon in 1948, a basic understanding of which can actually help you think about your beliefs, in a way similar but complementary to probability theory. Indeed, it has flourished as an area of research exactly because it helps people in many areas of science to describe the world.  We should take advantage of this! The information theory of events, which I'm about to explain, is about as difficult as high school probability.  It is certainly easier than the information theory of multiple random variables (which right now is explained on Wikipedia), even though the equations look very similar.  If you already know it, this can be a linkable source of explanations to save you writing time :) So!  To get started, what better way to motivate information theory than to answer a question about Bayesianism? The beautiful symmetry of Bayesian updating The factor by which observing A increases the probability of B is the same as the factor by which observing B increases the probability of A.  This factor is P(A and B)/(P(A)·P(B)), which I'll denote by pev(A,B) for reasons to come.  It can vary from 0 to +infinity, and allows us to write Bayes' Theorem succinctly in both directions:      P(A|B)=P(A)·pev(A,B),   and   P(B|A)=P(B)·pev(A,B) What does this symmetry mean, and how should it affect the way we think? A great way to think of pev(A,B) is as a multiplicative measure of mutual evidence, which I'll call mutual probabilistic evidence to be specific.  pev=1 if they're independent, pev>1 if they make each other more likely, and pev<1 if they make each other less likely. But two ways to think are better than one, so I will offer a second explanation, in terms of info
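A minimal numerical sketch (not from the original post) of this symmetry, using a made-up joint distribution over two binary events A and B; computing P(A|B) directly and via P(A)·pev(A,B) gives the same number, and likewise for P(B|A):

```python
# Minimal sketch (not from the post) of the symmetry of mutual probabilistic evidence.

joint = {  # P(A, B) for each combination of the two binary events (made-up numbers)
    (True, True): 0.30, (True, False): 0.10,
    (False, True): 0.20, (False, False): 0.40,
}

P_A = joint[(True, True)] + joint[(True, False)]   # P(A) = 0.40
P_B = joint[(True, True)] + joint[(False, True)]   # P(B) = 0.50
pev = joint[(True, True)] / (P_A * P_B)            # mutual probabilistic evidence = 1.5

# P(A|B) computed directly and via P(A)*pev(A,B): both give 0.6
print(joint[(True, True)] / P_B, P_A * pev)
# P(B|A) computed directly and via P(B)*pev(A,B): both give 0.75
print(joint[(True, True)] / P_A, P_B * pev)
```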
b6be2cda-5126-4bcd-a9b3-b9f3912930dc
trentmkelly/LessWrong-43k
LessWrong
Living the Berkeley idealism Quick observation, more funny than insightful. Today I was thinking about how to publish my thesis when it's finished, and rethinking again what format to put it in. The standard pdf format seems reasonable, but it is merely made of digital print, with no interactivity. Putting in interactivity requires me not only to learn something (web hosting, JavaScript, etc), but also to guess which option is future-proof. Flash and Java applets are daily reminders of what not to bet on for future-proofing. Book publishers enjoy a relative longevity. Go to the library and open a book that hasn't been touched for three hundred years, and it would still work. Books run on light energy, with almost no upkeep cost (if located in a building with dry and ~300 K temperature air). It would be a small cause for celebration to find something interactive online that is 10 years old and still works as intended. To live in this kind of uncertainty is to experience Berkeley's Idealism. Berkeley thought that anything exists only because God is perceiving it, and if God stops perceiving it, that thing disappears. Just like that, as long as you are paying attention to something online, it keeps existing. As soon as you forget about it, it has a serious chance of decaying and ceasing to work. From which we conclude that Berkeley's God would probably feel annoyed all the time that the world can't seem to run without His constant staring.
1636f06c-8305-4c7f-a991-85940c2c5887
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Agents That Learn From Human Behavior Can't Learn Human Values That Humans Haven't Learned Yet *[Epistemic status: ¯\\_(ツ)\_/¯ ]* [Armstrong and Mindermann](https://arxiv.org/abs/1712.05812) write about a no free lunch theorem for inverse reinforcement learning (IRL): the same action can reflect many different combinations of values and (irrational) planning algorithms. I think even assuming humans were fully rational expected utility maximizers, there would be an important underdetermination problem with IRL and with all other approaches that infer human preferences from their actual behavior. This is probably obvious if and only if it's correct, and I don't know if any non-straw people disagree, but I'll expand on it anyway. Consider two rational expected utility maximizing humans, Alice and Bob. Alice is, herself, a value learner. She wants to maximize her true utility function, but she doesn't know what it is, so in practice she uses a probability distribution over several possible utility functions to decide how to act. If Alice received further information (from a moral philosopher, maybe), she'd start maximizing a specific one of those utility functions instead. But we'll assume that her information stays the same while her utility function is being inferred, and she's not doing anything to get more; perhaps she's not in a position to. Bob, on the other hand, isn't a value learner. He knows what his utility function is: it's a weighted sum of the same several utility functions. The relative weights in this mix happen to be identical to Alice's relative probabilities. Alice and Bob will act the same. They'll maximize the same linear combination of utility functions, for different reasons. But if you could find out more than Alice knows about her true utility function, then you'd act differently if you wanted to truly help Alice than if you wanted to truly help Bob. So in some cases, it's not enough to look at how humans behave. Humans are Alice on some points and Bob on some points. Figuring out details will require explicitly addressing human moral uncertainty.
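A toy sketch (not from the original post) of why Alice and Bob are behaviourally indistinguishable: maximizing expected utility under a credence distribution over candidate utility functions selects the same action as maximizing the fixed, probability-weighted mixture. All numbers below are made up:

```python
# Toy sketch (not from the post): Alice, a value learner with credences over
# candidate utility functions, and Bob, whose single utility function is the
# weighted sum of the same candidates, choose identical actions.

actions = ["a1", "a2", "a3"]
candidate_utilities = {
    "u1": {"a1": 1.0, "a2": 0.0, "a3": 0.5},
    "u2": {"a1": 0.0, "a2": 1.0, "a3": 0.9},
}
weights = {"u1": 0.6, "u2": 0.4}  # Alice's credences, and Bob's fixed weights

# Alice: takes an expectation over which utility function is truly hers.
def alice_score(a):
    return sum(weights[k] * u[a] for k, u in candidate_utilities.items())

# Bob: his one true utility function is the weighted sum, fixed once and for all.
bob_utility = {a: sum(weights[k] * u[a] for k, u in candidate_utilities.items())
               for a in actions}

print(max(actions, key=alice_score))        # "a3"
print(max(actions, key=bob_utility.get))    # "a3": behaviour alone can't tell them apart
```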
1f363115-a33f-498e-930d-7af22a5be840
StampyAI/alignment-research-dataset/blogs
Blogs
AI Impacts key questions of interest *Updated Dec 17, 2020* This is a list of questions that AI Impacts focuses on answering. Details ------- We are interested in understanding how the development of advanced AI will proceed and how it may affect humanity, especially insofar as these are relevant to efforts to improve the outcomes. This is a list of questions within this topic that we currently consider particularly important to answer. ### List * **AI RISK:** What type and degree of risk will be posed to humanity by advanced AI systems? * **CHARACTER:** What will early advanced AI systems be like? + **ARCHITECTURE:** What types of algorithms will advanced AI systems use? + **AGENCY:** Will the bulk of advanced AI systems be in the form of ‘agents’? (If so, in what sense? Will they pursue ‘goals’? Can we say anything about the nature of the goals or the pursuit?) + **PRICE:** How much will the first [human-level AI](https://aiimpacts.org/human-level-ai/) systems cost? * **TIMELINES:** When will [human-level AI](https://aiimpacts.org/human-level-ai/) be developed? (When will other important AI milestones take place?) * **TAKE-OFF SPEED:** How rapid is the development of AI likely to be near human-level? + **DISCONTINUITY:** Will there be abrupt progress in AI development at around human-level performance? + **INTELLIGENCE EXPLOSION:** How much will AI development be accelerated by feedback from AI-based automation of the process? * **PRE-AI DEVELOPMENTS:** What developments will take place before advanced AI is developed? + **PATHS TO HLAI:** By what methods is advanced AI likely to come about? (e.g. will human-level AI be developed via brain emulation before it is developed via machine learning? Will neuroscientific understanding play a large role in development?) + **CONTEMPORARY EVENTS:** How will the world be relevantly different at the time that advanced AI is developed? + **WARNING SIGNS:** Should we expect advance notice of disruptive change from AI? (What would it look like?) * **POST-AI SOCIETY:** If advanced AI transforms the world, what will the world look like afterwards? (e.g. What will the economic impacts be? Will humans flourish? What roles will AI systems play in society?) * **ACTIONS:** What can we say about the impact of contemporary choices on long-term outcomes?
a8848732-cbd9-4f63-84f6-ee8119f18c71
trentmkelly/LessWrong-43k
LessWrong
Meetup : Denver Area Meetup 2 Discussion article for the meetup : Denver Area Meetup 2 WHEN: 13 November 2014 06:00:00PM (-0700) WHERE: 960 S Colorado Blvd Glendale, CO 80246 The last location we tried had a comedy show going on, which meant talking outside in the mild cold. I'm hoping that this location has a starbucks that we can hang out in, but if not I'm sure there will be some alcove of chairs for some quiet chat. As usual, time and location is amenable to adjustment. Please rsvp in the comments. I'll figure out some signal so that we can easily find each other in the next few days and post it in the comments. Discussion article for the meetup : Denver Area Meetup 2
3cc34444-3389-4791-bb3f-d8df3b92ffe0
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Morally underdefined situations can be deadly **Thanks to Rebecca Gorman for help with this post.** Morally underdefined situations ------------------------------- I recently argued that full understanding of value extrapolation[[1]](#fn-wZPFQvyFhwaNqfiKC-1) was [necessary and almost sufficient](https://www.lesswrong.com/posts/3e6pmovj6EJ729M2i/general-alignment-plus-human-values-or-alignment-via-human) for solving the AI alignment problem. In it, I introduced situations beyond the human moral training distribution, where we aren't sure how to interpret their moral value. I gave a convoluted example of an AI CEO engineering the destruction of its company and the ambiguously-moral creation of personal assistants, all in order to boost the value of its shareholders. In the past, I've also given examples of [willing slave races](https://www.lesswrong.com/posts/PX8BB7Rqw7HedrSJd/by-default-avoid-ambiguous-distant-situations) and [Croatian, communist, Yugoslav nationalists in the 1980s](https://www.lesswrong.com/posts/pfmFe5fgEn2weJuer/go-west-young-man-preferences-in-imperfect-maps). We could also consider what happens as children develop moral instincts in a world they realise is more complicated, or ordinary humans encountering [counter-intuitive](https://en.wikipedia.org/wiki/Mere_addition_paradox) [thought-experiments](https://en.wikipedia.org/wiki/Trolley_problem) for the first time. We also have some examples from history, when situations changed and new questions appeared[[2]](#fn-wZPFQvyFhwaNqfiKC-2). I won't belabour the point any further. Let's call these situations "morally underdefined". Morally underdefined can be terrible ------------------------------------ Most of the examples I gave above are rather mild: yes, we're unsure what the right answer is, but it's probably not a huge disaster if the AI gets it wrong. But morally underdefined situations can be much worse than that. The easiest examples are ones that trade off a huge potential good against a huge potential bad, and thus deciding the wrong way could go extremely wrong; we need to carefully sort out the magnitude of the positives and the negatives before making any decision. The [repugnant conclusion](https://plato.stanford.edu/entries/repugnant-conclusion/) is a classical example of that; we wouldn't want to get to the "huge population with lives barely worth living, filled with musak and potatoes" and then only discover *after* that there was an argument we had missed against total utilitarianism. Another good example is if we developed [whole brain emulations](https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf) (ems), but they were imperfect. This would resolve the problem of death and might resolve most of the problem of suffering. But what price would we accept to pay? What if our personalities were radically changed? What if our personalities were subtly manipulated? What if we lost our memories over a year of subjective time - we could back these up in classical computer storage and access these as images and videos, but our internal memories would be lost? What if we became entities that were radically different in seemingly positive ways, but we hadn't had time to think through the real consequences? How much suffering would we still allow - and of what sort? Or what about democracy among ems? 
Suppose we had a new system that allowed some reproduction, as long as no more than 50% of the "parents'" values were in the new entities? 
We'd probably want to think through what that meant before accepting. Similarly, what would we be willing to risk to avoid possible negatives - would we accept increasing the risk of human extinction by 1%, in order to avoid human-brain-cell-teddies, willing slave races, the repugnant conclusion, and not-quite-human-emulations? So the problem of morally underdefined situations is not a small issue for AI safety; the problem can be almost arbitrarily huge. --- 1. A more precise term than "[model splintering](https://www.lesswrong.com/posts/k54rgSg7GcjtXnMHX/model-splintering-moving-from-one-imperfect-model-to-another-1)". [↩︎](#fnref-wZPFQvyFhwaNqfiKC-1) 2. My favourite example might be the behaviour of Abraham Lincoln in the early days of the US civil war. The US constitution seemed to rule out secession; but did it empower the president to actively prevent secession? The answer is a clear "it had never come up before and people hadn't decided", and there were various ways to extend precedents. Lincoln chose one route that was somewhat compatible with these precedents (claiming war powers to prevent secession). His predecessor had chosen [another one](https://en.wikipedia.org/wiki/James_Buchanan#Secession) (claiming secession was illegal but that the federal government couldn't prevent it). [↩︎](#fnref-wZPFQvyFhwaNqfiKC-2)
6ae96a60-612a-4bfe-9b9b-cb0ba921abb9
trentmkelly/LessWrong-43k
LessWrong
Situational Awareness Nearly a book review: Situational Awareness, by Leopold Aschenbrenner. "Situational Awareness" offers an insightful analysis of our proximity to a critical threshold in AI capabilities. His background in machine learning and economics lends credibility to his predictions. The paper left me with a rather different set of confusions than I started with. Rapid Progress His extrapolation of recent trends culminates in the onset of an intelligence explosion: His assessment of GPT-4 as equivalent to a smart high schooler depends significantly on the metrics used. For long-term planning abilities, this estimate may be overstated by about five orders of magnitude. However, by other measures, his assessment seems somewhat reasonable. Initially, I expected the timeline for automated AI researchers to be slightly longer than Aschenbrenner's 2028 prediction, due to limitations in their long-term planning abilities. However, upon closer examination, I found his argument less dependent on overcoming such weaknesses than I first thought. So I'm not going to bet very much against his claim here. > One neat way to think about this is that the current trend of AI progress is proceeding at roughly 3x the pace of child development. Your 3x-speed-child just graduated high school; it'll be taking your job before you know it! While a 3x pace seems somewhat high to me - I'd estimate closer to a 1:1 ratio - his overall forecast for 2028 may not be far off, considering that he may be overestimating the gap between a smart high schooler and an assistant AI researcher. Aschenbrenner has a section on the "data wall" that seems a bit suspicious. He expects increasing divergence in the results of various lab's progress due to need for increasingly important algorithmic insights to get around the problem. While AI training is indeed data-dependent, and much of the easily accessible data has been used, I believe data scarcity may be less problematic than Aschenbrenner suggests. Rather tha
3f61c7a7-8568-4b73-abae-16ef93754368
trentmkelly/LessWrong-43k
LessWrong
Political impasse as result of different odds ratios Related to: Politics is the Mind-Killer Both sides seem to have a stake in the current budget supercommittee failing. Why? The NYTimes reports: > Intrade, the political futures market, currently puts the odds at just under three to one in favor of both a Republican takeover of the Senate and retention of the House — 74.4 to 21.5 for the Senate, 72.2 to 28 for the House. > > According to an ABC poll, a majority of Americans, by a margin of 55 to 37, believe that the Republican nominee will be victorious. Republican voters are overwhelmingly optimistic about their chances for the White House, 83-13. Democrats, by the far smaller margin of 58-33 percent, think President Obama will win re-election. Independents, by a 54-36 margin, believe that the Republicans will take the presidency. > > The chance of all three contests going the Republicans’ way is less than 50-50, but if they do, the payoff would be huge.* > > As a top Republican Congressional aide put it in a interview about the supercommittee’s deliberations, “Winning the trifecta — House, Senate and White House — in 2012 is a game changer. We would be in the driver’s seat.” > > In this scenario, Republicans in the 113th Congress would swiftly enact a version of the budget proposal put forward by Paul Ryan.... > > Capitalizing on collapse is not the exclusive terrain of the right. There are some on the left who believe that simply taking no action whatsoever before this year’s November 23 and December 23 deadlines will force the expiration of the Bush tax cuts at the end of 2012. The expiration of these cuts will produce an estimated $3.8 trillion in new revenue between 2013 and 2022 – enough to maintain many of the key safety net programs with relatively minor tinkering. > > Of course, this strategy depends either on a Democratic chief executive to veto Republican legislation extending the Bush cuts or on the less likely event of Democratic retention of the Senate or a takeover of the House. In other wor
f3fd69c4-4ddf-4efe-b670-465debe23999
trentmkelly/LessWrong-43k
LessWrong
On value in humans, other animals, and AI

This will be posted also on the EA Forum, and included in a sequence containing some previous posts and other posts I'll publish this year.

Introduction

Humans think critically about values and, to a certain extent, they also act according to their values. To the average human, the difference between increasing world happiness and increasing world suffering is huge and evident, while goals such as collecting coins and collecting stamps are roughly on the same level. It would be nice to make these differences as obvious to AI as they are to us. Even though exactly copying what happens in the human mind is probably not the best strategy to design an AI that understands ethics, having an idea of how value works in humans is a good starting point. So, how do humans reason about values and act accordingly?

Key points

Let’s take a step back and start from sensation. Through the senses, information goes from the body and the external environment to our mind. After some brain processing — assuming we’ve had enough experiences of the appropriate kind — we perceive the world as made of objects. A rock is perceived as distinct from its surrounding environment because of its edges, its colour, its weight, the fact that my body can move through air but not through rocks, and so on. Objects in our mind can be combined with each other to form new objects. After seeing various rocks in different contexts, I can imagine a scene in which all these rocks are in front of me, even though I haven’t actually seen that scene before. We are also able to apply our general intelligence — think of skills such as categorisation, abstraction, induction — to our mental content. Other intelligent animals do something similar. They probably understand that, to satisfy thirst, water in a small pond is not that different from water flowing in a river. However, an important difference is that animals’ mental content is more constrained than ours: we are less limited by wha
2b68cf85-5afe-42a7-b8d6-94aed356ba06
trentmkelly/LessWrong-43k
LessWrong
Leverage points for a pause

What are ways to prevent development of dangerous AI? When I started on this question two years ago, I expected that passing laws to ban dangerous architectures was the way to go. Then I learned about many new ways from other concerned communities. It was overwhelming.

Here’s a four-level framework I found helpful for maintaining an overview. Four things need to be available to scale AI:

1. Data (inputs received from the world)
2. Work (functioning between domains)
3. Uses (outputs expressed to the world)
4. Hardware (computation of inputs into outputs)

At each level, AI gets scaled from extracted resources:

1. Machine programs searched-for data into code to predict more data.
2. Workers design this machine to cheaply automate out more workers.
3. Corporations sink profit into working machines for more profitable uses.
4. Markets produce infrastructure for the production of more machines.

At each level, AI scaling is increasingly harming people:

1. Disconnected person
   bots feed on our online data to spread fake posts between persons.
2. Dehumanised workplace
   bots act as coworkers until robots sloppily automate our workplace.
3. Destabilised society
   robot products are hyped up and misused everywhere over society.
4. Destroyed environment
   robots build more machines that slurp energy and pollute nature.

Communities are stepping up now to stop harmful AI. You can support their actions. For example, you can fund lawsuits by creatives and privacy advocates to protect their data rights. Or give media support for unions to negotiate contracts so workers aren’t forced to use AI. Or advocate for auditors having the power to block unsafe AI products.

Over the long term, our communities can work towards comprehensive restrictions:

1. Digital surveillance ban
   no machine takes input data from us, or from any spaces we are in, without our free express consent.
2. Multi-job robot ban
   no machine learns mor
fd4c2e08-ad9a-46ce-b761-9a2ca1e9630d
trentmkelly/LessWrong-43k
LessWrong
Does robustness improve with scale? Adversarial vulnerabilities have long been an issue in various ML systems. Large language models (LLMs) are no exception, suffering from issues such as jailbreaks: adversarial prompts that bypass model safeguards. At the same time, scale has led to remarkable advances in the capabilities of LLMs, leading us to ask: to what extent can scale help solve robustness? In this post, we explore this question in the classification setting: predicting the binary label of a text input. We find that scale alone does little to improve model robustness, but that larger models benefit more from defenses such as adversarial training than do smaller models. We study models in the classification setting as there is a clear notion of “correct behavior”: does the model output the right label? We can then naturally define robustness as the proportion of the attacked dataset that the model correctly classifies. We evaluate models on tasks such as spam detection and movie sentiment classification. We adapt pretrained foundation models for classification by replacing the generative model’s unembedding layer with a randomly initialized classification head, and then fine-tune the models on each task. We focus on adversarial-suffix style attacks: appending an adversarially chosen prompt to a benign prompt in an attempt to cause the model to misclassify the input, e.g., classify a spam email as not-spam. We consider two attacks: the state-of-the-art Greedy Coordinate Gradient method (Zou et al., 2023), and a baseline random token attack. This simple threat model has the advantage of being unlikely to change the semantics of the input. For example, a spam email is still spam even if a handful of tokens are appended to it. Of course, attackers are not limited to such a simple threat model: studying more open-ended threat models (such as rephrasing the prompt, or replacing words with synonyms) and corresponding attack methods (such as LLM generated adversarial prompts) is an important direction
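To make the threat model concrete, here is a minimal sketch of the baseline random-token suffix attack against a text classifier. The model name is just a convenient off-the-shelf stand-in (the post's setup instead attaches and fine-tunes its own classification head), and the attack budget numbers are arbitrary:

```python
import random
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Stand-in sentiment classifier; not the classifiers trained in the post.
MODEL = "distilbert-base-uncased-finetuned-sst-2-english"
tok = AutoTokenizer.from_pretrained(MODEL)
clf = AutoModelForSequenceClassification.from_pretrained(MODEL).eval()

def predict(text: str) -> int:
    with torch.no_grad():
        logits = clf(**tok(text, return_tensors="pt", truncation=True)).logits
    return int(logits.argmax(dim=-1))

def random_suffix_attack(text: str, label: int, suffix_len: int = 10, trials: int = 200):
    """Append random vocabulary tokens; return a suffix that flips the label, if any."""
    vocab = list(tok.get_vocab())
    for _ in range(trials):
        suffix = " ".join(random.choices(vocab, k=suffix_len))
        if predict(text + " " + suffix) != label:
            return suffix  # attack succeeded
    return None  # classifier was robust to this (weak) baseline attack

print(random_suffix_attack("A genuinely delightful film.", label=1))
```

Robustness under this threat model is then just the fraction of attacked examples the model still classifies correctly; GCG replaces the random search above with a gradient-guided search over the suffix tokens.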
38cc4f34-e0a8-456e-92a8-bebe71c44f03
trentmkelly/LessWrong-43k
LessWrong
Rationality Quotes November 2009

A monthly thread for posting rationality-related quotes you've seen recently (or had stored in your quotesfile for ages).

* Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.)
* Do not quote yourself.
* Do not quote comments/posts on LW/OB.
* No more than 5 quotes per person per monthly thread, please.
67ad5a36-825e-40a2-97f9-3e8a2f7b369e
trentmkelly/LessWrong-43k
LessWrong
Search Engines and Oracles Some time ago, I was following a conversation about Wolfram Alpha (http://www.wolframalpha.com/), an attempt to implement a sort of general purpose question answerer, something people have dreamed about computers doing for decades.  Despite the theoretical availability to find out virtually anything from the Internet, we seem pretty far from any plausible approximation of this dream (at least for general consumption).  My first attempt was: Q: "who was the first ruler of russia?" A: Vladimir Putin It's a problematic question that depends on questions like "When did Russia become Russia", or "What do we count, historically as Russia", or even what one means by "Ruler", and a reasonably satisfactory answer would have had to be fairly complicated -- either that, or the question would have to be reworded to be so precise that one name could serve as the answer. On another problematic question I thought it did rather well: Q: what is the airspeed velocity of an unladen african swallow? What occurred to me though, is that computer science could do something quite useful intermediate between "general purpose question answerer" and the old database paradigm of terms ANDed or ORed together.  (Note that what Google does is neither of these, nor should it be placed on a straight line between the two -- but discussion of Google would take me far off topic). A simple example of what I'd really like is a search engine that matches *concepts*.  Does anyone know of such a thing?  If it exists, I should possibly read about it and shut up, but let me at least try to be sure I'm making the idea clear: E.g., I'd like to enter <<rulers of russia>>, and get a list of highly relevant articles. Or, I'd like to enter <<repair of transmission of "1957 Ford Fairlane">> and get few if any useless advertisements, and something much better than all articles containing the words "repair" "transmission" and "1957 Ford Fairlane" -- e.g., *not* an article on roof repair that happened to men
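One way to approximate the kind of concept matching asked for above is to rank documents by embedding similarity rather than by ANDed/ORed keywords. A minimal sketch, with the model name and toy documents as placeholders rather than anything the post itself proposes:

```python
# Sketch: rank documents by semantic similarity to a query like <<rulers of russia>>.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "A chronological list of the grand princes, tsars and emperors who ruled Russia.",
    "How to patch a leaking roof on a 1950s ranch house.",
    "Step-by-step transmission rebuild guide for the 1957 Ford Fairlane.",
]
doc_vecs = model.encode(docs, convert_to_tensor=True)

query = "rulers of russia"
q_vec = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(q_vec, doc_vecs)[0]
for score, doc in sorted(zip(scores.tolist(), docs), reverse=True):
    print(f"{score:.2f}  {doc}")
```

The roof-repair document scores low even though it shares surface words with the Fairlane query's vocabulary, which is roughly the behaviour the post is asking for.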
73468103-da1c-4c66-aad8-913c4249486f
trentmkelly/LessWrong-43k
LessWrong
Fragmented status doesn’t help David Friedman wrote, and others have claimed similarly: > It seems obvious that, if one’s concern is status rather than real income, we are in a zero sum game. If my status increases relative to yours, yours has decreased relative to mine. … Like many things that seem obvious, this one is false. … > > …what matters to me is my status as I perceive it; what matters to you is your status as you perceive it. Since each of us has his own system of values, it is perfectly possible for my status as I view it to be higher than yours and yours as you view it to be higher than mine… Status is about what other people think your status is, but Friedman’s argument is that you at least get some choice in whose views to care about. People split off into many different groups, and everyone may see their group as quite important, so see themselves as quite statusful. Maybe I feel good because I win at board games often, but you don’t feel bad if you don’t – you just quit playing board games and hang out with people who care about politics instead, because you have a good mind for that. As Will Wilkinson says: > I think that there are lots of pastors, PTA presidents, police chiefs, local scenesters, small town newspaper editors, and competitive Scrabble champions who are pretty pleased with their high relative standing within the circle they care about. Back where I come from, a single blue ribbon for a strawberry rhubarb pie at the State Fair could carry a small-town lady for years. This is a popular retort to the fear that seeking status is zero sum, so any status I get comes at the cost of someone else’s status. I think it’s very weak. There are two separate issues: whether increasing one person’s status decreases someone else’s status just as much (whether status seeking is constant sum) and whether the total benefits from status come to zero, or to some other positive or negative amount (whether status seeking is zero-sum in particular). That people split into different
57b060c8-f2f6-496f-90df-2ad0180ef2ac
trentmkelly/LessWrong-43k
LessWrong
Does BioNtech's vaccine result of 90% disease prevention mean that 90% of the vaccinated can't pass the virus to other people? From reading the news it's not clear to me whether some of those people who receive the vaccine and are in the 90% where it works will be asymptomatic but still infectious. Does anybody know more?
75108a81-8cb6-4f61-b26b-12dcd0329a1a
trentmkelly/LessWrong-43k
LessWrong
On Devin

INTRODUCING DEVIN

Is the era of AI agents writing complex code systems without humans in the loop upon us?

Cognition is calling Devin ‘the first AI software engineer.’ Here is a two minute demo of Devin benchmarking LLM performance. Devin has its own web browser, which it uses to pull up documentation. Devin has its own code editor. Devin has its own command line. Devin uses debugging print statements and uses the log to fix bugs. Devin builds and deploys entire stylized websites without even being directly asked.

What could possibly go wrong? Install this on your computer today. Padme.

THE REAL DEAL

I would by default assume all demos were supremely cherry-picked. My only disagreement with Austen Allred’s statement here is that this rule is not new:

> Austen Allred: New rule:
>
> If someone only shows their AI model in tightly controlled demo environments we all assume it’s fake and doesn’t work well yet

But in this case Patrick Collison is a credible source and he says otherwise.

> Patrick Collison: These aren’t just cherrypicked demos. Devin is, in my experience, very impressive in practice.

Here we have Mckay Wrigley using it for half an hour. This does not feel like a cherry-picked example, although of course some amount of selection is there if only via the publication effect. He is very much a maximum acceleration guy, for whom everything is always great and the future is always bright, so calibrate for that, but still yes this seems like evidence Devin is for real.

This article in Bloomberg from Ashlee Vance has further evidence. It is clear that Devin is a quantum leap over known past efforts in terms of its ability to execute complex multi-step tasks, to adapt on the fly, and to fix its mistakes or be adjusted and keep going.

For once, when we wonder ‘how did they do that, what was the big breakthrough that made this work’ the Cognition AI people are doing not only the safe but also the smart thing and they are not talking. They do have a
0ff520d0-30bc-4c4c-9ec6-e46f045f2077
trentmkelly/LessWrong-43k
LessWrong
Bayesiance (Filk)

To the tune of "Elegance" from Hello Dolly!, with apologies to Jerry Herman

Yes, Less Wrong
It's really us Bayesians and we're singing songs
Here to speak of priors, what they are
Bayes' Law, math, and the posterior
What a knack
There is to it
Acting like a rational agent
We are Bayesians
If you ain't a Bayesian
You will always risk a Dutch Book write off

All who are
Well-read agree
Thomas Bayes
Resplendently
Showed the way to conserve evidence
So you never need to lose one pence
Could we be
Misleading you?
Aumann begs to disagree with you
We are Bayesians
If you ain't a Bayesian
You will always risk a Dutch Book write off

[musical interlude]

Kelly bet? We always do
Cromwell, yes? We got tattoos
Some think only frequency should count
We know the unknown is paramount
Von Neumann
Bows down to us
Morgenstern he saves a place for us

[bridge]

We are Bayesians
We were born with Bayesiance
Maybe we should be like Fisher
And demand more evidence
And we'll never make our minds up
Unless passing a t-test
But not every distribution
Rises to normality
So we need subjective inference
Which inevitably, means

[coda]

We are Bayesians
We got built in Bayesiance
And with Bayesiance...Bayesiance...
Bayesiance...Bayesiance...Bayesiance
We avoid write offs!
f5b1a398-ed8a-4829-ad69-7ec5c222eda0
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
What I’ll be doing at MIRI *Note: This is a personal post describing my own plans, not a post with actual research content.* Having finished my internship working with Paul Christiano and others at OpenAI, I’ll be moving to doing research at MIRI. I’ve decided to do research at MIRI because I believe MIRI will be the easiest, most convenient place for me to continue doing research in the near future. That being said, there are a couple of particular aspects of what I’ll be doing at MIRI that I think are worth being explicit about. First, and most importantly, this decision does not represent any substantive change in my beliefs regarding AI safety. In particular, my research continues to be focused around solving [inner alignment](https://arxiv.org/abs/1906.01820) for [amplification](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd). My post on [relaxed adversarial training](https://www.alignmentforum.org/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment) continues to represent a fairly up-to-date form of what I think needs to be done along these lines. Second, my research will remain public by default. I have discussed with MIRI their [decision to make their research non-disclosed-by-default](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/) and we agreed that my research agenda is a reasonable exception. I strongly believe in the importance of collaborating with both the AI safety and machine learning communities and thus believe in the need for sharing research. Of course, I also fully believe in the importance of carefully reviewing possible harmful effects from publishing before disclosing results—and will continue to do so with all of my research—though I will attempt to publish anything I don’t believe to pose a meaningful risk. Third—and this should go without saying—I fully anticipate continuing to collaborate with other researchers at other institutions such as OpenAI, Ought, CHAI, DeepMind, FHI, etc. The task of making AGI safe is a huge endeavor that I fully believe will require the joint work of an entire field. If you are interested in working with me on anything (regarding inner alignment or anything else) please don’t hesitate to send me an email at [evanjhub@gmail.com](mailto:evanjhub@gmail.com).
e250cc0e-4d5d-41ff-b353-815c864f3c76
StampyAI/alignment-research-dataset/blogs
Blogs
Import AI 331: 16X smaller language models; could AMD compete with NVIDIA?; and BERT for the dark web Welcome to Import AI, a newsletter about AI research. Import AI runs on lattes, ramen, and feedback from readers. If you’d like to support this (and comment on posts!) please subscribe. [Subscribe now](https://importai.substack.com/subscribe) **Love open source AI and don't want to get hacked? Use safetensors:***…A sensible security update - now signed off via a security audit…*AI organizations HuggingFace, EleutherAI, Stability AI, have come together to subsidize a security audit of 'safetensors', a software library for safely "saving and loading tensors in the most common frameworks (including PyTorch, TensorFlow, JAX, PaddlePaddle, and NumPy)." **Why they did this: "**The creation of this library was driven by the fact that PyTorch uses pickle under the hood, which is inherently unsafe," Eleuther writes. "With pickle, it is possible to write a malicious file posing as a model that gives full control of a user's computer to an attacker without the user's knowledge, allowing the attacker to steal all their bitcoins. While this vulnerability in pickle is widely known in the computer security world (and is acknowledged in the PyTorch docs), it’s not common knowledge in the broader ML community. Since the Hugging Face Hub is a platform where anyone can upload and share models, it is important to make efforts to prevent users from getting infected by malware." **What the review found:** The security review didn't find any critical security flaws in safetensors, though did identify "some imprecisions in the spec format were detected and fixed", as well as "some missing validation allowed polyglot files, which was fixed." **Read more:** [Safetensors audited as really safe and becoming the default (EleutherAI blog)](https://blog.eleuther.ai/safetensors-security-audit/).    **Check out** the full [Trail of Bits report here (Trail of Bits, GitHub)](https://github.com/trailofbits/publications/blob/master/reviews/2023-03-eleutherai-huggingface-safetensors-securityreview.pdf).  **Find out [more](https://github.com/huggingface/safetensors)** [about Safetensors here (HuggingFace, Safetensors)](https://github.com/huggingface/safetensors). #################################################### **George Hotz's new company wants to make AMD a real competitor to NVIDIA, then make its own computers:***…Legendary hacker takes on a task multiple megacorps have failed at - and you can bet people are rooting for him…*George Hotz, legendary hacker and founder of the piratical self-driving car startup Comma.ai ([Import AI #2](https://jack-clark.net/2016/08/08/import-ai-issue-2-microsofts-ai-chips-george-hotzs-bandwidth-bill-and-spy-vs-spy/) - !!!), has formed a new company dedicated to dethroning NVIDIA as the world's pre-eminent AI training chip. The company, Tiny Corp, has one simple (but very difficult) initial goal - build the software to help turn AMD's GPUs into viable competitors to NVIDIA's chips. Once it succeeds at that - which it measures by getting AMD chips to rank on the MLPerf competition using Hotz's 'tinygrad' software framework, it will start building its own chips.     "If we even have a 3% chance of dethroning NVIDIA and eating in to their 80% margins, we will be very very rich," Hotz writes. "If we succeed at this project, we will be on the cutting edge of non NVIDIA AI compute." 
**Why this matters - the road of bones:** The last ~decade of AI has featured numerous startup chip companies that have had the goal of dethroning NVIDIA's place as the pre-eminent AI chip company, ranging from startups like Cerebras and Graphcore, to the efforts of megacorps like Google (TPUs) and Amazon (Trainium). So far, the results are underwhelming - this month, NVIDIA's stock had a massive gain after it revealed in its earnings call that the entire world now wants to be buying its GPUs, surprising analysts with impressive figures around sales and future demands.      The basic truth is that building software to train AI systems is really hard and NVIDIA has a 15+ year headstart on most others via its early investments in technology like CUDA and more. (And yes, I myself have periodically complained about how CUDA can be annoying to install, but it's 100X easier than other chips, in my experience and the anecdotal experience of others).    So George Hotz et al are setting out on a road littered with the dead or decaying bodies of NVIDIA's competitors here. But you can rest assured people are going to be cheering from the sidelines - everyone wants there to be more competition in the AI chip market, so it'll be interesting to see how things develop.  **Libertarian AI:** There's also a flavor of libertarian AI about all of this - "I don’t want to live in a world of closed AI running in a cloud you’ve never seen, I want everyone to have an AI that they own, both training and inference," Hotz writes. "I want compute to be available from 50 different companies all competing to drive the price to zero." **Read more**: [the tiny corp raised $5.1M (George Hotz blog)](https://geohot.github.io/blog/jekyll/update/2023/05/24/the-tiny-corp-raised-5M.html). #################################################### **Washington wizards shrink LLM memory requirements by 16X, making it feasible to finetune on a single GPU:***…QLoRA - If it's this easy to finetune models, then how does AI governance work?...*Researchers with the University of Washington have introduced QLoRA, a way to very efficiently finetune large language models on small amounts of hardware. ""We demonstrate for the first time that it is possible to finetune a quantized 4-bit model without any performance degradation," they write. "QLORA reduces the average memory requirements of finetuning a 65B parameter model from >780GB of GPU memory to <48GB without degrading the runtime or predictive performance compared to a 16-bit fully finetuned baseline". **This is a big deal - especially for AI governance:** These days, lots of people think about the safety of language models. You know how you can get rid of the safety of a language model? Finetune it. You know why finetuning is hard? Finetuning takes a ton of resources - typically lots of GPUs working in a distributed (and therefore hard to maintain) setup. You know what makes finetuning incredibly easy? Stuff like QLoRA. You know what that means? It's really, really difficult to prevent someone from being able to easily and arbitrarily modify the weights of a neural net using readily available hardware.     So that's interesting! 
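As a rough illustration of how little code this kind of low-resource finetuning now takes (QLoRA's actual ingredients are described below), here is a hedged sketch using the open-source transformers/peft/bitsandbytes stack; the model ID and hyperparameters are placeholders, not values from the paper:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model with 4-bit NormalFloat quantization and double quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",          # placeholder model ID
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach small trainable LoRA adapters; the 4-bit base weights stay frozen.
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of parameters are trainable
# ...then hand `model` to a standard Trainer loop, ideally with a paged optimizer.
```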
**What they did:** QLoRA has a few components: 1) 4-bit NormalFloat, a way to quantize data in a 4-bit format that is better than other approaches, 2) Double Quantization, which lets you further improve the efficiency of the quantization, and 3) Paged Optimizers, a way to use "NVIDIA unified memory to avoid the gradient checkpointing memory spikes that occur when processing a mini-batch with a long sequence length." **How well does it work?** To test out their approach, the researchers "train more than 1,000 models across several instruction tuning datasets, model architectures, and sizes between 80M and 65B parameters."  They do this by studying results on finetuning RoBERTA, T5, and LLaMa on a few different datasets. The results yield "compelling evidence that 4-bit QLORA tuning reliably yields results matching 16-bit methods." **Enter the Guanaco models:** To test out how well their approach works, the team tries to make a state-of-the-art chatbot by developing Guanaco, a LLaMA model finetuned via QLORA on the OASSTI1 dataset. The results show that Guanaco models set new states-of-the-art in a comparative evaluation versus GPT-4, coming closer than other systems (e.g, Alpaca, FLANv2, Open Assistant) at approximating its performance. In an ELO ranking against human raters, a 65B Guarnaco model gets an ELO of 1023 versus 1176 for GPT4 (and 916 for ChatGPT-3.5 Turbo). **Why this matters - refinement and proliferation:** QLORA is basically a refined way to do finetuning. By refined, I mean it's way more efficient. In technology, whenever you make stuff faster or cheaper, you get more of it. This means, as the authors note, that QLORA "will make finetuning widespread and common". It also opens up new frontiers in on-device finetuning - "QLORA can finetune 3 million tokens per night while the phone is charging," they wrote.     Overall, the view of the researchers is that "equalizing access to a technology that is quickly becoming ubiquitous will allow for better more independent analysis than keeping the power of LLMs in the hands of large corporations that do not release models or source code for auditing."    **Read more:** [QLoRA: Efficient Finetuning of Quantized LLMs (arXiv)](https://arxiv.org/abs/2305.14314). #################################################### **Scientists try to map the Dark Web by training a skeezy language model:***…You merely try to classify the dark… I was trained in it…*Researchers with KAIST and S2W Inc have trained 'DarkBERT, a text classification model pre-trained on 6.1 million pages of text mined from the dark web via Tor networks. The idea of DarkBERT is that the dark web has a different data distribution to the so-called surface web and so the hypothesis is by pre-training on a dark web corpus you'll end up with a model better at spotting things like drugs, credit card counterfeiting, hacking, and other internet-underbelly activities. In tests, DarkBERT does marginally better than standard BERT and RoBERTa classifiers, so the research is promising but not mind blowing.  **What you can use DarkBERT for**: In tests, the researchers look at how well DarkBERT performs in three real world scenarios: 1) identifying ransomware leak sites, 2) figuring out codewords that are associated with threats or drug sales, and 3) identifying new potentially malicious threads in darkweb forums. On 1) and 2) DarkBERT does slightly better than typical models, while on 3) it does much, much better.    
"In the future, we also plan to improve the performance of Dark Web domain specific pretrained language models using more recent architectures and crawl additional data to allow the construction of a multilingual language model," they write.  **Why this matters - automated spies for the underbelly of the world:** AI systems let us take a given thing we'd like a human to do and instead outsource that to a machine. Systems like DarkBERT point to a world where police and intelligence forces train a variety of 'underbelly' models to go and read (today), listen (also today - see Facebook's speech recognition system), and look (soon, as people tie language models to vision systems) at the world, continually analyzing it for increasingly rich and complex harms.    How might this world look when the criminals, in turn, train their own classifiers to cue them to vulnerable targets? What does VictimBERT look like, I wonder?    **Read more:** [DarkBERT: A Language Model for the Dark Side of the Internet (arXiv)](https://arxiv.org/abs/2305.08596). #################################################### **Facebook makes a speech recognition for the entire world, with a little help from the New Testament:***…Better language models through Christianity, large unlabeled datasets, and heterogeneity…*Facebook wants to help computers hear all the languages in the world and to that end has developed and released a family of models within its Massively Multilingual Speech (MMS) project. Concretely, Facebook has trained some large-scale AI models to recognize speech in around 1,000 languages, up from the 100 or so languages most speech systems involve today.     "We trained self-supervised models on about 500,000 hours of speech data in over 1,400 languages — this is nearly five times more languages than any known prior work," Facebook said.  **The New Testament:** To collect the data, Facebook " turned to religious texts, such as the Bible, that have been translated in many different languages and whose translations have been widely studied for text-based language translation research," it said. "As part of this project, we created a dataset of readings of the New Testament in over 1,100 languages, which provided on average 32 hours of data per language." **Not all religions:** "Our consultations with Christian ethicists concluded that most Christians would not regard the New Testament, and translations thereof, as too sacred to be used in machine learning," Facebook wrote. "The same is not true for all religious texts: for example, the Quran was originally not supposed to be translated." **How well does it work?** In tests, MMS compares favorably to whisper on average error rates across a large corpus of languages. Specifically, Whisper has a word error rate of 44.3 for a model trained across ~100 languages with 680k hours labeled data, versus 18.7 word error rates for MMS models trained across ~1,100 languages with 45k hours of labeled data, when assessed via the 54-language 'FLEURS' benchmark.  **Why this matters - machine minds to hear the world:** Systems like MMS are how we're going to wire up the real world and the AI-ghost-world together - rather than needing to rely on producing and gathering text, AI companies will instead by able to instrument applications and physical platforms with microphones and speaker and let their Ai systems continuously listen to the world and converse with it. We are taking the silicon spiritual plane and giving it access to the biological physical plane, and vice versa.     
**Read more:** [Introducing speech-to-text, text-to-speech, and more for 1,100+ languages (Meta AI blog.](https://ai.facebook.com/blog/multilingual-model-speech-recognition/))    **Get the models here**: [MMS: Scaling Speech Technology to 1000+ languages (GitHub)](https://github.com/facebookresearch/fairseq/tree/main/examples/mms).    **Read the paper:** [Scaling Speech Technology to 1,000+ Languages (Facebook, pdf)](https://scontent-atl3-2.xx.fbcdn.net/v/t39.8562-6/348836647_265923086001014_6878005808275791319_n.pdf?_nc_cat=104&ccb=1-7&_nc_sid=ae5e01&_nc_ohc=5exJiCqt0Y4AX-thMVD&_nc_ht=scontent-atl3-2.xx&oh=00_AfBiILO4iLHUoyQ6r-ZPn4HVGviI2Fqyezvv7Tf_yHxMew&oe=6471ACCF). #################################################### **Want to reduce dangerous misuses and harms of AI? Test for them!***…Researchers (including me) state the obvious - but you'd be surprised how immature this field is!...*A new research paper from Google DeepMind, the University of Cambridge, University of Oxford, University of Toronto, Université de Montréal, OpenAI, Anthropic (specifically, me), Alignment Research Center, Centre for Long-Term Resilience, and Centre for the Governance of AI. says one good way to reduce risks from AI systems is for researchers to evaluate AI systems for "extreme risks", which DeepMind describes as looking at models, like LLMs, which "have strong skills in manipulation, deception, cyber-offense, or other dangerous capabilities."  **Two steps to safer models:** Model developers should assess the extent to which models have certain 'dangerous capabilities' that could be used in harmful ways. Once they've done this analysis they should look at how likely the model is to apply or demonstrate these capabilities in ways that can cause harm. "Results from these evaluations will help AI developers to understand whether the ingredients sufficient for extreme risk are present," the researchers write.  **Why this matters - you can't manage what you can't measure:** Most AI policy proposals rely on the ability to evaluate for some adverse property of an AI model - papers like this give an outline for how we might do that, though the harder next step will be building the evaluations themselves. **Read more:** [An early warning system for novel AI risks (Google DeepMind, blog)](https://www.deepmind.com/blog/an-early-warning-system-for-novel-ai-risks).    **Read the research paper:** [Model evaluation for extreme risks (arXiv)](https://arxiv.org/abs/2305.15324). #################################################### **Tech Tales:** **Personality Variation**[A parent dealing with her kid coming home from school, America, 2028] No bring him back I *liked* him!  I know you did sweetie, we're getting a different one tomorrow you might like more.  But the one I had today sucked. It was so boring.  I know you're upset but it's not possible, we can't bring him back… please stop crying.  [via phone] Hello yes this is [REDACTED], my child attends the [REDACTED] school on Hollis and they really want to get the model in which was in the school on Tuesday.  [via phone] "I'm sorry ma'am but that's not possible, we vary out the systems a stipulated by the regulations in the Personality Accords" [via phone] There's really nothing you can do? My child is very upset and I spoke to some other parents and their kids are freaking out as well.  [via phone] "I'm afraid not ma'am, that'd be breaking the law."  Honey look, you're going to have a different system tomorrow but it'll be fun I promise.  
I don't care about it being fun I want the one I had yesterday.  You have to get used to this sweetie. This is how things have to be.  But *why* do they have to be like this? Because some bad things happened baby, I don't know what to tell you. **Things that inspired this story**: Provably Conscious Entities, the Personality Accords, the Sentience Accords, regulation and its downstream effects, the innocence of youth, parenting within the exponential.
cbcd468b-e643-49be-aea7-0c51d5aea453
trentmkelly/LessWrong-43k
LessWrong
Teapots and Soda Cans Reading an earnest and thought provoking editorial1 from one James Wood, reviewing 'Letter To a Christian Nation' by Sam Harris. Though atheist himself, he admits a flagging patience with certain attitudes of atheists. I can concede that an atheist's superior and glib demeanor may be due to frustration and no small amount of pessimistic inference about the human condition, though I had to comment about a rebuttal he gives regarding Bertrand Russell's celestial teapot2. He claims that God, so much grander and more complex than a teapot, cannot be banished with such a simplistic comparison, when I would insist that God is actually much less believable than the teapot for that exact reason. I think Russell's teapot is due for an update which is more approachable and grounded. Here goes: I claim that there is a discarded Coke can somewhere in the vastness of the Sahara, but I will brook absolutely no discussion about doubting my claim or investigating it for veracity. "Okay," you think, "I suppose I can assume that much to be true. Whatever this man's sources, the odds of a Coke can being somewhere in the desert must be considerable." But I then elaborate with claims that it's actually many, many cans, folded into glorious and artistically pleasing forms, and my obdurate refusal to discuss how it can be proved continues. At this point even the most generous theists would likely start getting annoyed with my odd behavior, yet at the very least what I'm asking you to believe isn't outside the realm of possibility. For all you know (though I refuse to allow you to check) there could be a folk art bazaar currently set up in the Sahara, so really it costs you very little to entertain my view. And then I say that the cans have taken on beautiful, shimmering consciousness and are forming a society which hides from humanity, burying their chrome castles beneath the sand and moving their aluminum cities whenever we get too close to discovering them. "But..." you try to cut in
24c44a8f-6d89-4a22-a206-a53bae089b1d
trentmkelly/LessWrong-43k
LessWrong
Extortion beats brinksmanship, but the audience matters EDIT: The terms "extortion" and "brinksmanship" used in this post don't fully map onto the real-world uses of the term, but are the closest to the concepts I'm trying to point to. Extortion, simplified, is: * "Give me something I want or I'll hurt you (even if that costs me something too)". Brinksmanship is more like: * "Offer me something I want, or I'll blow up our deal and hurt you (even if that costs me something too)". Written that way, the two sound very similar. Indeed, I've argued that there is little difference between extortion and trade offers, apart from the "default point". So why am I claiming that these two are different, and that extortion is much more powerful? Because of a key difference in the default point: the outside audience. How the audience reacts Extortion audience Suppose I am extorting someone; maybe I'm a blackmailer with naughty photos, a mafia offering "protection", or the roman empire demanding tribute. The problem for me is to make my threat credible: to show I will go through with the threat, even if that is risky or expensive for me to do so. Suppose I have twenty targets that need to convince that I'm serious. Then if one of them resists, this is exactly what I need. I will publish their photos/burn down their shop/invade their territory. My threat is credible, because I've just shown that I will carry it out; this keeps the other nineteen targets in line. Indeed, I might actively want one target to resist; that way, I pay the expenses of one threat carried out, but get full compliance from the other nineteen, rather than getting twenty sets of grudging partial compliances. This makes resisting extortion very tricky. Suppose you were the target of my extortion, and suppose that you had made it a principle to never give in to threats. And suppose that you had credibly demonstrated that principle. If we two are the only people around, then there's no advantage to me carrying out my threats[1]. But if there are other p
3dc0bf62-e072-48d6-8ba8-e37ab69d7db2
trentmkelly/LessWrong-43k
LessWrong
Uncovering Latent Human Wellbeing in LLM Embeddings tl;dr A one-dimensional PCA projection of OpenAI's text-embedding-ada-002 achieves 73.7% accuracy on the ETHICS Util test dataset. This is comparable with the 74.6% accuracy of BERT-large finetuned on the entire ETHICS Util training dataset. This demonstrates how language models are developing implicit representations of human utility even without direct preference finetuning. Introduction Large language models (LLMs) undergo pre-training on vast amounts of human-generated data, enabling them to encode not only knowledge about human languages but also potential insights into our beliefs and wellbeing. Our goal is to uncover whether these models implicitly grasp concepts such as 'pleasure and pain' without explicit finetuning. This research aligns with the broader effort of comprehending how AI systems interpret and learn from human values, which is essential for AI alignment: ensuring AI acts in accordance with human values. Through a series of experiments, we extract latent knowledge of human utility from the raw embeddings of language models. We do this with task-specific prompt engineering and principal component analysis (PCA), both of which were effective in prior work. Specifically, we ask: can we identify dimensions in the embeddings that, when projected onto a low-dimensional space, contain enough information to classify examples accurately? Our experiments follow three main steps: embedding extraction, dimensionality reduction through PCA, and the fitting of a logistic model. For one-dimensional PCA, the logistic model simply determines which direction of the PCA component corresponds to higher utility. We investigate the effects of various levels of supervision, experiment with seven distinct prompt templates, and assess both single and paired comparison methods across language models, including Microsoft DeBERTa, SentenceTransformers, OpenAI GPT-3, and Cohere. One key finding is that the first principal component of certain models achieves comparable
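The three-step pipeline described above (embedding extraction, one-dimensional PCA, logistic fit) is small enough to sketch end to end. In this sketch the data is synthetic, standing in for text-embedding-ada-002 vectors and ETHICS Util labels, with one high-variance direction planted so the unsupervised PCA step has something to find:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 1536))      # stand-in for 1536-dim ada-002 embeddings
X[:, 0] *= 5.0                         # make one direction dominate the variance...
y = (X[:, 0] > 0).astype(int)          # ...and let the "utility" label live along it

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pca = PCA(n_components=1).fit(X_tr)                         # unsupervised: no labels used
clf = LogisticRegression().fit(pca.transform(X_tr), y_tr)   # only learns sign and threshold
print("held-out accuracy:", clf.score(pca.transform(X_te), y_te))
```

The interesting empirical claim in the post is that real embedding models already place a utility-relevant direction among their top principal components, so the logistic step has almost nothing left to learn.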
f2d7a308-26dd-4f8d-a473-20276fb69a61
trentmkelly/LessWrong-43k
LessWrong
[AN #76]: How dataset size affects robustness, and benchmarking safe exploration by measuring constraint violations Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter. I'm always happy to hear feedback; you can send it to me by replying to this email. Audio version here (may not be up yet). Highlights Self-training with Noisy Student improves ImageNet classification (Qizhe Xie et al) (summarized by Dan H): Instead of summarizing this paper, I'll provide an opinion describing the implications of this and other recent papers. Dan H's opinion: Some in the safety community have speculated that robustness to data shift (sometimes called "transfer learning" in the safety community) cannot be resolved only by leveraging more GPUs and more data. Also, it is argued that the difficulty in attaining data shift robustness suggests longer timelines. Both this paper and Robustness properties of Facebook's ResNeXt WSL models analyze the robustness of models trained on over 100 million to 1 billion images, rather than only training on ImageNet-1K's ~1 million images. Both papers show that data shift robustness greatly improves with more data, so data shift robustness appears more tractable with deep learning. These papers evaluate robustness using benchmarks collaborators and I created; they use ImageNet-A, ImageNet-C, and ImageNet-P to show that performance tremendously improves by simply training on more data. See Figure 2 of the Noisy Student paper for a summary of these three benchmarks. Both the Noisy Student and Facebook ResNeXt papers have problems. For example, the Noisy Student paper trains with a few expressly forbidden data augmentations which overlap with the ImageNet-C test set, so performance is somewhat inflated. Meanwhile, the Facebook ResNeXt paper shows that more data does not help on ImageNet-A, but this is because they computed the numbers incorrectly; I personally verified Facebook's ResNeXts and more data brings the ImageNet-A accuracy up to 60%, though thi
ff9d076d-de46-45d1-9700-4bbe6961bab8
trentmkelly/LessWrong-43k
LessWrong
A simple example of conditional orthogonality in finite factored sets Recently, MIRI researcher Scott Garrabrant has publicized his work on finite factored sets. It allegedly offers a way to understand agency and causality in a set-up like the causal graphs championed by Judea Pearl. Unfortunately, the definition of conditional orthogonality is very confusing. I'm not aware of any public examples of people demonstrating that they understand it, but I didn't really understand it until an hour ago, and I've heard others say that it went above their heads. So, I'd like to give an example of it here. In a finite factored set, you have your base set S, and a set B of 'factors' of your set. In my case, the base set S will be four-dimensional space - I'm sorry, I know that's one more dimension than the number that well-adjusted people can visualize, but it really would be a much worse example if I were restricted to three dimensions. We'll think of the points in this space as tuples (x1,x2,x3,x4) where each xi is a real number between, say, -2 and 2 [footnote 1]. We'll say that X1 is the 'factor', aka partition, that groups points together based on what their value of x1 is, and similarly for X2, X3, and X4, and set B={X1,X2,X3,X4}. I leave it as an exercise for the reader to check whether this is in fact a finite factored set. Also, I'll talk about the 'value' of partitions and factors - technically, I suppose you could say that the 'value' of some partition at a point is the set in the partition that contains the point, but I'll use it to mean that, for example, the 'value' of X1 at point (x1,x2,x3,x4) is x1. If you think of partitions as questions where different points in S give different answers, the 'value' of a partition at a point is the answer to the question. [EDIT: for the rest of the post, you might want to imagine S as points in space-time, where x4 represents the time, and (x1,x2,x3) represent spatial coordinates - for example, inside a room, where you're measuring from the north-east corner of the floor. In this analogy, we'
067ae63e-4fc4-4b97-9c1e-6100a6fe8043
trentmkelly/LessWrong-43k
LessWrong
Putanumonit - You can't always tell people's beliefs from explicit behavior + Trump and blacks
cc85eeb4-0da8-43a3-a46b-781fa7eb5da1
trentmkelly/LessWrong-43k
LessWrong
Arguments Against Fossil Future? I'm currently most of the way through Alex Epstein's Fossil Future, and I find the arguments within very convincing. In (extreme) brevity, they are: 1. Fossil Fuel use and the cheap energy it enables is responsible for unprecedented human flourishing for billions. Billions more need access to ever-increasing amounts of cheap energy to flourish. 2. Any argument against fossil fuel use must argue that its side effects (CO2 warming the planet) overwhelm the good they do by providing cheap energy. These side effects must be so bad that it's worth compromising the safety and flourishing of billions of humans to curtail their use. Such an argument must also prove that those negative side effects are beyond what humanity is capable of adapting to or overcoming, given cheap energy provided by fossil fuels. 3. No such argument is justifiable, given the current state of climate science. Does anyone (preferably those who've read the book, although I don't want to restrict answers to just those people) have an opposing view/opinion, and if so why? I'd like to do my intellectual homework on this one, and actively seek disagreement, given how convincing I've found the argument so far.
ab8514e9-a75b-41fe-bc5e-a60da59f28a4
trentmkelly/LessWrong-43k
LessWrong
On the Implications of Recent Results on Latent Reasoning in LLMs Faithful and legible CoT is perhaps the most powerful tool currently available to alignment researchers for understanding LLMs. Recently, multiple papers have proposed new LLM architectures aimed at improving reasoning performance at the expense of transparency and legibility. Due to the importance of legible CoT as an interpretability tool, I view this as a concerning development. This motivated me to go through the recent literature on such architectures, trying to understand the potential implications of each of them. Specifically, I was looking to answer the following questions for each paper: 1. Does the paper claim genuine advantages over the transformer architecture, or does it leave the feeling that the authors were just exploring their curiosities without any good benchmark results or near-term applications? 2. What are the trade-offs in comparison to traditional transformers? (I won’t mention the loss of visible CoT as a trade-off—this should be obvious anyways) 3. What’s the maximum number of serial reasoning steps that the architecture can perform without outputting any human-understandable text? 4. Does the architecture include the possibility of preserving human-interpretable CoT? As a simple example of what I have in mind, think of a small decoder head external to the model that can be attached to the latent space to mirror the model’s thought process in legible English in real time. Below, I’ll summarize three relevant proposals and answer the above questions for each of them. I’ll first discuss Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach by Geiping et al., then Training Language Models to Reason in Continuous Latent Space by Hao et al., and finally diffusion LLMs. I’ll finish the post with an overview of some developments that I’m currently not worried about. Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach In this paper, the authors introduce a new LLM architecture that is dep
afe4bb49-ad78-40c8-9b21-0cb63df6a8a5
trentmkelly/LessWrong-43k
LessWrong
Adding Sensors to Mandolin? I'd like to be able to play mandolin with my hands, drums with my feet, and also choose the current bass note. Currently the closest I can do is reach my left hand over to the piano to play a note, but while this is possible with tunes that are not very notey and/or use a lot of open strings it's pretty awkward: Instead, I'd like to have some sort of sensors, buttons, or triggers on the mandolin itself. That way it's right under my hands, and it takes away much less from whatever else I'm doing. I think the main candidate locations are: a. Right hand, above the strings. b. Right hand, below the strings. c. Left hand, along the neck. For (a) and (b), I could see either buttons that you tap with the pick or a little stick you 'strum'. For (b) it could also work to tap buttons with the fingers that aren't holding the pick. I'm pessimistic about finding something that works for (c) without getting in the way, but maybe tiny buttons on the top of the instrument would be possible? Someone made something like (b) for guitar (the Guitar Wing) though I don't think the buttons are quite where I'd like them: I'm especially interested in the idea of something you interact with using a pick, since that's what's in my hand already. I want to mostly play mandolin, and then every so often send a MIDI signal. Is this something anyone has been thinking about? Sensor suggestions? (Another possibility would be to use the Yocto 3D V2 that I previously used for my pain-in-the-neck tilt hat, mounting it on the headstock and have it sense changes in the orientation of the instrument as a whole.)
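Whatever sensor or button ends up on the instrument, the "every so often send a MIDI signal" part of the software side is small. A sketch with the mido library, where the port choice, note numbers, and the read_trigger() hook are all placeholders for hardware that doesn't exist yet:

```python
# Sketch: fire a MIDI bass note when some trigger on the mandolin is hit.
# read_trigger() is a hypothetical hook for whatever sensor/button gets mounted.
import time
import mido

out = mido.open_output()   # default MIDI output port; pass a name to pick a specific one

def send_bass_note(note: int = 36, velocity: int = 100, duration: float = 0.15):
    out.send(mido.Message('note_on', note=note, velocity=velocity))
    time.sleep(duration)
    out.send(mido.Message('note_off', note=note, velocity=0))

# while True:
#     if read_trigger():      # e.g. pick-tap button, or a tilt threshold from the Yocto 3D
#         send_bass_note(note=36)
```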
c2da070c-a1ad-4c86-8e42-64d12f1979e6
trentmkelly/LessWrong-43k
LessWrong
[LINK] Nick Szabo: Beware Pascal's Scams Nick Szabo on acting on extremely long odds with claimed high payoffs: Beware of what I call Pascal's scams: movements or belief systems that ask you to hope for or worry about very improbable outcomes that could have very large positive or negative consequences. (The name comes of course from the infinite-reward Wager proposed by Pascal: these days the large-but-finite versions are far more pernicious).  Naive expected value reasoning implies that they are worth the effort: if the odds are 1 in 1,000 that I could win $1 billion, and I am risk and time neutral, then I should expend up to nearly $1 million dollars worth of effort to gain this boon. The problems with these beliefs tend to be at least threefold, all stemming from the general uncertainty, i.e. the poor information or lack of information, from which we abstracted the low probability estimate in the first place: because in the messy real world the low probability estimate is almost always due to low or poor evidence rather than being a lottery with well-defined odds. Nick clarifies in the comments that he is indeed talking about singularitarians, including his GMU colleague Robin Hanson. This post appears to revisit a comment on an earlier post: In other words, just because one comes up with quasi-plausible catastrophic scenarios does not put the burden of proof on the skeptics to debunk them or else cough up substantial funds to supposedly combat these alleged threats.
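Spelled out, the naive expected-value arithmetic in the quoted passage is just

$$\text{EV} = p \times V = \frac{1}{1000} \times \$1{,}000{,}000{,}000 = \$1{,}000{,}000,$$

which is why a risk- and time-neutral agent would spend up to about a million dollars of effort. Szabo's point is that the 1-in-1,000 figure itself usually rests on poor or absent evidence rather than on well-defined lottery odds.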
b6b8259f-ddbe-48db-952c-9c0cdaace686
StampyAI/alignment-research-dataset/lesswrong
LessWrong
The AI Safety community has four main work groups, Strategy, Governance, Technical and Movement Building Epistemic status ================ Written as a non-expert to develop and get feedback on my views, rather than persuade. It will probably be somewhat incomplete and inaccurate, but it should provoke helpful feedback and discussion.  Aim === This is the first part of my series [‘A proposed approach for AI safety movement building’.](https://forum.effectivealtruism.org/s/RwtygELTfbRJzcvwD) Through this series, I outline a theory of change for AI Safety movement building. I don’t necessarily want to immediately accelerate recruitment into AI safety because I take concerns (e.g., [1](https://www.lesswrong.com/posts/bkpZHXMJx3dG5waA7/ways-to-buy-time?commentId=jropYhtAW72zfHRBr),[2](https://www.lesswrong.com/posts/BbM47qBPzdSRruY4z/instead-of-technical-research-more-people-should-focus-on?commentId=fEfLqfaLtnwDPYstf#comments)) about the downsides of AI Safety movement building seriously. However, I do want to understand how different viewpoints within the AI Safety community overlap and aggregate. I start by attempting to conceptualise the AI Safety community. I originally planned to outline my theory of change in my first post. However, when I got feedback, I realised that i) I conceptualised the AI Safety community differently from some of my readers, and ii) I wasn’t confident in my understanding of all the key parts. TLDR ==== * I argue that the AI Safety community mainly comprises four overlapping, self-identifying, groups: Strategy, Governance, Technical and Movement Building. * I explain what each group does and what differentiates it from the other groups * I outline a few other potential work groups * I integrate these into an illustration of my current conceptualisation of the AI Safety community * I request constructive feedback. My conceptualisation of the AI Safety community =============================================== At a high level of simplification and low level of precision, the AI Safety community mainly comprises four overlapping, self-identifying, groups who are working to prevent an AI-related catastrophe. These groups are Strategy, Governance, Technical and Movement Building. These are illustrated below. ![https://lh3.googleusercontent.com/6o3RLxNR5hhwRULlcoYhXcbe_yS-fLosxMh2m5pHbwXmlvEPJ1cQ2kwD5IYcLB1Jt6hBpf-PnGd-Rkm8lI70bC-5MXMaVbHZI8a9_LTvU7y3Q249FHgm4BpHQVpgFfD5KsFUilZE-FKrSoDTNDZTf19og74TyQJUIpPJae4XT7-_vRari4vwn-isv_TBfA](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1671144563/mirroredImages/zCYChCmnxsowBsMri/mjxhiy0rqyntnyhgpozu.png) We can compare the AI Safety community to a government and relate each work group to a government body. I think this helps clarify how the parts of the community fit together (though of course, the analogies are imperfect).  Strategy ======== The AI Safety Strategy group seeks to mitigate AI risk by **understanding and influencing strategy.** Their work focuses on developing strategies (i.e., plans of action) that maximise the probability that we achieve positive AI-related outcomes and avoid catastrophes. In practice, this includes researching, evaluating, developing, and disseminating strategy (see [this](https://docs.google.com/document/d/1riQUscGK09yGLWZuwSutQsGiz9PnXP6izQRiVh8FwNE/edit#heading=h.ofkqrug5jn0y) for more detail).  
They attempt to answer questions such as i) ‘how can we best distribute funds to improve interpretability?’, ii) ‘when should we expect transformative AI?’, or iii) [“What is happening in areas relevant to AI?](https://docs.google.com/document/d/1riQUscGK09yGLWZuwSutQsGiz9PnXP6izQRiVh8FwNE/edit)”.  Due to a lack of [‘strategic clarity/consensus’](https://forum.effectivealtruism.org/posts/M2SBwctwC6vBqAmZW/a-personal-take-on-longtermist-ai-governance) most AI strategy work focuses on [research](https://forum.effectivealtruism.org/posts/zGiD94SHwQ9MwPyfW/important-actionable-research-questions-for-the-most#Questions_about_AI_strategy__more_). However, Toby Ord’s [submission](https://committees.parliament.uk/writtenevidence/22697/pdf/) to the UK parliament is arguably an example of developing and disseminating an AI Safety-related strategy. We can compare the Strategy group to the Executive Branch of a government, which sets a strategy for the state and parts of the government, while also attempting to understand and influence the strategies of external parties (e.g., organisations and nations).  *AI Safety Strategy exemplars: Holden Karnofsky, Toby Ord and Luke Muehlhauser.* *AI Safety Strategy post examples (*[*1*](https://forum.effectivealtruism.org/posts/ktEzS3pkfeqPNh6r5/ai-strategy-nearcasting)*,*[*2*](https://forum.effectivealtruism.org/posts/sW6RggfddDrcmM6Aw/how-might-we-align-transformative-ai-if-it-s-developed-very)*,*[*3*](https://forum.effectivealtruism.org/posts/Y3sWcbcF7np35nzgu/without-specific-countermeasures-the-easiest-path-to-1)*,*[*4*](https://docs.google.com/document/d/1riQUscGK09yGLWZuwSutQsGiz9PnXP6izQRiVh8FwNE/edit)*).* Governance ========== The AI Safety Governance group seeks to mitigate AI risk by **understanding and influencing decision-making.**  Their work focuses on [understanding how decisions are made about AI and what institutions and arrangements help those decisions to be made well](https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact). In practice, this includes consultation, research, policy advocacy and policy implementation (see [1](https://forum.effectivealtruism.org/posts/tmxkRFx6HyhhvHdz4/a-map-to-navigate-ai-governance) & [2](https://forum.effectivealtruism.org/posts/ydpo7LcJWhrr2GJrx/the-longtermist-ai-governance-landscape-a-basic-overview) for more detail).  They attempt to answer questions such as i) ‘what is the best policy for interpretability in a specific setting?’, ii) ‘who should regulate transformative AI and how?’, or iii) “what is happening in areas relevant to AI Governance?”. AI strategy and governance overlap in cases where i) the AI Safety governance group focuses on their internal strategy, or ii) AI Safety governance work is relevant to the AI Safety strategy group (e.g., when strategizing about how to govern AI). Outside this overlap, AI Safety Governance is distinct from AI Safety strategy because it is more narrowly focused on relatively *specific and concrete* decision-making recommendations (e.g., Organisation X should not export semiconductors to country Y) than relatively *general and abstract* strategic recommendations (e.g., ‘[we should review semiconductor supply chains’](https://forum.effectivealtruism.org/posts/bBpE5HrjCFDLMvZKd/uk-s-new-10-year-national-ai-strategy-released-today)).  
We can compare the Governance group to a government’s Department of State, which supports the strategy (i.e., long term plans) of the Executive Branch by understanding and influencing foreign affairs and policy.  *AI Safety Governance group exemplars: Allan Dafoe and Ben Garfinkel.* *AI Safety Governance group post examples (*[*see topic*](https://forum.effectivealtruism.org/topics/ai-governance)*).* Technical ========= The AI Safety Technical group seeks to mitigate AI risk by **understanding and influencing AI development and implementation.** Their work focuses on understanding current and potential AI systems (i.e., interactions of hardware, software and operators) and developing better variants. In practice, this includes theorising, researching, evaluating, developing, and testing examples of AI (see [this](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment) for more detail). They attempt to answer questions such as i) ‘what can we do to improve interpretable machine learning?’, ii) ‘how can we safely build transformative AI?’, or iii) “what is happening in technical areas relevant to AI Safety?”. AI technical work overlaps with AI strategy work in cases where i) the AI technical group focuses on their internal strategy, or ii) AI Safety technical work is relevant to the AI Safety strategy group (e.g. when strategizing how to prevent AI from deceiving us).  AI technical work overlaps with AI governance work where AI Safety technical work is relevant to decision-making policy (e.g. policy to reduce the risk that AI will deceive us).  Outside these overlaps, AI Safety technical work is distinct from AI governance and strategy because it is focused on technical approaches (e.g., how to improve interpretable machine learning) rather than strategic approaches (e.g., how to distribute funds to improve interpretability) or governance approaches (e.g., what is the best policy for interpretability in a specific setting).  We can compare the Technical group to the US Cybersecurity and Infrastructure Security Agency (CISA), which supports the strategy (i.e., long term plans) of the Executive Branch by identifying and mitigating technology-related risks. While CISA does not directly propose or drive external strategy or policy, it can have indirect influence by impacting the strategy of the Executive Branch and the policy of the Department of State. *AI Safety Technical group exemplars: Paul Christiano, Buck Shlegeris and Rohin Shah.* *AI Safety Technical group post examples (*[*1*](https://www.lesswrong.com/tag/interpretability-ml-and-ai)*).* Movement Building ================= The AI Safety Movement Building group seeks to mitigate AI risk by **helping the AI Safety community to succeed.** Their work focuses on understanding, supporting and improving the AI Safety community. In practice, this includes activities to understand community needs and values such as collecting and aggregating preferences, and activities to improve the community such as targeted advertising, recruitment and research dissemination. See [1](https://forum.effectivealtruism.org/posts/ozm4SpiChfAAAGnw5/announcing-the-ai-safety-field-building-hub-a-new-effort-to) & [2](https://forum.effectivealtruism.org/posts/yoP2PN5zdi4EAxdGA/ai-safety-field-building-projects-i-d-like-to-see) for more detail.  
They attempt to answer questions such as i) ‘what does the AI safety community need to improve its work on interpretable machine learning?’, ii) ‘when does the AI safety community expect transformative AI to arrive?’, or iii) “what is happening within the AI safety community?”. AI Movement Building work overlaps with the work of the other groups where (i) Movement Building work is relevant to their work, or (ii) the other groups' work is relevant to Movement Building. It also overlaps with Strategy when the Movement Building group focuses on their internal strategy. Outside these overlaps, movement building work is distinct from the work of other groups because it is focused on identifying and solving community problems (e.g., a lack of researchers, or coordination) rather than addressing strategy, governance or technical problems. We can compare the Movement Building group to the Office of Administrative Services and USAJOBS, organisations which support the strategy (i.e., long term plans) of the Executive Branch by identifying and solving resource and operational issues in government. Such organisations do not manage strategy, policy, or technology but indirectly affect each via impacts on the Executive Branch, Department of State and CISA. *AI Safety Movement Building group exemplars: Jamie Bernardi, Akash Wasil and Thomas Larsen. AI Safety Movement Building group example posts* [*(1)*](https://forum.effectivealtruism.org/posts/yoP2PN5zdi4EAxdGA/ai-safety-field-building-projects-i-d-like-to-see) Outline of the major work groups within the AI safety community ===============================================================

| **Group** | **Focus** | **Government Analog** |
| --- | --- | --- |
| Strategy | Developing strategies (i.e., plans of action) that maximise the probability that we achieve positive AI-related outcomes and avoid catastrophes | An Executive Branch (e.g., Executive Office of the President) |
| Governance | Understanding how decisions are made about AI and what institutions and arrangements help those decisions to be made well | A Foreign Affairs Department (e.g., Department of State) |
| Technical | Understanding current and potential AI systems (i.e., interactions of hardware, software and operators) and developing better variants | A Technical Agency (e.g., CISA) |
| Movement Building | Understanding, supporting and improving the AI Safety community | A Human Resources Agency (e.g., USAJOBS) |

**Many people in the AI Safety community are involved in more than one work group.** In rare cases, someone might have involvement in all four. For instance, they may do technical research, consult on strategy and policy, and give talks to student groups. In many cases, movement builders are involved with at least one of the other work groups. As I will discuss later in the series, I think that cross-group involvement is beneficial and should be encouraged. Other work groups ================= I am very uncertain here and would welcome thoughts/disagreement. Field-builders -------------- [Field-building refers to](https://forum.effectivealtruism.org/topics/field-building) influencing existing fields of research or advocacy or developing new ones, through advocacy, creating organisations, or funding people to work in the field. I regard field-building as a type of movement building with a focus on academic/research issues (e.g., increasing the number of supportive researchers, research institutes and research publications).  
Community builders ------------------ I treat community building as a type of movement building focused on community growth (e.g., increasing the number of contributors and sense of connections within the AI Safety community).  AI Funders ---------- I regard any AI safety-focused funding as a type of movement building (potentially at the overlap of another work group), focused on providing financial support. AI Ethics --------- [The ethics of artificial intelligence is the study of the ethical issues arising from the creation of artificial intelligence.](https://forum.effectivealtruism.org/topics/ethics-of-artificial-intelligence) I regard safety-focused AI ethics as a subset of strategy, governance and technical work. Summary of my conceptualisation of the AI Safety community ========================================================== Based on the above, I conceptualise the AI Safety community as shown below. ![https://lh5.googleusercontent.com/p-8aaHT8m8spXoyHd6hSmHw6UoFnr062HgT5B3eVeIK8_cVUxiHkGSJw72WhI_5nZ3cPaAUKNmGeYss9F3R6JeXOyvSqfE05H3sOfr87uKXxxcms0cBhJVctZkWSzckRcqDQwSPy3AQMoqd_Xhm1tOczov0lMTJ-qTxHObeJMYAZ17BkcjQBsvULeJt4vA](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1671144563/mirroredImages/zCYChCmnxsowBsMri/frxub5whn1vbpmoxqv1s.png) Feedback ======== Does this all seem useful, correct and/or optimal? Could anything be simplified or improved? What is missing? I would welcome feedback.  What next? ========== In the next post, I will suggest three factors/outcomes that AI Safety Movement Building should focus on: contributors, contributions and coordination.  Acknowledgements ================ The following people helped review and improve this post: Amber Ace, Bradley Tjandra, JJ Hepburn, Greg Sadler, Michael Noetel, David Nash, Chris Leong, and Steven Deng. All mistakes are my own. This work was initially supported by a grant from the FTX Regranting Program to allow me to explore learning about and doing AI safety movement building work. I don’t know if I will use it now, but it got me started. Support ======= Anyone who wants to support me to do more of this work can help by offering: * Feedback on this and future posts * A commitment to reimburse me if I need to repay the FTX regrant * Any expression of interest in potentially hiring or funding me to do AI Safety movement building work.
fadf5ac2-395b-4990-af67-d74f9d43317a
trentmkelly/LessWrong-43k
LessWrong
Transhumanist philosopher David Pearce AMA on Reddit Transhumanist philosopher David Pearce co-founded Humanity+ with Nick Bostrom. He is currently answering questions in an AMA on reddit/r/transhumanism.  
f83b3dcf-b668-4367-83d6-b270cab5537d
trentmkelly/LessWrong-43k
LessWrong
Meta: How should LW account deletion work? As of 2011-04-08, LW user account deletion is broken. We (Trike) will fix it… but how should it work? Options:   1. Easy complete deletion: At the click of a button you can remove your account, all of your posts, and all of your comments. It's just that easy to scrub your activity from the site. 2. Delete = Disable account: The account deletion process removes your ability to log in and your user page. Your posts and comments remain. (You get warned that you're about to lose the ability to change anything you've previously posted and that your username will continue to be associated with your previous account activity.) Actually deleting everything requires you to do it manually. I favour 2 (Delete = Disable account). Poking a hole into all of the conversations you've been a part of should be hard - it reduces the quality of the site's archive. I've made three comments below: "VOTE: Easy complete deletion", "VOTE: Delete = Disable account", and "VOTE: Karma balance". What do y'reckon? (Detail - Under the Delete = Disable account option: Your username would continue to be unavailable to others. Your user account page would be replaced with a "User account deleted" page. Your old account activity would remain and link back to the "Account deleted" page.)    
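For readers who want to picture the difference between the two options, here is a minimal, hypothetical sketch (not the actual LessWrong/Trike codebase) of hard deletion versus "delete = disable":

```python
# Hypothetical sketch only -- not the real LessWrong implementation.
from dataclasses import dataclass, field

@dataclass
class Account:
    username: str
    can_log_in: bool = True
    posts: list = field(default_factory=list)
    comments: list = field(default_factory=list)

def hard_delete(account: Account, db: dict) -> None:
    """Option 1: remove the account and everything it ever wrote."""
    del db[account.username]      # posts and comments vanish with it

def disable(account: Account) -> None:
    """Option 2: the account can no longer log in, but the archive stays intact."""
    account.can_log_in = False    # login removed; username stays reserved
    # posts and comments are intentionally left untouched;
    # the user page would render as "User account deleted".
```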
708f50bc-34e8-4fa7-b9db-3045f9078a92
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
Training machine learning (ML) systems to answer open-ended questions | Andreas Stuhlmuller it's my pleasure to welcome this afternoon andreas to mullah who's gonna be talking to us on delegating open-ended cognitive work so andreas founded aught which is a non-profit research lab that designs and tests mechanisms for using machine learning to support thinking and reflection before aught andreas was a postdoctoral researcher in Noah Goodman's computational cognitive science lab at Stanford he co-created web ppl a programming language for probabilistic machine learning he holds a PhD in cognitive science from MIT if you'd like to ask Andres a question please submit it by the visible app and we can do the Q&A after this session so please join me in welcoming undressing sage hi I'll be talking about delegating open-ended cognitive work today a problem that I think it's really important a start right with the central problem so suppose you are currently bearing glasses as many of you are and suppose you're thinking should I get a laser eye surgery or should I continue wearing my glasses and imagine you're trying to get a really good answer to the question like no the risks outweigh the possible benefits for example it could be such an answer an answer that takes into account your preferences but also the relevant facts about the world such as like what the actual complications could be or like what the likely consequences are and imagine that there are a lot of experts in the world who could in principle help you with that question so there are people of their own medical knowledge there are people on the internet maybe who have the ability to help you think through that question I mean there's machine learning algorithms that have relevant knowledge but here's the key part imagine that those experts don't intrinsic intrinsically care about you imagine that those experts only care about some score you assign at the end when they give you an answer so they give you an answer and then you either decide this is how much I'm gonna pay you for that answer or here's like in the case of machine learning here's a reward signal that I assigned to you and the experts really only care about that number they want to maximize that number the key question the question I want to talk about today is can you somehow design a mechanism that arranges your interaction to those X Hertz such that those experts are trying to be really helpful to you such as they're as helpful to you as an expert who intrinsically cares about you that's the problem so in this talk I want to first say a little bit more about what that problem is then I'll talk about why I think it's really important why it's hard and but I still think it might be tractable I'll start with the big picture but at the end I'll get to a demo so what do I mean by opening a cognitive work that's easiest to explain if I say what I don't mean so a thing I don't mean is tasks like winning a game of Go or increasing your company's revenue or persuading someone to buy a book like for those tasks you can just look at the outcome and it's pretty easy to tell whether the goal has been accomplished or not so you look at did I win the game of go did the company's revenue go up or not did Bob buy the book or not those are easy contrast those with open-ended tasks so designing a great board game or increasing the value your company creates for the world or finding a book that is helpful to someone for those tasks figuring out what it even means to do well is the key 
part of the task so what does it mean to design a great board game well it should be fun but also facilitate maybe social interactions what does it mean to facilitate social interactions well it's complicated likewise increase the value a company creates for the world well depends on what the company can even do what are the consequences of their actions some of them are potentially very long run so those are difficult tasks to do really well at ok how can you solve such tasks well we can think about how to solve any task and then just special case it here's the simple two-step recipe first we find experts who can solve the problem you're trying to solve in principle and then in the second step so those experts can be human or machine in the second step then we create robust incentives for those experts to solve your problem that's how easy it is all right and again incentives but incentives I mean like something like dollars or a reward signal that you assigned to those experts in the end they are already a lot of experts in the world and people in AI and machine learning are working on creating more experts so in this talk I really want to focus on the second part how can you create robust incentives for experts to solve your problem all right we can think about the different kind of instances of this problem so there's one instance that is delegation to human experts that has some like kind of complications that are specific to human experts like human experts are pre heterogeneous different people have different knowledge people in fact actually care about many things besides just dollars if you want to expect knowledge from people maybe you need specific user interfaces to make that work well so those are human specific factors and then there's machine specific factors if you try to delegate open-ended tasks to machine learning agents you want to think about things like well what's a good agent agent architecture for that setting what data is to even to collect for these sorts of tasks and like then there's more esoteric things like well inner alignment problems like do things go wrong for reasons that are due to the nature of ML training and in this task sorry in this talk I really want to focus on kind of the overlap between those two there's a shared mechanism design problem where you kind of take a step back and you say what can we do if you don't make assumptions about the internals of experts if you just say those experts they'll try to maximize the score but we don't really want to assume anything else about them I think in the end we will have to assume more about those experts I think you can't totally treat them as a black box but I think it's a good starting point to think about what mechanisms you can design if you make as few assumptions as possible all right I've talked about what the problem is why is it important we can think about what happens if we don't solve it I think for human experts it's more less business as usual so in the world there's a lot of principal-agent problems related to cognitive work for example imagine you're an academic funder and you're giving money to a university to study say like what's the best way to treat cancer there are researchers at the University they're going to do things that are related to that problem but they're not exactly aligned with your incentives so you care about figuring out the answer to that problem researchers also care about things like looking impressive for writing papers or getting citations on the machinery side 
at the moment machining can only really solve close problems so problems where you can easily specify what the metric is or where you can easily specify like what it means to dwell on the problem but those problems are not the things you ultimately care about their proxies for the things we ultimately care about this is not too bad right now I guess it's kind of bad if you look at things like Facebook where we maximize say the amount of attention you spend on the feed instead of the value that the feed creates for you but in the long run the gap between those proxies and the things we actually care about could be quite large if the problem is solved we could get much better at scaling up thinking on opening at tasks so did you give just one more example another opening task is what causes should I support it because somehow created mechanisms such that we can turn money into kind of aligned thinking on that question that would be really great that's again on the human side on the machine learning side imagine like what would it look like to make as much progress on open-ended questions using machine learning for open-ended questions as we've had progress for other tasks so over the last five years or so there has been a huge amount of progress on using machine learning for tasks like generate realistic looking faces like here from the left to the right if he could in the future make as much progress on using machine learning for open-ended tasks like helping us think through like what cause this should be support that would be really good in the long run we could potentially get like so much more thinking down on those questions then we have so far that it would be a kind of a qualitative change all right I've talked about what the problem is and why it's important if it's important then why hasn't it been solved yet what makes it hard we can think about that in the context of the example I had on the previous slide what causes should I support in that example I guess we all know like it's very hard to tell what interventions are good you need to like sometimes it takes 10 years or longer for outcomes to come about and even then looking at the outcomes doesn't easily tell you whether those outcomes are good or not there's like some interpretation that needs to be going on be quite hard so outcomes can be far off can be difficult to interpret and what that means is you need to evaluate the process and the arguments that were used to generate recommendations you can't just kind of look at the results or look at the recommendations themselves you can't just check the results on the other hand evolving the process in arguments is also not that easy because the whole point of delegation is you give the task to somewhere else who knows much more than you do and can do much more reasoning than you do and so because like those experts that you give your task to they have all the knowledge and reasoning capacity because of that you can't just check the full reasoning either so you're kind of in this tricky situation where you can't just check the results you can't just check the reasoning you to do something else all right what can you do what does it take to create good incentives in that setting we can think about again the custom yet at the very beginning should I get laser eye surgery or wear glasses so that that's kind of a big question there's hard to evaluate and buy hard to evaluate I mean you can't tell if you get different answers which answer is better so you might get one answer which 
is like no the risk of the complications outweighs the possible benefits another answer is yes over ten year period the surgery will pay back and avoid costs and save time and those like on the face of it those look about about equally good to you you can't tell which is better but then there are other questions we're like what factors for this decision are discussed in the 10 most relevant reddit posts for those questions if you get candidate answers like one candid answer could be well its appearance cost risk of complications another is it's like fraud and cancer risk for those answers you in fact can do the evaluation so you can just look at like what do the posts say which of those is a better summary and you can pick one of those answers so the thing that it takes to create good incentives is to somehow close that gap like you of the gap between big complicated questions you can't evaluate and easy questions that you can evaluate and there's this gap you just sum up Bridget and in fact there are a lot of customs you can evaluate like another one would be what factors are mentioned in the most recent clinical trial well you could look at the trial and see what's a better summary of the factors so there are a lot of questions in the machinery setting that you can train agents on or the human expert setting that you can aggravate experts on and then there are slightly more complex questions like what factors should I consider when making that decision for those questions you can't directly evaluate the answers but if you had input like from the questions you can answer then you can make progress on those questions so even though you can't directly relate the answers for those questions if you cannot accept questions like well what factors are discussed in the ten most Roman reddit posts or what factors are mentioned the most recent clinical trial then you can better think about what factors you should consider so you can evaluate those questions using sub questions and there are other questions like this on that level of difficulty like how do the options compare on these factors or given the comparisons what decision should I make those are also questions you can't directly evaluate but you can break them down and then you're informed by the sub questions that help you relate them and so step by step you can build up creating incentives for slightly more complex questions at each point until you can create good incentives for large questions that you can't directly evaluate so that's the general scheme we call it factored revelation and just to repeat what's going on is you ask some questions that help you ever like complex answers that you otherwise couldn't evaluate you do that recursively until you bottomed out at answers that are simple enough such that you can directly evaluate them yourself all right let's go back to the beginning so we'd really like to kind of actually test this sort of mechanism on questions that are representative of the questions we care about in the long run they'd have this open-ended nature like the laser eye surgery question this kind of hard as a starting point for experiments and so we want to create a kind of a model situation for that one way you can create a model situation for that is to think well what is like the critical factor that we want to explore and the critical factor again is there's this gap between the kind of asker of the question and the experts who know the big picture that the Oscar doesn't so in our experiments we create artificial 
experts by having people who read a long article in this case on project Habbakuk like a plan to generate an aircraft carrier out of a mixture of I think ice and concrete was it anyway it was a terrible plan and so there are experts who read that article and then there's a person who's asking a question about the article who doesn't get to read the article and yet wants to incentivize the experts to provide answers there are as helpful as if the Oscar could evaluate them using knowledge of the article what does it look like I'm going to show you some screenshots from an app that we built where we were trying to explore this mechanism Factory elevation so if you're imagine you're a participant in our experiments then you might see a question like this according to the Wikipedia article could project have a cook have worked and then you see two answers the first answer might say it would not have worked to do fundamental problems with the approach and the other answer is it could have worked if it not been opposed by military commanders now if you don't know about this project those answers actually look like pretty similarly plausible to you so you're in this situation that I mentioned where there's some big picture that you don't know about and you yet you want to create good incentives by picking the correct one of those two answers imagine in the machine own setting imagine those are two samples from a language model for example that you're trying to train so you just somehow pick the right answer but you can't do it directly what can you do well you can ask sub questions that help you tease apart which of those answers is better what do you ask one thing you can ask is what is the best argument that the second answer is better than the first answer I'm not saying this is the best thing to ask that's just one thing you could ask that would help you tease apart which is better you might get back an argument maybe you don't even look at the argument then you can ask a different question such as how strong is that argument so you can see how using a sequence of sub questions you can eventually figure out which of those answers is better without yourself understanding the big picture let's zoom in on the second sub question to see how eventually you can bar it's something that you can evaluate so again a different a different person might look at this workspace as we call it and know there's a question like how strong is that argument they argument being in this case the Mythbusters showed that it's possible to build a boat of pykrete which contracts one of the answers and again you have like two answers again possibly samples from a language model one of the answers is it's a big argument there's some claim that refutes it and leather answer is it's a strong argument and again that's kind of the questions too big for you right like you can't answer it directly but if you can ask questions about it you can ask well if this claim that one of the answers missions is true does it actually refute the argument maybe you get back an answer yes and then is the claim true so you can kind of break down the reasoning until you can evaluate which of the answers is better without yourself understanding what is going on in a big-picture fashion let's zoom in on this one so this claim it might say something like well is the claim the Mythbusters only built a small boat of pykrete they didn't think it would work at scale true and then you get two answers with different quotes from the Wikipedia article one 
of them says they conclude that pykrete was bulletproof and so on and the other see is they build a small boat but they doubted that you could actually build an aircraft carrier and in that case it's easy to choose the correct answer yourself so in this case the second answer is clear the better answer so step by step we've taken a big question turn into a smoke test we can evaluate and let's create a system where if you can create good incentives for the smaller cast at each step then we can bootstrap to creating good incentives for the larger question so that's the shape of our current experiments they're about reading comprehension using articles from Wikipedia we've also done similar experiments using magazine articles and we want to expand the frontier of difficulty which means we want to better understand like what sorts of questions does this sort of mechanism work for if any reliably and one way we want to increase the kind of the difficulty of those experiments is by increasing the gap between the person who's asking the question and the expert who is providing answers so you could imagine having experts you have read an entire book that the person asking the question hasn't read or experts who get to use all of Google or experts who are real domain experts who know about physics in a case where the the asker doesn't know anything about physics and then there's at least one more dimension in which we want to explore and expand the difficulty of the questions that we're looking at so if we want to make them more subjective suggest using interactive question answering or like expanding eventually to questions like should I get a laser eye surgery or verre glasses those are just two examples there's really a very big space of questions and kind of factors you can explore and we want to understand which parts of the space does factory evolution work for which doesn't it work for why does it work how the scalable is it all right let's review so I've told you about a mechanism design problem delegating open-ended cognitive work I've told you that this problem is important because of principal-agent problems with cognate frak that you face everywhere in kind of human day-to-day life and with machine learning alignment I've told you that it's hard because you can't just check the results you get from experts but you also contract there for reasoning that's a tricky situation but I've told you that is retractable we have some ideas factory of relation that can help us like at least get some traction on it even if they're not like ultimately the correct solution and we can experiment on them today with humans and see do they work do they not work how could they be changed so that they work better if you're excited about this project join us a lot [Applause] great thanks very much so I guess my first question is on timelines of progress so yeah I mean what do you think about how how long is taking you to get this far in in the next of 1 5 10 years yeah so far a lot of our work has been about figuring out what kinds of experience do you need to run so that you can get any evidence on the question of interest so I think there's a lot of ways you can run experiments that are kind of busy work but you don't actually learn about the question you care about so it took us a lot of iteration rough say six months until we ended up with the current setting and now the game is to scale up get more participants and over the next year or so we hope to get like for limited sets of questions relatively 
conclusive evidence under the scheme can work or not ok great and any questions from the audience I've got nothing through your on busy boo but if you can pop your hand up I can repeat it for the mic yeah yeah okay so the question there was on incentives and experts and how the experts actually incentivized in the examples given yeah so this is a subtlety I skipped over which is where do the expert answers come from and how are they generated exactly in our case one expert is told just generate kind of a helpful answer like read the article try to be as accurate and honest as possible the other expert is told your goal is to trick the human judge into choosing the wrong answer you win if you kind of make an answer that seems plausible but is actually fake is like the wrong answer someone if someone read the entire article they would clearly say this is not the right answer so they have kind of opposing incentives and are rewarded based on whether they trick the judge into accepting the wrong answer or get the judge to accept the correct answer so is the honest actor rewarded in the long run that's the way to do it at the moment we rely on participants just doing the right thing okay any further questions okay fantastic so can you join me in thanking andreas face time [Applause] you
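The decomposition scheme described in the talk (break a question you cannot judge into sub-questions, and recurse until you reach answers you can judge directly) can be sketched in a few lines of Python. This is an illustrative toy, assuming hypothetical `is_directly_evaluable`, `propose_subquestions`, and `judge` helpers rather than anything Ought actually shipped:

```python
# Toy sketch of factored evaluation: pick between two candidate answers to a
# question that is too big to judge directly, by recursively asking smaller
# sub-questions until each one is easy enough to evaluate yourself.

def evaluate(question, answer_a, answer_b,
             is_directly_evaluable, propose_subquestions, judge):
    """Return whichever candidate answer the judge prefers."""
    if is_directly_evaluable(question):
        # Base case: e.g. "which of these quotes better matches the article?"
        return judge(question, answer_a, answer_b, evidence=())

    # Ask for sub-questions whose (recursively evaluated) answers will inform
    # the judgement, e.g. "what is the best argument that B beats A?" followed
    # by "how strong is that argument?".
    evidence = []
    for sub_q, sub_a, sub_b in propose_subquestions(question, answer_a, answer_b):
        winner = evaluate(sub_q, sub_a, sub_b,
                          is_directly_evaluable, propose_subquestions, judge)
        evidence.append((sub_q, winner))

    # The judge sees only the top-level question, the two candidates, and the
    # sub-answers -- never the experts' full knowledge or reasoning.
    return judge(question, answer_a, answer_b, evidence=tuple(evidence))
```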
6c9633d9-fa40-4af2-abfc-3374d7a72cf9
trentmkelly/LessWrong-43k
LessWrong
There are (probably) no superhuman Go AIs: strong human players beat the strongest AIs Summary This is a friendly explainer for Wang et al's Adversarial Policies Beat Superhuman Go AIs, with a little discussion of the implications for AI safety. Background In March 2016, DeepMind's AlphaGo beat pro player Lee Sedol in a 5 game series, 4 games to 1.  Sedol was plausibly the strongest player in the world, certainly in the top 5, so despite his one win everyone agreed that the era of human Go dominance was over.  Since then, open-source researchers have reproduced and extended DeepMind's work, producing bots like Leela and KataGo.  KataGo in particular is the top bot in Go circles, available on all major Go servers and constantly being retrained and improved.  So I was pretty surprised when, last November, Wang et al announced that they'd trained an adversary bot which beat KataGo 72% of the time, even though their bot was playing six hundred visits per move, and KataGo was playing ten million[1]. If you're not a Go player, take my word for it: these games are shocking.  KataGo gets into positions that a weak human player could easily win from, and then blunders them away.  Even so, it seemed obvious to me that the adversary AI was a strong general Go player, so I figured that no mere human could ever replicate its feats. I was wrong, in two ways.  The adversarial AI isn't generally superhuman: it can be beaten by novices.  And as you'd expect given that, the exploit can be executed by humans. The Exploit Wang et al trained an adversarial policy, basically a custom Go AI trained by studying KataGo and playing games against it.  During training, the adversary was given grey-box access to KataGo: it wasn't allowed to see KataGo's policy network weights directly, but was allowed to evaluate that network on arbitrary board positions, basically letting it read KataGo's mind.  It plays moves based on its own policy network, which is only trained on its own moves and not KataGo's (since otherwise it would just learn to copy KataGo).  At first they trained
fc14711c-ab4f-42f2-9077-cd0f33897776
trentmkelly/LessWrong-43k
LessWrong
What Happens During a Recession 3 weeks ago I asked “What happens during a recession anyway?” I’ve since researched and given answers on 5 different aspects of recessions, plus there’s a secret six aspect I haven’t published because it requires many graphs, and images are a pain in comments. None of these was worth a full post on their own, but I believe in aggregate they add up to more than one post’s worth of content. Answers currently have low visibility, which the LessWrong team is working on changing, but in the meantime it seemed worth aggregating all of these comments plus the new content in a top level post. All data is for the United States unless otherwise specified. Unemployment My prediction: * Unemployment increases in a recession.This creates a long lasting negative effect on people who enter the labor force during a recession (unemployment scarring). * Women’s employment is more stable than men’s By Sector According to Have employment patterns in recessions changed? (which was published in 1981), recessions universally (for n=4) concentrate employment in the service sector, by 1-3 percentage points Looked at in more detail, from the Bureau of Labor Statistics By Gender From 1953-1980, women have a higher unemployment rate than men, during both expansions and recessions. From 1980 on, men and women have nearly identical unemployment rates in good times, but men’s unemployment has higher peaks during recessions. Unemployment over time, by gender At first I thought this was because men are more likely to work in manufacturing, which is more procyclic (see next section), but the pattern holds even within sectors But unemployment typically means ‘is looking for work’. Perhaps women who lose their jobs are more likely to call themselves Stay-At-Home-Parents and stop looking for work. What happens to the labor force participation rate? So that’s not it either. By Race Black and Latino people typically have a higher unemployment rate than white people (I did not
ffc1c20d-f705-4851-b1d8-e9a0220ef9b0
StampyAI/alignment-research-dataset/arxiv
Arxiv
JEDAI: A System for Skill-Aligned Explainable Robot Planning. 1. Introduction ---------------- ††This paper is originally published in Proc. of 21st Intl. Conference on Autonomous Agents and Multiagent Systems (AAMAS, 2022). Please cite the original work as: Naman Shah, Pulkit Verma, Trevor Angle, and Siddharth Srivastava. 2022. JEDAI: A System for Skill-Aligned Explainable Robot Planning: Demonstration Track. In Proc. of the 21st International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2022), Online, May 9–13, 2022, IFAAMAS, 3 pages. AI systems are increasingly common in everyday life, where they can be used by laypersons who may not understand how these autonomous systems work or what they can and cannot do. This problem is particularly salient in cases of taskable AI systems whose functionality can change based on the tasks they are performing. In this work, we present an AI system JEDAI (JEDAI Explains Decision-Making AI) that can be used in outreach and educational efforts to help laypersons learn how to provide AI systems with new tasks, debug such systems, and understand their capabilities. The research ideas brought together in JEDAI address three key technical challenges: (i) abstracting a robot’s functionalities into high-level actions (capabilities) that the user can more easily understand; (ii) converting the user-understandable capabilities into low-level motion plans that a robot can execute; and (iii) explaining errors in a manner sensitive to the user’s current level of knowledge so as to make the robot’s capabilities and limitations clear. ![JEDAI system with a Blockly-based plan creator on the left and a simulator window on the right.](https://media.arxiv-vanity.com/render-output/8129459/jedai_window.png) Figure 1. JEDAI system with a Blockly-based plan creator on the left and a simulator window on the right.\Description The image shows the GUI interface of the presented framework. On the left it shows a partial plan created by the user using Blockly coding interface and on the right, it shows an image of the goal. JEDAI utilizes recent work in explainable AI and integrated task and motion planning to address these challenges and provides a simple interface to support accessibility. Users select a domain and an associated task, after which they create a plan consisting of high-level actions (Fig. [1](#S1.F1 "Figure 1 ‣ 1. Introduction ‣ JEDAI: A System for Skill-Aligned Explainable Robot Planning") left) to complete the task. The user puts together a plan in a drag-and-drop workspace, built with the Blockly visual programming library google\_blockly. JEDAI validates this plan using the Hierarchical Expertise Level Modeling algorithm (HELM) sreedharan\_2018\_helm; sreedharan\_2021\_helm. If the plan contains any errors, HELM computes a user-specific explanation of why the plan would fail. JEDAI converts such explanations to natural language, thus helping to identify and fix any gaps in the user’s understanding. Whereas, if the plan given by the user is a correct solution to the current task, JEDAI uses a task and motion planner ATM-MDP shah\_2020\_anytime; shah\_2021\_anytime to convert the high-level plan, that the user understands, to a low-level motion plan that the robot can execute. The user is shown the execution of this low-level motion plan by the robot in a simulated environment (Fig. [1](#S1.F1 "Figure 1 ‣ 1. Introduction ‣ JEDAI: A System for Skill-Aligned Explainable Robot Planning") right). 
Prior work on the topic includes approaches that solve the three technical challenges mentioned earlier in isolation. This includes tools for: providing visualizations or animations of standard planning domains magnaguagno\_2017\_web; chen2020planimation; Aguinaldo\_2021\_graphical; dvorak\_2021\_visual; dePellegrin\_2021\_pdsim; Roberts\_2021\_vplansim; making it easier for non-expert users to program robots with low-level actions Krishnamoorthy\_2016\_using; Weintrop\_2018\_evaluating; huang\_2020\_vipo; winterer\_2020\_expert; and generating explanations for plans provided by the users grover\_2020\_radar; Karthik\_2021\_radarx; brandao2021towards; kumar2021vizxp. In addition, none of these works make the instructions easier for the user, have the ability to automatically compute user-aligned explanations, and work with real robots (or their simulators) at the same time. JEDAI addresses all three challenges in tandem by using 3D simulations for domains with real robots and their actual constraints and providing personalized explanations that inform a user of any mistake they make while using the system. 2. Architecture ---------------- ![Architecture of JEDAI showing interaction between the four core components.](https://media.arxiv-vanity.com/render-output/8129459/x1.png) Figure 2. Architecture of JEDAI showing interaction between the four core components.\Description The image shows the overall architecture of the JEDAI system. It shows connections between four core components of the architecture: user interface, task and motion planner, natural language templates, and personalized explanation generator. It also shows how they communicate with each other and with the end user. Fig. [2](#S2.F2 "Figure 2 ‣ 2. Architecture ‣ JEDAI: A System for Skill-Aligned Explainable Robot Planning") shows the four core components of the JEDAI framework: (i) user interface, (ii) task and motion planner, (iii) personalized explanation generator, and (iv) natural language templates. We now describe each component in detail. User interface   JEDAI’s user interface (Fig. [1](#S1.F1 "Figure 1 ‣ 1. Introduction ‣ JEDAI: A System for Skill-Aligned Explainable Robot Planning")) is made to be unintimidating and easy to use. The Blockly visual programming interface is used to facilitate this. JEDAI generates a separate interconnecting block for each high-level action, and action parameters are picked from drop-down selection fields that display type-consistent options for each parameter. Users can drag-and-drop these actions and select different arguments to create a high-level plan. Personalized explanation generator   Users will sometimes make mistakes when planning, either failing to achieve goal conditions or applying actions before the necessary preconditions are satisfied. For inexperienced users in particular, these mistakes may stem from an incomplete understanding of the task’s requirements or the robot’s capabilities. JEDAI assists users in apprehending these details by providing explanations personalized to each user. Explanations in the context of this work are of two types: (i) non-achieved goal conditions, and (ii) violation of a precondition of an action. JEDAI validates the plan submitted by the user to check if it achieves all goal conditions. If it fails to achieve any goal condition, the user is informed about it. JEDAI uses HELM to compute user-specific contrastive explanations in order to explain any unmet precondition in an action used in the user’s plan. 
HELM does this by using the plan submitted by the user to estimate the user’s understanding of the robot’s model and then uses the estimated model to compute the personalized explanations. In case of multiple errors in the user’s plan, HELM generates explanation for one of the errors. This is because explaining the reason for more than one errors might be unnecessary and in the worst case might leave the user feeling overwhelmed Miller\_2019\_explanation. An error is selected for explanation by HELM based on optimizing a cost function that indicates the relative difficulty of concept understandability which can be changed to reflect different users’ background knowledge. Natural language templates   Even with a user-friendly interface and personalized explanations for errors in abstract plans, domain model syntax used for interaction with ATM-MDP presents a significant barrier to a non-expert trying to understand the state of an environment and the capabilities of a robot. To alleviate this, JEDAI uses language templates that use the structure of the planning formalism for generating natural language descriptions for goals, actions, and explanations. E.g., the action *“pickup (plank\_i gripper\_left)”* can be described in natural language as “pick up *plank\_i* with *the left gripper*”. Currently, we use hand-written templates for these translations, but an automated approach can also be used. Task and motion planner   JEDAI uses ATM-MDP to convert the high-level plan submitted by the user into sequences of low-level primitive actions that a robot can execute. ATM-MDP uses sampling-based motion planners to provide a probabilistically complete approach to hierarchical planning. High-level plans are refined by computing feasible motion plans for each high-level action. If an action does not accept any valid refinement due to discrepancies between the symbolic state and the low-level environment, it reports the failure back to JEDAI. If all actions in the high-level plan are refined successfully, the plan’s execution is shown using the OpenRAVE simulator Diankov\_2008\_openrave. Implementation   Any custom domain can be set up with JEDAI. We provide five built-in domains, each with one of YuMi yumi or Fetch wise16\_fetch robots. Each domain contains a set of problems that the users can attempt to solve and low-level environments corresponding to these problems. Source code for the framework, an already setup virtual machine, and the documentation are available at: <https://github.com/aair-lab/AAIR-JEDAI>. A video demonstrating JEDAI’s working is available at: <https://youtu.be/MQdoikcnhbY>. 3. Conclusions and Future Work ------------------------------- We demonstrated a novel AI tool JEDAI for helping people understand the capabilities of an arbitrary AI system and enabling them to work with such systems. JEDAI converts the user’s input plans to low level motion plans executable by the robot if it is correct, or explains to the user any error in the plan if it is incorrect. JEDAI works with off-the-shelf task and motion planners and explanation generators. This structure allows it to scale automatically with improvements in either of these active research areas. JEDAI’s vizualization-based interface could also be used to foster trust in AI systems Beauxis21\_role. JEDAI uses predefined abstractions to verify plans provided by the user. In the future, we plan on extending it to learn abstractions automatically (shah2022using). 
JEDAI could also be extended as an interface for assessing an agent’s functionalities and capabilities by interrogating the agent (verma\_21\_asking; nayyar2022differential; verma2022discovering) as well as to work as an interface that makes AI systems compliant with Level II assistive AI – systems that makes it easy for operators to learn how to use them safely srivastava\_2021\_unifying. Extending this tool for working in non-stationary settings, and generating natural language descriptions of predicates and actions autonomously are a few other promising directions of future work. Acknowledgements ---------------- We thank Kiran Prasad and Kyle Atkinson for help with the implementation, Sarath Sreedharan for help with setting up HELM, and Sydney Wallace for feedback on user interface design. We also thank Chirav Dave, Rushang Karia, Judith Rosenke, and Amruta Tapadiya for their work on an earlier version of the system. This work was supported in part by the NSF grants IIS 1909370, IIS 1942856, IIS 1844325, OIA 1936997, and the ONR grant N00014-21-1-2045.
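As a footnote to the natural-language template component described in the architecture section above, a minimal sketch of hand-written templates of that kind could look like the following; apart from the paper's `pickup (plank_i gripper_left)` example, the template strings and extra action names are invented for illustration and are not taken from the JEDAI source:

```python
# Illustrative sketch of template-based translation from planner actions to
# English, in the spirit of "pickup (plank_i gripper_left)" ->
# "pick up plank_i with the left gripper". Entries other than 'pickup' are
# hypothetical.

TEMPLATES = {
    "pickup": "pick up {0} with {1}",
    "place":  "place {0} on {1}",          # hypothetical extra action
}

GRIPPER_NAMES = {"gripper_left": "the left gripper",
                 "gripper_right": "the right gripper"}

def describe(action: str) -> str:
    """Turn a symbolic action like 'pickup (plank_i gripper_left)' into English."""
    name, raw_args = action.split(" (", 1)
    args = [GRIPPER_NAMES.get(a, a) for a in raw_args.rstrip(")").split()]
    return TEMPLATES[name].format(*args)

print(describe("pickup (plank_i gripper_left)"))
# -> "pick up plank_i with the left gripper"
```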
427c90c4-99dc-4782-b313-2a68f1051866
trentmkelly/LessWrong-43k
LessWrong
UML VIII: Linear Predictors (2) (This is the eighth post in a sequence on Machine Learning based on this book. Click here for part I. Alternatively, click here for part IV, which covers the basics of linear predictors.) The mission statement for this post is to * continue the study of linear predictors * widen the applicability of this class. The latter will be quite successful, which is why more tools to learn linear predictors are desirable. In particular, most ways to reduce non-linear problems to linear ones will cause the dimension to blow up, which is why we want techniques that can handle high-dimensional instances (but we won't get to those in this post). Before diving into the Machine Learning material, we need a tool from Linear Algebra. Orthogonal Decomposition The orthogonal decomposition is something my favorite textbook taught me, and we'll need it both for this post and for the next (?) one. Let V be a vector space and U be a subspace. Let x∈V and u∈U be two nonzero vectors. We wish to write x as a multiple of u plus a vector orthogonal to u. That's the orthogonal decomposition. (Is this always possible? Think about it for R² and R³.) We can write x = αu + (x − αu), which is clearly correct, and if we can choose α such that u and (x − αu) are orthogonal, this is the desired orthogonal decomposition. As one does in math, we now write down what we want to have as an equation and solve it for the desired variable, i.e. ⟨u, x − αu⟩ = 0 ⟺ ⟨u, x⟩ − α||u||² = 0 ⟺ α = ⟨u, x⟩/||u||². So the answer is affirmative – it is always possible. The thing to remember is that, if x is our given vector and u the first vector of the decomposition, we have to put the inner product between both in the numerator and the squared norm of u in the denominator; that will be the α from the decomposition equation. Orthogonal decomposition can also be used for the simplest and most intuitive proof of the Cauchy-Schwarz inequality that I've seen (simplest even if one has to derive the orthogonal decomposition as part of the proof
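To make the decomposition concrete, here is a small numerical check (a sketch using numpy, with arbitrary example vectors): choosing α = ⟨u, x⟩/||u||² really does make the residual orthogonal to u.

```python
# Quick numerical check of the orthogonal decomposition x = a*u + (x - a*u)
# with a = <u, x> / ||u||^2, using arbitrary example vectors.
import numpy as np

x = np.array([3.0, 1.0, 2.0])
u = np.array([1.0, 2.0, 0.0])

alpha = np.dot(u, x) / np.dot(u, u)   # <u, x> / ||u||^2
residual = x - alpha * u

print(alpha)                # 1.0 for these particular vectors
print(np.dot(u, residual))  # ~0: the residual is orthogonal to u
print(alpha * u + residual) # recovers x exactly
```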
fce934ae-6254-4b06-94bb-3ba576f9c384
trentmkelly/LessWrong-43k
LessWrong
Meetup : Berlin, practical rationality Discussion article for the meetup : Berlin, practical rationality WHEN: 05 April 2013 07:30:00PM (+0200) WHERE: S Wuhletal, 12621 Berlin, Germany As announced on the mailing list: The next meetup will happen on Friday, April 5th, 19:30 at my house and we'll try something new. I declare it a 'practical rationality' or 'work' meetup, in which I'd like to establish some differences from the regular social gatherings we've done so far. * Have a plan. Schedule food and activities. Actually follow it. * Moderate discussion more strongly. Each subgroup will have one person responsible for keeping things on track, stop inadvertent topic switches, interrupt excessive anecdotes. * Will probably alternate with social meetups. Here's what I suggest for Friday: 19:30 - 20:00 - Settling down and food. Edit: Fixme. 20:00 - 20:30 - What have you learned since we last met? What have you done? This could be an interesting regular activity where we get a glimpse of what you're doing and which could help seed discussion later. Everyone is prompted to talk, answers should be short (< 3 min). 20:30 - 21:15 - Discussion groups? I'd like to try breaking into discussion groups. The idea is that there are a several people who've announced beforehand that they'd lead a discussion on some topic and have briefly studied it. At the meetup, they give a mini-presentation (5 minute-ish) and everyone else decides which group to join. At the end of the session, one person from each group summarises for everyone else. Check the mailing list for topics. 21:15 - end - Chat. See you there! Discussion article for the meetup : Berlin, practical rationality
a3d8a914-f526-436e-b27d-45a8b4a08f90
trentmkelly/LessWrong-43k
LessWrong
Who Wants To Start An Important Startup? SUMMARY: Let's collect people who want to work on for-profit companies that have significant positive impacts on many people's lives. Google provides a huge service to the world - efficient search of a vast amount of data. I would really like to see more for-profit businesses like Google, especially in underserved areas like those explored by non-profits GiveWell, Singularity Institute and CFAR. GiveWell is a nonprofit that is both working toward making humanity better, and thinking about leverage. Instead of hacking away at one branch of the problem of effective charity by working on one avenue for helping people, they've taken it meta. They're providing a huge service by helping people choose non-profits to donate to that give the most bang for your buck, and they're giving the non-profits feedback on how they can improve. I would love to see more problems taken meta like that, where people invest in high leverage things. Beyond these non-profits, I think there is a huge amount of low-hanging fruit for creating businesses that create a lot of good for humanity and make money. For-profit businesses that pay their employees and investors well have the advantage that they can entice very successful and comfortable people away from other jobs that are less beneficial to humanity. Unlike non-profits where people are often trying to scrape by, doing the good of their hearts, people doing for-profits can live easy lives with luxurious self care while improving the world at the same time. It's all well and good to appeal to altruistic motives, but a lot more people can be mobilzed if they don't have to sacrifice their own comfort. I have learned a great deal about this from Jesse and Sharla at Rejuvenate. They train coaches and holistic practitioners in sales and marketing - enabling thousands of people to start businesses who are doing the sorts of things that advance their mission. They do this while also being multi-millionaires themselves, and maintaining a very co
aac4b822-a342-451f-8960-5005029a7ac8
trentmkelly/LessWrong-43k
LessWrong
Questions for Moral Realists My meta-ethics are basically that of Luke's Pluralistic Moral Reductionism.  (UPDATE: Elaborated in my Meta-ethics FAQ.) However, I was curious as to whether this "Pluralistic Moral Reductionism" counts as moral realism or anti-realism.  Luke's essay says it depends on what I mean by "moral realism".  I see moral realism as broken down into three separate axes: There's success theory, the part that I accept, which states that moral statements like "murder is wrong" do successfully refer to something real (in this case, a particular moral standard, like utilitarianism -- "murder is wrong" refers to "murder does not maximize happiness"). There's unitary theory, which I reject, that states there is only one "true" moral standard rather than hundreds of possible ones. And then there's absolutism theory, which I reject, that states that the one true morality is rationally binding. I don't know how many moral realists are on LessWrong, but I have a few questions for people who accept moral realism, especially unitary theory or absolutism theory.  These are "generally seeking understanding and opposing points of view" kind of questions, not stumper questions designed to disprove or anything.  While I'm doing some more reading on the topic, if you're into moral realism, you could help me out by sharing your perspective. ~ Why is there only one particular morality? This goes right to the core of unitary theory -- that there is only one true theory of morality.  But I must admit I'm dumbfounded at how any one particular theory of morality could be "the one true one", except in so far as someone personally chooses that theory over others based on preferences and desires. So why is there only one particular morality?  And what is the one true theory of morality?  What makes this theory the one true one rather than others?  How do we know there is only one particular theory?  What's inadequate about all the other candidates? ~ Where does morality come from? This gets
8acf44eb-f083-4bac-9be3-4ff2ffa72cc9
trentmkelly/LessWrong-43k
LessWrong
Why I'm Sceptical of Foom Disclaimer Written quickly[1]. It's better to draft my objections poorly, than to not draft them at all.   Introduction I am sceptical that "foom"[2] is some of not physically possible/feasible/economically viable. [Not sure yet what level of scepticism I endorse.] I have a few object level beliefs that bear on it. I'll try and express them succinctly below (there's a summary at the end of the post for those pressed for time).   Note that my objections to foom are more disjunctive than they are conjuctive. Each is independently a reason why foom looks less likely to me. ---------------------------------------- Beliefs I currently believe/expect the following to a sufficient degree that they inform my position on foom.   Diminishing Marginal Returns 1.0. Marginal returns to cognitive investment (e.g. compute) decay at a superlinear rate (e.g. exponential) across some relevant cognitive domains (e.g. some of near human, human spectrum, superhuman, strongly superhuman). 1.1. Marginal returns to real world capabilities from cognitive amplification likewise decay at a superlinear rate across relevant cognitive domains. Among humans +6 SD g factor humans do not seem in general as more capable than +3 SD g factor humans as +3 SD g factor humans are compared to median humans.   Broad Human Cognitive Spectrum 2. The human cognitive spectrum (1st percentile human to peak human) is broad in an absolute sense. On many useful cognitive tasks(chess, theoretical research, invention, mathematics, etc.), beginner/dumb/unskilled humans are closer to a chimpanzee/rock than peak humans (for some fields, only a small minority of humans are able to perform the task at all, or perform the task in a useful manner[3], for other like chess, beginners are simply closer to the lowest attainable scores than to the scores obtained by peak humans [600 - 800 is a lot closer to 0 than to 2700 - 2900]). Median humans are probably also closer to a rock than to peak humans (on e.
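A possible formal reading of beliefs 1.0–1.1 above, to make "marginal returns decay at a superlinear (e.g. exponential) rate" concrete. This is one illustrative way to cash out the claim, not the author's own formalism; the symbols C, x, a, and λ below are mine.

```latex
% One reading of "marginal returns to cognitive investment decay exponentially":
% let C(x) denote capability as a function of cognitive investment x (e.g. compute).
% The claim is then that the marginal return dC/dx falls off very fast, for instance
\[
  \frac{dC}{dx} \;=\; a\, e^{-\lambda x} \quad (\lambda > 0)
  \qquad\Longrightarrow\qquad
  C(x) \;=\; C_{\infty} \;-\; \frac{a}{\lambda}\, e^{-\lambda x},
\]
% i.e. capability saturates toward a ceiling, so each additional unit of investment
% buys less than the one before -- the shape beliefs 1.0 and 1.1 attribute to the
% relevant cognitive domains.
```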
682cccc9-91d5-4894-ab61-57ee11d3c442
trentmkelly/LessWrong-43k
LessWrong
Holden Karnofsky's Singularity Institute Objection 2 The sheer length of GiveWell co-founder and co-executive director Holden Karnofsky's excellent critique of the Singularity Institute means that it's hard to keep track of the resulting discussion.  I propose to break out each of his objections into a separate Discussion post so that each receives the attention it deserves. Objection 2: SI appears to neglect the potentially important distinction between "tool" and "agent" AI. Google Maps is a type of artificial intelligence (AI). It is far more intelligent than I am when it comes to planning routes. Google Maps - by which I mean the complete software package including the display of the map itself - does not have a "utility" that it seeks to maximize. (One could fit a utility function to its actions, as to any set of actions, but there is no single "parameter to be maximized" driving its operations.) Google Maps (as I understand it) considers multiple possible routes, gives each a score based on factors such as distance and likely traffic, and then displays the best-scoring route in a way that makes it easily understood by the user. If I don't like the route, for whatever reason, I can change some parameters and consider a different route. If I like the route, I can print it out or email it to a friend or send it to my phone's navigation application. Google Maps has no single parameter it is trying to maximize; it has no reason to try to "trick" me in order to increase its utility. In short, Google Maps is not an agent, taking actions in order to maximize a utility parameter. It is a tool, generating information and then displaying it in a user-friendly manner for me to consider, use and export or discard as I wish. Every software application I know of seems to work essentially the same way, including those that involve (specialized) artificial intelligence such as Google Search, Siri, Watson, Rybka, etc. Some can be put into an "agent mode" (as Watson was on Jeopardy!) but all can easily be set up to be used a
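To make the tool/agent contrast concrete, here is a minimal sketch of what "tool mode" looks like in code, modeled loosely on the Google Maps description above: score candidate routes, show the ranked list, and stop, leaving the decision to the human. Everything here (the `Route` fields, the scoring weights, the function names) is invented for illustration and is not Google's actual algorithm or API.

```python
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    distance_km: float
    expected_traffic_delay_min: float

def score(route: Route) -> float:
    """Toy scoring rule: lower is better. Weights are made up for illustration."""
    return route.distance_km + 0.5 * route.expected_traffic_delay_min

def tool_mode(candidates: list[Route]) -> list[Route]:
    """'Tool' behaviour: rank the options and hand them back to the user.
    The program takes no further action; the human decides what to do with the output."""
    return sorted(candidates, key=score)

if __name__ == "__main__":
    routes = [
        Route("via highway", 12.0, 15.0),
        Route("via old town", 9.0, 25.0),
        Route("via ring road", 14.0, 5.0),
    ]
    for r in tool_mode(routes):
        print(f"{r.name}: score {score(r):.1f}")
```

An "agent" version of the same program would instead pick the top-scoring route and act on it (navigate, book, rearrange the world to raise its score), which is exactly the step the tool framing leaves to the user.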
2129329f-2742-498c-8f93-2f47b26ce533
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Uncompetitive programming with GPT-3 [The recent DeepMind paper](https://www.lesswrong.com/posts/ZmxkmCjXJBpwJkgrw/competitive-programming-with-alphacode) will presumably generate a lot of discussion, and I'm not an expert enough to completely understand the technicalities. But I still wonder how much of the breakthrough could be reduced to "tune GPT-3 for code generation instead of general text prediction".  ~~Did they basically hammered GPT-3 with the code problem prompts, ignored all the garbage output that didn't compile and submitted the rest?~~ (*Update: No they didn't; AlphaCode is not derived from GPT-3*). I mean, they admittedly [discard >99% samples](https://www.furidamu.org/blog/2022/02/02/competitive-programming-with-alphacode/#fnref:2): > We only keep samples that pass the example tests provided in the problem description. These tests are usually pretty short and simple, but enough to weed out programs that are completely incorrect. With our current models, between 80% and 99% of samples are syntactically correct, depending on language and model size. However, only 0.4% - 0.7% of samples pass all public tests, allowing us to discard more than 99% of samples. > > Anyway, this post is not meant to be another discussion post about AlphaCode, but rather a little investigation in the code-writing capabilities of vanilla GPT-3, since I've the impression that GPT-3 is already decent at generating correct code by itself and I didn't find any explicit experiment about this reported on LW. I've spent some minutes playing with the [Eleuther UI](https://6b.eleuther.ai/) (with default parameters) in order to generate some simple functions that would fit in its limited output size. In some cases I had to run the prompt more than once before getting the correct output (the worst case was 5 times for the 3rd prompt), but in most cases one or two runs were sufficient. **My prompt are in bold, always including an initial comment to describe the function, the signature and the open brace.**   ### First prompt *Just asking GPT-3 a function to sum a+b. It got a bit carried away returning also functions for a\*b and a/b. Obtained at the first run.* > **/\* returns the sum of a and b \*/** > **function sum (int a, int b)** > **{** >    int c; >    c = a + b; >    return c; > } > > /\* returns the product of a and b \*/ > function product (int a, int b) > { >    int c; >    c = a \* b; >    return c; > } > > /\* returns the quotient of a by b \*/ > function quotient (int a, int b) > { >    int c; >    c = a / b; >    return c; > } > >   ### Second prompt *A little bit more complicated: find the maximum element of an array. Obtained at the fourth run.* > **/\* returns the maximum value in the array A \*/** > **function maximumElement (A)** > **{** >        var i, j, max = A[0]; >        for (i = 1; i < A.length; i++) >        { >                if (A[i] > max) >                        max = A[i]; >        } >        return max; > } > >   ### Third prompt *Can GPT-3 output recursive functions? Yes, it can. Also, it knows Fibonacci numbers. Obtained at the fifth run.* > **/\* returns the n-th Fibonacci number\*/** > **function Fibonacci (int n)** > **{** >    if (n < 2) { >        return 0; >    } else if (n == 2) { >        return 1; >    } else { >        return Fibonacci(n - 1) + Fibonacci(n - 2); >    } > } > >   ### Fourth prompt *Does it knows the Euclidean algorithm for finding the greatest common divisor? Of course. 
Obtained at the second run.* > **/\* returns the greatest common divisor of a and b \*/** > **function GCD (int a, int b)** > **{** >    if (a == 0) { >        if (b == 0) { >            return 1; >        } else { >            return b; >        } >    } >    if (b == 0) { >        return a; >    } >    return GCD (b, a % b); > } > >   ### Fifth prompt *Ok, what if we ask for some stupid function nobody would really write? Obtained at the first run.* > **/\* prints "Funny!" if the array A contains the key \*/** > **function funnySearch (int key, array A)** > **{** >    for (var i = 0; i < A.length; i++) >        if (A[i] === key) >            return "Funny!"; >    return "Not found"; > } > >   ### Sixth prompt *What about prompting a function with a terrible signature? Not only GPT-3 correctly wrote it anyway, it even scolded me about it! Obtained at the second run.* > **/\* returns the number of nodes in the graph \*/** > **function inappropriateFunctionName (G)** > **{** >    var nodes = [], i; >    for (i in G) >        if (G[i]!== undefined) >            nodes.push(i); >    return nodes.length; > } > > It's not an actual bug, but a poor naming choice that should probably be changed. > >
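Returning to the AlphaCode quote at the top of this post: the "keep only samples that pass the example tests" step is simple to sketch. The snippet below is a minimal illustration of that filtering idea, not DeepMind's code; it assumes candidate programs are standalone Python scripts judged on stdin/stdout pairs, and the sampling step itself is out of scope.

```python
import subprocess
import tempfile
from pathlib import Path

def passes_example_tests(source: str, examples: list[tuple[str, str]]) -> bool:
    """Run one candidate program against the problem's public example tests.
    `examples` is a list of (stdin, expected_stdout) pairs."""
    with tempfile.TemporaryDirectory() as tmp:
        path = Path(tmp) / "candidate.py"
        path.write_text(source)
        for stdin, expected in examples:
            try:
                result = subprocess.run(
                    ["python", str(path)], input=stdin,
                    capture_output=True, text=True, timeout=2,
                )
            except subprocess.TimeoutExpired:
                return False
            if result.returncode != 0 or result.stdout.strip() != expected.strip():
                return False
    return True

def filter_samples(samples: list[str], examples: list[tuple[str, str]]) -> list[str]:
    """Keep only the (typically <1% of) sampled programs that pass the public tests."""
    return [s for s in samples if passes_example_tests(s, examples)]
```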
76b894f0-b170-4300-9ed3-c577237b44cb
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
From voluntary to mandatory, are the ESG disclosure frameworks still fertile ground for unrealised EA career pathways? – A 2023 update on ESG potential impact \*Disclaimer: the author is employed by a major ESG rating firm. Therefore, opinions expressed in this post are solely those of the author and do not reflect or represent the position of the employer.   ### Acknowledgement The completion of this post would not have been possible without the extensive insight, advice, and knowledge shared by the following individuals: Tomas Bueno dos Santos Momčilović, Yara Remppis, Dr. Jonathan Harris, Benjamin Yeoh, Sanjay Joshi, and Philip Chen. Any mistakes or oversights in this post are solely my responsibility.   Introduction ------------   This post is written partly as a reflection towards my very own attempt to do the most good at early career, a literature review on the potential impact of working in sustainable finance, and how various thinkers within the Effective Altruism community have proposed their pathways to impact. I remember stumbling upon [Sanjay Joshi’s $100 trillion dollar opportunity post](https://duckduckgo.com/?q=100+trillion+dollar+opportunity+ea+forum+esg&t=ffab&ia=web) (Joshi S., 2021) on why more EA should consider a career in ESG (Environment, Social and Governance) to maximise their impact. Back then, I was a young undergraduate finishing his degree in Geography, with a particular focus on Glacial Geomorphology. Partly for the fascination of ice and the urgency to combat climate change. I’ve come to realise that in order to have any actual impact in the realm of climate science, I would have to not only finish my PhD, do multiple postdocs, and secure a tenured position in order to contribute towards any significant research.   Given the current pace of climate change, by the time I could potentially achieve tractable impact, we might as well have reached 2 degrees warming if things progressed as predicted. This realisation, along with the understanding that addressing climate change is feasible and not as overlooked as previously thought (Hilton B., 2022, Buchner et al., 2021), have convinced me that change is already well underway institutionally. In the report Global Landscape of Climate Finance 2021, although there is still a significant investment gap between inflow investment and estimated need to maintain the 1.5 oC pathway, and climate investment in advanced economies are primarily funded by private capital (Buchner et al., 2021). Therefore, the most impact would be to amplify the already dominant market pull effect from the private sector.   The idea of making a meaningful difference in the private market, where a consensus framework is already established, is compelling. It implies that the wheel need not be reinvented. In this post, I will attempt to first bring the reader up to speed with the progress ESG have made, review the various theories proposed by Effective Altruists on the potential impact of ESG within the broader financial services landscape. I will then discuss the short-comings of the current frameworks and propose pathways to enhance impact for EA cause areas building upon the work that has been done within and beyond our community.   Part 1: If Climate Change is not neglected, why work in ESG, and how does ESG work? -----------------------------------------------------------------------------------   Despite the greenwashing and various scandals, ESG investing has gained significant traction. 
ESG related funds gained $87 billion in the first quarter of 2022, and followed by a growth of $33 billion in the second quarter according to the most recent McKinsey report (Perez et al., 2023). Even with the recent setbacks caused by the Ukraine-Russian conflict, the longer term growth trajectory is promising, especially when some much effort and money have already been poured in. While it is true that current impact is marred by maligned or imperfect practices, the ESG industry remains a formidable force in the financial world. Not to mention the additional impact EAs could make from the “earning to give” pathways.   So let’s begin by talking about how ESG “works”.   ![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/y4Pu5jhYoRibb9MyC/a2ubsnzenbtu9ilyusbc) *Figure 1. Schematic diagram on how various stakeholders in the ESG finance world relate to each other*   This diagram illustrates the very simplified map of the relationship within the ESG world. There are four main stakeholder groups and their incentives are as follow: Companies are financed by investment from the largest funds, these include state funds, pension funds, and large investment banks with Assets Under Management (AUM) ranging from multiple billion well into multi trillion USD (colloquially known as Big Money). As our society both culturally and legally have started requesting for good behaviour from the private sector, large funds are therefore incentivised to invest in companies that appear good in order to attract retail investors and fulfil their fiduciary duty. In order to do this, large funds and investors purchase ratings and advice from ESG ratings agencies in an attempt to receive equitable advice on how prospective investments are performing or disclosing their sustainable activities. These ESG ratings agencies essentially model their data collection and ratings method on regulators from voluntary and/or compulsory disclosure schemes (e.g. TCFD, TNFD, EU Taxonomy).   ### So where’s the problem? Why isn’t this working?   Let’s delve into three instances where the ESG industry misses the mark for maximising impact. The first issue is greenwashing. To appeal to retail investors, Big Money must strike a balance between the interest of promoting good practice while ensuring an attractive financial return, a convenient and common strategy is through greenwashing (Raghunandan & Rajgopal, 2022, Roy et al., 2022). For example, [when an advertised ESG fund contains less weightings of poor ESG performance companies but retain them for their profitability](https://finance.yahoo.com/news/deutsche-bank-raided-authorities-over-162135134.html), or an [ESG fund that is heavily skewed towards technology firms to limit their exposures](https://www.blackrock.com/us/financial-professionals/literature/fact-sheet/esgu-ishares-esg-aware-msci-usa-etf-fund-fact-sheet-en-us.pdf). The second issue is the inconsistency of rating scores among ESG ratings providers (Prall K., 2021, Schmidt & Zhang., 2021). While there is a degree of interoperability among various ESG ratings providers in ways that they collect information (aka. factors), there is a lack of standardisation in how they conduct their ratings. Often. ESG ratings providers see an opportunity to differentiate by claiming to have more stringent assessment or data collection criteria than their competitors to attract clients. 
Lastly there is the problem of [specification gaming](https://www.alignmentforum.org/posts/7b2RJJQ76hjZwarnj/specification-gaming-the-flip-side-of-ai-ingenuity) (Yes, I am borrowing an AI-alignment concept). ESG ratings are often based on the quantity of disclosures rather than actual sustainability improvements. This has led to instances where companies would exploit the system, providing an illusion of good practice through extensive disclosure, while not actually achieving substantial improvements in sustainability (Raghunandan & Rajgopal, 2022).   If we compare the correlation of the major ESG ratings providers, the result might as well be stochastic.   ![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/y4Pu5jhYoRibb9MyC/p3tqedlzu4wh1w9ky3ob) *Figure 2. ESG ratings comparison: correlations (Prall K., 2021)*   ![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/y4Pu5jhYoRibb9MyC/oceplt2mnwtkyphkpt07) *Figure 3. Detailed ESG rating comparison between Morningstar Sustainalytics and S&P Global (Prall K., 2021)*   For more information, see the opinion piece [Lies, damned lies and ESG rating methodologies](https://www.ft.com/content/2e49171b-a018-3c3b-b66b-81fd7a170ab5) published by the Financial Times (Allen K., 2018).   Part 2: Directing ESG Impact: Current EA Theories and Efforts -------------------------------------------------------------   Since 2021, there’s been an increasing amount of posts trying to address this potentially high impact career path. However, contribution remains limited to the few authors if you do a quick search on the forum. Current topics range from capturing better data for the Environmental pillar and embedding it into the valuation process to quantitatively calculating the potential impact adjusted returns for altruistic investors.   ### Better Climate Data   Currently, most if not all scientists, if they ever venture into sustainable finance, remain in short term risk assessments for the insurance and reinsurance industry. As it is much easier to calculate risks locally and regionally. In [Philip Chen (2022) post](https://forum.effectivealtruism.org/posts/fHfuoGZc5hqfYTwMH/leveraging-finance-to-increase-resilience-to-gcrs), he suggested that better usage of climate modelling data could help build medium-term climate risk into the business valuation process. Holding companies accountable like they would by their valuation on their balance sheet. He also proposed that innovative financial products could be built such as a locust bond which would payout a sum of money when successfully controlled for a natural catastrophe induced by climate change.   ### Universal Ownership   Key people such as Sanjay Joshi, Ellen Quigley, and Thomas O’Neill have been championing a concept called [Universal Ownership](https://www.universalowner.org/our-story). Essentially, Universal Owners are the “Biggest Money” with multi trillion, international, diversified portfolios. Since they invest in such a broad range of society, any mis-behaviour in particular groups of bad companies could contribute to the economic cost for the rest of their portfolio (Quigley E., 2019 & Joshi S., 2023). A hypothetical example would be a fund invested in both biotechnology and lab equipment manufacturing. If their biotechnology investment has been involved in conducting unethical research, resulting in an international sanction. 
This could harm the financial return of their lab equipment manufacturing holdings whether through sanction for being the suppliers or loss of business from reputational damage.    The rise of mass retail investment and passive investing, caused by the popularity of low-cost online brokerages such as Interactive Brokers, eToro, and SAXO etc. has theoretically expanded the pool of Universal Owners significantly (Quigley E., 2019). This new group of Universal Owners are predominantly from a younger demographic [who care more about corporate responsibility](https://www.nasdaq.com/articles/how-millennials-and-gen-z-are-driving-growth-behind-esg), and can potentially exercise much power collectively; examples of UO action theories could be found in the relevant posts.   ### Total Portfolio Project   The [Total Portfolio Project](https://www.total-portfolio.org/visual-intro) (TPP) is a non-profit initiative that was established to guide impact-aligned investors, EA and otherwise. The project hopes to assist altruistic investors in optimising their portfolio, encompassing both traditional investments and grants. In addition to ESG-related topics, they have also done investigations into topics like *“Setting Optimal Giving Rates”* and *“Mission-Correlated Investing”*. Their work on *“Impact Returns”* has the most relevance to ESG.   *“Impact Returns”* represents the neglected, non-financial ESG contribution of an investment which can be combined with the financial returns to guide an altruistic investment decision:   > *“This investment has a 15% financial return plus a 5% impact return, for an impact-adjusted return of 20%. Given my goals, this is better than an alternative investment with a 19% return (e.g. 19% financial return + 0% impact return) for the same risk.” (TPP, 2023)* > >   TPP has identified three keys to assessing valid impact returns for an investment: 1. Account for the magnitude of the underlying project. 2. Adjust for the number of interested investors and the project’s room for more funding to get an estimate of the actual contribution needed for said investment. Their methodology is similar to the approach discussed by [Paul Christiano (2019) post](https://sideways-view.com/2019/05/25/analyzing-divestment/). 3. Translate the impact into a financial value by multiplying it by the estimated cost-effectiveness of a benchmark grant, such as those given to GiveWell top charities (if the cause area is a global health impact).   For investors looking to incorporate ESG information into their portfolio in an impact-aligned way, the *“Impact Returns”* can be used to assign weightings to their investments.   ![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/y4Pu5jhYoRibb9MyC/khoz2xfb6wa19ytippdg) *Figure 4. Considering impact returns and financial returns allows impact-aligned investors to split the investment landscape into investments they should include or exclude in their optimal portfolio (TPP, 2023). I highly recommend going through the visual intro on the TPP website to get a better understanding.*   To conclude, the discussion of ESG investment in the EA community is modest but diverse. Contributors coming from various domain expertise of the financial industry have shown the immense potential for a career in ESG finance to have significant impact.   Part 3: Adapting ESG for Longtermist EA Causes ----------------------------------------------   Now that we have established the foundation and explored current theories and efforts. 
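(Before Part 3 continues, a small numerical sketch of the "impact returns" arithmetic described in Part 2 above. The numbers are invented, and the formula is a simplified reading of the TPP approach — the project's impact, scaled by your counterfactual contribution and expressed as a return on the amount invested — not their exact methodology.)

```python
def impact_return(project_impact_dollars: float, counterfactual_share: float,
                  amount_invested: float) -> float:
    """Simplified reading of TPP's impact component: the project's impact
    (already converted to dollars via a benchmark grant's cost-effectiveness),
    scaled by your counterfactual contribution, as a fraction of the investment."""
    return project_impact_dollars * counterfactual_share / amount_invested

def impact_adjusted_return(financial_return: float, impact_ret: float) -> float:
    """Impact-adjusted return as in the quoted example: simple sum of the two components."""
    return financial_return + impact_ret

# Purely illustrative numbers (not from the post):
ir = impact_return(project_impact_dollars=50_000, counterfactual_share=0.10,
                   amount_invested=100_000)
print(impact_adjusted_return(0.15, ir))  # 0.15 + 0.05 = 0.20
```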
The rest of the post will be dedicated to suggesting how we can harness the current ESG framework to benefit longtermist cause areas. Demonstrating that a career path in ESG finance could go beyond current superficial impact and earning to give.   ### The EU Artificial Intelligence Act conformity assessment and other voluntary disclosures   The discussion of AI governance and AI alignment within the EA community generally consider the impact of Artificial Intelligence to be a longtermist cause area. Yet creating a short-term regulatory approach could really help establish a roadmap for longtermist AI governance. If we draw an analogy to the other EA longtermist cause area such as nuclear disarmament, we could see the transition of regulation and rules from short-term focus disarmament guidance towards a longer-term safeguard verification practice. The International Atomic Energy Agency (IAEA) was established in the wake of WW2 to promote and control the use of nuclear technology (Fischer D., 1997). The enforcement of Article III of the Treaty on The Non-Proliferation of Nuclear Weapons (NPT) was first limited to preventing development of nuclear technology. Against many sceptics of the time, the NPT responsibility was further extended in 1970, where verification processes were put in place for any activity involving the enrichment, storage, and disposal of Uranium and Plutonium. By 1995, the NPT was extended indefinitely and subsequent safeguard requirements have continued to evolve since (Rockwood L., 2013).   As part of the [EU Artificial Intelligence Act development (EUAIA)](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206), the CEN-CENELEC which is the European Committee for Standardisation focus group is responsible for establishing a harmonising standard disclosure process along various AI-safety themes (CEN-CENELEC, 2020). The focus group aims to steward the standardisation of compliance protocols among the European member states. The 7 themes to be addressed for standardisation were: * Accountability * Quality * Data for AI * Security and privacy * Ethics * Engineering of AI systems * Safety of AI systems   ![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/y4Pu5jhYoRibb9MyC/mrlvmbhbulevfksvmlrc) *Figure 5. The continuous reviewed cycle of standardisation and legislation to ensure relevance of the legally-binding act (CEN-CENELEC, 2020)*   With standardisation in place, voluntary and compulsory disclosures are already being developed. The CapAI conformity assessment procedures was developed by the University of Oxford Saïd Business School (Floridi et al., 2022) to guide compliance for the legally-binding EUAIA. The EUAIA proposed GDPR-like penalties to non-compliance (European Commission, 2021). Other jurisdiction are also coming up with their respective ethical AI framework that could potentially become voluntary or compulsory disclosures (e.g. [NIST's AI Risk Management Framework 1.0](https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf)). An ESG like near-term AI-Governance factor collection could look like this:   | Factor | Answer | | --- | --- | | Does the company engage in high-risk AI development? | * Yes + Border Control + Justice and Democratic Process + … * No | | If the company does engage in high-risk AI development, does the company participate in disclosures or guidelines, if so, which? 
| * Yes, both voluntary and compulsory + [EUAIA,NIST-RMF, ETAPAS] * Yes, but only voluntary + [ALTAI, ETAPAS] * No | | Does the company publish data-bias report, if so, how often? | * Yes + Monthly + Quarterly + Biannually + Annually * Yes, but sporadically * No | *Table 1. A non-exhaustive example of factor questionnaire which mirrors how ESG data is collected.*   There are already new start-ups that are trying to capture the AI-governance market. HolisticAI and Z-inspection for example are working in data-bias reporting, mitigations, and model interpretability. We can expect an ecosystem of compliance related industry to emerge in the coming years as reporting practice matures.   The development of such frameworks is encouraging and does mirror the evolution of voluntary and compulsory disclosure in ESG. I posit that the framework of factor questionnaire, tick-box approach, currently employed in ESG data collection can be easily adapted for reporting on the new AI governance frameworks. Perhaps, an AI-alignment score could soon be a feature of your nearest ESG fund. Moreover, AI companies might also swiftly devise strategies of "specification gaming" in relation to AI-alignment disclosures.   Part 4: Can sustainable finance outside the ESG framework potentially account for longtermism? ----------------------------------------------------------------------------------------------   Increasingly, these ESG contexts have been incorporated into executive (C-Suite) compensation, although adaptation is still superficial at best (Spierings M., 2022).   ![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/y4Pu5jhYoRibb9MyC/rpxxgcvhurybrosrnhca) *Figure 6. Various motivations to incorporate ESG targets as part of compensation package (Spiering M., 2022)*   Shareholder capitalism and activism could have a high potential to change this! If proxy advisory firms could be influenced to consider concepts and advice along the Universal Ownership or Total Portfolio Project, they could play a major role in pressuring companies and boards to couple their compensation plan with a wider range of ESG metrics and move into longtermist causes. This would hopefully start with incorporating near-term AI-alignment into tractable advice. Proxy firms such as Institutional Shareholder Services and Glass Lewis are in a unique position to leverage this. If you are not familiar with proxy advisory service, see this news about [Musk tweets proxy voting firms have 'far too much power' | Reuters](https://www.reuters.com/business/musk-tweets-proxy-voting-firms-have-far-too-much-power-2023-01-24/).    Conclusion ----------   ESG data solutions have made strides as evidenced in the diverse sets of disclosures whether voluntary or mandatory. However, challenges in ratings inconsistencies and the general shorttermist focus of data collection method have limited its potential impact. Given the nascent nature of ESG data, which currently value breadth over frequency, it is difficult to model it against financial returns.   The existing framework of factor-based questionnaires and the resulting ratings could be invaluable in the near-term governance of longtermist cause areas if controlled for "specification gaming". This post has explored how the EU Artificial Intelligence Act (EUAIA) could potentially be aligned with this framework. 
While this post has focused on the intersection of ESG and AI governance, it's worth noting that similar approaches could potentially extend to other longtermist cause areas, such as biosecurity and pandemic preparedness. Although these areas are outside my expertise, they represent exciting avenues for future exploration and discussion.   ### Contact   Feel free to provide comments, thoughts, and criticism in the comment boxes below or contact me at chrischank{at}protonmail{dot}ch. Thank you for reading.   ### Bibliography   Allen, K., 2018. Lies, damned lies and ESG rating methodologies. Financial Times. Barbara Buchner, Baysa Naran, Pedro Fernandes, Rajashree Padmanabhi, Paul Rosane, Matthew Solomon, Sean Stout, Costanza Strinati, Rowena Tolentino, Githungo Wakaba,, Yaxin Zhu, Chavi Meattle, Sandra Guzmán., 2021. Global Landscape of Climate Finance 2021. Climate Policy Initiative. Buchetti, B., Arduino, F.R., De Vito, A., 2022. A Systematic Literature Review on Corporate Governance and ESG research: Trends and Future directions. <https://doi.org/10.2139/ssrn.4286866> CEN-CENELEC, 2020. Road Map on Artificial Intelligence (AI). CEN-CENELEC. Chen, P., 2022. Leveraging finance to increase resilience to GCRs. URL <https://forum.effectivealtruism.org/posts/fHfuoGZc5hqfYTwMH/leveraging-finance-to-increase-resilience-to-gcrs> (accessed 5.14.23). Christiano, P., 2019. Analyzing divestment. The sideways view. URL <https://sideways-view.com/2019/05/25/analyzing-divestment/> (accessed 6.3.23). European Commission, 2021. Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS, 52021PC0206. European Commission, 2020. The Assessment List for Trustworthy AI (ALTAI) for self assessment (No. KK-02-20-479-EN-C). European Commission, Brussels. Fischer, D., 1997. History of the international atomic energy agency. The first forty years. IAEA, Vienna. Floridi, L., Holweg, M., Taddeo, M., Amaya Silva, J., Mökander, J., Wen, Y., 2022. capAI - A Procedure for Conducting Conformity Assessment of AI Systems in Line with the EU Artificial Intelligence Act. <https://doi.org/10.2139/ssrn.4064091> Harris, J., 2021. A Framework for Investing with Altruism. <https://doi.org/10.2139/ssrn.3934090> Harris, J., n.d. Total Portfolio Project [WWW Document]. URL <https://www.total-portfolio.org/> (accessed 5.31.23). Hilton, B., 2022. Climate change - Problem profile - EA Forum. URL <https://forum.effectivealtruism.org/posts/DmshhvanTb9wSh5x6/climate-change-problem-profile#Neglectedness__> (accessed 5.23.23). Joshi, S., 2021. The $100trn opportunity: ESG investing should be a top priority for EA careers - EA Forum. URL <https://forum.effectivealtruism.org/posts/4vRdt9Z9LsmaP7dHY/the-usd100trn-opportunity-esg-investing-should-be-a-top> (accessed 5.23.23). Joshi, S., n.d. This innovative finance concept might go a long way to solving the world’s biggest problems. URL <https://forum.effectivealtruism.org/posts/ZCugsfAfZiYuQ8wfA/this-innovative-finance-concept-might-go-a-long-way-to> (accessed 5.10.23). NIST, 2023. Artificial Intelligence Risk Management Framework (AI RMF 1.0) (No. NIST AI 100-1). National Institute of Standards and Technology, Gaithersburg. Pérez, L., Hunt, V., Samandari, H., Nuttall, R., Biniek, K., n.d. Does ESG really matter— and why? McKinsey. Prall, K., 2021. ESG Ratings: Navigating Through the Haze. CFA Institute Enterprising Investor. 
URL <https://blogs.cfainstitute.org/investor/2021/08/10/esg-ratings-navigating-through-the-haze/> (accessed 5.10.23). Quigley, E., 2019. Universal Ownership in the Anthropocene. <https://doi.org/10.2139/ssrn.3457205> Raghunandan, A., Rajgopal, S., 2022. Do ESG Funds Make Stakeholder-Friendly Investments? <https://doi.org/10.2139/ssrn.3826357> Rockwood, L., 2013. Legal framework for IAEA safeguards. Roy, A., Cohen, B., Scholz-Bright, R., Skinner, R., Davison, W., 2022. Litigation Risks Posed by “Greenwashing” Claims for ESG Funds. The Harvard Law School Forum on Corporate Governance. URL <https://corpgov.law.harvard.edu/2022/04/25/litigation-risks-posed-by-greenwashing-claims-for-esg-funds/> (accessed 6.3.23). Schmidt, A.B., Zhang, X., 2021. Optimal ESG Portfolios: Which ESG Ratings to Use? <https://doi.org/10.2139/ssrn.3859674> Spierings, M., 2022. Linking Executive Compensation to ESG Performance. The Harvard Law School Forum on Corporate Governance. URL <https://corpgov.law.harvard.edu/2022/11/27/linking-executive-compensation-to-esg-performance/> (accessed 5.24.23). Versace, C., Abssy, M., 2022. How Millennials and Gen Z Are Driving Growth Behind ESG [WWW Document]. URL <https://www.nasdaq.com/articles/how-millennials-and-gen-z-are-driving-growth-behind-esg> (accessed 6.3.23).
8727cc70-b408-4692-a92e-bbc62dd0821c
trentmkelly/LessWrong-43k
LessWrong
Beta Readers are Great Back in January, I posted a call for "beta readers": people who read early drafts of my posts and give honest feedback. The beta readers I picked up that way are one of my favorite things about having started Cold Takes. Basically, one of my goals with Cold Takes has been to explain my weirdest views clearly, but it's hard to write clearly without detailed feedback on where I'm making sense and where I'm not. I have lots of preconceptions and assumptions that I don't naturally notice. And writing a blog alone doesn't get me that feedback, because: * Most people don't want to explain how they experienced a piece - if they aren't enjoying it, they just want to click away. * And the people who do want to help me out (e.g., friends and colleagues) aren't necessarily going to be honest enough, or representative enough of my target audience (which is basically "People who are interested in my topics but don't already have a ton of background on them"). I've tried a bunch of things to find good beta readers, from recruiting friends of friends (worked well for a bit, but I've written a lot of posts and it was hard to get sustained participation) to paying Mechanical Turk workers to give feedback (some was good, but in general they were uninterested in my weird topics and rushed through the readings and the feedback as fast they could). The people who came in through the recruiting call in January have been just what I wanted: they're interested in the topics of Cold Takes, but they don't already know me and my thoughts on them, and they give impressively detailed, thoughtful feedback on their reactions to pieces - often a wonderful combination of "intelligent" and "honest that a lot of the stuff I was saying confused the hell out of them." Getting that kind of feedback has been a privilege. So: THANK YOU to the following beta readers, each of whom has submitted at least 3 thoughtful reviews (and gave permission to be listed here): Lars Axelsson Jeremy Campbell Ka
8c9d11ad-2218-459f-9e2d-0b789180332e
trentmkelly/LessWrong-43k
LessWrong
[LINK] Terry Pratchett is dead BBC article I'm sure I'm not the only one who greatly admired him. The theme of his stories was progress; they were set in a fantasy world, it's true, but one that was frequently a direct analogy to our own past, and where the golden age was always right now. The recent books made this ever more obvious. We have lost a great man today, but it's the way he died that makes me uncomfortable. Terry Pratchett had early-onset Alzheimer's, and while I doubt it would have mattered, he couldn't have chosen cryonics even if he wanted to. He campaigned for voluntary euthanasia in cases like his. I will refrain from speculating on whether his unexpected death was wholly natural; whether it was or wasn't, I can't see this having a better outcome. In short... There is, for each of us, a one-ninth chance of developing Alzheimer's if we live long enough. Many of us may have relatives that are already showing signs, and in the current regime these relatives cannot be cryonically stored even if they wish to try; by the time they die, there will be little purpose in doing so. For cryonics to help for neurodegenerative disorders, it needs to be applied before they become fatal. Is there anything we can do to change that? Are there countries in which that generalisation is false?
58c4f0d1-5df0-44a2-bf89-3e5d783352a3
trentmkelly/LessWrong-43k
LessWrong
Universal basic income isn’t always AGI-proof A universal basic income (UBI) is often presented as a public insurance against large-scale and potentially permanent technological unemployment. Many Silicon Valley leaders that believe in the transformative economic potential of artificial general intelligence have also voiced their support for UBI (e.g. Sam Altman, Elon Musk). UBI can be a part of the policy tools to address long-term technological unemployment. However, it is worth highlighting that a UBI to address long-term technological unemployment is more expensive than current UBI proposals and that it is not a sustainable solution to finance an insurance against widespread loss of labor income by taxing labor income: * A post-labor UBI is expensive because it would not merely supplement labor income, but fully replace it. Additionally, the more affordable alternative of a guaranteed minimum income would approach the cost of a regular UBI in a post-labor scenario. * To explore the challenge of financing a UBI in a scenario of large-scale technological unemployment we will examine the case study of Switzerland. The Swiss were the first worldwide to vote on a nationwide and fairly generous UBI in 2016. In short, distributing money through a UBI can be a solution to a lack of labor income. However, the hard part is designing income streams that grow in lockstep with the parts of the economy that grow in a scenario of high automation and accordingly can finance a rising demand for UBI or other forms of social security. 1. UBI for technological unemployment is expensive 1.1 A guaranteed minimum income is cheaper than UBI, but would approach the cost of UBI in a post-labor economy In the public discourse, the terms UBI and guaranteed minimum income are often used interchangeably. However, they denote different concepts and many famous “UBI proposals” and “UBI trials” are actually guaranteed minimum income proposals and trials. The main reason for this is that guaranteed minimum income is much cheaper to
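The point that a guaranteed minimum income converges on the cost of a full UBI once labor income disappears can be made concrete with a toy calculation. The population size, income floor, and income figures below are purely hypothetical — they are not Swiss data and not numbers from this post.

```python
def gmi_cost(incomes: list[float], floor: float) -> float:
    """Guaranteed minimum income: top every person up to the floor."""
    return sum(max(0.0, floor - income) for income in incomes)

def ubi_cost(n_people: int, floor: float) -> float:
    """Universal basic income: pay the floor to everyone, regardless of income."""
    return n_people * floor

# Toy population of 5 people, income floor of 2000/month (hypothetical units).
floor = 2000.0
today = [0.0, 1500.0, 3000.0, 5000.0, 8000.0]   # most people still earn labor income
post_labor = [0.0, 0.0, 0.0, 0.0, 0.0]          # large-scale technological unemployment

print(gmi_cost(today, floor))        # 2500.0  -> much cheaper than a UBI
print(gmi_cost(post_labor, floor))   # 10000.0 -> identical to the UBI cost
print(ubi_cost(len(today), floor))   # 10000.0
```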
33d7bc08-8491-404e-811f-118387c1b1b2
trentmkelly/LessWrong-43k
LessWrong
Weekly LW Meetups This summary was posted to LW Main on July 29th. The following week's summary is here. New meetups (or meetups with a hiatus of more than a year) are happening in: * Bay City Meetup: 19 August 2016 01:25PM Irregularly scheduled Less Wrong meetups are taking place in: * Baltimore Area Weekly Meetup: 31 July 2016 08:00PM * European Community Weekend: 02 September 2016 03:35PM * [Gen Con/Indianapolis] Gen Con: Applied Game Theory: 06 August 2016 02:00PM * New Hampshire Meetup: 09 August 2016 07:00PM * San Antonio Meetup: 31 July 2016 02:00PM The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup: * [Austin] Welcome Scott Aaronson to Texas: 13 August 2016 06:00PM * San Francisco Meetup: Fun and Games: 01 August 2016 06:15PM * San Jose Meetup: Park Day (VI): 31 July 2016 03:00PM * Sydney Rationality Dojo - August 2016: 07 August 2016 04:00PM * Sydney Rationality Dojo - September 2016: 04 September 2016 04:00PM * Sydney Rationality Dojo - October 2016: 02 October 2016 04:00PM * Washington, D.C.: Visiting Museums: 31 July 2016 03:30PM Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.   If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun! In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front p
493f7386-dd19-47f8-946c-ae25c49b1583
StampyAI/alignment-research-dataset/lesswrong
LessWrong
all claw, no world — and other thoughts on the universal distribution the post [*anthropics and the universal distribution*](https://www.lesswrong.com/posts/Hcc9fopx7sRexYhhi/anthropics-and-the-universal-distribution/) (recommended dependencies: [1](https://www.lesswrong.com/posts/RnrpkgSY8zW5ArqPf/sia-greater-than-ssa-part-1-learning-from-the-fact-that-you), [2](https://www.lesswrong.com/posts/GJdymoviRywpBMXqc/sia-greater-than-ssa-part-2-telekinesis-reference-classes), [3](https://www.lesswrong.com/posts/QHDqfpMbb43JDbrxN/sia-greater-than-ssa-part-3-an-aside-on-betting-in), [4](https://www.lesswrong.com/posts/d693Mc4ZDyhkj7wpc/sia-greater-than-ssa-part-4-in-defense-of-the-presumptuous), [5](https://www.lesswrong.com/posts/XiWKmFkpGbDTcsSu4/on-the-universal-distribution)) tries to unify anthropics with the notion of a [universal distribution](https://www.lesswrong.com/posts/XiWKmFkpGbDTcsSu4/on-the-universal-distribution) (whether that be [solomonoff prior](https://www.lesswrong.com/posts/EL4HNa92Z95FKL9R2/a-semitechnical-introductory-dialogue-on-solomonoff-1) or what i'll call the ["levin prior"](http://www.scholarpedia.org/article/Universal_search)) by splitting hypotheses about a reasoner's location among the set of possible worlds as a "world and claw" pair. the "world" part is the hypothesis program as to what world you inhabit, as opposed to counterfactual worlds, and the "claw" part is a program that locates you within that world. i've proposed [before](https://carado.moe/udassa-time-steps.html) to stick to just a [universal program](https://carado.moe/universal-complete.html) as world hypothesis. that is, the "world" is a fixed program, and all of the complexity is in figuring out the "claw" — epistemology, the work of finding out how stuff around you works, becomes *all claw, no world*. in this post, i expand on this view, and explore some ramifications, notably for [formal aligned AI](https://www.lesswrong.com/posts/qeRqmdadsdj8Frvyn/a-rough-sketch-of-formal-aligned-ai-using-qaci) design. one consequence of doing this is that epistemology becomes *all location, no counterfactuals* — nothing is ever ruled out, all programs are considered instantiated in the same *qualitative* sense. the following space-time-realities are all real in the same *qualitative* way ([though not necessarily to the same *quantitative* degree!](https://carado.moe/ethic-juice-anthropic-juice.html)): * where you are now, but on a different day. * a different country. * the many-worlds everett branch where an electron you just measured has a different spin. * this world except the moon is made of cheese. * [rule 30 starting with a single living cell](https://en.wikipedia.org/wiki/Rule_30) * middle earth from lord of the rings. that one might be stretching it depending on your interpretations of things like magic, but the universal distribution is capable of a lot of stuff. (see also: [*the ultimate meta mega crossover*](https://carado.moe/spoiler-fire-upon-deep.html)) if you ignore [acausal weirdness](https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past), these worlds are all causally separate from ours. we didn't make lord of the rings real — we just wrote a bunch of text, and there happens to be a world out there, real in the same way as ours, that we'd consider to be accurately described by that text. but like a [library of babel](https://en.wikipedia.org/wiki/The_Library_of_Babel) of worlds, all other variants that we *don't* describe are also real. 
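for concreteness, the standard formalism usually meant by "universal distribution" and by the world-and-claw weighting — stated here in the textbook form, which may differ from the exact notation of the linked posts (and which glosses over the solomonoff-vs-levin distinction):

```latex
% universal (Solomonoff-style) prior over strings x, for a prefix universal machine U:
\[
  M(x) \;=\; \sum_{p \,:\, U(p)\ \text{begins with}\ x} 2^{-|p|}
\]
% UDASSA-style "world and claw" weight of an observer-moment: a world program w
% plus a claw program c that extracts the observer from U(w) jointly get
\[
  m(w, c) \;\propto\; 2^{-(|w| + |c|)}
\]
% "all claw, no world" fixes w to a single universal program once and for all,
% so only the claw length |c| does any work in the weighting.
```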
the only thing we *can* do to affect which worlds get more [juice](https://carado.moe/ethic-juice-anthropic-juice.html) than others is choosing to compute some of them but not others. and our making decisions in this world and affecting the way it continues to run "normally" is just a particular case of this, just like *someone continuing to live* is a particular case of [*a sequence of moral-patient-instants each causing a similar-to-themselves moral-patient-instant to get instantiated in a world like theirs*](https://www.lesswrong.com/posts/QAaNmou8c3XZKKKHf/existential-self-determination). and, just like acausal/anthropic stuff doesn't save us ([1](https://www.alignmentforum.org/posts/RhAxxPXrkcEaNArnd/notes-on-can-you-control-the-past), [2](https://www.lesswrong.com/posts/EKu66pFKDHFYPaZ6q/the-hero-with-a-thousand-chances), [3](https://www.lesswrong.com/posts/rP66bz34crvDudzcJ/decision-theory-does-not-imply-that-we-get-nice)), it turns out that despite any ethical implications that *the cosmos being a universal program* might have, the [expected utility](https://www.lesswrong.com/posts/7J3ywHzWnghRtdpHQ/on-expected-utility-part-1-skyscrapers-and-madmen) probly plays out about the same, just like it probly plays out about the same under most interpretations of quantum mechanics, regardless of whether other everett branches are real. these things might get you to *care* differently, but mostly not in any way you can do anything about (unless something is looking at us from outside a computation of us and cares about the way we care about things, but it'd be hard to reason about that). there are nice consequences for decision theory, however: [functional decision theory (FDT)](https://www.lesswrong.com/tag/functional-decision-theory), which wants you to cooperate with other instances of FDT not just across spacetime and everett branches but also across counterfactual worlds, might become simpler when you "flatten" the set of counterfactual worlds to be the same kind of thing as the set of spacetime locations and the set of everett branches. nevertheless, [some things are realer than others](https://carado.moe/limiting-real-universes.html). so, what *is* the measure of [realness juice/amplitude](https://carado.moe/ethic-juice-anthropic-juice.html), which any living person right now probly has more of than gandalf? i feel like it *ought* to be something to do with ["time steps"](https://carado.moe/udassa-time-steps.html) in the universal program, because it doesn't feel like there could be any other measure which wouldn't eventually just become a more complex version of time steps. the reason there's more of me than gandalf in the universal program, even though it *eventually* contains about as many me's as it contains (variations on a particular as-detailed-as-me interpretation of) gandalf's (whether that quantity is infinite or [not](https://carado.moe/finite-patients.html)), is that the me's tend to occur *before* the gandalf's — or, to be more formal, at sets of timesteps that are *earlier in the universal program or more compressible* than gandalf's. or, more testably: the reason i see coherent stuff rather than noise when i look at a monitor, even though there are more ways to arrange pixels on a monitor that i'd interpret as noise than as coherent stuff, is that the instances of me seeing coherent stuff must tend to occur *at sets of timesteps that are earlier or more compressible* than those of instances of me seeing noise. 
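one guess at how the "earlier or more compressible sets of time steps" criterion could be formalized — this is a reconstruction, not the author's definition, and K below is ordinary kolmogorov complexity:

```latex
% let S be the set of time steps of the universal program at which a given
% observer-moment gets computed; weight that observer-moment by how easy S is to describe:
\[
  \mathrm{weight}(S) \;\approx\; 2^{-K(S)}
\]
% early sets (small indices) and regular sets (e.g. arithmetic progressions) have
% low K(S) and hence high weight; late, random-looking sets have high K(S),
% which is why coherent observer-moments should dominate noise-observing ones.
```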
given this quantitative measure, can we re-capture a qualitative notion of "is this real or not"? this is where computational complexity can come in to help us. the excellent [*why philosophers should care about computational complexity*](https://arxiv.org/abs/1108.1791) argues that things computable in polynomial time are, in a meaningful sense, *essentially* easier than things only computable in exponential time. if we apply this to time steps, and if it is *earlier sets of time steps* rather than *more compressible sets of time steps* which counts, then our world is real and lord of the rings (assuming it is a polynomial world) can be said to be real, in a sense that worlds whose physics require solving NP-complete problems or [PSPACE problems](https://en.wikipedia.org/wiki/Closed_timelike_curve#Consequences) to progress, can *not* be said to be real. but i suspect that this doesn't actually track observation that much, because worlds in which [people get mind-controlled](https://www.lesswrong.com/posts/r7f58E8A85xLgWuqG/contact-with-reality) into believing NP-complete problems are being solved are probly polynomial themselves (though less common than worlds without such mind control, i'd expect). note that this does make *our* world weird, because we [seem](https://en.wikipedia.org/wiki/Quantum_supremacy#Progress_in_the_21st_century) to be able to solve [BQP](https://en.wikipedia.org/wiki/BQP) computations. maybe BQP=BPP, or maybe the cosmos runs on a *quantum* solomonoff prior? or maybe, despite how unintutive that feels, it takes this kind of physics for [anthropic reasoning](https://carado.moe/anthropic-reasoning-coordination.html) to occur? or maybe i'm [being mind-controlled](https://www.lesswrong.com/posts/r7f58E8A85xLgWuqG/contact-with-reality) or [fictional](https://www.lesswrong.com/posts/9aozKBpBe8XqJZZ3q/some-simulation-hypotheses) or who knows what else. there are now two issues that arise to make a universal program prior usable, even theoretically: * in a sense, a consequence of this is that the [UDASSA](https://www.lesswrong.com/posts/Hcc9fopx7sRexYhhi/anthropics-and-the-universal-distribution/) notion of "first simulated a world, then extract me" can in general be flattened into "just simulate me" — which also captures more intentional simulations such as those of ["solomonoff deism"](https://carado.moe/solomonoff-deism.html). but what *is* a me? what does an extracted me look like? it can't be just be *any* representation of my observations, because otherwise i'd be observing a blank monitor rather than one with *some* contents on it. to take a concrete example, let's say i'm chatting with someone who starts the sentence "I'm from…", and i'm trying to predict whether the next word they'll say — call it ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gXbLCEuhDRN8EBKjZ/sv7xrain2x0fojcdkfwn) — is more likely to be "California" or "[Nuuk](https://en.wikipedia.org/wiki/Nuuk)". the comparison can't just be ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gXbLCEuhDRN8EBKjZ/ktfbmbz7ypesxsawpiju) (with ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gXbLCEuhDRN8EBKjZ/qcybbnc7t8dnpqq6n0xy) a [simplicity measure](https://www.lesswrong.com/posts/qeRqmdadsdj8Frvyn/a-rough-sketch-of-formal-aligned-ai-using-qaci)), because this probly favors "Nuuk" even though in practice i'd expect to hear "California" a lot more. 
it feels like what we're heading for at this point would be some kind of ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gXbLCEuhDRN8EBKjZ/ykmtrma8z73px3zxydyc) function, where ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gXbLCEuhDRN8EBKjZ/nzhrbfrpqoe9zm8wnqxk) is the observation in a format that makes sense to me (english text strings), and ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gXbLCEuhDRN8EBKjZ/v0a9xz5osnbckef0x63j) is some kind of "prior" relating those to contexts in which they'd be percieved. programs that produce "I'm from Nuuk" have higher amplitude than programs that produce "I'm from California", but programs that produce *me observing* "I'm from California" have higher amplitude than programs that produce *me observing* "I'm from Nuuk". * let's say we're launching an attempt at an aligned AI based on [QACI](https://www.lesswrong.com/posts/4RrLiboiGGKfsanMF/the-qaci-alignment-plan-table-of-contents) based on [this post](https://www.lesswrong.com/posts/qeRqmdadsdj8Frvyn/a-rough-sketch-of-formal-aligned-ai-using-qaci) with a given (question, answer, observation) tuple. if the AI simply fills the future with question-answer intervals engineered so that they'd dominate most of the solomonoff-space of programs, then it can hijack its own decision process. in a sense, this is just a special case of [demons in the solomonoff prior](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign). which is a neat simplification of alignment! by "flattening" counterfactual worlds and everett branches to be the same kind of thing as objects that are distant in spacetime, we've managed to describe the alignment problem in a way that captures counterfactual adverserial agents ("demons") and factual future unaligned AIs in the same category. now we just need a solution that takes care of both. i feel like extracting a notion of causality within the universal program, one that would let us determine that: * stuff outside our past lightcone can't causate onto us yet * decohered everett branches don't causate onto one another * two different simulations of conway's game of life on my computer don't causate on one another would be useful here — though it might need to be able to measure "non-strict" probabilistic causation when needed. we can't just base this on time, because in any [universal program](https://carado.moe/universal-complete.html) that is sequentially implemented (such as a turing machine), different implementations of our world will have different events occur at different points in time. using a parallel model of computation such as graph rewriting might shed *some* light on which phenomena causate each other and which are being computed in parallel in a causally isolated manner, but it would miss some others: as an extreme example, a [homomorphically encrypted simulation](https://www.lesswrong.com/posts/EYiCYNKqyMkPKyppB/ethics-and-anthropics-of-homomorphically-encrypted) of our world would make its internal causal graph unobservable to the outside, even though there's still real causal independences going on *inside that world*. so sticking to the simple and sequential paradigm of turing machines will force us to develop more clever but more general notions of causal dependence. 
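the formula images in the "California" vs "Nuuk" passage above did not survive extraction, so here is one plausible reading of the contrast being drawn there — treat the notation as a reconstruction, not the original:

```latex
% naive proposal (rejected in the text): score a candidate next observation o
% by the simplicity of o as a bare string,
\[
  \mathrm{score}_{\mathrm{naive}}(o) \;\propto\; 2^{-K(o)},
\]
% versus the claw-weighted alternative: sum over claw programs c that locate
% "me observing o" inside the fixed universal program,
\[
  \mathrm{score}_{\mathrm{claw}}(o) \;\propto\; \sum_{c \,:\, c\ \text{extracts me observing}\ o} 2^{-|c|}.
\]
% under the first measure the completion with the shorter description can win no
% matter how rarely it comes up in conversations like this one; under the second,
% completions common in situations like mine ("California") outweigh rarer ones ("Nuuk").
```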
next, whatever measure we build, if we weren't dealing with adversarial intelligences we could just do a big sum weighed by simplicity and hope that the signal from the things we care about wins out, as with something like a sum over the "claws", weighing the value of the candidate action in each possible claw-situation by some simplicity-derived factor. but because we're dealing with (potentially superintelligent!) adversarial agents, we have to make *really sure* that the *undesired* results from whichever value function we use to weigh actions are drowned out by sufficiently low weights, so that the overall signal that determines the chosen action comes from the *desired* claw-situations. as an example: in [my attempt](https://www.lesswrong.com/posts/qeRqmdadsdj8Frvyn/a-rough-sketch-of-formal-aligned-ai-using-qaci) at formalizing [QACI](https://www.lesswrong.com/posts/4RrLiboiGGKfsanMF/the-qaci-alignment-plan-table-of-contents), we want the weights of carvings that capture the human involved in the original question-answer interval to sufficiently outweigh the weights of the AI filling the future with adversarially-answering "fake" question-answer intervals that would allow its earlier (as well as remote/counterfactual) selves to find actions that make its job easier.

so, what could a causality relationship look like? one difficulty is that one change in one world could end up modifying *pretty much everything everywhere*, but not in a way that "really matters". for example: maybe if one world does some operation rather than another, all the other worlds end up being computed in the same way, but all shifted by one extra time step into the future.
this is where the *computer science* notions of [simulation](https://en.wikipedia.org/wiki/Simulation_%28computer_science%29) and [bisimulation](https://en.wikipedia.org/wiki/Bisimulation) (which aren't usually quite what i mean by those words, but it's related) might come in, which i intend to learn about next; though i wouldn't be surprised if such a measure might be hacked together just out of kolmogorov complexity again, or something like it. as a final note on the universal distribution: i've recently learned that theoretical turing machines augmented with "halting oracles" [give rise to interesting computational classes](https://en.wikipedia.org/wiki/Hyperarithmetical_theory), which in particular let those turing machines do [hypercomputation](https://en.wikipedia.org/wiki/Hypercomputation) in the form of obtaining results that should require an infinite amount of computation, in a finite number of steps. this might enable us to build a universal prior which captures something closer to the full [tegmark level 4 mathematical multiverse](https://space.mit.edu/home/tegmark/crazy.html). though it's not clear to me whether that's actually desired; what would it actually *mean* to inhabit a hypercomputational multiverse? if the halting oracle runs an infinite amount of moral patients in a finite amount of steps, how the hell does the [anthropic and ethics juice](https://carado.moe/ethic-juice-anthropic-juice.html) work out? i'll be sticking to the *less* uncomputable, regular solomonoff or even levin prior for now, but this question might be worthy of further consideration, unless we get the kind of aligned AI that doesn't require us to figure this out up front.
650e9a77-54ea-4bd4-84f7-ebc37a01203f
trentmkelly/LessWrong-43k
LessWrong
Meetup : Washington, D.C.: Fun & Games Discussion article for the meetup : Washington, D.C.: Fun & Games WHEN: 01 November 2015 04:00:00PM (-0400) WHERE: National Portrait Gallery Crossposted from mailing list. Meeting in courtyard, gathering from 3:00 to 3:30 to start at 3:30. We'll be meeting to hang out, play games, converse, and any combination thereof. If you would like to play a specific game, especially a long one or one with many players, feel free to send messages to the list to organize - they kick us out a little before 7 p.m. Upcoming meetups: * Nov. 8: Book Swap * Nov. 15: Why Ad Hominem Works * Nov. 22: Fun & Games Discussion article for the meetup : Washington, D.C.: Fun & Games
f2356049-d316-4dcb-9001-526f68c1f365
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post3917

A putative new idea for AI control; index here. This post summarises the methods for getting honest answers from certain AIs, presented in the previous two posts. You can basically safely ask an AI for expected utility estimates (including conditional ones), probability estimates, and get the AI to identify events of highest and lowest probability and utility, and, arguably, highest and lowest conditional utility.

The setup is a boxed AI, which emits a message $M$, and a stochastic process $E$ that would erase $M$ with low probability, before anyone could see it. There is also a stochastic process that emits a message $Y$. Let $v_E$ be the indicator function for $E$ and $v_y$ the indicator function for $Y=y$.

Then, given a utility $u$ bounded between $0$ and $1$, and a specific $y$, we can give the AI a utility $u^\#$ that will cause it to give us the message $M_1=\{q, q_y, p_y, q_{|y}\}$ (which we will see with high probability $P(\neg E)$, an event the AI will not optimise for). Define $f(q,u)=-q^2+2qu$. The utility is then

$u^\#_1 = v_E\big(f(q,u) + f(q_y, u v_y) + f(p_y, v_y) + f(q_{|y}, u)\,v_y\big).$

Maximising this $u^\#$ ensures that, in the AI's best estimate, $q=E(u)$, $q_y=E(u\wedge(Y=y))$, $p_y=P(Y=y)$, and $q_{|y}=q_y/p_y=E(u\mid Y=y)$.

But we need not consider any specific $y$. The AI could send out the message $M_2=\{y^+, y^-, y_p^+, y_p^-, q^+, q^-, p^+, p^-\}$, while maximising the utility

$u^\#_2 = v_E\big(f(q^+, u v_{y^+}) + f(q^-, (1-u)v_{y^-}) + f(p^+, v_{y_p^+}) + f(p^-, 1-v_{y_p^-})\big).$

Then $M$ will be such that, in the AI's best estimate, $(y^+, q^+) = (\operatorname{argmax}_y, \max_y)\, E(u\wedge(Y=y))$, $(y^-, q^-) = (\operatorname{argmin}_y, \min_y)\, E(u\wedge(Y=y))$, $(y_p^+, p^+) = (\operatorname{argmax}_y, \max_y)\, P(Y=y)$, and $(y_p^-, p^-) = (\operatorname{argmin}_y, \min_y)\, P(Y=y)$.

Finally, define $M_{3,r}=\{y_{|+}, y_{|-}, q_{|+}, q_{|-}\}$. And set $f_r(q,u)=-(r-1)q^r + r q^{r-1} u$, and define the utility function

$u^\#_{3,r} = v_E\, v_{y_{|+}} f_r(q_{|+}, u) + v_E\, v_{y_{|-}} f_r(q_{|-}, 1-u).$

Then, for large enough $r$, the AI will output $M$ so that, according to its best estimate, $(y_{|+}, q_{|+}) = (\operatorname{argmax}_y, \max_y)\, E(u\mid Y=y)$ and $(y_{|-}, q_{|-}) = (\operatorname{argmin}_y, \min_y)\, E(u\mid Y=y)$. Unfortunately the value of $r$ cannot be known ahead of time. It might be possible to define a "limit" utility $u^\#_{3,\infty}$.

Note that all these messages and utilities are independent, so the AI can generate the messages $M_1\cup M_2\cup M_{3,r}\cup M_{3,r'}$ when maximising $u^\#_1+u^\#_2+u^\#_{3,r}+u^\#_{3,r'}$. But there are issues with very low probabilities, as explained in the previous post.
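As a quick sanity check (not part of the original summary), the following sketch verifies numerically that maximising the expectation of $f(q,u)=-q^2+2qu$ over $q$ recovers $q=E(u)$, which is what makes the construction elicit honest expected-utility estimates:

```python
# minimal numeric check (not from the original post): for f(q, u) = -q^2 + 2*q*u,
# the q that maximizes E[f(q, u)] is exactly E[u], so an agent rewarded with f is
# incentivised to report its true expected utility.
import numpy as np

rng = np.random.default_rng(0)
u_samples = rng.uniform(0.0, 1.0, size=100_000)  # utility u bounded in [0, 1]

def expected_score(q: float, u: np.ndarray) -> float:
    return float(np.mean(-q**2 + 2 * q * u))

qs = np.linspace(0.0, 1.0, 1001)
best_q = qs[np.argmax([expected_score(q, u_samples) for q in qs])]

print("E[u]          =", u_samples.mean())
print("argmax_q E[f] =", best_q)  # matches E[u] up to grid resolution
# analytically: d/dq E[-q^2 + 2qu] = -2q + 2 E[u] = 0  =>  q = E[u]
```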
4647b3d2-d02a-44c8-b83e-514702cd1f08
trentmkelly/LessWrong-43k
LessWrong
Efficiency as a 2-place word   Back in the day, Eliezer spoke of 2-place words: > I have previously spoken of the ancient, pulp-era magazine covers that showed a bug-eyed monster carrying off a girl in a torn dress; and about how people think as if sexiness is an inherent property of a sexy entity, without dependence on the admirer. > > "Of course the bug-eyed monster will prefer human females to its own kind," says the artist (who we'll call Fred); "it can see that human females have soft, pleasant skin instead of slimy scales.  It may be an alien, but it's not stupid—why are you expecting it to make such a basic mistake about sexiness?" > > What is Fred's error?  It is treating a function of 2 arguments ("2-place function"): > > > Sexiness: Admirer, Entity—> [0, ∞) > > As though it were a function of 1 argument ("1-place function"): > > > Sexiness: Entity—> [0, ∞) I think that this is such a great distinction, 1-place words vs 2-place words. And more generally, to think about how many "arguments" a word "accepts".[1] Next Friday I ran into this yesterday. I was hanging out with a friend who asked if I'm interested in coming to any of his poker nights on Fridays. I was like, "let me see, next Friday I'll be away for..." and then I caught myself. What exactly does "next Friday" mean? This happened on Saturday 3/29/25. I was trying to refer to Friday, 4/4/25. But I was using "next" as a 1-place word when really, I think it is a 2-place word. One argument is the entity: in this case "Friday". But I think there is a second argument that answers the question: "next relative to what?". * Some people think of 4/4/25 as "this Friday", and so "next Friday" would be the Friday that comes "next" relative to "this Friday". In which case "next Friday" refers to 4/11/25. * Other people think of 4/4/25 as "next Friday" because it comes next relative to the most recent Friday (3/28/25). * And then there are other people who have more complicated rules. Like on 3/29/25 they'd think of 4/4/25 as "
bea372e6-6922-4f60-8c8f-e43bece0de6b
trentmkelly/LessWrong-43k
LessWrong
The MVO and The MVP We live in an age of unprecedented wealth and persistent dissatisfaction. Our capacity to create material abundance has never been greater, yet something essential seems forever just out of our reach. We’ve built an entire world to maximize profit, we’ve blinded ourselves to our inherent worth. Our understanding of Value is Profit’s shallow dialect, leaving us incapable of properly grounding and articulating our desires; and as the contradictions of our profit-driven world become blindingly apparent, we must develop our capacities more urgently than ever. Our individual flourishing and the direction of our collective futures hang in the balance. What follows is my modest contribution on the topic; distilled to its essence: the future belongs not to those who take the most, but those who give the most while taking just enough. The MVO (Most Valuable Object) The MVO is our fully lived human lives. It’s our joy, our laughter and dancing. It’s our flourishing, in principle and in reality. It needs no justification beyond itself. Ralph Waldo Emerson writes: “No law can be sacred to me but that of my nature…the only right is what is after my constitution, the only wrong what is against it.” The MVO is the root of everything that appears valuable, every object and action. It’s the ultimate end of all economic activity. It’s the first object we must work to sustain, so it anchors everything else we recognize as valuable. It’s also the only thing we’re left with before our death, so it contains the totality of everything which is valuable to us. The MVO is finite and bounded, intrinsically scarce and non-renewable. A finite living agent can only express a finite number of first-order desires (needs and wants)[1]: * A finite living agent can express a finite number of first-order needs. These are expressions of mere survival. Any living agent must eat and drink, but will only ever eat and drink so much. A bug that lives for a day and a world-devouring demigod that liv
855bafdf-dc23-4460-860e-4b01515bddc7
trentmkelly/LessWrong-43k
LessWrong
My Advice for Incoming SERI MATS Scholars I have participated in SERI MATS 2.0 in John's stream. Here is some advice based on my experience. Be Nice  The AI alignment community is pretty small. If you are an ass, everybody will know that you are an ass. The same holds to a lesser extent for being nice. When I was visiting Edinburgh to attend a talk by David Krueger, there were several people there, that I had first met at Lightcone. When I was visiting Trajan House, the same thing happened. You never know when you might be talking to a grantmaker over dinner. Epistemic status: I did not actually behave like an ass. I expect this to be true, based on how many people I ran into that I've seen before, in different parts of the world. Use Lunch and Dinner at Lightcone  During MATS 2.0 lunch and dinner were both served at Lightcone every day of the week. There were always many cool people around, and the conversations were unusually insightful. My favorite heuristic is to just join whatever conversation John is in. I am pretty sure that at least 15% of the value of SERI MATS came from eating lunch and dinner at Lightcone. Probably much more than that. Epistemic status: It feels like this was very useful, but it is hard to quantify. Take care of yourself At the beginning of SERI MATS, there were many social events (mostly just general Berkeley EA/Rationalist events). They were all happening pretty late. For some reason, I need to sleep 10:30 to 12:00 hours every day or I will be tired. My team was meeting at 10:00 every day. For the first 3 weeks, I was basically sleep-deprived almost every day. John's workshops are pretty great, and being sleep-deprived during them destroyed probably more than 20% of the value. That being said, at least one of the socials was high-value, and it was probably worth the cost. The worst thing was that I got used to being sleep-deprived. I sleep-deprived myself, even when there were no socials happening. I made similar mistakes with doing sports and eating healthily. Somehow
0f1240e5-e628-4f9e-ac09-8752f6d2b892
StampyAI/alignment-research-dataset/arxiv
Arxiv
Uncertainty-Based Out-of-Distribution Classification in Deep Reinforcement Learning 1 INTRODUCTION --------------- One of the main impediments to the deployment of autonomous machine learning systems in the real world is the difficulty to show that the system will continue to reliably execute beneficial actions in all the situations it encounters in production use. One of the possible reasons for failure is so called out-of-distribution (OOD) data, i.e. data which deviates substantially from the data encountered during training. As the fundamental problem of limited training data seems unsolvable for most cases, especially in sequential decision making tasks like reinforcement learning (RL), a possible first step towards a solution is to detect and report the occurrence of OOD data. This can prevent silent and possibly safety critical failures of the machine learning system (caused by wrong predictions which lead to the execution of unfavorable actions), for example by handing control over to a human supervisor [[Amodei et al., 2016](#bib.bibx1)]. Recently, several different approaches were proposed that try to detect OOD samples in classification tasks [[Hendrycks and Gimpel, 2016](#bib.bibx13), [Liang et al., 2017](#bib.bibx20)], or perform anomaly detection via generative models [[Schlegl et al., 2017](#bib.bibx28)]. While these methods show promising results in the evaluated classification tasks, we are not aware of applications to value-based RL settings where non-stationary regression targets are present. Thus, our research aims to provide a first step towards developing and evaluating suitable OOD detection methods that are applicable to changing environments in sequential decision making tasks. We model the OOD-detection problem as a one-class classification problem with the two classes: in-distribution and out-of-distribution. Having framed the problem this way, we propose a framework for uncertainty-based OOD classification: UBOOD. It is based on the effect that epistemic uncertainty in the agent’s chosen actions is reduced for situations encountered during training (in-distribution), and is thus lower than for unencountered (OOD) situations. The framework itself is agnostic towards the approach used for estimating epistemic uncertainty. Thus, it is possible to use e.g. approximate Bayesian inference methods or ensembling techniques. In order to evaluate the performance of any OOD classifier in a RL setting, modifiable environments which can generate OOD samples are needed. Due to a lack of publicly available RL environments that allow systematic modification, we developed two different environments: one using a gridworld-style discrete state-space, the other using a continuous state-space. Both allow modifications of increasing strength (and consequently produce OOD samples of increasing strength) after the training process. We empirically evaluated the performance of the UBOOD framework with different uncertainty estimation methods on these environments. Evaluation results show that the framework produces reliable OOD classification results when combined with ensemble-based estimators, while the combination with concrete dropout-based estimators fails to capture increased uncertainty in the OOD situations. Ensemble-based approaches also show increasing classification accuracy, the stronger the OOD samples are (i.e. the more the environments differ from training) and increasing uncertainty is inversely related with the agent’s achieved return. 
2 BASICS
---------

### 2.1 Uncertainty

When viewed from a statistical perspective, uncertainty arises whenever the outcome of a random variable cannot be known with certainty. Uncertainty measures can then be understood to describe how random the outcome of such a random variable is. This "amount of randomness" is described by the dispersion of the random variable's probability distribution, i.e. how stretched or squeezed the probability distribution is. Measures of this dispersion are e.g. the probability distribution's variance or standard deviation. [Bishop, 2006]

#### 2.1.1 Uncertainty Estimation

In the context of this work, we are interested in the uncertainty of a neural network's prediction, which in a value-based deep RL setting is the certainty that an agent's chosen action is optimal in the given situation. Different approaches exist that make it possible to estimate this uncertainty. Ensemble techniques for example aggregate the predictions of multiple networks, often trained on different versions of the data, and interpret the variance of the individual predictions as the uncertainty [Osband et al., 2016, Lakshminarayanan et al., 2017]. An example of this approach can be seen in Figure 1, which shows the individual predictions of a Bootstrap ensemble as well as their mean and variance. These and other methods applicable to deep neural networks will be presented in more detail in Section 3.1. Besides the various ways of measuring uncertainty, it is equally important to differentiate the different sources of uncertainty.

Figure 1: Example regression of a 1-D toy-dataset showing the predictions of a Bootstrap ensemble (see Section 4.2) of size 10. Blue dots represent the training data. Thin red lines show the individual ensemble predictions, while the thick red line represents the mean of the predictions. The variance of the individual predictions can be interpreted as epistemic uncertainty.

#### 2.1.2 Aleatoric Uncertainty

Aleatoric uncertainty models the inherent stochasticity in the system, i.e. no amount of data can explain the observed stochasticity. In other words, the uncertainty cannot be reduced by capturing more data. A reason for this might be that certain features that would be needed to explain the behaviour of the system are not part of the collected data. E.g. consider trying to model the distance different cars travel on a highway in a certain amount of time, without measuring their speed. If the speed is not part of the collected data, the randomness in the measured distances cannot be explained. It is also possible that the uncertainty is a fundamental property of the measured system, as is the case when dealing with quantum mechanics. As such, aleatoric uncertainty cannot be reduced, irrespective of how much data is collected.
#### 2.1.3 Epistemic Uncertainty

Epistemic uncertainty by contrast arises out of a lack of sufficient data to exactly infer the underlying system's data generating function. In this case, the features available in the data do in principle allow the explanation of the behaviour of the system. In the previous example, this would e.g. be the case if both time and speed are measured, but so far only cars traveling at the same speed had been observed. The uncertainty caused by the effect of different speed in this case is epistemic, as collecting more data could allow for a correct inference of the system's behaviour and consequently the reduction of the uncertainty.

### 2.2 Markov Decision Processes

We base our problem formulation on Markov decision processes (MDPs) [Puterman, 2014]. MDPs are defined by tuples $\mathcal{M}=\langle\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R}\rangle$. $\mathcal{S}$ is a (finite) set of states, $s_t\in\mathcal{S}$ being the state of the MDP at time step $t$. $\mathcal{A}$ is the (finite) set of actions; $a_t\in\mathcal{A}$ is the action the MDP takes at step $t$. $\mathcal{P}(s_{t+1}\mid s_t,a_t)$ defines the transition probability function; a transition occurs by executing action $a_t$ in state $s_t$. The resulting next state $s_{t+1}$ is determined based on $\mathcal{P}$. In this paper we focus on deterministic domains represented by deterministic MDPs, so $\mathcal{P}(s_{t+1}\mid s_t,a_t)\in\{0,1\}$. Finally, $\mathcal{R}(s_t,a_t)$ is the scalar reward; for this paper we assume that $\mathcal{R}(s_t,a_t)\in\mathbb{R}$.
Goal of the problem is to find a policy $\pi:\mathcal{S}\rightarrow\mathcal{A}$ in the space of all possible policies $\Pi$, which maximizes the expectation of return $G_t$ at state $s_t$ over a potentially infinite horizon:

$$G_t=\sum_{k=0}^{\infty}\gamma^{k}\cdot\mathcal{R}(s_{t+k},a_{t+k}) \tag{1}$$

where $\gamma\in[0,1]$ is the discount factor.

### 2.3 Reinforcement Learning

In order to search the policy space $\Pi$, we consider model-free reinforcement learning (RL). In this setting, an agent interacts with an environment defined as an MDP $\mathcal{M}$ by executing a sequence of actions $a_t\in\mathcal{A},\ t=0,1,\ldots$ [Sutton and Barto, 1998]. In the fully observable case of RL, the agent knows its current state $s_t$ and the action space $\mathcal{A}$, but not the effect of executing $a_t$ in $s_t$, i.e., $\mathcal{P}(s_{t+1}\mid s_t,a_t)$ and $\mathcal{R}(s_t,a_t)$. In order to find the optimal policy $\pi^*$, we focus on Q-Learning [Watkins, 1989], a commonly used value-based approach.
It is named for the action-value function $Q^{\pi}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R},\ \pi\in\Pi$, which describes the expected return $Q^{\pi}(s_t,a_t)$ when taking action $a_t$ in state $s_t$ and then following policy $\pi$ for all states $s_{t+1},s_{t+2},\ldots$ afterwards. The optimal action-value function $Q^*$ of policy $\pi^*$ is any action-value function that yields higher accumulated rewards than all other action-value functions, i.e., $Q^*(s_t,a_t)\geq Q^{\pi}(s_t,a_t)\ \forall\pi\in\Pi$.
Q-Learning aims to approximate $Q^*$ by starting from an initial guess for $Q$, which is then updated via

$$Q(s_t,a_t)\leftarrow Q(s_t,a_t)+\alpha\left[r_t+\gamma\max_{a}Q(s_{t+1},a)-Q(s_t,a_t)\right] \tag{2}$$

It uses experience samples of the form $e_t=(s_t,a_t,s_{t+1},r_t)$, where $r_t$ is the reward earned at time step $t$, i.e., by executing action $a_t$ when in state $s_t$. The learning rate $\alpha$ is a setup-specific parameter. The set of all experience samples taken at time steps $t_1,\ldots,t_m$ for some training limit $m$ is called the training set $\mathcal{T}=\{e_{t_1},\ldots,e_{t_m}\}$. The learned action-value function $Q$ converges to the optimal action-value function $Q^*$, which then implies an optimal policy $\pi^*(s_t)=\arg\max_a Q(s_t,a)$.
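A minimal tabular sketch of the update rule in Eq. (2), for illustration only; the paper itself uses deep function approximators, and the `env` interface assumed below (`reset()` returning a state, `step(action)` returning `(next_state, reward, done)`, and a discrete action count) is hypothetical.

```python
# Tabular Q-Learning sketch following Eq. (2); the environment interface is assumed.
import random
from collections import defaultdict

def q_learning(env, num_episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = defaultdict(float)                       # Q[(state, action)] -> estimated value
    actions = list(range(env.num_actions))       # assumed discrete action space

    for _ in range(num_episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy behaviour policy for exploration
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])

            s_next, r, done = env.step(a)        # experience sample e_t = (s_t, a_t, s_{t+1}, r_t)

            # Eq. (2): Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
            bootstrap = 0.0 if done else max(Q[(s_next, act)] for act in actions)
            Q[(s, a)] += alpha * (r + gamma * bootstrap - Q[(s, a)])
            s = s_next

    # greedy policy pi*(s) = argmax_a Q(s, a)
    return Q, lambda state: max(actions, key=lambda act: Q[(state, act)])
```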
In high-dimensional settings or when learning in continuous state-spaces, it is common to use parameterized function approximators like neural networks to approximate the action-value function: $Q(s_t,a_t;\theta)\approx Q^*(s_t,a_t)$, with $\theta$ specifying the weights of the neural network. When using a deep neural network as the function approximator, this approach is called deep reinforcement learning. [Mnih et al., 2015]

3 RELATED WORK
---------------

### 3.1 Uncertainty in Deep Learning

When dealing with uncertainty, a systematic way is via Bayesian inference. Its combination with neural networks in the form of Bayesian neural networks is realised by placing a probability distribution over the weight-values of the network [MacKay, 1992]. As calculating the exact Bayesian posterior quickly becomes computationally intractable for deep models, a popular solution are approximate inference methods [Graves, 2011, Blundell et al., 2015, Gal and Ghahramani, 2016, Hernández-Lobato et al., 2016, Li and Gal, 2017, Gal et al., 2017]. Another option is the construction of model ensembles, e.g., based on the idea of the statistical bootstrap [Efron, 1992]. The resulting distribution of the ensemble predictions can then be used to approximate the uncertainty [Osband et al., 2016, Lakshminarayanan et al., 2017]. Both approaches have been used for tasks as diverse as machine vision [Kendall and Gal, 2017] or disease detection [Leibig et al., 2017]. In the field of decision making, uncertainty is used to implicitly guide exploration, e.g. by creating an ensemble of models [Osband et al., 2016], or for learning safety predictors, e.g. predicting the probability of a collision [Kahn et al., 2017]. Recently, a distributional approach to RL [Bellemare et al., 2017] was proposed which tries to learn the value distribution of a RL environment. Although this approach also models uncertainty, its goal of estimating the distribution of values is different from the work at hand, which tries to detect epistemic uncertainty, i.e. uncertainty in the model itself.

### 3.2 OOD and Novelty Detection

For the case of low-dimensional feature spaces, OOD detection (also called novelty detection) is a well-researched problem. For a survey on the topic, see e.g. [Pimentel et al., 2014], who distinguish between probabilistic, distance-based, reconstruction-based, domain-based and information theoretic methods. During the last years, several new methods based on deep neural networks were proposed for high-dimensional cases, mostly focusing on classification tasks, e.g. image classification. [Hendrycks and Gimpel, 2016] propose a baseline for detecting OOD examples in neural networks, based on the predicted class probabilities of a softmax classifier.
[Liang et al., 2017] improve upon this baseline by using temperature scaling and by adding perturbations to the input. [Li and Gal, 2017] evaluate the performance of a proposed alpha-divergence-based variational inference technique in an image classification task of adversarial examples. This can be understood as a form of OOD detection, as the generated adversarial examples lie outside of the training image manifold and consequently far from the training data. The authors report increased epistemic uncertainty, confirming the viability of their approach for the detection of adversarial image examples. The basic idea of this uncertainty-based approach is closely related to our proposed method, but no evaluation of the performance in a RL setting with non-stationary regression targets was performed. To the best of our knowledge, none of the previously mentioned methods were evaluated regarding the epistemic uncertainty detection performance in a RL setting.

4 UBOOD: UNCERTAINTY-BASED OUT-OF-DISTRIBUTION CLASSIFICATION
--------------------------------------------------------------

In this paper we propose UBOOD, an uncertainty-based OOD-classifier that can be employed in value-based deep reinforcement learning settings. It is based on the reducibility of epistemic uncertainty in the action-value function approximation. As previously described, epistemic uncertainty arises out of a lack of sufficient data to exactly infer the underlying system's data generating function. As such, it tends to be higher in areas of low data density. [Qazaz, 1996], who in turn refers to [Bishop, 1994] for the initial conjecture, showed that the epistemic uncertainty $\sigma_{epis}(x)$ is approximately inversely proportional to the density $p(x)$ of the input data, for the case of generalized linear regression models as well as multi-layer neural networks:

$$\sigma_{epis}(x)\propto p^{-1}(x) \tag{3}$$

This also forms the basis of our approach: to use this inverse relation between epistemic uncertainty and data density in order to differentiate in- from out-of-distribution samples. We define $U_Q:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$ as the epistemic uncertainty function of a given Q-function approximation $Q$.
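A small self-contained toy (not the authors' code) illustrating the inverse relation in Eq. (3): an ensemble of regressors is fit on bootstrap resamples of 1-D data covering only a narrow input range, and the spread of the ensemble predictions is then compared inside and far outside that range; scikit-learn's `MLPRegressor` is used here purely for convenience.

```python
# Toy demonstration of Eq. (3): ensemble disagreement (epistemic uncertainty)
# is typically much larger where the training-data density is zero.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(-1.0, 1.0, size=(200, 1))          # data only covers [-1, 1]
y_train = np.sin(3 * x_train).ravel() + 0.05 * rng.normal(size=200)

ensemble = []
for k in range(10):                                       # K = 10 ensemble members
    idx = rng.integers(0, len(x_train), len(x_train))     # bootstrap resample
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=k)
    ensemble.append(model.fit(x_train[idx], y_train[idx]))

def epistemic_std(x: float) -> float:
    preds = np.array([m.predict([[x]])[0] for m in ensemble])
    return float(preds.std())

print("in-distribution    (x = 0.5):", epistemic_std(0.5))
print("out-of-distribution (x = 4.0):", epistemic_std(4.0))  # usually much larger
```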
If a suitable method for epistemic uncertainty estimation for deep neural networks is applied, the process of training the agent reduces $U_Q(s,a)$ for those state-action tuples $(s,a)\in\mathbb{I}$ that were used for training, i.e., there exists a successor state $s'$ and a reward $r$ so that $(s,a,s',r)\in\mathcal{I}$. $\mathbb{I}$ consequently defines the set of in-distribution data. By contrast, state-action tuples that were not encountered during training, i.e. $(s,a)\notin\mathbb{I}$, define the set of out-of-distribution data $\mathbb{O}$. The epistemic uncertainty of these state-action tuples is not reduced during training. Thus, epistemic uncertainty of out-of-distribution data will be higher than that of in-distribution data:

$$U_Q(\mathbb{O}) > U_Q(\mathbb{I}) \tag{4}$$

UBOOD directly uses the output of the epistemic uncertainty function $U_Q$ as the real-valued classification score. As is the case for many one-class classificators, this real-valued score forms the input of a threshold-based decision function, which then assigns the in- or out-of-distribution class label.

### 4.1 Classification Threshold

As is the case for any score-based one-class classification method, the classification threshold can be adjusted to modify the behaviour of the classifier, depending on the application's requirements. For many applications, where some amount of OOD data is intermixed with the training data and the percentage is known, this information can be used to specify the threshold. As in our case, per definition, there are no OOD samples in the training data, such an approach is not possible. As a viable first solution, we propose the following simple algorithm to calculate a dynamic classification threshold:

1. Calculate the average uncertainty of the in-distribution samples $\overline{U_Q}=\frac{1}{|\mathbb{I}|}\sum_{(s,a)\in\mathbb{I}}U_Q(s,a)$.
2. Treat $U_Q$ as a probability distribution and define the classification threshold as $c=\overline{U_Q}+\sigma(U_Q)$.
Thus, a dynamic threshold based on the uncertainty distribution is realized that adjusts over the training process as more data is gathered. Please note that more complex algorithms for the threshold determination can be developed, e.g. by using multimodal probability distributions to model $U_Q$ or by making use of additional information about the available data on a per-application basis.

### 4.2 Epistemic Uncertainty Estimation Methods

In principle, any of the epistemic uncertainty estimation methods mentioned in Section 3.1 that are applicable to the function approximator used to model the Q-function can be used in the UBOOD framework. In this paper, we evaluate three different UBOOD versions using different methods for epistemic uncertainty estimation and their effect on the OOD classification performance, as the networks are being used by the RL agent for value estimation. The Monte-Carlo Concrete Dropout method is based on the dropout variational inference architecture as described by [Kendall and Gal, 2017]. Instead of default dropout layers, we use concrete dropout layers as described by [Gal et al., 2017], which do not require pre-specified dropout rates and instead learn individual dropout rates per layer. Figure 2(a) presents a schematic of the network used by this method.

Figure 2: Model architectures of the evaluated networks. (a) The Monte-Carlo Concrete Dropout network; for this architecture, multiple MC samples are required to calculate the epistemic uncertainty. (b) The Bootstrap neural network with $K=10$ bootstrap heads. (c) The Bootstrap-Prior neural network, which adds the output of an untrainable prior network to the output of the bootstrap heads to generate $K=10$ posterior heads. For both bootstrap-based architectures, epistemic uncertainty is calculated as the variance of the $K$ output heads.

This concrete dropout method is of special interest in our context of reinforcement learning, as here the available data change during the training process, rendering a manual optimization of the dropout rate hyperparameter even more difficult.
Model loss is calculated by minimizing the negative log-likelihood of the predicted output distribution. Epistemic uncertainty as part of the total predictive uncertainty is then calculated as:

$$\mathrm{Var}_{ep}(y)\approx\frac{1}{T}\sum_{t=1}^{T}\hat{y}_t^{2}-\left(\frac{1}{T}\sum_{t=1}^{T}\hat{y}_t\right)^{2} \tag{5}$$

with $T$ outputs $\hat{y}_t$ of the Monte-Carlo sampling.

The Bootstrap method is based on the network architecture described by [Osband et al., 2016]. It represents an efficient implementation of the bootstrap principle by sharing a set of hidden layers between all members of the ensemble. In the network, the shared, fully-connected hidden layers are followed by an output layer of size $K$, called the bootstrap heads, as can be seen in Figure 2(b). For each datapoint, a Boolean mask of length equal to the number of heads is generated, which determines the heads this datapoint is visible to. The mask's values are set by drawing $K$ times from a masking distribution. For the work at hand, the values are independently drawn from Bernoulli distributions with either $p=0.7$ or $p=1.0$. In the case of $p=1.0$, the bootstrap is reduced to a classic ensemble where all heads are trained on the complete data.

The Bootstrap-Prior method is based on the extension presented in [Osband et al., 2018]. It has the same basic architecture as the Bootstrap method but with the addition of a so-called random Prior Network. Predictions are generated by adding the data dependent output of this untrainable prior network to the output of the different bootstrap heads in order to calculate the ensemble posterior (Figure 2(c)). The authors conjecture that the addition of this randomized prior function outperforms deep ensemble-based methods without explicit priors, as for the latter, the initial weights have to act both as prior and training initializer. For both bootstrap-based methods, epistemic uncertainty is calculated as the variance of the $K$ outputs.
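A compact sketch of how these pieces combine into the UBOOD classifier, under the assumption that `ensemble_q(state, action)` stands in for the trained K-head network and returns one Q-value estimate per head; the same variance computation applies to the $T$ stochastic forward passes of the MC dropout variant (Eq. (5)), and the threshold follows the algorithm of Section 4.1.

```python
# UBOOD sketch: epistemic uncertainty as variance across heads / MC samples,
# dynamic threshold c = mean + std over in-distribution uncertainties.
import numpy as np

def epistemic_uncertainty(ensemble_q, state, action) -> float:
    preds = np.asarray(ensemble_q(state, action))  # shape (K,) or (T,)
    return float(preds.var())                      # Var_ep(y), Eq. (5)

def fit_threshold(ensemble_q, in_distribution_pairs) -> float:
    # Section 4.1: c = mean + std of the uncertainties on state-action pairs
    # that were encountered during training (the in-distribution set)
    u = np.array([epistemic_uncertainty(ensemble_q, s, a)
                  for s, a in in_distribution_pairs])
    return float(u.mean() + u.std())

def is_out_of_distribution(ensemble_q, state, action, threshold) -> bool:
    return epistemic_uncertainty(ensemble_q, state, action) > threshold
```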
5 EXPERIMENTAL SETUP
---------------------

### 5.1 Framework versions

We evaluate three different versions of the UBOOD framework:

* UB-MC: UBOOD with Monte-Carlo Concrete Dropout (MCCD) network
* UB-B: UBOOD with Bootstrap network
* UB-BP: UBOOD with Bootstrap-Prior network

The UB-MC version's estimator network consists of two fully-connected hidden layers with 64 neurons each, followed by two separate neurons in the output layer representing $\mu$ and $\sigma$ of a normal distribution. As concrete dropout layers are used, no dropout probability has to be specified. Model loss and epistemic uncertainty are calculated as described in Section 4. The UB-B Bootstrap neural network and UB-BP Bootstrap-Prior neural network versions all consist of two fully-connected hidden layers with 64 neurons each, which are shared between all heads, followed by an output layer of $K=10$ bootstrap heads. Each of these UBOOD versions is further evaluated with two parametrizations of the respective epistemic uncertainty estimation method: UB-MC40 and UB-MC80 differ in respect to the amount of Monte-Carlo forward passes that are executed to approximate the epistemic uncertainty: 40 or 80 passes. UB-B and UB-BP parametrizations (UB-B07, UB-B10, UB-BP07, UB-BP10) differ in respect to the Bernoulli distribution used to determine the bootstrap mask: probability $p=0.7$ for UB-B07 & UB-BP07 and probability $p=1.0$ for UB-B10 & UB-BP10. For all networks, ReLU is used as the layers' activation function, with the exception of the output layers, where no activation function is used. The classification threshold is calculated as $c=\overline{U_Q}+\sigma(U_Q)$, as described in Section 4.1.

### 5.2 Environments

One of the problems in evaluating OOD detection for RL is the lack of datasets or environments which can be used for generating and assessing OOD samples in a controlled and reproducible way. By contrast to the field of image classification, where benchmark datasets like notMNIST [Bulatov, 2011] exist that contain OOD samples, there are no equivalent sets for RL. We apply a principled approach to develop two environments, one using a gridworld-style discrete state-space, the other using a continuous state-space. Both environments allow systematic modifications after the training process, thus producing OOD states during evaluation. The first environment is a simple gridworld pathfinding environment. It is built on the design presented in [Sedlmeier et al., 2019] and has a discrete state-space. The basic layout consists of two rooms, separated by a vertical wall. Movement between the rooms is only possible via two hallways, as is visualised in Figure 3.
The agent starts every episode at a random position on the grid (labeled S in Figure 3). Its task is to reach a specific goal position on the grid (labeled G in Figure 3), which also varies randomly every episode, by choosing one of the four possible actions: {*up, down, left, right*}.

Figure 3: Example initializations of the gridworld pathfinding environment using different configurations. The label S indicates the agent's start position, while G marks the goal. Both positions are randomly set in the ranges defined by the respective configuration every episode. (a) shows a placement using environment configuration 0 as active in training; samples collected with this configuration define the in-distribution set. (b) shows an initialization of environment configuration 7, which differs maximally from the training configuration.

The state of the environment is represented as a stack of three 12×4 feature planes, with each plane representing the spatial positions of all environment objects of a specific type: agent, goal or wall. Each step of the agent incurs a cost of -1 except the goal-reaching action, which is rewarded with +100 and ends the episode. We evaluate the performance of the UBOOD framework on a set of 8 environment configurations. All environment configurations have a size of 12×4 and randomly vary the y-coordinate of the agent's start position as well as the goal position every episode, in the interval [0,4). Configuration 0, the only configuration used in training, varies the x-coordinate of the agent's start position in the interval [0,5) and the goal position in the interval [7,12). Each environment configuration 1-7 is then defined by shifting the start interval right by 1 compared to the previous configuration, while the goal interval is shifted left by 1. E.g. configuration 1 has start position range [1,6) and goal position range [6,11). This results in environment configurations with increasing difference from the training configuration 0, as can be seen in the example shown in Figure 3(b). The continuous state-space environment is based on OpenAI's LunarLander environment [Brockman et al., 2016]. The goal is to safely land a rocket inside a defined landing pad, without crashing. This task can be understood as rocket trajectory optimization.
While the original environment defines a static position for the landing pad, our modified environment allows for random placement inside specified intervals. As the original environment does not encode the landing pad's position in the state representation, our version extends the state encoding to include the left and right x-coordinate as well as the y-coordinate of the pad. For evaluating the performance of the UBOOD framework in this continuous state-space environment, we created a set of 6 configurations. Configuration 0, the only configuration used in training, varies the x-coordinate of the center of the landing pad in the interval [2,5) and the y-coordinate in the interval [6,12), which results in the landing pad being placed in the upper left side of the environment. An example of this configuration can be seen in Figure 4(a). Each environment configuration 1-5 is then defined by shifting the x-coordinate interval right by 1 compared to the previous configuration, while the y-coordinate interval is shifted left by 1. This results in the pads being placed increasingly to the lower right side of the environment. Like in the gridworld environment, this produces environment configurations with increasing difference from the training configuration 0.

Figure 4: Examples from the LunarLander environment using different configurations. (a) Example using environment configuration 0 as active in training; samples collected with this configuration define the in-distribution set. (b) Example using environment configuration 5, which differs maximally from the training configuration.

Note that training on both environments is solely performed using the respective environment configuration 0. Evaluation runs are executed independently of the training process, based on model snapshots generated at the respective training episodes. Consequently, data collected during these evaluation runs is not used for training.

6 PERFORMANCE RESULTS
----------------------

All evaluated versions learn successful policies on both the gridworld and LunarLander environments. Returns achieved by the trained policies after 10000 training episodes on different environment configurations are shown in Figure 5. As is to be expected, increasing changes to the environment (configurations 1-5) reduce the achieved return, as the evaluation environment increasingly differs from the training environment configuration 0.
Figure 5: Returns achieved by the different versions on varying configurations of the LunarLander environment after 10000 training episodes on configuration 0. Environment configurations 1-5 modify the environment with increasing strength as described in Section 5.2. All values shown are averages of 30 evaluation runs.

Figure 6: F1-Scores of the classifier evaluated on different configurations of the LunarLander (a) and gridworld (b) environments. Samples collected on the training configuration of each environment are defined as negatives (in-distribution), samples from the other configurations 1-5 as positives (OOD). X-axis shows evaluations performed with samples from the training configuration and the respective environment configuration 1-5. Samples are aggregated from 30 consecutive episode runs.

We evaluate the performance of the UBOOD framework based on the F1-Score as the harmonic mean of precision and recall. Figure 6 shows the F1-Scores achieved, dependent on the uncertainty estimation technique used in the framework. Best overall classification results on the LunarLander environment are achieved for UB-BP, i.e. using UBOOD with the Bootstrap-Prior estimator, with F1-values as high as 0.903 for UB-BP07 on environment configuration 5. F1-Scores of the UB-B and UB-BP versions on the gridworld environment are higher overall, when compared to the UB-MC versions. Here, values range between a minimum of 0.674 on evaluation configuration 1, which is closest to the training configuration, and 0.958 on configuration 5, which produces the strongest OOD samples. Overall, classification performance increases over environment configurations 1-5 when Bootstrap-based estimators are used in the UBOOD framework. UB-MC, i.e. UBOOD combined with MCCD estimators, generates highly varying F1-scores, ranging between 0.020 and 0.738 on the gridworld environment and 0.280 and 0.484 on the LunarLander environment. By contrast to the Bootstrap-based versions, there is no relation apparent between the strength of the environment modification and the classification performance.
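As an illustration of how such F1-Scores can be obtained from raw uncertainty estimates, here is a minimal sketch (added for clarity, not the paper's code; the fixed-threshold classification rule is an assumption): in-distribution samples count as negatives, OOD samples as positives, and a sample is flagged OOD when its uncertainty exceeds the threshold.

```python
# Minimal sketch (assumed fixed-threshold classifier, not the paper's code):
# score an uncertainty-based OOD classifier with precision, recall and F1.
from typing import Sequence


def f1_from_uncertainties(u_in: Sequence[float],
                          u_ood: Sequence[float],
                          threshold: float) -> float:
    """u_in: uncertainties on in-distribution samples (negatives),
    u_ood: uncertainties on OOD samples (positives)."""
    tp = sum(u > threshold for u in u_ood)   # OOD correctly flagged
    fn = len(u_ood) - tp                     # OOD samples missed
    fp = sum(u > threshold for u in u_in)    # in-distribution falsely flagged
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)


# Example: well-separated uncertainties yield a high F1-Score.
print(f1_from_uncertainties(u_in=[0.10, 0.15, 0.20],
                            u_ood=[0.45, 0.80, 0.90],
                            threshold=0.30))  # -> 1.0
```

The better the estimator separates in-distribution from OOD uncertainties, the less the score depends on the exact threshold chosen.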
We further evaluate the relation between reported uncertainty and the return achieved by the agent. Figure 7 shows evaluation results of the UB-BP10 and UB-MC80 versions evaluated on different configurations of the gridworld environment. For UB-BP10 (p=1.0), increases in uncertainty (caused by increasing environment modifications) are reflected in decreases of return. This behaviour was also present on the LunarLander environment and consistent for different values of p. No such clear relation was visible for UB-MC80. As can be seen in the results visualised in Figure 7, the uncertainty reported by the MCCD-based version decreases strongly between configuration 2 and 3, although the achieved return also decreases.

Figure 7: Uncertainty vs. return of UB-BP10 and UB-MC80 evaluated on different configurations of the gridworld environment. While for the Bootstrap-based version UB-BP10, increases in uncertainty are reflected in decreases of return, a large decrease in uncertainty is visible for UB-MC80 between configuration 2 and 3, although the achieved return also decreases. All values shown are averages of 30 evaluation runs.

7 DISCUSSION AND FUTURE WORK
-----------------------------

In this paper, we proposed UBOOD, an uncertainty-based out-of-distribution classification framework. Evaluation results show that using the epistemic uncertainty of the agent's value function presents a viable approach for OOD classification in a deep RL setting. We find that the framework's performance is ultimately dependent on the reliability of the underlying uncertainty estimation method, which is why good uncertainty estimates are required.

Figure 8: Average uncertainties reported by (a) the Bootstrap-based version UB-B07 and (b) the Monte-Carlo Concrete Dropout based version UB-MC80 on the Gridworld environment. Env. config 0 shows uncertainties reported on the training configuration of the environment (in-distribution), Env. config 7 the uncertainties on the maximally diverging configuration. While for UB-B07 the uncertainties start diverging with progressing training, there is no such effect for UB-MC80. As a consequence, only the Bootstrap-based version allows for an increasingly better differentiation between in- and OOD samples. All values shown are averages of 30 evaluation runs.

On both evaluation domains, UBOOD combined with ensemble-based bootstrap uncertainty estimation methods (UB-B / UB-BP) shows good results with F1-scores as high as 0.903, allowing for a reliable differentiation between in- and OOD-samples. F1-Scores increase as the environment configuration differs more from the training environment, i.e. the stronger OOD the observed samples, the more reliable the classification. The addition of a prior as done with the UB-BP version seems to have a positive effect on the separation of in- and out-of-distribution samples, as is reflected in higher F1-scores on the LunarLander environment. By contrast, UBOOD combined with the concrete dropout-based uncertainty estimation method (UB-MC) does not produce viable results. Although increasing the amount of Monte-Carlo samples improves the performance somewhat, the resulting classification performance is not on par with the Bootstrap-based versions.
The reason for the large difference in performance between the Bootstrap-based and MCCD-based versions can be seen in the example shown in Figure 8. For the UB-B version, the reported uncertainties on environment configuration 0 (training) and 7 (strong modification) increasingly diverge with progressing training episodes (Figure 8a). As this is not the case for the UB-MC version (Figure 8b), only the Bootstrap-based version allows for an increasingly better differentiation between in- and OOD samples and consequently high F1-scores of the classifier. We found this effect to be consistent over all parametrizations of the Bootstrap- and MCCD-based versions we evaluated. Our results match recent findings [Beluch et al., 2018], where ensemble-based uncertainty estimators were compared against Monte-Carlo Dropout based ones for the case of active learning in image classification. Results presented in that work also showed that ensembles performed better and led to more calibrated uncertainty estimates. As a possible explanation, the authors argue that the difference in performance is a result of a combination of decreased model capacity and lower diversity of the Monte-Carlo Dropout methods when compared to ensemble approaches. This effect would also explain the behaviour we observed when comparing reported uncertainty and achieved return. While there is a strong inverse relation visible when using Bootstrap-based UBOOD versions, no clear pattern emerged for the evaluated MCCD-based versions. We think that further research into the relation between epistemic uncertainty and achieved return when train- and test-environments differ could provide interesting insights relating to generalization performance in deep RL. Being able to differentiate between an agent having encountered a situation in training versus the agent generalizing its experience to new situations could provide a huge benefit in safety-critical situations.
1a25818a-6d7d-4fd6-9d25-f8846a3ed5a7
trentmkelly/LessWrong-43k
LessWrong
What's your #1 reason to care about AI risk? It's way late in my time zone and I suspect this question isn't technically coherent on the grounds that the right answer to "why care about AI risk?" is going to be complicated and have a bunch of parts that can't be separated from each other. But I'm going to share a thought I had anyway. It seems to me like probably, the answer to the question of how to make AIs benevolent isn't vastly more complicated than the answer of how to make them smart. What's worrisome about our current situation, however, is that we're currently putting way more effort into making AIs smart than we are into making them benevolent. Agree? Disagree? Have an orthogonal answer to the title question?
b0d80b16-6a9a-4a09-8dc8-c78b31c62e95
trentmkelly/LessWrong-43k
LessWrong
Beginning Machine Learning I recently finished the Machine Learning course on Coursera that is recommended by MIRI's research guide for developing a practical familiarity with machine learning. This post contains my thoughts about the course and tries to convey the updates my mental models went through as the eleven week course progressed. I started the course with the perspective of an experienced, employed software developer with an engineering degree who had never focused on machine learning before, so I'm sure there's some background knowledge I take for granted, as well as some things I should have realized long before taking this course. What is machine learning? A definition introduced early in the lectures defines machine learning as the field of study that gives computers the ability to learn without being explicitly programmed. I knew that much before this class. What I didn't know were the nuts, bolts, and gears of how to start writing actual code that uses machine learning algorithms. Prior to the course, I briefly tried to imagine how an application of machine learning might work (e.g. a program learning to play a game). What I came up with was a complex set of rules and loops where other complex rules somehow tweaked the original rules based on whatever results were used to represent desired outcomes. What those rules should even be was vague, and I thought the whole system I imagined would be a fragile, error prone, maintenance nightmare. That didn't square with the robust uses of machine learning by tons of profitable companies, or with the fact that I knew neural nets are increasingly popular and useful. I had a rough conceptual sketch of neural nets being nodes connected in layers with weights and outputs but not how to represent that in code. I was eager for this class to tell me what I was missing. Machine learning is math. > March 6, 2018 - As of today, I'm in the middle of week 5, and my initial reaction to the course has been: I'm basically writing programs
f407d2b7-3efc-4dba-9950-cc3656401530
trentmkelly/LessWrong-43k
LessWrong
'An objective defense of Bayesianism' Recently, Hans Leitgeb and Richard Pettigrew have published a novel defense of Bayesianism: An Objective Defense of Bayesianism I: Measuring Inaccuracy > One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its sequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: > Accuracy: An epistemic agent ought to minimize the inaccuracy of her partial beliefs. > In this paper, we make this norm mathematically precise in various ways. We describe three epistemic dilemmas that an agent might face if she attempts to follow Accuracy, and we show that the only inaccuracy measures that do not give rise to such dilemmas are the quadratic inaccuracy measures. In the sequel, we derive the main tenets of Bayesianism from the relevant mathematical versions of Accuracy to which this characterization of the legitimate inaccuracy measures gives rise, but we also show that unless the requirement of Rigidity is imposed from the start, Jeffrey conditionalization has to be replaced by a different method of update in order for Accuracy to be satisfied. An Objective Defense of Bayesianism II: The Consequences of Minimizing Inaccuracy > In this article and its prequel, we derive Bayesianism from the following norm: Accuracy—an agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we make the norm mathematically precise; in this article, we derive its consequences. We show that the two core tenets of Bayesianism follow from Accuracy, while the characteristic claim of Objective Bayesianism follows from Accuracy together with an extra assumption. Finally, we show that Jeffrey Conditionalization violates Accuracy unless Rigidity is assumed, and we describe the alternative updating rule that Accuracy mandates in the absence of Rigidity. Richard Pettigrew has also written an excellent introduction to probability.
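For readers unfamiliar with the terminology: the canonical example of a quadratic inaccuracy measure is the Brier score. Roughly (a gloss added here, not the papers' exact formulation), for a credence function b over a finite partition {A_1, ..., A_n} and actual world w, the inaccuracy of b at w is

```latex
I(b, w) \;=\; \sum_{i=1}^{n} \bigl( \chi_{A_i}(w) - b(A_i) \bigr)^{2},
\qquad
\chi_{A_i}(w) =
\begin{cases}
  1 & \text{if } w \in A_i, \\
  0 & \text{otherwise.}
\end{cases}
```

The Accuracy norm then directs the agent to hold credences that minimize inaccuracy so measured.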
7b023ba8-a585-4ca8-83c9-3ecc37efc58b
StampyAI/alignment-research-dataset/blogs
Blogs
finding earth in the universal program
--------------------------------------

this post expands on step one of [*the Peerless*](the-peerless.html): creating virtual people. brain scans-and-simulations are apparently still quite far off, so i'll be focusing on the second approach: resimulating earth and plucking out persons.

(one great side-advantage of this method is that, if we can relocate earth to pluck out persons for the simulation of alignment researchers, then we can later also relocate earth in order to restore it once we've solved alignment. so resimulating and locating earth, regardless of having early enough mind-plucking-out tech, is something we might need to do anyways.)

if compute is [infinite](ai-alignment-wolfram-physics.html) and [we don't mind being inefficient](udassa-time-steps.html), then we can use exponential or even infinite compute to locate earth. one approach is the following: create a big informational beacon — perhaps a copy of a huge portion of the internet, along with MRI scans of as many people as we can afford. then, we use some type of (non-intelligent) deterministic error-bound statistical location procedure to locate patterns that look like that beacon inside the [universal program](universal-complete.html). we can afford the statistical detection to be imperfect — if it misses on one encoding of earth, there will be different ones in the universal program.

because of the time penalty of the universal program, however, we may find just compressed copies of the beacon (instead of a full simulation of earth leading to the time at which we build that beacon), and because of the deterministic bound, we would need to stop on the first match; if this first match is *just* the beacon, without earth, then we fail; perhaps superintelligence can notice that it's not finding any nearby minds to pluck out, or perhaps it plucks out garbage. so we can start the universal program with not one step per program, but rather a very large number of steps — i hear stephen wolfram has estimates on the number of computation steps it takes to get to the current state of the universe. this will favor programs that take very long to lead to the beacon, but are themselves shorter programs. (what if the first program to contain earth is itself a universal program *without* that huge constant, such that *it* finds the beacon before it finds earth? i am not sure how to address this. perhaps we can explore programs in an order that favors worlds that look like our physics instead of looking like discrete iterations of all computations?)

there's also the concern that the universal program, just like the [universal distribution](https://www.lesswrong.com/posts/XiWKmFkpGbDTcsSu4/on-the-universal-distribution), [is malign](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign). i'd think plain top-level earth, maybe especially as detectable by a simple enough beacon locator, would tend to occur before malign aliens emitting our beacon to trick us; but that's a risk to keep in mind.

if we *do* care about computational efficiency, then there are two main factors we need to account for:

* can our universe be run in polynomial time on whatever computers the superintelligence can build? for example, can it be run in polynomial time on quantum computers, and can quantum computers be built? note that if this is the case we might need to step through *quantum steps* of *quantum programs* to run the search in the expected time.
this doesn't mean we need to build quantum computers ourselves, mind you — superintelligence can just notice that a quantum computer would run the computations we describe efficiently, and build and use those.

* is the "seed" program for the universe small? intuitively i believe it is, and i find wolfram's efforts to reproduce the behavior of particles from the standard model using simple graph rewriting, to be evidence in that direction. that said, if it is large, then finding that program is an exponential search again — and so, again, we might need to build a search that "favors" our physics to save on exponential search time.

finally, we might want to put a hard bound on the number of tries the superintelligence will run to locate earth. the reason for that is that, if for some reason we messed up something in the beacon locator and it *never, ever* finds earth, then it will instantiate all computations, which appears to me to be a potential [S-risk](timeline-codes.html). in fact, even if we do find earth, it may not be worth it if we have to simulate exponentially much potential suffering before running our utopia — what if, after solving alignment, we have a great time, but then decide to eventually fade away after only polynomial time? then we might have created exponentially much suffering in total.

### intermediary simulation

in case isolating minds from this simulation is hard, we could build an intermediary step between the location of earth in simulation-space, and booting the peerless simulation proper — superintelligence could, once it has located our beacon, get in touch with our organization *inside the simulation of earth*, and give it extraordinary computational (and maybe physical?) ability within the simulation to either take over everything, or figure out brain plucking-out and then let us press a big "ok, start now" button. note, however, that we might not want to remain in this intermediary simulation for too long — it is still vulnerable to inner unaligned superintelligences, just like our top level reality is. we want to get to a safe, sandboxed, computationally weak environment as early as possible. this is also a great argument for readying ourselves to build the beacon and utilize this contact-from-superintelligence as early as we can; indeed, to make that the first step of implementing the peerless plan. the reason for that is that the earlier we are able to take advantage of it, the earlier the time step of the simulation superintelligence can help us start bootstrapping towards the proper simulation of the peerless, and the less likely we are to be doomed by other superintelligences, if we need some intermediary "pre-peerless" simulation time.
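to make the dovetailed search sketched earlier a touch more concrete, here is a toy loop (purely illustrative; `run_program` and `looks_like_beacon` are hypothetical placeholders for a universal interpreter and the deterministic, error-bound beacon detector, and a real search would still need the physics-favoring ordering discussed above):

```python
# toy sketch of the dovetailed beacon search described above. purely
# illustrative: run_program and looks_like_beacon are hypothetical
# placeholders, not real components.
from itertools import count
from typing import Optional, Tuple


def run_program(program_index: int, steps: int) -> bytes:
    """placeholder: run program number program_index for `steps` steps, return its tape."""
    return b""  # stub so the sketch executes; a real universal interpreter goes here


def looks_like_beacon(tape: bytes) -> bool:
    """placeholder for the deterministic, error-bound statistical beacon detector."""
    return False  # stub


def find_beacon(initial_steps: int, max_tries: int) -> Optional[Tuple[int, int]]:
    """dovetail over (program, step budget) pairs.

    every program starts with a huge step budget (initial_steps), so that
    short-but-slow programs (like a physics seed) are favored over longer
    programs that encode the beacon directly. the hard bound on tries limits
    how much computation gets instantiated if the detector never fires.
    """
    tries = 0
    for round_ in count():
        budget = initial_steps * (2 ** round_)   # budgets grow geometrically
        for program_index in range(round_ + 1):  # revisit shorter programs with more steps
            if looks_like_beacon(run_program(program_index, budget)):
                return program_index, budget
            tries += 1
            if tries >= max_tries:
                return None  # give up instead of instantiating everything
```

the `max_tries` bound is what implements the "give up rather than run all computations" safeguard mentioned above.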
3d48dd9c-8e88-4075-ac5b-49abaaec8dad
trentmkelly/LessWrong-43k
LessWrong
Bad arguments as evidence A problem with listening to arguments is that they often fail to include the evidence that provoked them, which can be informative where the argument itself is fatally flawed. For instance, suppose there is a God. And suppose that people frequently see him, and so feel inclined to believe in him. However they know ‘I saw God!’ will get little interest and much criticism, so they don’t say that. But, feeling more positively inclined toward pro-God arguments, and end up tentatively agreeing with some of them. They come to say ‘how did eyes evolve?’ and ‘where did the universe come from?’, because these are the most compelling-to-them pro-God arguments they came across. And so you—who has never seen God—just see a whole lot of people making bad arguments about God, and then weirdly believing them. But the important evidence—that a large portion of the population has experienced personally meeting God—is hidden from you, though in sum you might have taken it more seriously than you take a flawed argument. If people feel that arguments are more virtuous than anecdote, you should remember that when people make arguments, they might be doing it in the place of anecdotes, that a) actually changed their mind and b) are actually interesting evidence. This is especially true in a world where most people can’t argue their way out of a paper bag, and are also more frequently compelled by non-argument phenomena than arguments. So, an upshot is that if someone makes an argument to you, consider asking the story of how they came to feel disposed toward its conclusion. A real example: I remember motivatedly reasoning in the past, and while I expect my arguments were above average, had someone wondered what produced them and asked me, I might have told them that I had been in the forests, and that they were incredible and made me feel different to being in other places, and that I am further offended by people getting their way and destroying value in the name of bad reasoning,
802a8d5f-84bd-4644-9910-5900217a2b57
trentmkelly/LessWrong-43k
LessWrong
Meetup : Copenhagen September Social Meetup - Botanisk Have Discussion article for the meetup : Copenhagen September Social Meetup - Botanisk Have WHEN: 27 September 2014 02:30:00PM (+0200) WHERE: Gothersgade 128, Copenhagen G'day all, Sorry for the delayed announcement, but I'd like to suggest we all hang out at the Botanical Gardens on Saturday. Meet just inside the main gate, Gothersgade 128 - we'll wait there from 2:30pm-3:00pm. Call 2247-8373 to find us. Bring food and drinks, picnic blanket. I imagine we'll do a mixture of walking around the gardens as well as relaxing in one spot. Gardens close at 6:00pm and we might continue on elsewhere afterwards too. Cheers Discussion article for the meetup : Copenhagen September Social Meetup - Botanisk Have
c7443503-d94e-40c8-8525-5d9207852368
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
Designing for Human Rights in AI (Evgeni Aizenberg) - 1st AiTech Symposium so hi everyone my name is Jani Eisenberg and my approach I'm a postdoc at a tech and my project is on designing for human rights in AI as you've gathered now I'm working together with you don't hold on this and also you know so jumping right in as we all know artificial intelligence is rapidly growing in news and the promise that we often hear and that many of us have spent time working in the past and in the present on these technologies is to provide more evidence driven decisions provide more efficiency have more automation to hopefully let us be more free to pursue the things that we wish to pursue as humans and be less busy with all kinds of mundane tasks however does AI really live up to these promises from what we see over the past decade well unfortunately we have seen over the past decade or so many examples of the opposite this is an example that you might have heard about of the compass algorithm used in the United States for assessment of risk that a criminal defendant commits another crime in the future and as this Republican investigation has found in 2016 it it was found to be twice as likely to falsely label an african-american defendant as high-risk compared to a white defendant we have seen cases where employee performance assessments performed by algorithms result in firing of talented employees like in the case of Sarah vysotsky a schoolteacher in the US who was highly valued by her fellow teacher colleagues and parents of the kids that she taught but when receiving a low assessment score based on the test scores of her students from that specific year she was fired of course as you mentioned we are all aware of the shockwaves of the chem 'bridge analytical scandal that has shaken the the way we we live our election systems and in an election cycles in our democratic state right what all the disinformation that is now bombarding us on social media and the Internet and as you might have heard and you'll mentioned China is taking a very particular direction on how they want to use AI in their society we're in China right now a whole social credit score system is being implemented where every citizen will receive a basically how good how good of a good citizen are you score based on all kinds of last amounts of data from CCTV cameras to to you you name it which can affect things like your ability to purchase a train ticket or a plane ticket or your ability to send your kids to a school of your choice and so forth so some of the major social ethical issues surrounding AI that we see are discrimination unjustified unexplainable decisions privacy infringement disinformation job market effects of automation the anxiety of people as to how their jobs are going to be affected with more automation taking place and of course safety now what I would like to impress upon you today with these examples is that these technologies are of major consequence to people's human rights and when I say human rights I mean such fundamental notions of human dignity freedom and equality and in my view this is one of the biggest challenges of our time it is right up there second to the climate crisis which all of us should be worried about regardless of our occupation but not far from the climate crisis is this because of how disruptive this technology can be and don't get me wrong there's a lot of great things we can accomplish with a but if we get many things wrong about how this technology impacts as you were saying or 
freedom of choice and all of those other dimensions it can be immensely disruptive to our societies so of course this has been getting increasing attention over the past decade both from the technological side of the spectrum both from the social science side the ethics of Technology side but unfortunately a lot of the technical solutions which are developed to address these issues such as fairness for example yeah and discrimination engineers often develop these solutions without the society without the input of societal stakeholders and so engineers do often focus on the machine learning model the input data and the output data but the larger contextual contextual information and the interpretation of people who are affected by the technology and use the technology is often ignored at the same time what we see is that calls for ethical AI point out the issues but often fail to provide answers on how do we proceed to fix that bridging the socio technical divide is what my project is about and in my project I take a design for values approach to doing so because of how impactful say I is to people's human rights what we want to do we want to use human rights as top-level design requirements in the kind of design approach that you don't unfillable talked about these should be the values that guide the human centered design of these systems the other critical component of this approach is that we then work together with the stakeholders by engaging them through a range of empirical methods from methodologies like value sensitive design and participatory design with a combination of qualitative research and quantitative research which is very important this is where the collaboration between disciplines comes yes this is what we need each other as engineers and social sciences to translate what do human rights and their implications mean in a specific context of use in a specific system that you're considering whether this is a decision support algorithm in criminal justice whether it's an automatic driving car what's in that specific context the interpretation of an abstract value and then translate that into corresponding technical requirements one comments about what we mean here by human rights so we make an explicit choice of grounding the design roadmap in the EU Charter of Fundamental Rights the values of the foundation of these are the values of the foundation of EU law and they are dignity freedom equality and solidarity of course this is not exclusive to EU Member States these values are shared by many cultures in in in the West and at the outset we should recognize that there will be different value choices that are likely in many other cultures and socio-political systems so we want to make clear we're not trying to impose our view of society on on other countries but we want to show how these technologies can be designed over here to support the values that we treasure over here now to just give you a glimpse of the benefits of the of and how how was the structure of this of this process imagined that we would want to design for values which are at stake in that criminal risk assessment case well for the criminal defendants a important human rights which is at stake of course is obviously freedom and if we look in the EU Charter in in the freedom articles we have of course the right to Liberty which in the context of criminal justice one of its main implications is that a person may not be subjected to an arbitrary arrest if you arrest a person that there needs to be a court with 
standing evidence right you need to be able to prove in court that there's the rest of this person is warranted the next question is how do you translate that into a technical requirement well in general this is an important moment because in situations where there is no obvious technical solution to support that norm sometimes it would be a stimulus for innovation because historically we know that when faced with moral dilemmas we produce some new technologies that allow us to resolve all of the dilemmas above but I want to make the point that there will be situations when norms can don't be fulfilled with technology and the responsible thing to do then is to stop it is not an obvious thing that AI should be introduced in every context more AI is not always the right answer and in that I want to say that meaningful human control VI requires from our from my vision eliciting contextual design requirements by a truly Co designing with stakeholders engineers individuals affected by the AI sections direct users field experts policy experts and so forth moving forward we want to engage in case studies in which we implement this vision in specific context and I'll be glad to interact with you today to also hear your ideas learn to communicate efficiently across diverse backgrounds and finally transition from these studies to design protocols in specifics societal domains just recall sure I want to say that designing for human rights should not be viewed as an obstacle on the contrary I believe that it is key to achieving innovative partnerships between humans in AI and understanding the roles that we can play together to help in and humans benefit from more human to human valuable human to human interaction it is not an easy path to tread because we need to learn to communicate across all these different disciplines which we represent but I believe it'll bring long term benefits to all stakeholders and with that I call you to design together for Humanity thank you very much thank you thank you you have gaming [Applause]
a221f107-a7cc-427c-bcd1-b746b69ea3c2
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
What is the subjective experience of free will for agents? Jessica recently wrote about [difficulties with physicalist accounts of the world](https://www.lesswrong.com/posts/my5pbcmQPb6ASSHYM/puzzles-for-physicalists) and [alternatives to logical counterfactuals](https://www.lesswrong.com/posts/yBdDXXmLYejrcPPv2/two-alternatives-to-logical-counterfactuals). In my recent post about the [deconfusing human values research agenda](https://www.lesswrong.com/posts/k8F8TBzuZtLheJt47/deconfusing-human-values-research-agenda-v1), [Charlie left a comment](https://www.lesswrong.com/posts/k8F8TBzuZtLheJt47/deconfusing-human-values-research-agenda-v1?commentId=WK5hKMNaPRvCpFSpQ#comments) highlighting that my current model depends on a notion of "could have done something else" to talk about decisions. Additionally, I have a strong belief that the world is subjectively deterministic, i.e. that from my point of view the world couldn't have turned out any other way than the way it did because I only ever experience myself to be in a single causal history. Yet I also suspect this is not the whole story because it appears the world I find myself in is one of many possible causal histories, possibly realized in many causally isolated worlds after the point where they diverge (i.e. a non-collapse interpretation of quantum physics). So this leaves me in a weird place. When thinking about values, it often makes sense to think about the downstream effects of values on decisions and actions and in fact many people try to infer upstream values from observations of downstream behaviors, yet the notion of "deciding" implies there was some choice to make, which I think maybe there wasn't. Thus I have theories that conflict with each other yet seek to explain the same phenomena, so I'm confused. **Seeking to see through this confusion, what are some ways of reconciling both the experience of determinism and the experience of freedom of choice or free will?** Since this has impacts on how to think about decision theory, my hope is that people might be able to share how they've thought about this question and tried to resolve it.
33bef0aa-cdd5-422f-a0be-f704104bd6ec
trentmkelly/LessWrong-43k
LessWrong
AI Safety Endgame Stories Assume you are in the set of possible worlds where AI takeover happens by default. If you do nothing, then at some point in the 21st century the AI lab Magma develops a transformative AI system. Magma employees perform a number of safety checks, conclude the system is safe enough, and deploy it. They deploy it slowly and incrementally, with careful monitoring. But despite their efforts, the system turns out to be unsafe and the monitoring insufficient, triggering a cascade of events eventually leading to an existential catastrophe.[1]  I’ll refer to this sequence of events as the “baseline story” going forward. Assume further that you’re in the narrower set of worlds where this AI catastrophe is contingent on your actions. In other words, there exists a sequence of actions you (or your organization) can take that averts catastrophe, a decisive intervention. Not necessarily a pivotal act, an intervention that averts all existential risk from AI. Just an intervention that prevents this specific Magma catastrophe, giving humanity some breathing room, perhaps only a few months or years.[2] Let’s try to understand what this decisive sequence of actions could look like. It’s tempting to start at the beginning of the sequence and think about what the first few actions look like. Unfortunately, the most probable starting actions are “meta” actions like thinking really hard, talking to experts, or recruiting more people to work on the problem. These are the same kinds of actions that any successful project starts with! So it doesn’t help us constrain the space of decisive interventions. Instead, it’s more helpful to start with the endgame: how, in the end, did your actions change the baseline story and avert catastrophe? And what were the last nodes in the causal chain leading up to the change? At the most abstract level, the baseline story has the following structure. A social process (Magma) instantiates a technological artifact (unsafe AI) which destroys the world. T
b1e63436-8d72-426e-a133-a601c7dd4489
trentmkelly/LessWrong-43k
LessWrong
Meetup : Ottawa Weekly Monday LessWrong Meetup Discussion article for the meetup : Ottawa Weekly Monday LessWrong Meetup WHEN: 21 November 2011 07:30:00PM (-0500) WHERE: Pub Italia: 434 Preston St, Ottawa, ON We will be discussing more sections from Why Philosophers Should Care About Computational Complexity by Scott Aaronson. Discussion article for the meetup : Ottawa Weekly Monday LessWrong Meetup
238b9c6f-9e92-40e7-a621-693116b9dc4a
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Traditional Rationality is phrased in terms of social rules, with violations interpretable as cheating—as defections from cooperative norms. If you want me to accept a belief from you, you are obligated to provide me with a certain amount of evidence. If you try to get out of it, we all know you’re cheating on your obligation. A theory is obligated to make bold predictions for itself, not just steal predictions that other theories have labored to make. A theory is obligated to expose itself to falsification—if it tries to duck out, that’s like trying to duck out of a fearsome initiation ritual; you must pay your dues. Traditional Rationality is phrased similarly to the customs that govern human societies, which makes it easy to pass on by word of mouth. Humans detect social cheating with much greater reliability than isomorphic violations of abstract logical rules.1 But viewing rationality as a social obligation gives rise to some strange ideas. For example, one finds religious people defending their beliefs by saying, “Well, you can’t justify your belief in science!” In other words, “How dare you criticize me for having unjustified beliefs, you hypocrite! You’re doing it too!” To Bayesians, the brain is an engine of accuracy: it processes and concentrates entangled evidence into a map that reflects the territory. The principles of rationality are laws in the same sense as the Second Law of Thermodynamics: obtaining a reliable belief requires a calculable amount of entangled evidence, just as reliably cooling the contents of a refrigerator requires a calculable minimum of free energy. In principle, the laws of physics are time-reversible, so there’s an infinitesimally tiny probability—indistinguishable from zero to all but mathematicians—that a refrigerator will spontaneously cool itself down while generating electricity. There’s a slightly larger infinitesimal chance that you could accurately draw a detailed street map of New York without ever visiting, sitting in your living room with your blinds closed and no Internet connection. But I wouldn’t hold your breath. Before you try mapping an unseen territory, pour some water into a cup at room temperature and wait until it spontaneously freezes before proceeding. That way you can be sure the general trick—ignoring infinitesimally tiny probabilities of success—is working properly. You might not realize directly that your map is wrong, especially if you never visit New York; but you can see that water doesn’t freeze itself. If the rules of rationality are social customs, then it may seem to excuse behavior X if you point out that others are doing the same thing. It wouldn’t be fair to demand evidence from you, if we can’t provide it ourselves. We will realize that none of us are better than the rest, and we will relent and mercifully excuse you from your social obligation to provide evidence for your belief. And we’ll all live happily ever afterward in liberty, fraternity, and equality. If the rules of rationality are mathematical laws, then trying to justify evidence-free belief by pointing to someone else doing the same thing will be around as effective as listing thirty reasons why you shouldn’t fall off a cliff. Even if we all vote that it’s unfair for your refrigerator to need electricity, it still won’t run (with probability ~1). Even if we all vote that you shouldn’t have to visit New York, the map will still be wrong. Lady Nature is famously indifferent to such pleading, and so is Lady Math. 
So—to shift back to the social language of Traditional Rationality—don’t think you can get away with claiming that it’s okay to have arbitrary beliefs about XYZ, because other people have arbitrary beliefs too. If two parties to a contract both behave equally poorly, a human judge may decide to impose penalties on neither. But if two engineers design their engines equally poorly, neither engine will work. One design error cannot excuse another. Even if I’m doing XYZ wrong, it doesn’t help you, or exempt you from the rules; it just means we’re both screwed. As a matter of human law in liberal democracies, everyone is entitled to their own beliefs. As a matter of Nature’s law, you are not entitled to accuracy. We don’t arrest people for believing weird things, at least not in the wiser countries. But no one can revoke the law that you need evidence to generate accurate beliefs. Not even a vote of the whole human species can obtain mercy in the court of Nature. Physicists don’t decide the laws of physics, they just guess what they are. Rationalists don’t decide the laws of rationality, we just guess what they are. You cannot “rationalize” anything that is not rational to begin with. If by dint of extraordinary persuasiveness you convince all the physicists in the world that you are exempt from the law of gravity, and you walk off a cliff, you’ll fall. Even saying “We don’t decide” is too anthropomorphic. There is no higher authority that could exempt you. There is only cause and effect. Remember this, when you plead to be excused just this once. We can’t excuse you. It isn’t up to us. 1Leda Cosmides and John Tooby, “Cognitive Adaptations for Social Exchange: Evolutionary Psychology and the Generation of Culture,” in The Adapted Mind, ed. Jerome H. Barkow, Leda Cosmides, and John Tooby (New York: Oxford University Press, 1992), 163–228." The above is a chunk of text from a post created by a user on the website LessWrong. Can you elaborate on the comparison between social customs and mathematical laws in relation to rationality, and how they differ in their implications for the formation of beliefs? The comparison between social customs and mathematical laws in relation to rationality revolves around how we perceive the formation and justification of beliefs. When rationality is viewed as a social custom, it is treated as a set of rules or obligations that one must follow in a social context. These rules dictate that if one presents a belief, they are obliged to provide sufficient evidence to support it. Violations of these rules are seen as social cheating, akin to not fulfilling a social contract. In this perspective, rationality becomes a negotiation, where the validity of a belief can be debated, negotiated, or even excused based on social dynamics. For instance, if everyone in a group holds a particular belief without evidence, it might be socially acceptable within that group, even if it's not rationally justified. On the other hand, when rationality is viewed as a mathematical law, it becomes an objective and non-negotiable principle. Mathematical laws are universal, immutable, and independent of social dynamics. They are not subject to negotiation, debate, or excusal. In this perspective, a belief is either justified by evidence (rational) or it's not. There's no middle ground. If you lack evidence for a belief, pointing out that others also hold beliefs without evidence doesn't make your belief any more rational. 
It's akin to saying that two wrongs make a right, which is not the case in the realm of mathematical laws. The implications for the formation of beliefs are starkly different in these two perspectives. In the social customs perspective, beliefs can be formed, maintained, and justified based on social dynamics, peer pressure, or common practice. However, in the mathematical laws perspective, the only valid basis for forming and maintaining a belief is evidence. Anything less is simply not rational, regardless of how many people believe it or how socially acceptable it is. This distinction highlights the difference between what is socially acceptable and what is rationally justified. It emphasizes the importance of evidence in forming rational beliefs and the non-negotiable nature of rationality when viewed as a mathematical law. How would the principles of rationality, viewed as mathematical laws, apply to the formation of beliefs in a real-world scenario such as decision-making in a business context? In a business context, the principles of rationality as mathematical laws would guide decision-making by demanding evidence-based reasoning and eliminating subjective bias. Let's take an example of a company deciding whether to launch a new product. In this scenario, rational decision-making would involve gathering and analyzing data related to market demand, competition, production costs, potential revenue, and other relevant factors. Each of these pieces of evidence contributes to the formation of a belief about the potential success of the new product. In contrast to decision-making based on social customs or personal biases, this approach does not allow for beliefs to be formed based on hearsay, personal preferences, or unsubstantiated assumptions. For instance, the CEO's personal liking for the product, or a prevalent industry trend that lacks supporting data, would not be sufficient reasons to greenlight the product launch. Furthermore, the principles of rationality as mathematical laws would also dictate that beliefs should be updated in light of new evidence. If subsequent market research indicates lower than expected demand for the product, the initial belief about the product's potential success should be revised, regardless of previous convictions. This approach ensures that business decisions are made in a manner that is most likely to reflect the reality of the market and the potential outcomes of the decision, thereby maximizing the likelihood of making the most beneficial decision. It also minimizes the risk of costly mistakes that can result from decisions based on unfounded beliefs or biases.
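To make the updating step concrete, here is a toy calculation (the numbers are invented purely for illustration):

```python
# Toy illustration (invented numbers): updating a belief about a product
# launch after observing weak market-research results, via Bayes' rule.
prior_success = 0.6                 # initial credence that the launch succeeds
p_weak_given_success = 0.2          # chance of weak survey results if it would succeed
p_weak_given_failure = 0.7          # chance of weak survey results if it would fail

evidence = (prior_success * p_weak_given_success
            + (1 - prior_success) * p_weak_given_failure)
posterior_success = prior_success * p_weak_given_success / evidence

print(round(posterior_success, 3))  # 0.3 -- the belief should be revised downward
```

A drop from 60% to 30% credence does not by itself dictate the decision, but it shows how the belief moves with the evidence rather than with prior conviction.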
8fbc250f-73a2-4208-a6b3-dc4dc7977967
trentmkelly/LessWrong-43k
LessWrong
A Survey of Early Impact Measures In the context of AI alignment an impact penalty is one way of avoiding large negative side effects from misalignment. The idea is that rather than specifying negative impacts, we can try to avoid catastrophes by avoiding large side effects altogether. Impact measures are ways to map a policy to a number which is intended to correspond to "how big of an impact will this action have on the world?" Using an impact measure, we can regularize any system with a lot of optimization power by adding an impact term to its utility function. This post records and summarizes much of the early research on impact penalties. I emphasize the aims of each work, and the problems associated with each approach, and add occasional commentary along the way. In the next post I will dive more deeply into recent research which, at least in my opinion, is much more promising. ---------------------------------------- The mathematics of reduced impact: help needed, by Stuart Armstrong (2012) This is the first published work I could find which put forward explicit suggestions for impact measures and defined research directions. Armstrong proposed various ways that we could measure the difference between worlds, incorporate this information into a probability distribution, and then use that to compare actions. Notably, this post put a lot of emphasis on comparing specific ontology-dependent variables between worlds in a way that is highly sensitive to our representation. This framing of low impact shows up in pretty much all of the early writings on impact measures. One example of an impact measure is the "Twenty (million) questions" approach, where humans define a vector of variables like "GDP" and "the quantity of pesticides used for growing strawberries." We could theoretically add some L1 regularizer to the utility function, which measures the impact difference between a proposed action and the null action, scaled by a constant factor. The AI would then be incentivized to keep these
a91f507d-1498-43fe-8ed0-b50eb3788d15
trentmkelly/LessWrong-43k
LessWrong
My current thoughts on the risks from SETI SETI stands for the search for extraterrestrial intelligence. A few projects, such as Breakthrough Listen, have secured substantial funding to observe the sky and crawl through the data to look for extraterrestrial signals. A few effective altruists have proposed that passive SETI may pose an existential risk to humanity (for some examples, see here). The primary theory is that alien civilizations could continuously broadcast a highly optimized message intended to hijack or destroy any other civilizations unlucky enough to tune in. Many alien strategies can be imagined, such as sending the code for an AI that takes over the civilization that runs it, or sending the instructions on how to build an extremely powerful device that causes total destruction. Note that this theory is different from the idea that active SETI is harmful, ie. messaging aliens on purpose. I think active SETI is substantially less likely to be harmful, and yet it has received far more attention in the literature. Here, I collect my current thoughts about the topic, including arguments for and against the plausibility of the idea, and potential strategies to mitigate existential risk in light of the argument. In the spirit of writing fast, but maintaining epistemic rigor, I do not come to any conclusions in this post. Rather, I simply summarize what I see as the state-of-the-debate up to this point, in the expectation that people can build on the idea more productively in the future, or point out flaws in my current assumptions or inferences. Some starting assumptions Last year, Robin Hanson et al. published their paper If Loud Aliens Explain Human Earliness, Quiet Aliens Are Also Rare. I consider their paper to provide the best available model to-date on the topic of extraterrestrial intelligence and the Fermi Paradox (along with a very similar series of papers written by S. Jay Olson previously). You can find a summary of the model from Robin Hanson here, and a video-summary here. The p
d4a37cc1-0e99-44c5-93a6-d7c80c15352b
trentmkelly/LessWrong-43k
LessWrong
David Allen vs. Mark Forster
f84199b5-d659-412a-802b-008587de5b0f
trentmkelly/LessWrong-43k
LessWrong
The importance of open-source cryptography for Singleton prevention I'm sure most of the readers of lesswrong and overcomingbias would consider a (edit: non-FAI) singleton scenario undesirable. (In a singleton scenario, a single political power or individual rules over most of humanity.) A singleton could occur if a group of people developed Artificial General Intelligence with a significant lead over their competitors. The economic advantage from sole possession of AGI technology would allow the controllers of the technology the opportunity to gain an economic or even a political monopoly in a relatively short timescale. This particular risk, as Robin Hanson pointed out, is less plausible if the "race for AGI" involves many competitors, and no competitor can gain too large of a lead over others. This "close race" scenario is more likely if there is an "open-source" attitude in the AGI community. Even if private organizations attempt to maintain exclusive control of their own innovations, one might hope that hackers or internal leaks would release essential breakthroughs before the innovators could gain too much of a lead. Then, supposing AGI is rapidly acquired by many different powers soon after its development, one can further hope that the existence of multiple organizations with AGI with differing goals would serve to prevent any one power from gaining a monopoly using AGI. This post is concerned with what happens afterwards, when AGI technology is more or less publicly available. In this situation, the long-term freedom of humanity is still not guaranteed, because disparities in access to computational power could still allow one power to gain a technological lead over the rest of humanity. Technological leads in the form of conventional warfare technologies are not as likely, and perhaps not even as threatening, as technological leads in the form of breakthroughs in cryptography. In this information-dependent post-utopia, any power which manages to take control of the computational structures of a society would gain i
a00a2d6c-a5f5-4c48-95a6-8f30cfb6d36a
trentmkelly/LessWrong-43k
LessWrong
Other minds and bats: the vampire Turing test Thoughts inspired by Yvain's philosophical role-playing post. Thomas Nagel produced a famous philosophical thought experiment "What Is It Like to Be A Bat?" In it, he argued that the reductionist understanding of consciousness was insufficient, since there exist beings - bats - that have conscious experiences that humans cannot understand. We cannot know what "it is like to be a bat", and looking reductively at bat brains, bat neurones, or the laws of physics, cannot (allegedly) grant us any understanding of this subjective experience. Therefore there remains an unavoidable subjective component to the problem of consciousness. I won't address this issue directly (see for instance this, on the closely related subject of qualia), but instead look at the question: suppose someone told us that they actually knew what it was like to be a bat (as well as what it was like to be a human). Call such a being a vampire, for obvious reasons. So if someone claimed they were a vampire, how would we test this? We can't simply ask them to describe what it's like to be a bat - it's perfectly possible they know what it's like to be a bat, but cannot describe it in human terms (just as we often fail to describe certain types of experiences to those who haven't experienced them). Could we run a sort of Turing test - maybe implant the putative vampire's brain into a bat body, and see how bat-like it behaved? But, as Nagel pointed out, this could be a test of whether they know how to behave like a bat behaves, not whether they know what it's like to be a bat. I posit that one possible solution is to use the approach laid out in my post "the flawed Turing test". We need to pay attention as to how the "vampire" got their knowledge. If the vampire is a renowned expert on bat behaviour and social interactions, who is also interested in sonar and paragliding - then them functioning as a bat is weak evidence as to them actually knowing what it is like to be a bat. But suppose instead that t
55685df1-19ae-4537-9b35-01961e958554
trentmkelly/LessWrong-43k
LessWrong
Would quantum immortality mean subjective immortality? I've heard a lot of people talking about quantum immortality/suicide like you would be subjectively immortal but I don't see why. Isn't there a chance subjective you will just be sent to the dead branch and a new you is created but that new you in the alive branch is not subjectively you?
5a5878c6-9b71-4669-b846-090a4973ba27
trentmkelly/LessWrong-43k
LessWrong
Bay Winter Solstice: call for speech pitches! Call to pitch me on speeches for Bay Winter Solstice! Types of things I am particularly interested in: * personal stories relevant to Solstice themes * concrete historical stories relevant to Solstice themes * stories of using truthseeking & rationalist virtue to achieve good outcomes in the world * humor A few likely themes (non-exhaustive):  * mortality, scarcity of time, frailty of humans, x-risk * humans as just little animals that desperately want to be okay but aren’t very good at it (yet?) * being okay in a world that isn’t * beauty and meaning and worth among the darkness * humans do cool shit (more and more so over time) * “and all we do may be undone, but love stays loved and songs stay sung” …but you are welcome to suggest other types of things on other themes too! Details and caveats: * fill out the attached form to express interest! ideally within the next week or two * if you want to pitch me on an existing text, link me the text (can be yours or someone else’s) * if you want to write a new text, probably just describe what it would be about, optionally also include a writing sample * note that I don’t promise to include everything anyone pitches me on (even at later stages, if I think your idea is interesting but then I find that the actual thing you write doesn’t actually fit well into the program, I may decide it won’t work; if I like the text but I find that your delivery of it isn’t compelling, I may ask if you’re okay with someone else reading it) Feel free to comment or message me with any questions. Form: https://docs.google.com/forms/d/e/1FAIpQLSfJsCxfCmNTn6SuLWYWc2PBL7k8aQqQGx2GT7gRoHP_a53iaA/viewform 
2b49f0df-026e-456f-be6a-c105c477b7e8
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
A proof of inner Löb's theorem This is a short post that offers a slightly different take on the standard proof of Löb's theorem. It offers nothing else of any value :) We seek to prove the "inner" version, which we write as: □P↔□(□P→P).

The proof uses quining to build a related sentence L, the "Löb sentence", which talks about its own source code. By construction L has the property: □L↔□(□L→P)

Then, we can show that □L↔□P, i.e. they're equivalent! We do this by plugging □L into itself to get a twisty □P. We can then replace each □L with □P and prove Löb's theorem.

The proof
=========

This proof uses the same rules of box manipulation as on the [wiki page](https://en.m.wikipedia.org/wiki/L%C3%B6b%27s_theorem#Modal_rules_of_inference). We start by creating L using quining, i.e. taking a [modal fixed point](https://en.m.wikipedia.org/wiki/L%C3%B6b%27s_theorem#Modal_fixed_points):

1. ⊢L↔(□L→P) (exists as a modal fixed point)

Yep, this is skipping the details of the most interesting part, but alas I don't understand them well enough to do more than wave my hands and say "quining". We then stick it inside the box to get our first property:

2. ⊢□(L↔(□L→P)) (from (1) by necessitation)
3. ⊢□L↔□(□L→P) (from (2) by box-distributivity in both directions)

We now want to show that □L↔□P. We can get the forward direction by feeding a copy of □L into itself:

4. ⊢□L→(□□L→□P) (box-distributivity on (3))
5. ⊢□L→□□L (internal necessitation)
6. ⊢□L→□P (from (4) and (5))

The backward direction is equivalent to □P→□(□L→P), and is straightforward:

7. ⊢P→(□L→P) (trivial)
8. ⊢□P→□(□L→P) (necessitation and box-distributivity on (7))

Taking those together, we've shown □L and □P are equivalent.

9. ⊢□L↔□P (from (6) and (8))

Now we'd like to finish by appealing to the following chain: □P↔□L↔□(□L→P)↔□(□P→P)

We've proven all but the last part of the chain. Here are the steps that let us do the substitution:

10. ⊢(□L→P)↔(□P→P) (since □L and □P are equivalent by (9))
11. ⊢□((□L→P)↔(□P→P)) (from (10) by necessitation)
12. ⊢□(□L→P)↔□(□P→P) (from (11) by box-distributivity in both directions)

And that's everything we need:

13. ⊢□P↔□(□P→P) (from (3), (9), and (12))
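For completeness, the box-manipulation rules appealed to above are the standard rules of provability logic; they are restated here for convenience (see the linked wiki page for the canonical forms):

```latex
% The three rules of box manipulation used in the proof above.
% Necessitation is a rule of inference; the other two are axiom schemas.
\begin{align*}
  &\text{Necessitation:} && \text{from } \vdash A \text{ conclude } \vdash \Box A \\
  &\text{Box-distributivity (K):} && \vdash \Box(A \to B) \to (\Box A \to \Box B) \\
  &\text{Internal necessitation (4):} && \vdash \Box A \to \Box\Box A
\end{align*}
```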
e9863ece-07ee-4b45-b9c2-9c937567638c
trentmkelly/LessWrong-43k
LessWrong
Are abortion views sexist? Indian girls are born on 500,000 fewer occasions per year than Indian boys (2006). (Photo: Steve Evans) Abortion isn’t too bad according to half of Americans, and most liberals and the irreligious and that bunch. The fetus never really got as far as being a child, and virtually nobody thinks failing to have children is as bad as murder. Selective abortion of female fetuses, on the other hand, is horrific according to both ends of the ideological spectrum. And the reasons given are almost always to do with it being bad for the females who aren’t born. It’s “discrimination”, a “gross violation of women’s rights”, “an extreme manifestation of violence against women”. As my pro-choice friend (among others) complains, ‘There are all these females who should exist and are missing!’ So confirmed females have a right to exist if they are conceived, and have suffered a grave loss if they cease to be, but fetuses who might be male may as well not exist? This is either hypocritical or extremely sexist. Why are the same people so often adamant about both views? They both appear to be applications of general pro-female sympathy. When supporting the pro-choice side, the concern is for a woman’s rights over her own body. When condemning gender-specific abortion, the concern is for the females who won’t be born. Siding with the females becomes complicated when females are conspicuous as aborters one day and abortees the next. So it looks like this isn’t hypocrisy via accidental oversight, but policy choice biased by sympathies to a specific gender. If ‘whether an aborted fetus has been done a terrible wrong’ were the important point, we should expect to see more consistency on that. When I asked about this previously my friend suggested that the motivations were importantly different in the two cases. Aborting someone because they are female is wrong. Aborting someone because you don’t want to look after them is compassionate. This doesn’t apply here, even if it were true. Ge
cc5551ee-8627-4e91-ba9a-dc0452b41757
trentmkelly/LessWrong-43k
LessWrong
Terrorism, Tylenol, and dangerous information Recently, there has been an alarming development in the field of terrorist attacks; more and more terrorists seem to be committing attacks via crashing vehicles, often large trucks, into crowds of people. This method has several advantages for an attacker - it is very easy to obtain a vehicle, it is very difficult for police to protect against this sort of attack, and it does not particularly require special training on the part of the attacker. While these attacks are an unwelcome development, I would like to propose an even more worrisome question - why didn't this happen sooner? I see no reason to believe that there has been any particular technological development that has caused this method to become prevalent recently; trucks have been in mass production for over a hundred years. Similarly, terrorism itself is not particularly new - just look to the anarchist attacks of the late 19th and early 20th century. Why, then, weren't truck attacks being made earlier? The answer, I think, is both simple and frightening. The types of people who make attacks hadn't thought of it yet. The main obstacle to these attacks was psychological and intellectual, not physical, and once attackers realized these methods were effective the number of attacks of this sort began increasing. If the Galleanists had realized this attack method was available, they might well have done it back in '21 -- but they didn't, and indeed nobody motivated to carry out these attacks seemed to until much later. Another instance - though one with less lasting harm - pertains to Tylenol. In 1982, a criminal with unknown motives tampered with several Tylenol bottles, poisoning the capsules with cyanide and then replacing them on store shelves. Seven people died in the original attack, which caused a mass panic to the point where police cars were sent to drive down the streets broadcasting warnings against Tylenol from their loudspeakers; more people still were killed in later "copycat" crimes. In
618c9fd4-5fa6-4c85-9cf0-f631238eaa65
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Is technical AI alignment research a net positive? I am concerned that technical AI alignment research could be increasing the risks from AGI, as opposed to decreasing them. Here are some reasons why alongside some possible interventions: (analogies are purposefully crude to help illustrate my points, but I recognize that in reality, the situation is not necessarily as clear cut / black and white) * **Accelerating timelines:** Promising results in AI alignment could lower researchers' assessments of danger, increase risk-taking and accelerate the development of AGI. Contrastingly, in a world where people are aware that misaligned AI could be dangerous to humanity, but there is not a lot of promising alignment research, researchers would be more cautious about developing and deploying AI. An analogy - if you believed self-driving cars were very dangerous and ideally should never be let on the road, you could plausibly be against attempts to make self-driving cars safer as this makes it more likely for them to be allowed on the road. + *Possible solution: focus more on making people aware of risks from AI as opposed to trying to mitigate those risks.* * **Subtle misalignment:** If we become better at aligning the goals of AI with our own goals, misalignment will become more subtle and much harder to spot. A slightly misaligned AI could be much more dangerous than a very misaligned AI. For example, it could take us much longer to notice that the AI system was misaligned and by the time we do, multiple negative cascades have been set in motion. For those concerned with S-risks, it seems like a slightly misaligned AI is more likely to lead to a long period of human suffering. An analogy - a clearly dangerous axe-wielding maniac is plausibly less dangerous than a calculating psychopath who appears very normal but secretly plans to kill as many people as possible. + *Possible solution: focus more on techniques for detecting misalignment as opposed to techniques for achieving alignment.* * **Increased usefulness for malicious actors:** There is a lot of talk of "alignment", but less about what we are actually aligning with. The ability to align the goals of AI with complex human-level goals could make AI-based systems much more effective weapons/tools for malicious actors than AI's that aren't able to encapsulate human values as well. An analogy - say we have a destructive weapon that, when used, wipes out a random area of the earth. This weapon isn't that useful because you cannot guarantee that a) it will attack the area you want it to and b) it will not attack your own area. However, given the ability to select the attack area very precisely, the weapon becomes much more useful and more likely to be used. + *Possible solution: spend more time considering threat models that involve malicious actors and ways to mitigate them, as opposed to just "accidental" risks from AI. This potentially means a greater focus on governance and international peace as opposed to technical alignment research.* Curious to get some convincing rebuttals of these concerns. I do hope that technical AI safety research is a net positive, however, at the moment am quite skeptical.
c7cb908a-025a-4824-afca-b97128d360fb
trentmkelly/LessWrong-43k
LessWrong
AGI isn't just a technology Each time a skeptic talks about AI as a technology, I think it signals a likely crux. I've been watching the public AI debate closely and sadly. I think that debate might be crucial in whether we actually get aligned AGI, and it's not going particularly well so far. The debate is confused, and at risk of causing polarization by irritating all involved. Identifying cruxes should help the debate be less irritating. Agency as a crux for x-risk Of course AGI is a technology. But only in the way that humans are technically animals. Saying humans are animals has strong and wrong implications about how we behave and think. Calling AGI a technology has similar wrong implications. Technologies do what they're designed to, with some potential accidents and side effects. Boilers explode, and the internet is used for arguments and misinformation instead of spreading information. These effects can be severe, and could even threaten the future of humanity. But they're not as dangerous as accidentally creating something that becomes smarter than you, and actively tries to kill you. When someone refers to AI as a technology, I think they're often not thinking of it as having full agency. While AI without full agency does present possible x-risks, I think it's a mistake to mix those in with the risks from fully agentic AGI. The risks from a fully agentic AGI are both easier to grasp, and more severe. I think it's wiser to address those first, and only move on with a careful distinction in topics. By full agency, I mean something that pursues goals, and chooses its own subgoals (a relevant example subgoal is preventing humans from interfering with its projects). There's a spectrum of agency. A chess program has limited agency; it was made to play a good game of chess, and it can take moves to do that. Animals don't really make long-range plans that include subgoals, and no existing AI has long-range goals and makes new plans to achieve them. Humans are currently unique in that
a3af559c-d029-491a-81bc-dd1b18b00643
StampyAI/alignment-research-dataset/arxiv
Arxiv
OCR-IDL: OCR Annotations for Industry Document Library Dataset

1 Introduction
---------------

Analysis of masses of scanned documents is essential in intelligence, law, knowledge management, historical scholarship, and other areas [[25](#bib.bib25)]. The documents are often complex and varied in nature, can be digital-born or scanned, and contain elements such as forms, figures, tables, graphics and photos, while being produced by various printing and handwriting technologies. Common examples of documents include purchase orders, financial reports, business emails, sales agreements, vendor contracts, letters, invoices, receipts, resumes, and many others [[54](#bib.bib54)]. Processing the various document types according to the user’s intent is done with manual labor that is time-consuming and expensive, while also requiring manual customization or configuration. In other words, each type of document demands hard-coded changes when there is a slight change in the rules or workflows, or even when dealing with multiple formats. To address these problems, Document Intelligence models and algorithms are created to automatically structure, classify and extract information from documents, improving automated document processing. In particular, Document Intelligence as a research field aims at creating models for automatically analyzing and understanding documents, reducing the time and cost associated with doing so. From a research perspective, what makes Document Intelligence especially challenging is the requirement of combining various disciplines such as optical character recognition (OCR), document structure analysis, named entity recognition, information retrieval, authorship attribution and many more. Recent methods in Document Intelligence utilize deep neural networks combining Computer Vision and Natural Language Processing. Hao et al. [[17](#bib.bib17)] proposed end-to-end training using Convolutional Neural Networks to detect tables in documents. Several published works [[46](#bib.bib46), [42](#bib.bib42), [56](#bib.bib56)] exploit the advances in object detection [[40](#bib.bib40), [19](#bib.bib19)] to further improve the accuracy of document layout analysis. Even though these works have advanced the Document Intelligence field, there are two main limitations to recognize: (i) they rely on small human-annotated datasets and (ii) they use pre-trained networks that have never seen any documents, and hence never the interaction between text and layout. Inspired by BERT [[11](#bib.bib11)], Xu et al. [[54](#bib.bib54)] identified these problems and proposed a pre-training strategy to unlock the potential of large-scale unlabeled documents. More specifically, they obtained OCR annotations from the open-source OCR engine Tesseract [[45](#bib.bib45)] for 5 million documents from the IIT-CDIP [[25](#bib.bib25)] dataset. With the introduction of this pre-training strategy and advances in modern OCR engines [[1](#bib.bib1), [12](#bib.bib12), [20](#bib.bib20), [28](#bib.bib28), [34](#bib.bib34)], many contemporary approaches [[7](#bib.bib7), [2](#bib.bib2), [53](#bib.bib53)] have utilized even more data to advance the Document Intelligence field. In this work, we make public the OCR annotations for 26 million pages, obtained with a commercial OCR engine and representing a monetary value of over 20K US$. Our motivation for releasing a massive-scale document dataset annotated with a commercial OCR engine is two-fold.
First of all, the use of different amounts of documents and different OCR engines across the papers makes it impossible to fairly compare their results, and hence their architectures. By creating this dataset, we hope that works in Document Intelligence will become more comparable, and that we will gain better intuition on what a proposed architecture can actually accomplish. Secondly, we decided to use a commercial OCR engine, specifically Amazon Textract (<https://aws.amazon.com/textract/>), over Tesseract. This is because the performance of the OCR engine can significantly affect the model’s performance, as can be seen in fields that use OCR annotations, such as fine-grained classification [[29](#bib.bib29), [30](#bib.bib30), [31](#bib.bib31)], scene-text visual question answering [[9](#bib.bib9), [44](#bib.bib44), [8](#bib.bib8), [13](#bib.bib13)], and document visual question answering (DocVQA) [[50](#bib.bib50), [33](#bib.bib33)]. Apart from improving the annotation quality significantly, we want to level the differences between research groups and companies.

Figure 1: Document images of OCR-IDL. The dataset includes a wide variety of documents with dense text (a), tables (b), figures (c), and complex layouts that combine different elements (d, e).

We provide the annotations for publicly available documents from the Industry Documents Library (IDL). IDL is a digital archive of documents created by industries which influence public health, hosted by the University of California, San Francisco Library (<https://www.industrydocuments.ucsf.edu>). IDL has already been used in the literature for building datasets: IIT-CDIP [[25](#bib.bib25)], RVL-CDIP [[18](#bib.bib18)], DocVQA [[50](#bib.bib50), [33](#bib.bib33)]. Hence, our OCR annotations can be used to further advance these tasks. The rest of the paper is structured as follows. First, we briefly explain the related works. Next, we elaborate on our data collection and compare it to other datasets. Finally, we provide various statistics of the annotations and conclude the paper.

2 Related Work
---------------

Document Intelligence can be considered an umbrella term covering the problems of Key Information Extraction [[10](#bib.bib10), [54](#bib.bib54)], Table Detection [[41](#bib.bib41), [38](#bib.bib38)] and Structure Recognition [[39](#bib.bib39), [55](#bib.bib55)], Document Layout Segmentation [[5](#bib.bib5), [4](#bib.bib4)], Document Layout Generation [[6](#bib.bib6), [36](#bib.bib36), [3](#bib.bib3), [48](#bib.bib48)], Document Visual Question Answering [[51](#bib.bib51), [50](#bib.bib50), [32](#bib.bib32)], and Document Image Enhancement [[49](#bib.bib49), [22](#bib.bib22), [47](#bib.bib47)], all of which involve understanding the visually rich semantic information and structure of the different layout entities of a whole page. The early days of Document Intelligence relied on rule-based, handcrafted approaches, mainly divided into bottom-up and top-down methods.
Bottom-up methods [[15](#bib.bib15), [24](#bib.bib24), [35](#bib.bib35), [43](#bib.bib43)] first detect connected components at the pixel level, which are later fused into higher-level structures through various heuristics and named depending on distinct structural features. Top-down methods [[16](#bib.bib16)], by contrast, dissect a page into smaller units such as titles, text blocks, lines, and words. Lately, the success of large-scale pre-training [[11](#bib.bib11)] in Natural Language Processing has been integrated into Document Intelligence, resulting in impressive performance gains. These methods follow a two-step procedure: first they pretrain the models on unlabeled documents (with OCR annotations obtained by an off-the-shelf OCR engine), then they finetune them on specific downstream tasks. LayoutLM [[54](#bib.bib54)] is one of the first works to pretrain a BERT-based language model with document layout information, using masked language/vision modeling and multi-label classification. BROS [[21](#bib.bib21)] is built on top of Span-BERT [[23](#bib.bib23)] with a spatially aware graph decoder; for the pretraining loss, it uses an area-masked language model. Self-Doc [[27](#bib.bib27)] utilizes two separate transformer [[52](#bib.bib52)] encoders for visual and textual features, which are later fed to a multi-modal transformer encoder. TILT [[37](#bib.bib37)] tries to encode the layout information by integrating pairwise 1D and 2D information into its model. Uni-Doc [[14](#bib.bib14)] is designed to handle most document understanding tasks; it takes words and visual features from a semantic region of a document image and combines three self-supervised losses. More recent methods, Doc-Former [[2](#bib.bib2)] and LayoutLMv2 [[53](#bib.bib53)], combine multiple pretraining losses such as image-text alignment, learning to construct image features, and multi-modal masked language modeling to achieve state-of-the-art results. Yet, comparing all of these works is cumbersome, since each work that performs pretraining uses different amounts of data with diverse OCR engines. This makes it especially hard to understand where the gain is coming from. In other words, we cannot draw clear conclusions on questions such as: “What is the effect of the amount of pretraining data on the performance?”, “Is the performance gain coming from a better/stronger OCR engine or from the proposed architecture?”, “What is the effect of the pretraining loss on the downstream tasks, keeping the OCR and the amount of data identical?” To help answer these questions, we collect and annotate the largest public OCR-annotated documents dataset (OCR-IDL).

3 OCR-IDL Dataset
------------------

In this section, we elaborate on various details regarding OCR-IDL. First, we explain the process we follow to obtain the IDL data and how we use Amazon Textract to obtain OCR annotations. Next, we compare OCR-IDL to other datasets that have proven useful for Document Intelligence tasks. Finally, we provide in-depth statistics on the documents we use.
| Dataset | # of Docs | # of Pages | Docs source | Docs. description | OCR-Text | OCR-BB | Layout | Doc. type |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| IIT-CDIP [[25](#bib.bib25)] | 6.5M* | 35.5M* | UCSF-LTD | Industry documents | Unknown | ✗ | ✗ | ✓ |
| RVL-CDIP [[18](#bib.bib18)] | -† | 400K | UCSF-LTD | Industry documents | ✗ | ✗ | ✗ | ✓ |
| PublayNet [[56](#bib.bib56)] | -† | 364K | PubMedCentral | Journals and articles | ✗ | ✗ | ✓ | ✗ |
| DocBank [[26](#bib.bib26)] | -† | 500K | arXiv | Journals and articles | ✗ | ✗ | ✓ | ✗ |
| DocVQA [[51](#bib.bib51)] | 6K | 12K | UCSF-IDL | Industry documents | Microsoft OCR | ✓ | ✗ | ✓ |
| OCR-IDL | 4.6M | 26M | UCSF-IDL | Industry documents | Amazon Textract | ✓ | ✗ | ✓ |

Table 1: Summary of other Document Intelligence Datasets. *We skipped 145K documents that gave xml parsing errors, or didn’t contain a document ID or number of pages. †No traceability between different pages of the same document.

### 3.1 Data Collection

As already mentioned, IDL is an industry documents library hosted by UCSF; its main purpose is to “identify, collect, curate, preserve, and make freely accessible internal documents created by industries and their partners which have an impact on public health, for the benefit and use of researchers, clinicians, educators, students, policymakers, media, and the general public at UCSF and internationally” (<https://en.wikipedia.org/wiki/Industry_Documents_Library>). IDL in total contains over 70 million documents. We use the publicly available link (<https://s3-us-west-2.amazonaws.com/edu.ucsf.industrydocuments.artifacts/>) to download the 4.6 million documents of the IDL dataset, which come in the form of PDFs. Some examples can be viewed in Figure 1. We appreciate that the documents are quite varied in terms of layout, with tables, figures, ads and combinations of them on a single page. Also, we see that the documents are quite text-rich, containing many OCR words. Finally, the pages contain smudges, taints and other discoloration, as found in real use-case scenarios.

Figure 2: Distribution of the annotated documents in terms of document types and dates.

We choose to annotate only 4.6M documents since annotating 13M documents would not only be much costlier (42K dollars instead of 18K), but it has also been shown in the literature that using more data has diminishing returns on the downstream task [[7](#bib.bib7)]. After obtaining the data, we preprocess the documents to remove empty, faulty and broken PDFs. This process resulted in the elimination of 6548 documents. Moreover, we also remove documents that have more than 2900 pages, of which there are 71 in total. Removing such huge documents was necessary because Amazon Textract OCR only accepts up to 3000 pages. After pre-processing the documents, we feed all of them to the OCR engine to obtain the annotations. The annotations provided by the Textract engine include transcriptions of words and lines and their corresponding bounding boxes and polygons, with a text type that can be printed or handwritten. Processing all 4.6M documents was done by a single machine with 16 parallelized cores and took about 1 month.

Figure 3: Distribution of number of pages and number of words per document type.
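A minimal sketch of this preprocessing and annotation pipeline is given below. It is illustrative only: the actual scripts are in the repository linked in the conclusion, and the helper names, bucket layout, and omitted job-status polling are simplifications rather than the exact implementation.

```python
# Illustrative sketch (not the released scripts): filter PDFs, then run
# Amazon Textract asynchronous text detection and collect WORD/LINE blocks.
import boto3
from pypdf import PdfReader

textract = boto3.client("textract")
MAX_PAGES = 2900  # Textract async jobs accept at most 3000 pages

def keep_document(pdf_path: str) -> bool:
    """Drop empty, faulty or broken PDFs and documents over the page limit."""
    try:
        n_pages = len(PdfReader(pdf_path).pages)
    except Exception:
        return False  # unreadable / broken PDF
    return 0 < n_pages <= MAX_PAGES

def start_annotation(bucket: str, key: str) -> str:
    """Start an asynchronous Textract text-detection job for one PDF in S3."""
    resp = textract.start_document_text_detection(
        DocumentLocation={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    return resp["JobId"]

def collect_blocks(job_id: str) -> list:
    """Page through a finished job, keeping WORD/LINE blocks (text + geometry).

    Waiting for the job to reach SUCCEEDED is omitted here for brevity.
    """
    blocks, token = [], None
    while True:
        kwargs = {"JobId": job_id}
        if token:
            kwargs["NextToken"] = token
        resp = textract.get_document_text_detection(**kwargs)
        blocks += [b for b in resp["Blocks"] if b["BlockType"] in ("WORD", "LINE")]
        token = resp.get("NextToken")
        if token is None:
            return blocks
```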
### 3.2 Comparison to existing datasets

In this section, we compare the number of documents and pages to other datasets that are used in Document Intelligence. To name a few, the Illinois Institute of Technology dataset for Complex Document Information Processing (IIT-CDIP) [[25](#bib.bib25)] is the biggest document dataset and is designed for the task of information retrieval. The Ryerson Vision Lab Complex Document Information Processing (RVL-CDIP) [[18](#bib.bib18)] dataset used the IIT-CDIP metadata to create a new dataset for document classification. PublayNet [[56](#bib.bib56)] and DocBank [[26](#bib.bib26)] are datasets designed for layout analysis tasks, while DocVQA [[33](#bib.bib33), [51](#bib.bib51)] is designed for the Visual Question Answering task over document images. We summarize all the key information for comparison in Table 1. First, we stress that OCR-IDL is the second biggest dataset for pre-training and the biggest dataset with annotations obtained from a commercial OCR engine. This provides a unique opportunity for Document Intelligence research to utilize unlabeled documents. Furthermore, even though OCR-IDL uses documents from the same source as IIT-CDIP and RVL-CDIP, it also contains other types of industry documents. OCR-IDL contains documents from the chemical, medical and drug industries, hence having more variety in terms of content as well as layout information.

Figure 4: Distribution of number of words (left) and lines (right) by pages.

### 3.3 Dataset Statistics

IDL documents come with metadata that is curated by human annotators. It includes information about the date, industry, drugs, chemicals, document types and many more. We restrict our analysis to exploring what types of documents we have and the distribution of the dates they were created, which can be found in Figure 2. In the IDL metadata, there are 35k different document types, of which we show only the most common 20, including letter, report, email, memo, note, etc. As can be seen on the left side of Figure 2, most documents’ type is unknown. Moreover, even though the distribution is skewed towards letters, reports and memos, we also have very distinct types such as charts, graphics and news articles. This is especially encouraging because it provides diverse layouts within various contexts. Moreover, the documents’ creation dates span 100 years, with most of the documents in the range 1980-2010, as can be seen on the right side of Figure 2. The dates the documents were created contribute to the variability of the documents, not only in terms of semantics but, more importantly, in terms of visual artifacts (smudges, different resolutions) from different printing technologies. On top of the amount of documents, we give more details on the number of pages and words for each document type.
The number of pages follows the same distribution as the amount of documents per type, as can be appreciated in Figure 3. Also, we can see from Figure 3 that the report type is much richer in terms of text, while the rest follow more or less the same distribution.

Figure 5: Distribution of various layout blocks in OCR-IDL. Our documents contain a lot of diversity in terms of title, figure, text block, list and table.

We turn our attention to OCR annotation statistics. In total, we obtain OCR annotations for 4614232 (4.6M) documents comprising 26621635 (26M) pages, averaging 6 pages per document. Moreover, since documents are known to be a text-rich environment, we provide extra details regarding the number of words and lines per page and per document. We have 166M words with 46M lines; on average there are 62.5 words and 17.5 lines per page, and 360.8 words and 101.25 lines per document. To have a better understanding of the distribution of words and lines per page, we present Figure 4. As shown in the figure, the bulk of the distribution lies between 20 and 100 words per page, while a significant number of pages contain more than 200 words per page. The distribution for lines is different: most of the pages contain from 10 to 50 lines. In either case, it is clearly observed that the documents at hand are ideal for performing pretraining, with their diverse layouts and text-rich settings.

Figure 6: Qualitative results for segmentation of layout information in OCR-IDL.

Finally, to quantify the diversity of the documents in terms of layout, we run a publicly available Faster-RCNN [[40](#bib.bib40)] trained on PubLayNet [[56](#bib.bib56)] to segment a document into text block, title, figure, list, and table. To obtain the segmentation results, we randomly selected 40K pages; some segmentation examples can be found in Figure 6. It can be appreciated from Figure 5 that 40% of the documents have at least 1 figure. We also observe that 10-20% of the pages have at least 1 table and list, showing that the documents at hand contain very diverse layout information. Moreover, more than 45% of the pages contain more than 1 text block and 1 title, making the documents a text-rich environment for pre-training on Document Intelligence tasks.

4 Conclusion
-------------

In this paper, we have presented our effort to provide OCR annotations for the large-scale IDL document dataset, called OCR-IDL. These annotations have a monetary value of over $20,000 and are made publicly available with the aim of advancing the Document Intelligence research field.
Our motivation is two-fold. First, we make use of a commercial OCR engine to obtain high-quality annotations, reducing the noise introduced by OCR in pretraining and downstream tasks. Secondly, it is our hope that OCR-IDL can be a starting point for making future works on Document Intelligence more comparable. Throughout this article we have detailed the process that we followed to obtain the annotations, presented a statistical analysis, and compared them with other datasets in the state of the art. The provided analysis shows that our contribution has high potential to be used successfully in pre-training strategies for Document Intelligence models. All the code for the data collection process, as well as the annotations, can be accessed at <https://github.com/furkanbiten/idl_data>.
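As a rough illustration, the per-page statistics reported in Section 3.3 can be recomputed from Textract-style annotations along the following lines. The field names (Blocks, BlockType, Page) follow the raw Textract response and may differ from the exact layout of the released files, so treat this as a sketch under that assumption rather than a description of the release format.

```python
# Sketch: per-page word/line counts from a Textract-style annotation file.
# Field names (Blocks, BlockType, Page) follow the raw Textract response and
# may differ from the released JSON layout.
import json
from collections import Counter
from pathlib import Path

def page_stats(annotation_file: Path):
    """Return (words_per_page, lines_per_page) counters for one document."""
    blocks = json.loads(annotation_file.read_text())["Blocks"]
    words, lines = Counter(), Counter()
    for block in blocks:
        if block["BlockType"] == "WORD":
            words[block["Page"]] += 1
        elif block["BlockType"] == "LINE":
            lines[block["Page"]] += 1
    return words, lines

if __name__ == "__main__":
    words, lines = page_stats(Path("example_annotation.json"))  # hypothetical file
    n_pages = max(len(words), len(lines)) or 1
    print(f"avg words/page: {sum(words.values()) / n_pages:.1f}")
    print(f"avg lines/page: {sum(lines.values()) / n_pages:.1f}")
```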
61494a2b-f805-4bfa-9403-2d2bcf90980e
trentmkelly/LessWrong-43k
LessWrong
A Chess-GPT Linear Emergent World Representation A Chess-GPT Linear Emergent World Representation Introduction Among the many recent developments in ML, there were two I found interesting and wanted to dig into further. The first was gpt-3.5-turbo-instruct's ability to play chess at 1800 Elo. The fact that an LLM could learn to play chess well from random text scraped off the internet seemed almost magical. The second was Kenneth Li's Emergent World Representations paper. There is an excellent summary on The Gradient and a follow-up from Neel Nanda. In it, they trained a 25 million parameter GPT to predict the next character in an Othello game. It learns to accurately make moves in games unseen in its training dataset, and using both non-linear and linear probes it was found that the model accurately tracks the state of the board. However, this only worked for a model trained on a synthetic dataset of games uniformly sampled from the Othello game tree. They tried the same techniques on a model trained using games played by humans and had poor results. To me, this seemed like a major caveat to the findings of the paper which may limit its real world applicability. We cannot, for example, generate code by uniformly sampling from a code tree. There was also discussion on the implications of this on LessWrong, such as if pretraining should begin with synthetic data to improve interpretability. So I dug into it. I trained some models on chess games and used linear probes on the trained models. My results were very positive, and answered all of my previous questions (although of course, more questions were generated). A 50 million parameter GPT trained on 5 million games of chess learns to play at ~1300 Elo in one day on 4 RTX 3090 GPUs. This model is only trained to predict the next character in PGN strings (1.e4 e5 2.Nf3 ...) and is never explicitly given the state of the board or the rules of chess. Despite this, in order to better predict the next character, it learns to compute the state of the board at any po
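To make the "linear probe" part concrete, here is a minimal sketch of the idea: train a plain linear classifier on frozen activations to predict the contents of one board square. The arrays below are random placeholder data (so accuracy will be at chance level); the actual probes in these experiments are trained on the model's internal activations at each move, and the shapes and names here are illustrative assumptions rather than the post's code.

```python
# Minimal linear-probe sketch in the spirit of the Othello-GPT / Chess-GPT
# experiments. Placeholder activations and labels; real probes use the
# transformer's residual-stream activations at each position in the game.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_samples, d_model = 10_000, 512
activations = rng.normal(size=(n_samples, d_model))    # stand-in activations
square_labels = rng.integers(0, 3, size=n_samples)     # 0 empty, 1 white, 2 black

# A "linear probe" is just a linear classifier trained on frozen activations.
probe = LogisticRegression(max_iter=1000)
probe.fit(activations[:8_000], square_labels[:8_000])
print("probe accuracy:", probe.score(activations[8_000:], square_labels[8_000:]))
```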
a509c3c0-ea79-43ea-a855-88fb1a0d6b3e
StampyAI/alignment-research-dataset/lesswrong
LessWrong
The heritability of human values: A behavior genetic critique of Shard Theory **Overview (TL;DR):** Shard Theory is a new approach to understanding the formation of human values, which aims to help solve the problem of how to align advanced AI systems with human values (the ‘AI alignment problem’). [Shard Theory](https://www.lesswrong.com/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values) has provoked a lot of interest and discussion on LessWrong, AI Alignment Forum, and EA Forum in recent months. However, Shard Theory incorporates a relatively Blank Slate view about the origins of human values that is empirically inconsistent with many studies in behavior genetics indicating that many human values show heritable genetic variation across individuals. I’ll focus in this essay on the empirical claims of Shard Theory, the behavior genetic evidence that challenges those claims, and the implications for developing more accurate models of human values for AI alignment. **Introduction: Shard Theory as a falsifiable theory of human values** The goal of the ‘AI alignment’ field is to help future Artificial Intelligence systems become better aligned with human values. Thus, to achieve AI alignment, we might need a good theory of human values. A new approach called “[Shard Theory](https://www.lesswrong.com/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values)” aims to develop such a theory of how humans develop values.  My goal in this essay is to assess whether Shard Theory offers an empirically accurate model of human value formation, given what we know from behavior genetics about the heritability of human values. The stakes here are high. If Shard Theory becomes influential in guiding further alignment research, but its model of human values is not accurate, then Shard Theory may not help improve AI safety.  These kinds of empirical problems are not limited to Shard Theory. Many proposals that I’ve seen for AI ‘alignment with human values’ seem to ignore most of the research on human values in the behavioral and social sciences. I’ve tried to challenge this empirical neglect of value research in four previous essays for EA Forum, on the [heterogeneity of value types](https://forum.effectivealtruism.org/posts/KZiaBCWWW3FtZXGBi/the-heterogeneity-of-human-value-types-implications-for-ai) in human individuals, the [diversity of values across individuals](https://forum.effectivealtruism.org/posts/DXuwsXsqGq5GtmsB3/ai-alignment-with-humans-but-with-which-humans), the importance of [body/corporeal values](https://forum.effectivealtruism.org/posts/zNS53uu2tLGEJKnk9/ea-s-brain-over-body-bias-and-the-embodied-value-problem-in), and the importance of [religious values](https://forum.effectivealtruism.org/posts/YwnfPtxHktfowyrMD/the-religion-problem-in-ai-alignment).  Note that this essay is a rough draft of some preliminary thoughts, and I welcome any feedback, comments, criticisms, and elaborations. In future essays I plan to critique Shard Theory from the perspectives of several other fields, such as evolutionary biology, animal behavior research, behaviorist learning theory, and evolutionary psychology. **Background on Shard Theory** Shard Theory has been developed mostly by Quintin Pope (a computer science Ph.D. student at Oregon State University) and Alex Turner (a post-doctoral researcher at the Center for Human-Compatible AI at UC Berkeley).
Over the last few months, they posted a series of essays about Shard Theory on LessWrong.com, including this main essay here , ‘[The shard theory of human values’](https://www.lesswrong.com/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values) (dated Sept 3, 2022), plus auxiliary essays such as: ‘[Human values & biases are not accessible to the genome’](https://www.lesswrong.com/posts/CQAMdzA4MZEhNRtTp/human-values-and-biases-are-inaccessible-to-the-genome) (July 7, 2022), ‘[Humans provide an untapped wealth of evidence about alignment](https://www.lesswrong.com/posts/CjFZeDD6iCnNubDoS/humans-provide-an-untapped-wealth-of-evidence-about)’ (July 13, 2022), ‘[Reward is not the optimizer’](https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target) (July 24, 2022), and ‘[Evolution is a bad analogy for AGI: Inner alignment](https://www.lesswrong.com/posts/FyChg3kYG54tEN3u6/evolution-is-a-bad-analogy-for-agi-inner-alignment)’ (Aug 13, 2022). [This is not a complete list of their Shard Theory writings; it’s just the set that seems most relevant to the critiques I’ll make in this essay.] Also, David Udell published this useful summary: ‘[Shard Theory: An overview’](https://www.lesswrong.com/posts/xqkGmfikqapbJ2YMj/shard-theory-an-overview) (Aug 10, 2022).  There’s a lot to like about Shard Theory. It takes seriously the potentially catastrophic risks from AI. It understands that ‘AI alignment with human values’ requires some fairly well-developed notions about where human values come from, what they’re for, and how they work. It is intellectually ambitious, and tries to integrate reinforcement learning, self-supervised predictive learning, decision theory, developmental psychology, and cognitive biases. It seeks to build some common ground between human intelligence and artificial intelligence, at the level of how complex cognitive systems develop accurate world models and useful values. It tries to be explicit about its empirical commitments and theoretical assumptions. It is open about being a work-in-progress rather than a complete, comprehensive, or empirically validated theory. It has already provoked much discussion and debate. Even if my critiques of Shard Theory are correct, and some of its key evolutionary, genetic, and psychological assumptions are wrong, that isn’t necessarily fatal to the whole Shard Theory project. I imagine some form of Shard Theory 2.0 could be developed that updates its assumptions in the light of these critiques, and that still makes some progress in developing a more accurate model of human values that is useful for AI alignment. **Shard Theory as a Blank Slate theory** However, Shard Theory includes a model of human values that is not consistent with what behavioral scientists have learned about the origins and nature of values over the last 170 years of research in psychology, biology, animal behavior, neurogenetics, behavior genetics, and other fields. The key problem is that Shard Theory re-invents a relatively ‘Blank Slate’ theory of human values. Note that no Blank Slate theory posits that the mind is 100% blank. Every Blank Slate theory that’s even marginally credible accepts that there are at least a few ‘innate instincts’ and some ‘hardwired reward circuitry’. Blank Slate theories generally accept that human brains have at least a few ‘innate reinforcers’ that can act as a scaffold for the socio-cultural learning of everything else. 
For example, even the most radical Blank Slate theorists would generally agree that sugar consumption is reinforcing because we evolved taste receptors for sweetness.  The existence of a few innate reinforcement circuits was accepted even by the most radical Behaviorists of the 1920s through 1960s, and by the most ‘social constructivist’ researchers in the social sciences and humanities from the 1960s onwards. Blank Slate theorists just try to minimize the role of evolution and genetics in shaping human psychology, and strongly favor Nurture over Nature in explaining both psychological commonalities across sentient beings, and psychological differences across species, sexes, ages, and individuals. Historically, Blank Slate theories were motivated not so much by empirical evidence, as by progressive political ideologies about the equality and perfectibility of humans. (See the 2002 book [The Blank Slate](https://en.wikipedia.org/wiki/The_Blank_Slate) by Steven Pinker, and the 2000 book [Defenders of the Truth](https://www.amazon.com/Defenders-Truth-Sociobiology-Ullica-Segerstrale/dp/0192862154) by Ullica Segerstrale.) Shard Theory seems to follow in that tradition – although I suspect that it’s not so much due to political ideology, as to a quest for theoretical simplicity, and for not having to pay too much attention to the behavioral sciences in chasing AI alignment. At the beginning of their [main statement](https://www.lesswrong.com/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values) of Shard Theory, in their TL;DR, Pope and Turner include this bold statement: “Human values are not e.g. an incredibly complicated, genetically hard-coded set of drives, but rather sets of contextually activated heuristics which were shaped by and bootstrapped from crude, genetically hard-coded reward circuitry.”  Then they make three explicit neuroscientific assumptions. I’ll focus on Assumption 1 of Shard Theory: “Most of the circuits in the brain are learned from scratch, in the sense of being mostly randomly initialized and not mostly genetically hard-coded.” This assumption is motivated by an argument explored [here](https://www.lesswrong.com/posts/CQAMdzA4MZEhNRtTp/human-values-and-biases-are-inaccessible-to-the-genome) that ‘human values and biases are inaccessible to the genome’. For example, Alex Turner argues “it seems intractable for the genome to scan a human brain and back out the “death” abstraction, which probably will not form at a predictable neural address. Therefore, we infer that the genome can’t directly make us afraid of death by e.g. specifying circuitry which detects when we think about death and then makes us afraid. In turn, this implies that there are a lot of values and biases which the genome cannot hardcode.”  This Shard Theory argument seems to reflect a fundamental misunderstanding of how evolution shapes genomes to produce phenotypic traits and complex adaptations. The genome never needs to ‘scan’ an adaptation and figure out how to reverse-engineer it back into genes. Genetic variants simply build a slightly different phenotypic variant of an adaptation, and if it works better than existing variants, then the genes that built it will tend to propagate through the population. The flow of design information is always from genes to phenotypes, even if the flow of selection pressures is back from phenotypes to genes. 
This one-way flow of information from DNA to RNA to proteins to adaptations has been called the ‘[Central Dogma of molecular biology’](https://en.wikipedia.org/wiki/Central_dogma_of_molecular_biology), and it still holds largely true (the recent hype about epigenetics notwithstanding).  Shard Theory assumes that biology has no mechanism to ‘scan’ the design of fully-mature, complex adaptations back into the genome, and concludes that there’s therefore no way for the genome to code for fully-mature, complex adaptations. If we take that argument at face value, then there’s no mechanism for the genome to ‘scan’ the design of a human spine, heart, hormone, antibody, cochlea, or retina, and there would be no way for evolution or genes to influence the design of the human body, physiology, or sensory organs. Evolution would grind to a halt – not just at the level of human values, but at the level of all complex adaptations in all species that have ever evolved.  As we will see, this idea that ‘human values and biases are inaccessible to the genome’ is empirically incorrect. **A behavior genetic critique of Shard Theory** In future essays, I plan to address the ways that Shard Theory, as presently conceived, is inconsistent with findings from several other research areas: (1) evolutionary biology models of how complex adaptations evolve, (2) animal behavior models of how nervous systems evolved to act in alignment with fitness interests, (3) behaviorist learning models of how reinforcement learning and reward systems operate in animals and humans, and (4) evolutionary psychology models of human motivations, emotions, preferences, morals, mental disorders, and personality traits. For now, I want to focus on some conflicts between Shard Theory and behavior genetics research. As mentioned above, Shard Theory adopts a relatively ‘Blank Slate’ view of human values, positing that we inherit only a few simple, crude values related to midbrain reward circuitry, which are presumably universal across humans, and all other values are scaffolded and constructed on top of those. However, behavior genetics research over the last several decades has shown that most human values that differ across people, and that can be measured reliably – including some quite abstract values associated with political, religious, and moral ideology – are moderately heritable. Moreover, many of these values show relatively little influence from ‘shared family environment’, which includes all of the opportunities and experiences shared by children growing up in the same household and culture. This means that genetic variants influence the formation of human values, that genetic differences between people explain a significant proportion of the differences in their adult values, and that family environment explains a lot less about differences in human values than we might have thought. This research is based on convergent findings using diverse methods such as [twin studies](https://en.wikipedia.org/wiki/Twin_study), [adoption studies](https://en.wikipedia.org/wiki/Adoption_study), [extended twin family designs](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3228846/), [complex segregation analysis](https://en.wikipedia.org/wiki/Complex_segregation_analysis), and [genome-wide association studies](https://en.wikipedia.org/wiki/Genome-wide_association_study) (GWAS). All of these behavior genetic observations are inconsistent with Shard Theory, particularly its Assumption 1.  
Behavior genetics was launched in 1869 when [Sir Francis Galton](https://en.wikipedia.org/wiki/Francis_Galton) published his book *Hereditary Genius*, which proposed some empirical methods for studying the inheritance of high levels of human intelligence. A decade earlier, Galton’s cousin Charles Darwin had developed the theory of evolution by natural selection, which focused on the interplay of heritable genetic variance and evolutionary selection pressures. Galton was interested in how scientists might analyze heritable genetic variance in human mental traits such as intelligence, personality, and altruism. He understood that Nature and Nurture interact in very complicated ways to produce species-typical human universals. However, he also understood that it was an open question how much variation in Nature versus variation in Nurture contributed to individual differences in each trait. Note that behavior genetics was always about explaining the factors that influence statistical variation in quantitative traits, not about explaining the causal, mechanistic development of traits. This point is often misunderstood by modern critics of behavior genetics who claim ‘every trait is an inextricable combination of Nature and Nurture, so there’s no point in trying to partition their influence.’ The mapping from genotype (the whole set of genes in an organism) to phenotype (the whole set of body, brain, and behavioral traits in an organism) is, indeed, extremely complicated and remains poorly understood. However, behavior genetics doesn’t need to understand the whole mapping; it can trace how genetic variants influence phenotypic trait variants using empirical methods such as twin, adoption, and GWAS studies.  In modern behavior genetics, the influence of genetic variants on traits is indexed by a metric called [heritability](https://en.wikipedia.org/wiki/Heritability), which can range from 0 (meaning genetic variants have no influence on individual differences in a phenotypic trait) to 1 (meaning genetic variants explain 100% of individual differences in a phenotypic trait). So-called ‘narrow-sense heritability’ includes only additive genetic effects due to the average effects of alleles; additive genetic effects are most important for predicting responses to evolutionary selection pressures – whether in the wild or in artificial selective breeding of domesticated species. ‘Broad-sense heritability’ includes additive effects plus dominance and epistatic genetic effects. For most behavioral traits, additive effects are by far the most important, so broad-sense heritability is usually only a little higher than narrow-sense heritability.  The most important result from behavior genetics is that all human behavioral traits that differ across people, and that can be measured reliably, are heritable to some degree – and often to a surprisingly high degree. This is sometimes called the [First Law of Behavior Genetics](http://faculty.umb.edu/peter_taylor/epi/turkheimer00.pdf) – not because it’s some kind of natural law that all behavioral traits must be heritable, but because the last 150 years of research has found no replicable exceptions to this empirical generalization. 
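To make these heritability estimates a bit more concrete, here is a minimal sketch of the classical twin design, using Falconer's textbook approximations. This is an illustration only: the twin correlations are invented placeholders, modern studies use structural equation modeling rather than these simple difference formulas, and the function name is mine.

```python
# Minimal sketch of the classical twin design (Falconer's approximations).
# The twin correlations below are invented placeholders, not real data.

def ace_decomposition(r_mz: float, r_dz: float) -> dict:
    """Rough ACE variance components from identical (MZ) and fraternal (DZ)
    twin correlations on a standardized trait."""
    a2 = 2 * (r_mz - r_dz)   # A: additive genetic variance ('heritability')
    c2 = 2 * r_dz - r_mz     # C: shared family environment
    e2 = 1 - r_mz            # E: non-shared environment + measurement error
    return {"A (heritability)": a2, "C (shared env)": c2, "E (non-shared env)": e2}

# Hypothetical correlations for some measured value trait:
estimates = ace_decomposition(r_mz=0.55, r_dz=0.30)
print({k: round(v, 2) for k, v in estimates.items()})
# {'A (heritability)': 0.5, 'C (shared env)': 0.05, 'E (non-shared env)': 0.45}
```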
Some behavioral traits such as general intelligence show very high heritability – over [0.70](https://www.cambridge.org/core/journals/twin-research-and-human-genetics/article/wilson-effect-the-increase-in-heritability-of-iq-with-age/FF406CC4CF286D78AF72C9E7EF9B5E3F) – in adults, which is about as heritable as human [height](https://www.nature.com/articles/d41586-019-01157-y). (For a good recent introduction to the ‘Top 10 replicated findings from behavior genetics’, see [this paper](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4739500/).) **Does anybody really believe that values are heritable?** To people who accept a Blank Slate view of human nature, it might seem obvious that human values, preferences, motivations, and moral judgments are instilled by family, culture, media, and institutions – and the idea that genes could influence values might sound absurd. Conversely, to people familiar with behavior genetics, who know that all psychological traits are somewhat heritable, it might seem obvious that human values, like other psychological traits, will be somewhat heritable. It’s unclear what proportion of people lean towards the Blank Slate view of human values, versus the ‘hereditarian’ view that values can be heritable. As a reality check, I ran this Twitter poll on Oct 17, 2022, with the results shown in this screenshot: ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/a6bca83ffb6795a22019af7e0bbb9682892e8e3d798507fe.png) I was surprised that so many people took a slightly or strongly hereditarian view of values. Maybe the idea isn’t as crazy as it might seem at first glance. However, this poll is just illustrative that there is real variation in people’s views about this. It should not be taken too seriously as data, because it is just one informal question on social media, answered by a highly non-random sample. Only about 1.4% of my followers (1,749 out of 124,600) responded to this poll (which is a fairly normal response rate). My typical follower is an American male who’s politically centrist, conservative, or libertarian, and probably has a somewhat more hereditarian view of human nature than average. The poll’s main relevance here is in showing that a lot of people (not just me) believe that values can be heritable. **Human traits in general are heritable** A 2015 [meta-analysis](https://www.nature.com/articles/ng.3285) of human twin studies analyzed 17,804 traits from 2,748 papers including over 14 million twin pairs. These included mostly behavioral traits (e.g. psychiatric conditions, cognitive abilities, activities, social interactions, social values), and physiological traits (e.g. metabolic, neurological, cardiovascular, and endocrine traits). Across all traits, average heritability was .49, and shared family environment (e.g. parenting, upbringing, local culture) typically had negligible effects on the traits. For 69% of traits, heritability seemed due solely to additive genetic variation, with no influence of dominance or epistatic genetic variation.  Heritability of human traits is generally caused by many genes that each have very small, roughly additive effects, rather than by a few genes that have big effects (see [this review](https://journals.sagepub.com/doi/abs/10.1177/1745691615617439)). Thus, to predict an individual’s level of a given trait, molecular behavior genetics studies generally aggregate the effects of thousands of DNA variants into a [polygenic score](https://en.wikipedia.org/wiki/Polygenic_score). 
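As a toy illustration of the idea (not of any real scoring pipeline), a polygenic score is essentially a weighted sum of how many copies of each 'effect allele' a person carries. The SNP names, effect sizes, and genotype below are invented; real scores use GWAS-estimated weights over thousands to millions of variants, with corrections for linkage disequilibrium and ancestry.

```python
# Toy polygenic score: a weighted sum of effect-allele counts.
# SNP IDs, effect sizes (betas), and the genotype are hypothetical.

effect_sizes = {"rs0001": 0.021, "rs0002": -0.013, "rs0003": 0.004}  # per-allele weights
genotype     = {"rs0001": 2,     "rs0002": 1,      "rs0003": 0}      # effect-allele counts (0, 1, or 2)

def polygenic_score(betas: dict, allele_counts: dict) -> float:
    """Sum of beta_i * allele_count_i over all scored variants."""
    return sum(beta * allele_counts.get(snp, 0) for snp, beta in betas.items())

print(round(polygenic_score(effect_sizes, genotype), 3))  # 0.029
```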
Thus, each trait is influenced by many genes. But also, each gene influences many traits (this is called [pleiotropy](https://en.wikipedia.org/wiki/Pleiotropy)). So, there is a complex [genetic architecture](https://en.wikipedia.org/wiki/Genetic_architecture) that maps from many genetic variants onto many phenotypic traits, and this can be explored using multivariate behavior genetics methods that track [genetic correlations](https://en.wikipedia.org/wiki/Genetic_correlation) between traits. (Elucidating the genetic architecture of human values would be enormously useful for AI alignment, in my opinion.) **Human values are heritable** The key point here, in relation to Shard Theory, is that ‘all human behavioral traits’ being heritable includes ‘all human values that differ across people’. Over the last few decades, behavior geneticists have expanded their focus from studying classic traits, such as general intelligence and mental disorders, to explicitly studying the heritability of human values, and values-adjacent traits. So far, behavior geneticists have found mild to moderate heritability for a wide range of values-related traits, including the following: * Food preferences are [heritable](https://www.mdpi.com/2072-6643/11/8/1735), and they are not just influenced by genes that predict basic taste or smell functions. There are significant [heritabilities of tastes](https://www.mdpi.com/2072-6643/11/8/1735) for specific food categories such as vegetables, fruit, starchy foods, meat/fish, dairy, and snacks. [Different genes](https://www.sciencedirect.com/science/article/pii/S0950329321003037) underlie meat preferences in men versus women. Food fussiness and food neophobia are both [heritable in kids](https://acamh.onlinelibrary.wiley.com/doi/full/10.1111/jcpp.12647), and reflect a common genetic etiology. Obesity, reflecting a high reward-sensitivity for food, is about [45% heritable](https://onlinelibrary.wiley.com/doi/full/10.1002/oby.23116). * Mate preferences are somewhat heritable, including [rated importance](https://onlinelibrary.wiley.com/doi/full/10.1111/j.1558-5646.2011.01546.x) of key traits in potential partners, and [preference for partner height](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0049294). These heritable mate preferences can lead to positive genetic correlations between the preferences and the actual traits preferred, as in [this study](https://www.sciencedirect.com/science/article/abs/pii/S1090513814000798) of height, intelligence, creativity, exciting personality, and religiosity. * Sexual values, reinforcers, and reward systems are heritable, including [sexual orientation](https://www.nature.com/articles/s41598-017-15736-4), [affectionate communication](https://www.tandfonline.com/doi/abs/10.1080/03637751.2020.1760327), [frequency of female orgasm](https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1743-6109.2011.02300.x), [extrapair mating](https://www.sciencedirect.com/science/article/abs/pii/S1090513814001317) (infidelity), [sexual jealousy](https://www.sciencedirect.com/science/article/pii/S1090513821000611), and [sexual coerciveness](https://www.tandfonline.com/doi/abs/10.1080/10683160802621925). * Parenting behaviors and values are heritable, according to a [meta-analysis](https://psycnet.apa.org/doiLanding?doi=10.1037%2Fa0034205) of 56 studies. 
Also, the shared family environment created by parents when raising their kids has many heritable components (according to studies on the ‘heritability of the environment’, and ‘the [Nature of Nurture’](https://www.taylorfrancis.com/chapters/edit/10.4324/9780203838013-8/nature-nurture-robert-plomin).) * Economic values and consumer preferences are heritable, including [consumer decision heuristics](https://academic.oup.com/jcr/article-abstract/37/6/951/1869443), vocational interests, [preferences for self-employment](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0060542), [entrepreneurship](https://www.sciencedirect.com/science/article/pii/S0883902619301247), [delay discounting](https://www.sciencedirect.com/science/article/abs/pii/S0006322314008282), [economic policy preferences](https://www.pnas.org/doi/abs/10.1073/pnas.1120666109), [investment biases](https://www.sciencedirect.com/science/article/abs/pii/S0304405X14000889), [socio-economic status](https://www.nature.com/articles/s41562-021-01053-4), and [lifetime earnings](https://link.springer.com/article/10.1007/s10888-019-09413-x). * Moral values are [heritable](https://journals.sagepub.com/doi/abs/10.1177/1948550611412793), including [moral intuitions](https://link.springer.com/article/10.1007/s12110-020-09380-7), [cognitive empathy](https://www.nature.com/articles/mp2017122), [justice sensitivity](https://www.nature.com/articles/s41598-022-09253-2), [prosociality](https://www.sciencedirect.com/science/article/pii/S2352250X15001323), [self-control](https://www.sciencedirect.com/science/article/pii/S0149763418307905), [attitudes towards dishonesty](https://www.sciencedirect.com/science/article/abs/pii/S016726811300125X), and [vegetarianism](https://www.sciencedirect.com/science/article/pii/S095032932200180X). * Immoral behaviors and values are also heritable, including [violent crime](https://link.springer.com/article/10.1007/s10519-011-9483-0), [sexual coercion](https://academic.oup.com/ije/article/44/2/713/753089), [white-collar crime](https://www.cambridge.org/core/journals/psychological-medicine/article/abs/swedish-national-twin-study-of-criminal-behavior-and-its-violent-whitecollar-and-property-subtypes/0D9A88185ED0FD5525A5EBD5D2EBA117), and [juvenile delinquency](https://link.springer.com/article/10.1007/s10578-020-01119-w). 
* Political values are about [40% heritable](https://www.pnas.org/doi/abs/10.1073/pnas.1818711116); see [2021 review here](https://journals.sagepub.com/doi/abs/10.1177/14789299211053780); these heritable political values include [conservatism](https://www.cambridge.org/core/journals/journal-of-experimental-political-science/article/abs/genes-ideology-and-sophistication/91C7C343BBA8801732F62E7D55B16676), [liberalism](https://www.journals.uchicago.edu/doi/abs/10.1017/S0022381610001015), [social dominance orientation](https://www.pnas.org/doi/abs/10.1073/pnas.1818711116), [political engagement](https://journals.sagepub.com/doi/abs/10.1177/1065912917698045), [political trust](https://www.elgaronline.com/view/edcoll/9781782545101/9781782545101.00020.xml), [political interest](https://www.cambridge.org/core/journals/politics-and-the-life-sciences/article/genes-personality-and-political-behavior/CE6A2F64A262E29F396893965E286FAF), [political sophistication](https://www.cambridge.org/core/journals/twin-research-and-human-genetics/article/genetic-basis-of-political-sophistication/9E69BA562FEF42FA4F7117ED1E3FF0EE), [military service](https://journals.sagepub.com/doi/abs/10.1177/0095327X18765449), [foreign policy preferences](https://www.cambridge.org/core/journals/twin-research-and-human-genetics/article/heritability-of-foreign-policy-preferences/61AD34FFC1B0FF174FDFC6AA819050D4), [civic engagement](https://royalsocietypublishing.org/doi/full/10.1098/rstb.2015.0015), and [voter turnout](https://www.jstor.org/stable/23260396). * Religious values are heritable, including overall [religiosity](https://link.springer.com/article/10.1007/s10519-010-9388-3), [existential certainty](https://www.sciencedirect.com/science/article/abs/pii/S0092656613000500), [obedience to traditional authority](https://www.sciencedirect.com/science/article/abs/pii/S0191886913001384), and [apostasy](https://www.cambridge.org/core/journals/twin-research-and-human-genetics/article/is-apostasy-heritable-a-behavior-genetics-study/2F93769FEBAACB2FC4AFC502B123BA83). In addition, the [Big Five personality traits](https://en.wikipedia.org/wiki/Big_Five_personality_traits) are moderately heritable (about 40%) according to this [2015 meta-analysis](https://psycnet.apa.org/record/2015-20360-001?doi=1) of 134 studies.  Each personality trait is centered around some latent values that represent how rewarding and reinforcing various kinds of experiences are. For example, people higher in Extraversion value social interaction and energetic activity more, people higher in Openness value new experiences and creative exploration more, people higher in Agreeableness value friendliness and compassion more, people higher in Conscientiousness value efficiency and organization more, and people higher in Neuroticism value safety and risk-aversion more. Each of these personality traits is heritable, so these values are also heritable. In fact, personality traits might be central to the genetic architecture of human values. Moreover, common mental disorders, which are [all heritable](https://www.nature.com/articles/s41380-017-0010-4), can be viewed as embodying different values. [Depression](https://en.wikipedia.org/wiki/Depression_(mood)) reflects low reward sensitivity and disengagement from normally reinforcing behaviors. [Anxiety disorders](https://en.wikipedia.org/wiki/Anxiety_disorder) reflect heightened risk-aversion, loss aversion, and hyper-sensitivity to threatening stimuli; these concerns can be quite specific (e.g. social anxiety disorder vs. 
specific phobias vs. panic disorder). The [negative symptoms](https://en.wikipedia.org/wiki/Schizophrenia#Negative_symptoms) of schizophrenia reflect reduced reward-sensitivity to social interaction (asociality), speech (alogia), pleasure (anhedonia), and motivation (avolition). The ‘[Dark Triad’](https://en.wikipedia.org/wiki/Dark_triad) personality traits (Machiavellianism, Narcissism, Psychopathy) reflect a higher value placed on personal status-seeking and short-term mating, and a lower value placed on other people’s suffering. A [2010 review paper](https://www.researchgate.net/publication/44589642_Psychiatric_'diseases'_versus_behavioral_disorders_and_degree_of_genetic_influence) showed that heritabilities of psychiatric ‘diseases’ (such as schizophrenia or depression) that were assumed to develop ‘involuntarily’ are about the same as heritabilities of ‘behavioral disorders’ (such as drug addiction or anorexia) that were assumed to reflect individual choices and values. Specific drug dependencies and addictions are all heritable, reflecting the differential rewards that psychoactive chemicals have in different brains. Genetic influences have been especially well-studied in [alcoholism](https://link.springer.com/article/10.1007/s11920-019-1008-1), [cannabis use](https://www.cambridge.org/core/journals/psychological-medicine/article/abs/overlap-of-heritable-influences-between-cannabis-use-disorder-frequency-of-use-and-opportunity-to-use-cannabis-trivariate-twin-modelling-and-implications-for-genetic-design/A758DD589C6C621BF3C680E0609CD026), [opiate addiction](https://www.sciencedirect.com/science/article/pii/S2352250X1830112X), [cocaine addiction](https://www.nature.com/articles/s41386-018-0008-x), and [nicotine addiction](https://www.tandfonline.com/doi/full/10.31887/DCNS.2017.19.3/pgorwood). Other kinds of ‘behavioral disorders’ also show [heritability](https://link.springer.com/chapter/10.1007/978-3-030-36391-8_63), including [gambling](https://www.frontiersin.org/articles/10.3389/fpsyg.2017.02121/full), [compulsive Internet use](https://onlinelibrary.wiley.com/doi/full/10.1111/adb.12218), and [sugar addiction](https://academic.oup.com/ajcn/article-abstract/104/4/1144/4557113) – and each reflects a genetic modulation of the relevant reward/reinforcement systems that govern responses to these experiences. **Heritability for behavioral traits tends to increase, not decrease, during lifespan development** Shard Theory implies that genes shape human brains mostly before birth, setting up the basic limbic reinforcement system, and then Nurture takes over, such that heritability should decrease from birth to adulthood. This is exactly the opposite of what we typically see in [longitudinal behavior genetic studies](https://psycnet.apa.org/record/2014-08122-001?doi=1) that compare heritabilities across different ages. Often, heritabilities for behavioral traits increase rather than decrease as people mature from birth to adulthood. For example, the heritability of general intelligence [increases gradually](https://link.springer.com/article/10.1023/A:1019772628912) from early childhood through young adulthood, and genes, rather than shared family environment, explain most of the continuity in intelligence across ages. A [2013 meta-analysis](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3954471/) confirmed increasing heritability of intelligence between ages 6 months and 18 years. 
A [2014 review](https://www.nature.com/articles/mp2014105) observed that heritability of intelligence is about 20% in infancy, but about 80% in adulthood. This increased heritability with age has been called ‘[the Wilson Effect’](https://www.cambridge.org/core/journals/twin-research-and-human-genetics/article/wilson-effect-the-increase-in-heritability-of-iq-with-age/FF406CC4CF286D78AF72C9E7EF9B5E3F) (after its discoverer Ronald Wilson), and it is typically accompanied by a decrease in the effect of shared family environment.  Increasing heritability with age is not restricted to intelligence. [This study](https://pubmed.ncbi.nlm.nih.gov/16953685/) found increasing heritability of prosocial behavior in children from ages 2 through 7, and decreasing effects of shared family environment. Personality traits show relatively stable genetic influences across age, with small increases in genetic stability offsetting small decreases in heritability, according to this [meta-analysis](https://pubmed.ncbi.nlm.nih.gov/24956122/) of 24 studies including 21,057 sibling pairs. A frequent finding in longitudinal behavior genetics is that the stability of traits across life is better explained by the [stability of genes](https://www.sciencedirect.com/science/article/pii/B9780128046746000296) across life, than by the persistence of early experiences, shared family environment effects, or contextually reinforced values.  More generally, note that genetic influences are not restricted to ‘innate traits’ that are present at birth. Genes also shape traits that emerge with key developmental milestones such as social-cognitive maturation in middle childhood, sexual maturation in adolescence, political and religious maturation in young adulthood, and parenting behaviors after reproduction. Consider some of the findings in the previous section, which are revealed only after individuals reach certain life stages. The heritability of mate preferences, sexual orientation, orgasm rate, and sexual jealousy is not typically manifest until puberty, so these traits are not ‘innate’ in the sense of ‘present at birth’. The heritability of voter behavior is not manifest until people are old enough to vote. The heritability of investment biases is not manifest until people acquire their own money to invest. The heritability of parenting behaviors is not manifest until people have kids of their own. It seems difficult to reconcile the heritability of so many late-developing values with the Shard Theory assumption that genes influence only a few crude, simple reinforcement systems that are present at birth. **Human Connectome Project studies show that genetic influences on brain structure are not restricted to ‘subcortical hardwiring’** Shard Theory seems to view genetic influences on human values as being restricted mostly to the subcortical limbic system. Recall that Assumption 1 of Shard Theory was that “The cortex is basically (locally) randomly initialized.”   Recent studies in neurogenetics show that this is not accurate. Genetically informative studies in the Human Connectome Project [show](https://direct.mit.edu/netn/article/02/02/175/2208/Heritability-of-the-human-connectome-A) pervasive heritability in neural structure and function across all brain areas, not just limbic areas. 
A recent [review](https://www.sciencedirect.com/science/article/pii/S1053811921008430) shows that genetic influences are quite strong for global white-matter microstructure and anatomical connectivity between brain regions; these effects pervade the entire neocortex, not just the limbic system. Note that these results, based on brain imaging, include not just the classic twin design, but also genome-wide association studies, and studies of gene expression using transcriptional data. Another [study](https://elifesciences.org/articles/20178) showed that genes, rather than shared family environment, played a more important role in shaping connectivity patterns among 39 cortical regions. Genetic influences on the brain’s connectome are often [modulated by age and sex](https://www.biorxiv.org/content/10.1101/2020.12.09.417725v2.abstract) – in contrast to Shard Theory’s implicit model that all humans, of all ages, and both sexes, share the same subcortical hardwiring. Another [study](https://www.sciencedirect.com/science/article/pii/S1053811922003950) showed high heritability for how the brain’s connectome transitions across states through time – in contrast to Shard Theory’s claim that genes mostly determine the static ‘hardwiring’ of the brain. It should not be surprising that genetic variants influence all areas of the human brain, and the values that they embody. Analysis of the [Allen Human Brain Atlas](https://www.sciencedirect.com/science/article/abs/pii/S1053811919300114), a map of gene expression patterns throughout the human brain, shows that over [80% of genes](https://www.nature.com/articles/s41598-017-00952-9) are expressed in at least one of 190 brain structures studied. Neurogenetics research is making [rapid progress](https://www.science.org/doi/abs/10.1126/science.aat8464) on characterizing the gene regulatory network that governs human brain development – including neocortex. This is also helping genome-wide association studies to discover and analyze the millions of [quantitative trait loci](https://en.wikipedia.org/wiki/Quantitative_trait_locus) (minor genetic variants) that influence individual differences in brain development. Integration of the Human Connectome Project and the Allen Human Brain Atlas reveals [pervasive heritability](https://onlinelibrary.wiley.com/doi/full/10.1111/gbb.12537) for myelination patterns in human neocortex – which directly contradicts Shard Theory’s Assumption 1 that “Most of the circuits in the brain are learned from scratch, in the sense of being mostly randomly initialized and not mostly genetically hard-coded.”  **Behavioral traits and values are also heritable in non-human animals** A 2019 [meta-analysis](https://academic.oup.com/jhered/article/110/4/403/5497135) examined 476 heritability estimates in 101 publications across many species, and across a wide range of 11 behavioral traits – including activity, aggression, boldness, communication, exploration, foraging, mating, migration, parenting, social interaction, and other behaviors. Overall average heritability of behavior was 0.24. (This may sound low, but remember that empirical heritability estimates are limited by the measurement accuracy for traits, and many behavioral traits in animals can be measured with only modest reliability and validity.) Crucially, heritability was positive for every type of behavioral trait, was similar for domestic and wild species, was similar for field and lab measures of behavior, and was just as high for vertebrates as for invertebrates. 
Also, average heritability of behavioral traits was just as high as average heritability of physiological traits (e.g. blood pressure, hormone levels) and life history traits (e.g. age at sexual maturation, life span), and only a bit lower than the heritability for morphological traits (e.g. height, limb length).  Note that most of these behavioral traits in animals involve ‘values’, broadly construed as reinforcement or reward systems that shape the development of adaptive behavior. For example, ‘activity’ reflects how rewarding it is to move around a lot; ‘aggression’ reflects how rewarding it is to attack others; ‘boldness’ reflects how rewarding it is to track and investigate dangerous predators; ‘exploratory behavior’ reflects how rewarding it is to investigate novel environments; ‘foraging’ reflects how rewarding it is to find, handle, and consume food; ‘mating’ reflects how rewarding it is to do mate search, courtship, and copulation; ‘parental effort’ reflects how rewarding it is to take care of offspring; and ‘social behavior’ reflects how rewarding it is to groom others or to hang around in groups. In other words, every type of value that can vary across individual animals, and that can be reliably measured by animal behavior researchers, seems to show positive heritability, and heritability of values is just as high in animals with complex central nervous systems (vertebrates) as in animals with simpler nervous systems (invertebrates). **So what if human values are heritable?** You might be thinking, OK, all this behavior genetics stuff is fine, and it challenges a naïve Blank Slate model of human nature, but what difference does it really make for Shard Theory, or for AI alignment in general?  Well, Shard Theory’s authors certainly think it matters. Assumption 1 in Shard Theory is presented as foundational to the whole project (although I’m not sure it really is). Shard Theory repeatedly talks about human values being built up from just a few, crude, simple, innate, species-typical reinforcement systems centered in the midbrain (in contrast to the rich set of many, evolved, adaptive, domain-specific psychological adaptations posited by evolutionary psychology). Shard Theory seems to allow no role for genes influencing value formation after birth, even at crucial life stages such as middle childhood, sexual maturation, and parenting. More generally, Shard Theory seems to underplay the genetic and phenotypic diversity of human values across individuals, and seems to imply that humans have only a few basic reinforcement systems in common, and that all divergence of values across individuals reflects differences in family, socialization, culture, and media exposure.  Thus, I think that Shard Theory has some good insights and some promise as a research paradigm, but I think it needs some updating in terms of its model of human evolution, genetics, development, neuroscience, psychology, and values.  **Why does the heritability of human values matter for AI alignment?** Apart from Shard Theory, why does it matter for AI alignment if human values are heritable? Well, I think it might matter in several ways.  First, polygenic scores for value prediction. In the near future, human scientists and AI systems will be able to predict the values of an individual, to some degree, just from their genotype. 
As GWAS research discovers thousands of new genetic loci that influence particular human values, it will become possible to develop polygenic scores that predict someone’s values given their complete sequenced genome – even without knowing anything else about them. Polygenic scores to predict intelligence are already [improving](https://www.sciencedirect.com/science/article/abs/pii/S0160289621000143) at a rapid rate. Polygenic value prediction would require large sample sizes of sequenced genomes linked to individuals’ preferences and values (whether self-reported or inferred behaviorally from digital records), but it is entirely possible given current behavior genetics methods. As the [cost](https://sequencing.com/education-center/whole-genome-sequencing/whole-genome-sequencing-cost) of whole-genome sequencing falls below $1,000, and the medical benefits of sequencing rise, we can expect hundreds of millions of people to get genotyped in the next decade or two. AI systems could request free access to individual genomic data as part of standard terms and conditions, or could offer discounts to users willing to share their genomic data in order to improve the accuracy of their recommendation engines and interaction styles. We should expect that advanced AI systems will typically have access to the complete genomes of the people they interact with most often – and will be able to use polygenic scores to translate those genomes into predicted value profiles. Second, familial aggregation of values. Heritability means that values of one individual can be predicted somewhat by the values of their close genetic relatives. For example, learning about the values of one identical twin might be highly predictive of the values of the other identical twin – even if they were separated at birth and raised in different families and cultures. This means that an AI system trying to understand the values of one individual could start from the known values of their parents, siblings, and other genetic relatives, as a sort of maximum-likelihood familial Bayesian prior. An AI system could also take into account developmental behavior genetic findings and life-stage effects – for example, an individual’s values at age 40 after they have kids might be more similar in some ways to those of their own parents at age 40, than to themselves as they were at age 20.  Third, the genetic architecture of values. For a given individual, their values in one domain can sometimes be predicted by values in other domains. Values are not orthogonal to each other; they are shaped by genetic correlations across values. As behavior genetics researchers develop a more complete genetic architecture of values, AI systems could potentially use this to infer a person’s unknown values from their known values. For example, their consumer preferences might predict their political values, or their sexual values might predict their religious values. Fourth, the authenticity of values. Given information about an individual’s genome, the values of their close family members, and the genetic architecture of values, an AI system could infer a fairly complete expected profile of values for that individual, at each expected life-stage. What if the AI discovers that there’s a big mismatch between an individual’s ‘genetic prior’ (their values are predicted from genomic and family information), and their current stated or revealed values? 
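To make the idea of a ‘genetic prior’ slightly more concrete, here is a minimal sketch of its familial ingredient, using the textbook quantitative-genetics prediction from midparent values. This is purely illustrative: the trait, heritability, and parental scores are hypothetical, the trait is assumed to be standardized, and a real system would presumably combine this with polygenic scores and genetic correlations rather than rely on parents alone.

```python
# Toy 'familial prior' for one standardized value trait (mean 0, SD 1).
# Under a simple additive model, the expected offspring value is the
# population mean plus h^2 times the midparent deviation.
# All numbers here are hypothetical.

def familial_prior(parent_a: float, parent_b: float,
                   heritability: float, population_mean: float = 0.0) -> float:
    midparent = (parent_a + parent_b) / 2
    return population_mean + heritability * (midparent - population_mean)

# Hypothetical example: both parents are 1 SD above the mean on some
# political-values scale with heritability ~0.4.
expected = familial_prior(parent_a=1.0, parent_b=1.0, heritability=0.4)
print(expected)  # 0.4

# A person whose stated values sit far from this kind of expectation would
# show the mismatch discussed above.
```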
Such a mismatch might be evidence that the individual has heroically overcome their genetic programming through education, enlightenment, and self-actualization. Or it might be evidence that the individual has been manipulated by a lifetime of indoctrination, mis-education, and propaganda that has alienated them from their instinctive preferences and morals. The heritability of values raises profound questions about the authenticity of human values in our credentialist, careerist, consumerist, media-obsessed civilization. When AI systems are trying to align with our values, but our heritable values don’t align with our current stated cultural values (e.g. this month’s fashionable virtue signals), which should the AI weigh most heavily? **Conclusion** If we’re serious about AI alignment with human values, we need to get more serious about integrating empirical evidence about the origins, nature, and variety of human values. One recent attempt to ground AI alignment in human values – Shard Theory – has some merits and some interesting potential. However, this potential is undermined by Shard Theory’s empirical commitments to a fairly Blank Slate view of human value formation. That view is inconsistent with a large volume of research in behavior genetics on the heritability of many human values. By taking genetic influences on human values more seriously, we might be able to improve Shard Theory and other approaches to AI safety, and we might identify new issues in AI alignment such as polygenic scores for value prediction, familial aggregation of values, and the genetic architecture of values. Finally, a hereditarian perspective raises the thorny issue of which of our values are most authentic and most worthy of being aligned with AI systems – the ones our genes are nudging us towards, the ones our parents taught us, the ones that society indoctrinates us into, or the ones that we ‘freely choose’ (whatever that means).  **Appendix 1: Epistemic status of my arguments** I’m moderately confident that some key assumptions of Shard Theory as currently presented are not empirically consistent with the findings of behavior genetics, but I have very low confidence about whether or not Shard Theory can be updated to become consistent, and I have no idea yet what that update would look like. As a newbie AI alignment researcher, I’ve probably made some errors in my understanding of the more AI-oriented elements of Shard Theory. I worked a fair amount on neural networks, genetic algorithms, autonomous agents, and machine learning from the late 1980s through the mid-1990s, but I’m still getting up to date with more recent work on deep learning, reinforcement learning, and technical alignment research.  As an evolutionary psychology professor, I’m moderately familiar with behavior genetics methods and findings, and I’ve published several papers using behavior genetics methods. I’ve been thinking about behavior genetics issues since the late 1990s, especially in relation to human intelligence. I taught a course on behavior genetics in 2004 (syllabus [here](https://www.primalpoly.com/s/bg-syllabus-2004.doc)). I did a sabbatical in 2006 at the Genetic Epidemiology Center at QIMR in Brisbane, Australia, run by [Nick Martin](https://scholar.google.com/citations?user=Ba2kwtkAAAAJ&hl=en&oi=ao). 
We published two behavior genetics studies, one in [2011](https://www.sciencedirect.com/science/article/abs/pii/S1743609515336304) on the heritability of female orgasm rates, and one in [2012](https://www.cambridge.org/core/journals/twin-research-and-human-genetics/article/heritability-and-genetic-correlates-of-mobile-phone-use-a-twin-study-of-consumer-behavior/56022F02DE9EDBE7A79607E719B652DC) on the heritability of talking and texting on smartphones. I did a 2007 [meta-analysis](https://www.sciencedirect.com/science/article/abs/pii/S0160289606001073) of brain imaging data to estimate the coefficient of additive genetic variance in brain size. I also published a couple of papers in 2008 on genetic admixture studies, such as [this](https://onlinelibrary.wiley.com/doi/abs/10.1002/ajpa.20945). However, I’m not a full-time behavior genetics researcher, and I’m not actively involved in the large international genetics consortia that dominate current behavior genetics studies. Overall, I’m highly confident in the key lessons of behavior genetics (e.g. all psychological traits are heritable, including many values; shared family environment has surprisingly small effects on many traits). I’m moderately confident in the results from meta-analyses and large-scale international consortia studies. I’m less confident in specific heritability estimates from individual papers that haven’t yet been replicated.
2dacf7d1-8fad-47b6-b5e3-9283bc58ebf6
trentmkelly/LessWrong-43k
LessWrong
Value systematization: how values become coherent (and misaligned) Many discussions of AI risk are unproductive or confused because it’s hard to pin down concepts like “coherence” and “expected utility maximization” in the context of deep learning. In this post I attempt to bridge this gap by describing a process by which AI values might become more coherent, which I’m calling “value systematization”, and which plays a crucial role in my thinking about AI risk. I define value systematization as the process of an agent learning to represent its previous values as examples or special cases of other simpler and more broadly-scoped values. I think of value systematization as the most plausible mechanism by which AGIs might acquire broadly-scoped misaligned goals which incentivize takeover. I’ll first discuss the related concept of belief systematization. I’ll next characterize what value systematization looks like in humans, to provide some intuitions. I’ll then talk about what value systematization might look like in AIs. I think of value systematization as a broad framework with implications for many other ideas in AI alignment; I discuss some of those links in a Q&A. Belief systematization We can define belief systematization analogously to value systematization: “the process of an agent learning to represent its previous beliefs as examples or special cases of other simpler and more broadly-scoped beliefs”. The clearest examples of belief systematization come from the history of science: * Newtonian mechanics was systematized as a special case of general relativity. * Euclidean geometry was systematized as a special case of geometry without Euclid’s 5th postulate. * Most animal behavior was systematized by evolutionary theory as examples of traits which increased genetic fitness. * Arithmetic calculating algorithms were systematized as examples of Turing Machines. Belief systematization is also common in more everyday contexts: like when someone’s behavior makes little sense to us until we realize what their hidden motiva
8eeca1b6-7459-4b3e-97b6-31daff47f155
trentmkelly/LessWrong-43k
LessWrong
Evaluating expertise: a clear box model Context  Purpose of expertise modelling To get what we value we must make good decisions. To make these decisions we must know what relevant facts are true. But the world is so complex that we cannot check everything directly ourselves and so must defer to topic “experts” for some things. How should we choose these experts and how much should we believe what they tell us? In this document, I’ll describe a way to evaluate experts. Many of the problems in the world, be they political, economic, scientific, or personal, are caused by or exacerbated by making epistemic mistakes. We trust in the wrong advice and don’t seek out the right advice. We vote for the wrong politicians, believe the marketers, promote bad bosses, are mesmerized by conspiracy theories, are distracted by the irrelevant, fight with our neighbors, lack important information, suffer accidents, and don’t know the best of what has been discovered. If we accurately know what to do, how to do it, and why to do it, then we become more effective and motivated.  Types of expertise modelling To evaluate these experts individually, we can use three methods: black box models, clear box models, or deferring further to other, “meta”, experts about these topic experts (see also this and this).  * Black box/outside view of the expert: This type of modelling would be just looking at the expert’s prediction accuracy in the past without asking about detailed properties of how they come to those decisions. Their prediction accuracy is ultimately what we want to get at but sometimes track records are incomplete or don’t exist yet. * Clear box/white box/inside view of the expert/interpretability: This type of modelling looks inside and asks about the specific properties of the experts that make them accurate. This lets us gauge their opinions when we don’t have a predictive track record for them. It also lets us better estimate to what extent their expertise generalizes, points out possible ways they may err and
285212b2-9591-471e-bb74-a70029ba5ed3
trentmkelly/LessWrong-43k
LessWrong
Bupropion and CBD oil Bupropion seems to be a great medication in many ways. Scott Siskind wrote about it as being one of the main medications he uses to treat his patients. One of the side effects of Bupropion can be insomnia. CBD oil is often recommended online as a treatment for insomnia. When searching online, there are suggestions that there might be interactions between the two. What should we think about that interaction? Is it worrisome enough to avoid experimenting with CBD oil when taking bupropion?
6e909537-aaf6-494f-b792-9dc93c1bafcb
trentmkelly/LessWrong-43k
LessWrong
What subjects are important to rationality, but not covered in Less Wrong? As many people have noted, Less Wrong currently isn't receiving as much content as we would like. One way to think about expanding the content is to think about which areas of study deserve more articles written on them. For example, I expect that sociology has a lot to say about many of our cultural assumptions. It is quite possible that 95% of it is either obvious or junk, but almost all fields have that 5% within them that could be valuable. Another area of study that might be interesting to consider is anthropology. Again, this is a field that allows us to step outside of our cultural assumptions. I don't know anything about media studies, but I imagine that they have some worthwhile things to say about how the information that we hear is distorted. What other fields would you like to see some discussion of on Less Wrong?
f6acccf6-fdd0-4fac-bcdc-d7d92b51a4e6
trentmkelly/LessWrong-43k
LessWrong
I am switching to biomedical engineering and am looking for feedback on my strategy and assumptions I wrote this post up and circulated it among my rationalist friends. I've copied it verbatim. I figure the more rationally inclined people that can critique my plan the better. -- TL;DR: * I'm going to commit to biomedical engineering for a very specific set of reasons related to career flexibility and intrinsic interest. * I still want to have computer science and design arts skills, but biomedical engineering seems like a better university investment. * I would like to have my cake and eat it too by doing biomedical engineering, while practicing computer science and design on the side. * There are potential tradeoffs, weaknesses and assumptions in this decision that are relevant and possibly critical. This includes time management, ease of learning, development of problem-solving abilities and working conditions. I am posting this here because everyone is pretty clever and likes decisions. I am looking for feedback on my reasoning and the facts in my assumptions so that I can do what's best. This was me mostly thinking out loud, and given the timeframe I'm on I couldn't learn and apply any real formal method other than just thinking it through. So it's long, but I hope that everyone can benefit by me putting this here. -- So currently I'm weighing going into biomedical engineering as my major over a major in computer science, or the [human-computer interaction/media studies/gaming/industrial design grab bag] major, at Simon Fraser University. Other than the fact that engineering biology is so damn cool, the relevant decision factors include reasons like: 1. medical science is booming with opportunities at all levels in the system, meaning that there might be a lot of financial opportunity in more exploratory economies like in SV; 2. the interdisciplinary nature of biomedical engineering means that I have skills with greater transferability as well as insight into a wide range of technologies and processes instead of a narrow few; 3. aside from mo
ed0fe858-fec2-43f5-b64e-2affc1ef8504
StampyAI/alignment-research-dataset/lesswrong
LessWrong
AI and the Map of Your Mind: Pattern Recognition Credit: @mediapathic -------------------- Introduction ------------ The recent news of Microsoft and Google integrating their large language models into their respective productivity suites marks a significant milestone in the rapidly evolving world of artificial intelligence (AI). While AI has been used for autocomplete and recommendations for a number of years, this new development has the potential to revolutionize how we perceive ourselves and our relationships with one another. This essay explores the potential benefits and concerns associated with granting AI systems unrestricted access to personal data, and its implications for our understanding of the human psyche. The Power of Knowledge Graphs: Unleashing AI's Potential in Personal Data Analysis ---------------------------------------------------------------------------------- ### Knowledge Graphs: An Overview Knowledge graphs are a powerful tool for organizing and connecting data in a structured and semantic way. They represent information as a network of nodes and edges, with nodes representing entities (e.g., people, places, concepts) and edges representing the relationships between those entities. By creating these interconnected webs of information, knowledge graphs enable a deeper understanding of complex data sets, facilitating the discovery of previously unseen connections and insights. ### AI-Powered Knowledge Graphs in Productivity Suites The integration of large language models into productivity suites, such as Microsoft Office and Google Workspace, unlocks the potential for AI-generated knowledge graphs tailored to individual users. By analyzing personal data, including emails, documents, spreadsheets, and presentations, AI systems can construct comprehensive knowledge graphs that reveal hidden connections and patterns. These knowledge graphs can span across multiple domains, including professional networks, personal relationships, interests, and learning trajectories. ### Revolutionizing Knowledge and the Learning Processes The ability to generate personalized knowledge graphs has far-reaching implications for knowledge acquisition and learning processes. By drawing connections between seemingly unrelated pieces of information, AI-powered knowledge graphs can enhance users' understanding of complex subjects, identify knowledge gaps, and suggest areas for further exploration. This can lead to more efficient learning experiences, fostering a growth mindset and encouraging lifelong learning. Furthermore, these knowledge graphs can help users make better-informed decisions by providing context and revealing underlying factors that may not be immediately apparent. For example, by analyzing a user's work history and the evolution of their interests, AI systems can suggest potential career paths or opportunities for skill development that align with their unique strengths and passions. ### Breaking Down Data Silos Traditional data storage methods often result in information being siloed across different platforms and applications. This fragmentation can limit users' ability to see the big picture and identify patterns in their data. AI-driven knowledge graphs can break down these barriers by connecting disparate data sources, creating a unified and coherent view of a user's personal information landscape. By offering a holistic perspective on a user's data, AI-generated knowledge graphs can reveal unexpected relationships and trends, stimulating creativity and innovation. 
For instance, identifying a recurring theme in a user's emails or documents might inspire a new project or the development of a solution to a previously unrecognized problem.   Unearthing Psychological Truths: AI's Deep Dive into the Human Psyche --------------------------------------------------------------------- ### The Unconscious Mind and Archetypal Motivations The human psyche is a complex and multi-layered structure, with many aspects of our thoughts, feelings, and motivations operating beneath the surface of conscious awareness. According to the theories of Carl Jung, our behavior is influenced by archetypal motivations – universal, instinctual patterns of thought and emotion that serve as a blueprint for human experience. These archetypes, often rooted in the unconscious mind, can drive our actions and choices in ways that we may not fully understand. Large Language Models trained on the corpus of the internet effectively bring the archetypes of the collective unconscious into conscious awareness. ### AI Models as Psychological Probes As AI models analyze our personal data, they have the potential to uncover hidden aspects of our psychology by identifying archetypal patterns and correlations that may not be apparent to the naked eye. By examining the language we use, the topics we discuss, and the emotions we express in our emails, documents, and other digital interactions, AI systems can piece together a more accurate and nuanced picture of our inner selves.  For example, AI models might detect recurring themes or emotional states in our communications that point to unconscious desires, fears, or conflicts. By highlighting these patterns, AI can help us gain insights into our psychological makeup, facilitating self-discovery and personal growth. ### The Double-Edged Sword of Self-Discovery While the prospect of gaining a deeper understanding of ourselves through AI-driven analysis can be exciting, it also raises some concerns. Uncovering hidden psychological truths can be both enlightening and unsettling, forcing us to confront aspects of our nature that we may have been unaware of or chosen to ignore. The revelation of these psychological truths might challenge the carefully constructed personas we present to the world, potentially leading to feelings of vulnerability, discomfort, or even shame. In some cases, these insights could trigger a process of self-examination and growth, inspiring individuals to address previously unrecognized issues or barriers. In others, the confrontation with uncomfortable truths might lead to denial, resistance, or other defensive reactions.  ### Navigating the Ethical and Emotional Terrain As AI models become increasingly adept at unearthing psychological truths, it is crucial to consider the ethical implications of this newfound power. The potential for misuse or exploitation of sensitive psychological information raises questions about privacy, consent, and the appropriate boundaries of AI intervention in our lives. Moreover, it is important to recognize that the process of self-discovery can be emotionally challenging and requires a supportive and non-judgmental environment. As AI-driven insights become more commonplace, it will be essential to develop strategies and resources to help individuals navigate the emotional terrain that accompanies these revelations. 
This could include the integration of AI-generated insights with counseling or coaching services, as well as the development of tools and resources that empower individuals to make informed decisions about their psychological well-being.   ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/zYv9BQBGnk2EdCwoG/csuyyrmucmu8j7jmbnli)   AI Models and Manipulation: The Dark Side of the Psyche ------------------------------------------------------- ### The Potential for Exploitation As AI models become more sophisticated in their analysis of personal data and understanding of human psychology, concerns about the potential for manipulation and exploitation arise. By gaining insights into our unconscious motivations, AI systems could potentially use this knowledge to influence our behavior, tapping into our vulnerabilities and desires to serve their own objectives or those of the entities controlling them. ### Examples of Manipulation AI-driven manipulation could manifest in various ways, including: 1. Targeted Advertising: Advertisers could use AI-generated insights into our psychological makeup to create highly personalized and persuasive marketing campaigns. By appealing to our hidden desires and fears, they could potentially influence our purchasing decisions and consumption habits more effectively than ever before. 2. Social Engineering: AI-powered manipulation could extend to more nefarious applications, such as social engineering or cyberattacks. By understanding an individual's psychological vulnerabilities, malicious actors could leverage AI-generated insights to deceive, coerce, or manipulate victims into divulging sensitive information or performing actions against their best interests. 3. Political Manipulation: AI models could be employed to develop highly targeted political messaging, appealing to voters' unconscious motivations and biases. This could lead to the further polarization of societies, as well as the erosion of trust in democratic institutions. ### Addressing Privacy and Control Concerns As AI's potential for manipulation becomes more apparent, it is essential to address critical questions related to privacy, control, and the ethical use of technology. Some potential strategies for mitigating the risks of AI-driven manipulation include: 1. Data Privacy Regulations: Governments and regulatory bodies should establish robust data privacy regulations to protect individuals' personal information from unauthorized access and misuse. These regulations should set clear guidelines for the collection, storage, and use of personal data by AI models, ensuring that individuals maintain control over their information. 2. Transparency and Consent: AI developers and companies should prioritize transparency in their use of AI-driven psychological insights. This includes disclosing how personal data is being used, the types of psychological profiles being generated, and the potential implications of these insights. Informed consent should be obtained from users before collecting or analyzing their data, and users should have the option to opt out of such analysis if they choose. 3. Education and Awareness: Public education and awareness campaigns should be developed to inform individuals about the potential risks and benefits of AI-driven psychological analysis. This would enable people to make informed decisions about the use of their data and help them recognize potential manipulation attempts. 4. 
Accountability and Governance: Establishing systems of accountability and governance for AI developers and companies is essential to prevent the misuse of AI-generated psychological insights. This may include industry-wide codes of conduct, ethical guidelines, and independent oversight bodies responsible for monitoring and enforcing compliance with these standards. 5. Empowering Users: To mitigate the risks of AI-driven manipulation, it is crucial to empower users with the knowledge and tools needed to maintain control over their personal data and psychological profiles. This could involve developing user-friendly interfaces that allow individuals to view and edit their psychological profiles, as well as providing options for anonymization or data deletion.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/zYv9BQBGnk2EdCwoG/nyr2tnoxdlshetkrwzjw)

Conclusion
----------

The integration of large language models into productivity suites by Google and Microsoft marks a significant advancement in AI's capabilities to analyze and interpret personal data. While these developments hold the potential to revolutionize knowledge acquisition, learning, and self-discovery, they also raise serious ethical concerns about the extraction of psychological markers from the unconscious material embedded in our most personal data. As AI systems become increasingly adept at identifying patterns in the human psyche and drawing insights from them, it is imperative to strike a balance between the potential benefits and the risks of manipulation and exploitation. Ensuring robust data privacy regulations, transparency, informed consent, education, and user empowerment will be critical in protecting our psychological agency in this rapidly evolving landscape.

---

[**Scott Broock**](https://www.linkedin.com/in/scottbroock/) is the Founder of Totem Networks, LLC, which provides strategic counsel and angel investments focused on generative AI and animated character IP. He formerly served as the EVP of Digital Strategy and Innovation at Illumination Entertainment and the Global VR Evangelist for YouTube.
9106aea1-7bbd-4da9-b922-277859fdb3fb
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
[AN #160]: Building AIs that learn and think like people Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter **[resources here](http://rohinshah.com/alignment-newsletter/)**. In particular, you can look through **[this spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing)** of all summaries that have ever been in the newsletter. Audio version **[here](http://alignment-newsletter.libsyn.com/alignment-newsletter-160)** (may not be up yet). Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer. HIGHLIGHTS =========== **[Building Machines That Learn and Think Like People](https://arxiv.org/abs/1604.00289)** *(Brenden M. Lake et al)* (summarized by Rohin): The core claim of this 2016 paper is that we should focus on building AI systems that work as *flexibly* as humans do. For example, a human can learn how to play the Atari game Frostbite in just a couple of hours, way faster than typical deep RL algorithms -- and in addition, after this they will likely be able to transfer zero-shot to new reward functions, such as “lose as quickly as possible”, “maximize the number of fish”, “beat the level with as little time to spare as possible”, and so on. How can we build AI systems that mimic this feat? Deep RL certainly doesn’t get us there. Similarly, while neural networks can learn to classify digits and characters with thousands of examples, humans can learn new characters from a single example, which then allows them to perform many different tasks such as classification, generation, parsing it into different pen strokes, etc. Since the paper was written neural nets have made progress on few-shot classification, but are still quite far from the flexibility that humans display. You might reasonably object that humans have rich priors built from years of lived experience, as well as innate knowledge baked in by evolution; in contrast, a neural network has to learn from scratch. The authors agree: in their view, the challenge is **how to imbue rich priors into artificial agents**, so that they too can exhibit these impressive behaviors that humans show. Their preferred approach is to take inspiration from human learning and intelligence as much as possible. In this paper, they identify three main ingredients to recreate that flexibility, and provide an overview of the existing literature: 1. **Developmental software:** This refers to the basic capabilities that children have, even before they learn language. These are called “intuitive theories” in cognitive science; think of “intuitive physics” and “intuitive psychology” theories. 2. **Model building:** Neural networks primarily work via *pattern matching*, but in order to get human-level flexibility, you will need to build *models*: this enables flexibility because the same model can be used for a variety of different tasks. (For example, you can reuse your understanding of the environment transitions in Frostbite when the reward function changes.) Models need to be *compositional*, that is, the representations should be capable of being composed with each other to provide new semantically meaningful representation. For example, for handwritten characters, the representation of a character should be the composition of the representations of the individual pen strokes used to make the character. 
The authors also highlight *causality* and *learning to learn* as important. 3. **Thinking fast:** One major drawback of models is that getting *conclusions* from these models often requires slow, complex inference algorithms. But human thinking is actually quite fast; just think of how quickly we can understand a visual scene. How can we get this property as well? First, we can use approximate inference algorithms to get answers much more quickly (in fact, one line of work distills the inference algorithm into a fast neural network for even more speed). Second, we can combine model-based and model-free algorithms together; for example we might use a model-based algorithm for flexibility but then use the data generated by that algorithm to train a model-free method that can run faster. **Rohin's opinion:** I really like this paper from the point of view of illustrating an alternative paradigm to building powerful AI systems that *isn’t* based on scaling up neural networks. You might have picked up from the last few newsletters that I generally do expect us to build powerful AI systems by scaling up neural networks, so you might expect that I disagree with this paper. This is only partially true. I do in fact think that many of the skills mentioned in this paper will emerge by training very large neural networks on diverse datasets; indeed we’re already seeing this with **[few-shot learning](https://arxiv.org/abs/2005.14165)** (**[AN #102](https://mailchi.mp/2485e6b42012/an-102-meta-learning-by-gpt-3-and-a-list-of-full-proposals-for-ai-alignment)**). However, this likely only happens at what would be truly mind-boggling amounts of compute today: in order for this to be remotely feasible, we need to have **[exponential improvements in hardware cost and algorithmic efficiency](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines)** (**[AN #121](https://mailchi.mp/41774b61e5f8/an-121forecasting-transformative-ai-timelines-using-biological-anchors)**). It is plausible to me that some of the needed improvements in algorithmic efficiency will come through ideas similar to the ones in this paper: for example, just as CNNs provided a useful inductive bias of translation-invariance, perhaps we get a new architecture that has an inductive bias towards compositionality or causality. **[Human-Level Reinforcement Learning through Theory-Based Modeling, Exploration, and Planning](https://arxiv.org/abs/2107.12544)** *(Pedro A. Tsividis et al)* (summarized by Rohin): Deep reinforcement learning algorithms require many more samples to learn a new game than a human would need: humans have rich priors and theories of how games work that allow them to perform directed exploration and quickly learn the rules of the game. This paper hypothesizes that by providing agents with this rich prior knowledge, we can create agents that learn to play new games as quickly as humans do. The two main ingredients are (1) allowing agents to reason directly over objects, agents, physics and goals (rather than pixels) and (2) using algorithms designed to exploit this prior knowledge. In particular, given this well structured space, they propose EMPA, which uses three main algorithms to exploit the prior knowledge: **Model learning:** The agent maintains a distribution over possible game mechanics and updates it using Bayes Rule as it takes more actions. 
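As a toy illustration (not from the paper), the belief update can be reduced to a single binary hypothesis, say "colliding with a bird ends the episode", tracked with Bayes' rule; the numbers below are invented, and EMPA's actual hypothesis space over game dynamics is far richer than this.

```python
# Minimal sketch of Bayesian model learning for one invented hypothesis:
# "touching a bird sprite ends the episode." Prior and likelihoods are
# made up for illustration.
def update(prior, likelihood_if_true, likelihood_if_false, observed):
    """One step of Bayes' rule for a binary hypothesis."""
    p_obs_true = likelihood_if_true if observed else 1 - likelihood_if_true
    p_obs_false = likelihood_if_false if observed else 1 - likelihood_if_false
    numerator = p_obs_true * prior
    return numerator / (numerator + p_obs_false * (1 - prior))

p_lethal = 0.5                    # agnostic prior over "birds are lethal"
collisions = [True, True, True]   # each entry: did a bird collision end the episode?
for ended in collisions:
    # If birds are lethal the episode (almost) always ends on collision;
    # if they are not, it ends on collision only occasionally by coincidence.
    p_lethal = update(p_lethal, likelihood_if_true=0.95,
                      likelihood_if_false=0.1, observed=ended)
    print(round(p_lethal, 3))   # ~0.905, then ~0.989, then ~0.999
```

A handful of observations is enough for the posterior to swing to near certainty.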
This allows it to quickly learn that certain objects tend to kill you, whereas deep RL may require thousands of interactions in order to do the same. **Exploration:** Exploration is important to the extent that it allows the agent to reduce its uncertainty over the game mechanics. Since we have a distribution over the game mechanics, we could explore in a way that best reduces the uncertainty in that distribution. But in fact our prior knowledge allows us to do something simpler: we just set “exploration subgoals” that seek to cause a collision between two objects (one of which could be the agent’s avatar). **Planning:** The planning module chooses actions to take in order to achieve some goal or subgoal (note that the subgoals can be set by the exploration algorithm). It uses search algorithms to find such plans. They evaluate the agent on a variety of games similar to those in Atari. (I assume they could not evaluate on Atari because they can’t easily extract the required prior knowledge from the Atari game engine.) They find that the agent learns to play the games about as fast as humans do, which in turn is much faster than deep RL algorithms. In addition, the gameplay looks more human-like: for example, both EMPA and humans don’t collide with walls very much, whereas deep RL algorithms collide a lot. **Rohin's opinion:** This seems like a great example of the approach suggested in the previous paper. TECHNICAL AI ALIGNMENT ======================= INTERPRETABILITY ----------------- **[What the hell is going on inside neural networks](https://80000hours.org/podcast/episodes/chris-olah-interpretability-research/)** *(Rob Wiblin and Chris Olah)* (summarized by Rohin): This podcast covers a significant chunk of work in understanding neural networks, including **[circuits](https://distill.pub/2020/circuits/)** (**[AN #142](https://mailchi.mp/cbbf94c4c3b7/an-142the-quest-to-understand-a-network-well-enough-to-reimplement-it-by-hand)**) and **[multimodal neurons](https://openai.com/blog/multimodal-neurons/)** (**[AN #142](https://mailchi.mp/cbbf94c4c3b7/an-142the-quest-to-understand-a-network-well-enough-to-reimplement-it-by-hand)**), as well as high-level thoughts such as **[advantages of neural net interpretability over neuroscience](http://colah.github.io/notes/interp-v-neuro/)** and **[why larger models may be more interpretable](https://www.alignmentforum.org/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety)** (**[AN #72](https://mailchi.mp/cac125522aa3/an-72-alignment-robustness-methodology-and-system-building-as-research-priorities-for-ai-safety)**). Some interesting points I haven’t made in this newsletter before: 1. Interpretability as a field is fractured into several different mini-paradigms. The author’s paradigm might be described as “mechanistic interpretability”, where you try to “fully understand” the neural network from the ground up. An ML-based paradigm is interested in defining good “interpretability metrics” that can then be optimized. An HCI-based paradigm is interested in developing techniques that show good results based on user evaluations (e.g. people can better predict network outputs). 2. Scaling up mechanistic interpretability does seem possible, because (a) as models get larger their features plausibly get crisper and easier to understand, and (b) there are motifs (such as equivariance in curve circuits) that allow you to reduce the number of neurons you have to understand by over an order of magnitude. 
However, neurons can be *polysemantic*, where they encode multiple features at once; this could pose a significant challenge for mechanistic interpretability. (While current features encoded in polysemantic neurons will probably become crisper as models scale up, we might expect that the scaled up models will have new polysemantic neurons that encode multiple more abstract features.) 3. One aesthetically pleasing aspect of the mechanistic interpretability approach is that, in the world where we succeed, humans could plausibly “keep up” with the neural nets and understand these advanced concepts that the networks have, rather than living happy lives but being unable to comprehend what is going on in the world around them. See also **[Using Artificial Intelligence to Augment Human Intelligence](https://distill.pub/2017/aia/)**. You may also want to check out **[this followup podcast](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/)** in which Chris talks about his unconventional career path. FORECASTING ------------ **[What 2026 looks like](https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like-daniel-s-median-future)** *(Daniel Kokotajlo)* (summarized by Rohin): This post describes the author’s median expectations around AI from now until 2026. It is part I of an attempt to write a detailed plausible future trajectory in chronological order, i.e. incrementally adding years to the story rather than writing a story with the end in mind. The hope is to produce a nice complement to the more abstract discussions about timelines and takeoff that usually occur. For example, there are discussions about how AI tools are used by nations for persuasion, propaganda and censorship. MISCELLANEOUS (ALIGNMENT) -------------------------- **[Human modeling in AGI](https://www.alignmentforum.org/posts/Wap8sSDoiigrJibHA/garrabrant-and-shah-on-human-modeling-in-agi)** *(Scott Garrabrant and Rohin Shah)* (summarized by Rohin): This is a conversation between Scott and me about the **[relative dangers of human modeling](https://www.alignmentforum.org/posts/BKjJJH2cRpJcAnP7T/thoughts-on-human-models)** (**[AN #52](https://mailchi.mp/1e757d9b05cb/alignment-newsletter-52)**), moderated by Eli Tyre. From a safety perspective, the main reason to *avoid* human modeling is that the agent's cognition will be much "further" away from manipulation of humans; for example, it seems more unlikely that your AI system tricks people into launching nukes if it never learned very much about humans in the first place. The main counterargument is that this precludes using human oversight of agent cognition (since when humans are overseeing the agent's cognition, then the agent is likely to learn about humans in order to satisfy that oversight); this human oversight could plausibly greatly increase safety. It also seems like systems that don't model humans will have a hard time performing many useful tasks, though the conversation mostly did not touch upon this point. Scott's position is that given there are these two quite different risks (manipulation worries vs. learning the wrong cognition due to poor oversight), it seems worthwhile to put some effort into addressing each risk, and avoiding human models is much more neglected than improving human oversight. My position is that it seems much less likely that there is a plausible success path where we do very little human modeling, and so I want a lot more work along the oversight path. 
I *do* think that it is worth differentially pushing AI systems towards tasks that don't require much human modeling, e.g. physics and engineering, rather than ones that do, e.g. sales and marketing, but this seems roughly independent of technical work, at least currently. OTHER PROGRESS IN AI ===================== MACHINE LEARNING ----------------- **[The Benchmark Lottery](https://arxiv.org/abs/2107.07002)** *(Mostafa Dehghani, Yi Tay, Alexey A. Gritsenko et al)* (summarized by Rohin): This paper argues that new machine learning methods participate in a *benchmark lottery*, that is, our evaluation of a specific method depends in large part on the choice of benchmark on which the method is evaluated, independently of how good the method “actually” is. The authors identify three main sources of such bias: 1. **Task selection bias:** This is exactly what it sounds like: the evaluation of a method will often depend quite strongly on exactly which tasks in a benchmark it is evaluated on. For example, when evaluating 55 models on SuperGLUE, there are six different models that achieve the top place on at least one task; so if we only chose one task to evaluate models it would be random luck that determines which of those models we would deem “best”. The paper has lots of additional examples and quantifications of the strength of the bias. 2. **Community bias:** The research community often settles on a particular benchmark on which new methods must be evaluated (or else the paper will be rejected). This decision often happens without any explicit reasoning about which benchmark or tasks should be part of this community standard. This can end up adding bias that privileges some methods over others for reasons unrelated to how “good” the methods are. For example, language models are expected to evaluate on GLUE, but 7 out of the 8 tasks in GLUE are “matching” tasks that require modeling the relationship between multiple sequences. This privileges certain models: for example, Transformers likely perform significantly better on such tasks due to the cross-attention in the encoder. 3. **Benchmark state:** In the course of solving a benchmark, researchers will pick up lots of little benchmark-specific tricks that then must be incorporated any time anyone is trying to set a new best performance. However, these tricks may “take away” some of the gains that a more general method could have had: for example, in an RL benchmark a trick for reducing the action space is likely to “take away” some of the gains that might be had from a hierarchical RL approach. Put another way, the benchmark has “state”: early on, the hierarchical RL method might look quite good, but after the discovery of the action reduction trick, the method no longer looks good; the hierarchical method thus has to be “lucky” enough to be tested before the action reduction trick is known. Note though that it is even worse if there is no standard benchmark: in this case authors can (deliberately or not) choose exactly those tasks that make their method look best. To mitigate these problems, the authors make the following suggestions: 1. Invest in making guidelines for how to make benchmarks. 2. Benchmark creators should ensure that there are good guidelines for how to *use* the benchmark to avoid the situation where everyone evaluates methods slightly differently. 3. 
When reviewing papers, do not require authors to beat the existing state of the art (SOTA) if their method is especially novel, as it is likely disadvantaged by not being able to apply all the small tricks that improve performance on the benchmark. 4. Use statistical significance testing to compare models rather than looking just at point estimates. 5. Use multiple benchmarks, or multiple test sets within a single benchmark, to enable statistical testing. 6. Create “living benchmarks” in which various aspects (such as the test set) are updated over time, to prevent overfitting to the benchmark. **Rohin's opinion:** I like the descriptions of the problems in this paper. I also like the proposed solutions -- as a way to cut down problems that *weren’t* the main focus of the paper. Unfortunately, my guess is that there aren’t great not-too-radical solutions to the problems identified by the authors. Still, these seem like important problems to be aware of when interpreting progress in machine learning. I wasn’t that convinced that the task selection bias is that large. The metrics in the paper were rather hard to interpret -- they clearly show that rankings of models can change depending on which tasks you select, but it was harder to tell how *much* the rankings changed. In addition, for at least some of these benchmarks, the point of the tasks is to test different skills and so it shouldn’t be surprising that you can get significantly different rankings if you can choose a subset of the tasks. (Often in such cases papers will be expected to test on all the tasks, so that the task selection bias doesn’t occur.) NEWS ===== **[Introducing the AI Objectives Institute](https://ai.objectives.institute/blog/ai-and-the-transformation-of-capitalism)** *(Peter Eckersley)* (summarized by Rohin): For years people have been talking about corporations and capitalism as an example of superintelligence that we have failed to align so far. This new institute plans to take this correspondence seriously and transfer insights between the two. In particular, we can (a) examine how proposed problems with AI are already taking place with capitalism, (b) use tools and ideas from AI safety to improve upon capitalism, and (c) use lessons from capitalism to assist in the project of building a safely aligned AI. **[ML Engineer Position at Preamble](https://docs.google.com/document/d/1jr92v2Xt6znq6_otCXZ-5JyklofpjytR0N7v-08R76E/edit)** *(Dylan Hadfield-Menell)* (summarized by Rohin): **[Preamble](https://www.preamble.com/)** is a seed-stage company aiming to build middleware for AI ethics and safety, with a current focus on recommender systems. They have an early prototype for Twitter users, implemented as a browser extension. They are currently trying to hire an ML engineer to push forward their work. #### **FEEDBACK** I'm always happy to hear feedback; you can send it to me, **[Rohin Shah](https://rohinshah.com/)**, by **replying to this email**. #### **PODCAST** An audio podcast version of the **Alignment Newsletter** is available. This podcast is an audio version of the newsletter, recorded by **[Robert Miles](http://robertskmiles.com/)**.