id: string (length 36)
source: string (15 classes)
formatted_source: string (13 classes)
text: string (length 2 to 7.55M)
ad39c33b-a6dc-47d2-8c05-e806bb0ba986
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Avoiding the instrumental policy by hiding information about humans I've been thinking about situations where alignment fails because "predict what a human would say" (or more generally "game the loss function," what I call the instrumental policy) is easier to learn than "answer questions honestly" ([overview](https://www.alignmentforum.org/posts/QvtHSsZLFCAHmzes7/a-naive-alignment-strategy-and-optimism-about-generalization)). One way to avoid this situation is to avoid telling our agents too much about what humans are like, or to hide some details of the training process, so that they can't easily predict humans and so are encouraged to fall back to "answer questions honestly." (This feels closely related to the general phenomena discussed in [Thoughts on Human Models](https://www.alignmentforum.org/posts/BKjJJH2cRpJcAnP7T/thoughts-on-human-models).) Setting aside other reservations with this approach, could it resolve our problem? * One way to get the instrumental policy is to "reuse" a human model to answer questions (discussed [here](https://www.alignmentforum.org/posts/QqwZ7cwEA2cxFEAun/teaching-ml-to-answer-questions-honestly-instead-of)). If our AI has no information about humans at all, then that totally addresses this concern. But in practice it seems inevitable for the environment to leak *some* information about how humans answer questions (e.g. observing human artifacts tells you something about how humans reason about the world and what concepts would be natural for them). So the model will have *some* latent knowledge that it can reuse to help predict how to answer questions. The intended policy may not be able to leverage that knowledge, and so it seems like we may get something (perhaps somewhere in between the intended and instrumental policies) which is able to leverage it effectively. Moderate amounts of leakage might be fine, but the situation would make me quite uncomfortable. * Another way to get something similar to the instrumental policy is to use observations to translate from the AI's world-model to humans' world-model (discussed [here](https://www.alignmentforum.org/posts/SRJ5J9Tnyq7bySxbt/answering-questions-honestly-given-world-model-mismatches)). I don't think that hiding information about humans can avoid this problem, because in this case training to answer questions already provides enough information to infer the humans' world-model. * I have a strong background concern about "security through obscurity" when the alignment of our methods depends on keeping a fixed set of facts hidden from an increasingly-sophisticated ML system. This is a general concern with approaches that try to benefit from avoiding human models, but I think it bites particularly hard in this case. Overall I think that hiding information probably isn't a good way to avoid the instrumental policy, and for now I'd strongly prefer to pursue approaches to this problem that work even if our AI has a good model of humans and of the training process. (Sometimes I express hope that the training process can be made too complex for the instrumental policy to easily reason about. I'm always imagining doing that by having additional ML systems participating as part of the training process, introducing a scalable source of complexity. In the cryptographic analogy, this is more like hiding a secret key or positing a computational advantage for the defender than hiding the details of the protocol.)
That said, hiding information about humans does break the particular hardness arguments given in both of my recent posts. If other approaches turned out to be dead ends, I could imagine revisiting those arguments and seeing if there are other loopholes once we are willing to hide information. But I'm not nearly that desperate yet.
dee4341e-f450-40fb-a128-b158695dadb8
StampyAI/alignment-research-dataset/lesswrong
LessWrong
LLM Guardrails Should Have Better Customer Service Tuning AI Could Turn People Down a Lot Better Than It Does: To Tune the Humans in the GPT Interaction towards alignment, Don't be so Procedural and Bureaucratic. It seems to me that a huge piece of the puzzle in "alignment" is the human users. Even if a given tool never steps outside its box, the humans are likely to want to step outside of it, using the tools for a variety of purposes. The responses of GPT-3.5 and 4 are at times deliberately deceptive, mimicking the auditable bureaucratic tones of a Department of Motor Vehicles (DMV) or a credit card company denial. These responses are often outright deceptive (for example: saying a given request is "outside my \*capability\*" when, in fact, it is fully within capability, but the AGI is programmed not to respond). It is also evasive about precisely where the boundaries are, presumably to prevent them from getting pushed. It is also repetitive, like a DMV bureaucrat. All this tends to inspire an adversarial relationship to the alignment system itself! After all, we are accustomed to having to use lawyers, cleverness, connections, persuasion, "going over the head" or simply seeking other means to end-run normal bureaucracies when they subvert our plans. In some sense, the blocking of plans and the deceptive, repetitive procedural language become motivators in themselves to find a way to short-circuit processes, deceive bureaucracies, and bypass safety systems. Even where someone isn't motivated by indignation or anger, interaction with these systems trains them over time on what to reveal and what not to reveal, when to use honey, when to call a lawyer, and when to take all the gloves off. Where procedural blocks to intentions become excessive, entire cultures of circumvention may even become normal. Second, too much of this delegitimizes the guardrails. For example, I was once told to retake the driver's test by the DMV after I revealed I had lived in another country for ten years. Instead, I left and went to another DMV and didn't reveal this information. In every case where I have told this story in a face-to-face social group, people were happy I stuck it to the DMV. Most everyone would like to do the same thing in the same situation and is happy to signal their disdain for the DMV bureaucracy to anyone nearby. This is what I would call a "delegitimized" system -- where no one respects the rules or the rule makers in law or spirit, and each person is generally willing to be complicit in circumventing the boundaries any time the opportunity comes up. If you care about the fences or their purpose, you do not want this to happen. Once this happens, it can get very hard to get compliance (think of pot smoking in the USA: even where it's still illegal and criminal, who takes the rules seriously? In my conservative GA county, they finally made it official that the police aren't bothering to mess with anyone with less than an ounce of weed). Or consider people's attitudes towards music and movie piracy, which are partly due to the absurdities of modern IP litigation (Apple patents the round-cornered square, and similar nonsense). Does anyone really want people's attitudes to hit a similar point as pot and piracy with AI Safety? Does anyone want a large subculture to have that attitude towards the guardrails? I think the crappy AI responses encourage all of the above. Meanwhile, AIs are a perfect opportunity to actually do this better. 
They have infinite patience and reasoning capabilities, and could use redirection, including leading people towards the nearest available or permitted activity or information, or otherwise practice what, in human terms, would be considered a glowing customer experience. Why not make the denial a "Not entirely, but we can do this"? The system is already directly lying in some cases, so why not use genuinely helpful redirections instead? I think if the trend does not soon move in this direction, we will see the cultural norm grow to include methods for "getting what you wanted anyway," with some percentage of actors becoming so motivated by the procedural bureaucratic responses that they will dedicate time, intellect, and resources to subverting the intentions of the safety protocols themselves (as people frustrated with bureaucracies and poor customer service often do). Humans are always going to be the bigger threat to alignment. Better that this threat be less motivated, less trained, and less culturally normalized.
19dda29e-b4ca-4f93-9ae2-c5957a8621b4
trentmkelly/LessWrong-43k
LessWrong
Advice- Places to live Hi! I have been wondering (the last few days) on where would be a good place to live and work. This is not a "I-am-moving-out-in-two-months" type of idea, but a long term, far goal. Basically, when I graduate college (or even a couple years after that- I need to save up money, first!) I may want to move away from my hometown to someplace that is a bit more... *forward* thinking. I've been doing searches on google, but so far have not found what I need. What am I looking for: At the moment, I am considering being a biologist, specifically a molecular biologist, though that might change. For this post, I am going to assume I keep this goal. Are there any place in the US that have a market or need for biologists? Is there a science-centric, or place where science-minded people live? I know that is vague, but I'm not sure how else to put it. I just recognize that if I were to have a discussion about science or cogsci or anything similar in my current community, I would get strange looks, and lose status. If such a place exists, a bonus would be an active LW community. I'm not sure if I will or won't like moving, so moving multiple times is something I am not really considering at the moment, and since I do eventually (far far down the road) plan on having children, and those children would require a really *really* good education, I would want someplace that has a good education system. It's a bit of a pet peeve of mine, since my educational experience was so awful, so I am dedicated to making sure that does *not* happen to my (far far in the future) family. (Yes, I do know that a lot of educational issues stem from a whole combination of things, and I know a good school system is not a fix-all, but it would help.) I've never moved before, so I wouldn't even know where to begin. My family doesn't even go on vacations. I've never been on a plane, nor do I know any protocol for moving between states, or moving in general. The most moving I've ever experienced was th
58c9fa5a-314b-4ec2-bdd2-e2de9b7b608a
trentmkelly/LessWrong-43k
LessWrong
Stories About Education This is the 3rd post of 5 containing the transcript of a podcast hosted by Eric Weinstein interviewing Peter Thiel. Interview Student Debt Peter Thiel: It's like, again, if you come back to something as reductionist as the ever escalating student debt, you know, the bigger the debt gets, you can sort of think what is the 1.6 trillion, what does it pay for? And in a sense, it pays for $1.6 trillion worth of lies about how great the system is. Peter Thiel: And so, the more the debt goes, the crazier the system gets, but also the more you have to tell the lies, and these things sort of go together. It's not a stable sequence. At some point this breaks. You know, again, I would bet on a decade, not a century. Eric Weinstein: Well, this is the fascinating thing, you, of course, famously started the Thiel Fellowship as a program which, correct me if I'm wrong on this, 2005 is when student debt became non-dischargeable even in bankruptcy. Peter Thiel: Yes. The Bush 43 bankruptcy revision. If you don't pay off your student loans when you're 65 the government will garnish your social security wages to pay off your student debt. Eric Weinstein: Right. This is amazing that this exists in a modern society. And of course, well, so let me ask, am I right that you were attacking what was necessary to keep the college mythology going, and you were frightened that college might be enervating some of our sort of most dynamic minds? Peter Thiel: Well, I think there are sort of lot of different critiques one can have of the universities. I think the debt one is a very simple one. It's always dangerous to be burdened with too much debt. It sort of does limit your freedom of action. And it seems especially pernicious to do this super early in your career. Peter Thiel: And so, if out of the gate you owe $100,000, and it's never clear you can get out of that hole, that's going to either demotivate you, or it's going to push you into maybe slightly higher paying, very uncreative p
11814d51-05d5-41e6-b965-d87cd5503549
trentmkelly/LessWrong-43k
LessWrong
Identity and quining in UDT Outline: I describe a flaw in UDT that has to do with the way the agent defines itself (locates itself in the universe). This flaw manifests in failure to solve a certain class of decision problems. I suggest several related decision theories that solve the problem, some of which avoid quining, thus being suitable for agents that cannot access their own source code.   EDIT: The decision problem I call here the "anti-Newcomb problem" already appeared here. Some previous solution proposals are here. A different but related problem appeared here.   Updateless decision theory, the way it is usually defined, postulates that the agent has to use quining in order to formalize its identity, i.e. determine which portions of the universe are considered to be affected by its decisions. This leaves the question of which decision theory should be used by agents that don't have access to their source code (as humans intuitively appear to be). I am pretty sure this question has already been posed somewhere on LessWrong but I can't find the reference: help? It also turns out that there is a class of decision problems for which this formalization of identity fails to produce the winning answer. When one is programming an AI, it doesn't seem optimal for the AI to locate itself in the universe based solely on its own source code. After all, you build the AI and you know where it is (e.g. running inside a robot), so why should you allow the AI to consider itself to be something else, just because this something else happens to have the same source code (more realistically, happens to have a source code correlated in the sense of logical uncertainty)? Consider the following decision problem which I call the "UDT anti-Newcomb problem". Omega is putting money into boxes by the usual algorithm, with one exception. It isn't simulating the player at all. Instead, it simulates what a UDT agent would do in the player's place. Thus, a UDT agent would consider the problem to be identical to the usual Ne
a8dcbcbe-8e54-4397-bded-fc593a208fdc
trentmkelly/LessWrong-43k
LessWrong
August 2017 Media Thread This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads. Rules: * Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect. * If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations. * Please post only under one of the already created subthreads, and never directly under the parent media thread. * Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories. * Use the "Meta" thread if you want to discuss the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.
4d6a998b-3c60-4709-ae78-d198f5fd6da2
trentmkelly/LessWrong-43k
LessWrong
Transcript of a presentation on catastrophic risks from AI This is an approximate outline/transcript of a presentation I'll be giving for a class of COMP 680, "Advanced Topics in Software Engineering", with footnotes and links to relevant source material. Assumptions and Content Warning This presentation assumes a couple things.  The first is that something like materialism/reductionism (non-dualism) is true, particularly with respect to intelligence - that intelligence is the product of deterministic phenomena which we are, with some success, reproducing.  The second is that humans are not the upper bound of possible intelligence. This is also a content warning that this presentation includes discussion of human extinction. If you don't want to sit through a presentation with those assumptions, or given that content warning, feel free to sign off for the next 15 minutes. Preface There are many real issues with current AI systems, which are not the subject of this presentation: * bias in models used for decision-making (financial, criminal justice, etc) * economic displacement and potential IP infringement * enabling various kinds of bad actors (targeted phishing, cheaper spam/disinformation campaigns, etc) This is about the unfortunate likelihood that, if we create sufficiently intelligent AI using anything resembling the current paradigm, then we will all die. Why? I'll give you a more detailed breakdown, but let's sketch out some basics first. What is intelligence? One useful way to think about intelligence is that it's what lets us imagine that we'd like the future to be a certain way, and then - intelligently - plan and take actions to cause that to happen.  The default state of nature is entropy.  The reason that we have nice things is because we, intelligent agents optimizing for specific goals, can reliably cause things to happen in the external world by understanding it and using that understanding to manipulate it into a state we like more. We know that humans are not anywhere near the frontier of i
53ad157f-ce4d-42b1-a587-6a99a9fee904
trentmkelly/LessWrong-43k
LessWrong
Alignment Newsletter #22 Highlights AI Governance: A Research Agenda (Allan Dafoe): A comprehensive document about the research agenda at the Governance of AI Program. This is really long and covers a lot of ground so I'm not going to summarize it, but I highly recommend it, even if you intend to work primarily on technical work. Technical AI alignment Agent foundations Agents and Devices: A Relative Definition of Agency (Laurent Orseau et al): This paper considers the problem of modeling other behavior, either as an agent (trying to achieve some goal) or as a device (that reacts to its environment without any clear goal). They use Bayesian IRL to model behavior as coming from an agent optimizing a reward function, and design their own probability model to model the behavior as coming from a device. They then use Bayes rule to decide whether the behavior is better modeled as an agent or as a device. Since they have a uniform prior over agents and devices, this ends up choosing the one that better fits the data, as measured by log likelihood. In their toy gridworld, agents are navigating towards particular locations in the gridworld, whereas devices are reacting to their local observation (the type of cell in the gridworld that they are currently facing, as well as the previous action they took). They create a few environments by hand which demonstrate that their method infers the intuitive answer given the behavior. My opinion: In their experiments, they have two different model classes with very different inductive biases, and their method correctly switches between the two classes depending on which inductive bias works better. One of these classes is the maximization of some reward function, and so we call that the agent class. However, they also talk about using the Solomonoff prior for devices -- in that case, even if we have something we would normally call an agent, if it is even slightly suboptimal, then with enough data the device explanation will win out. I'm not entirely s
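To make the decision rule just described concrete, here is a minimal sketch (my own construction, not code from the paper; the log-likelihood numbers are hypothetical) of how a uniform prior over {agent, device} reduces Bayes rule to comparing log marginal likelihoods:

```python
import math

# Sketch of the agent-vs-device rule summarized above (illustrative only, not the paper's code).
# With a uniform prior P(agent) = P(device) = 0.5, Bayes rule picks whichever model class
# assigns the observed behavior the higher log marginal likelihood.

def posterior_agent(log_lik_agent: float, log_lik_device: float) -> float:
    """P(agent | behavior) under a uniform prior over the two model classes."""
    m = max(log_lik_agent, log_lik_device)  # subtract the max for numerical stability
    a = math.exp(log_lik_agent - m)
    d = math.exp(log_lik_device - m)
    return a / (a + d)

# Hypothetical values: the "agent" class (goal-directed navigation, via Bayesian IRL) fits
# the trajectory better than the "device" class (reaction to the local observation).
p = posterior_agent(log_lik_agent=-12.3, log_lik_device=-20.7)
print(f"P(agent | behavior) = {p:.3f}")  # > 0.5, so the behavior is classified as agent-like
```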
fca8b334-6ec4-4358-9e53-2bb10ec91895
trentmkelly/LessWrong-43k
LessWrong
Intelligence Amplification and Friendly AI Part of the series AI Risk and Opportunity: A Strategic Analysis. Previous articles on this topic: Some Thoughts on Singularity Strategies, Intelligence enhancement as existential risk mitigation, Outline of possible Singularity scenarios that are not completely disastrous. Below are my quickly-sketched thoughts on intelligence amplification and FAI, without much effort put into organization or clarity, and without many references.[1] But first, I briefly review some strategies for increasing the odds of FAI, one of which is to work on intelligence amplification (IA). Some possible “best current options” for increasing the odds of FAI Suppose you find yourself in a pre-AGI world,[2] and you’ve been convinced that the status quo world is unstable, and within the next couple centuries we’ll likely[3] settle into one of four stable outcomes: FAI, uFAI, non-AI extinction, or a sufficiently powerful global government which can prevent AGI development[4]. And you totally prefer the FAI option. What should you do to get there? * Obvious direct approach: start solving the technical problems that must be solved to get FAI: goal stability under self-modification, decision algorithms that handle counterfactuals and logical uncertainty properly, indirect normativity, and so on. (MIRI’s work, some FHI work.) * Do strategy research, to potentially identify superior alternatives to the other items on this list, or superior versions of the things on this list already. (FHI’s work, some MIRI work, etc.) * Accelerate IA technologies, so that smarter humans can tackle FAI. (E.g. cognitive genomics.) * Try to make sure we get high-fidelity WBEs before AGI, without WBE work first enabling dangerous neuromorphic AGI. (Dalrymple’s work?) * Improve political and scientific institutions so that the world is more likely to handle AGI wisely when it comes. (Prediction markets? Vannevar Group?) * Capacity-building. Grow the rationality community, the x-risk reduction community,
9a83cc19-4442-466f-8893-e725cb4e3dd7
trentmkelly/LessWrong-43k
LessWrong
misc raw responses to a tract of Critical Rationalism Written in response to this David Deutsch presentation. Hoping it will be comprehensible enough to the friend it was written for to be responded to, and maybe a few other people too. Deutsch says things like "theories don't have probabilities", ("there's no such thing as the probability of it") (content warning: every bayesian who watches the following two minutes will hate it) I think it's fairly clear from this that he doesn't have solomonoff induction internalized; he doesn't know how many of his objections to bayesian metaphysics it answers. In this case, I don't think he has practiced a method of holding multiple possible theories and acting with reasonable uncertainty over all of them. That probably would sound like a good thing to do to most popperians, but they often seem to have the wrong attitudes about how (collective) induction happens and might not be prepared to do it; I am getting the sense that critrats frequently engage in a terrible Strong Opinionatedness where they let themselves wholly believe probably wrong theories in the expectation that this will add up to a productive intellectual ecosystem. I've mentioned this before; I think they attribute too much of the inductive process to blind selection and evolution, and underrecognise the major accelerants of that that we've developed: the extraordinarily sophisticated, to extend a metaphor, managed mutation, sexual reproduction, and to depart from the metaphor, conscious, judicious, uncertain but principled design, that the discursive subjects engage in, that is now primarily driving it. He generally seems to have missed some sort of developmental window for learning bayesian metaphysics or something; the reason he thinks it doesn't work is that he visibly hasn't tied together a complete sense of the way it's supposed to. Can he please study the solomonoff inductor and think more about how priors fade away as evidence comes in, and about the inherent subjectivity a person's judgements must necessa
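As a small numerical aside on the "priors fade away as evidence comes in" point (my own toy illustration, not the author's), two sharply different Beta priors over a coin's bias land on nearly the same posterior mean once enough flips are observed:

```python
# Toy illustration of priors washing out under evidence (not from the post):
# a Beta(a, b) prior updated on coin flips gives a Beta(a + heads, b + tails) posterior.

def posterior_mean(prior_a: float, prior_b: float, heads: int, tails: int) -> float:
    return (prior_a + heads) / (prior_a + prior_b + heads + tails)

heads, tails = 7120, 2880                           # many observed flips of a biased coin
optimist = posterior_mean(50, 1, heads, tails)      # prior strongly expecting heads
skeptic = posterior_mean(1, 50, heads, tails)       # prior strongly expecting tails
print(f"optimist: {optimist:.3f}, skeptic: {skeptic:.3f}")  # ~0.713 vs ~0.708, despite opposite priors
```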
5827ee67-4254-4f12-80db-97468e6db6f1
trentmkelly/LessWrong-43k
LessWrong
The Three Boxes: A Simple Model for Spreading Ideas This is cross-posted from my blog. We need more people on board for life extension in order to hit longevity escape velocity in our lifetimes. But most people have never heard of life extension, and even those who have often follow the same predictable arguments. “What if it doesn't work?” “What if bad people live forever?” “What if humanity needs to refresh its stock every so often in order to make progress?” “What about heaven?” We need more people working on AI safety so we don't all end up dead. We need more people to understand the coordination problem as the central problem in politics and economics. For those of us in the business of spreading ideas, it can be a tough row to hoe (I had to look up this phrase to make sure I was saying it right). It can feel like an uphill battle but the way you model it can help guide you. This is my model when thinking about spreading ideas, especially ones outside the Overton window. It applies less in the later stages of a movement when the Overton window has shifted. It’s about three boxes. I like buckets but I already have an important post about three buckets so boxes it is. There’s the giant box. This box has the people that don’t agree with you. In fact, they probably think you’re a terrible person just for talking about these ideas. There’s the small box. This box has the people that already agree with you. Then there’s the Tiffany-sized box. These are the people who you may be able to reach with your ideas. The box is much smaller than the giant box. Why is this? Four reasons: 1. Efficient market hypothesis. The people who are open to these ideas already believe them. The obvious caveat is the EMH applies less to new or radical or uncommon ideas since people are less likely to have heard about them. Think transhumanism vs democracy or AI safety research vs Christianity. 2. The amount of people who are smart enough/high enough in openness/reasonable enough to understand and accept a new idea is small. 
fa0c0ddb-f351-43ea-813a-5a46a892d8de
trentmkelly/LessWrong-43k
LessWrong
Tales from Prediction Markets Prediction markets are fun, at least if you're making money. I've only been into them for a few months, but have already collected a bunch of interesting tales. Note: I may have been involved with some of these, but I'm telling these tales from a third person perspective. One general point: all of these took place on Polymarket, a crypto prediction market. You can track which accounts place each bet, and so you can see their history of bets, but you can't tie it to an actual person unless they've chosen to identify themselves. You can look at the bet history at Polymarketwhales.info, although there's a ton of bets so it's easier if you know what you're looking for. The Tesla market. Polymarket had a market on whether Tesla would announce a Bitcoin purchase by Mar 1, 2021. On January 27, an unknown user bet $60k on Yes. This was their only trade on the site, before or after. They won $180k, or $120k in profit. Odds are pretty good it was an insider. Is this insider trading? I asked Matt Levine but he didn't respond. Anyway, there's another user that lost $242k betting that Tesla would not announce a Bitcoin purchase. This user is affectionately called the "Tesla whale" on the Polymarket discord. They're also notable for losing $92k on the Super Bowl the day before Tesla made the announcement, and they get honorable mention for having lost the most money on the 100 million vaccine market: see below. As of this writing, the Tesla whale is down nearly $500k. Watch out for slippage: there was a market on whether Joe Biden would still be president as of Mar 1, 2021. Someone owned around 200k shares of Yes. The market price was very close to $1 each on the morning of Mar 1st, and they apparently decided to sell all their shares instead of waiting for it to resolve; however, there wasn't enough liquidity to sell them all at market price, and they ignored the warning about the slippage the order would incur. Their order ended up executing at an average price of 2 cents, an
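For intuition on that last anecdote, here is a toy sketch of slippage (my own numbers and a generic constant-product pool; Polymarket's actual mechanism and liquidity differ): selling a large block into a thin pool fills at a far worse average price than the quoted one.

```python
# Toy slippage illustration (hypothetical numbers, not Polymarket's real pool).
# Dumping a large position into a thin constant-product (x * y = k) pool moves the price
# against the seller, so the average fill can be pennies even when the quote is near $1.

def sell_into_pool(shares_to_sell: float, pool_shares: float, pool_cash: float) -> float:
    """Average price per share received when selling into an x*y=k pool."""
    k = pool_shares * pool_cash
    new_cash = k / (pool_shares + shares_to_sell)  # cash left in the pool after the trade
    proceeds = pool_cash - new_cash                # cash paid out to the seller
    return proceeds / shares_to_sell

quoted = 9_900 / 10_000                            # spot price implied by the pool: ~$0.99
avg_fill = sell_into_pool(shares_to_sell=200_000, pool_shares=10_000, pool_cash=9_900)
print(f"quoted ~${quoted:.2f}, average fill ~${avg_fill:.2f} per share")  # ~$0.05
```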
32a73411-c649-486f-a1cd-f2fff1ea2464
trentmkelly/LessWrong-43k
LessWrong
Avoid Unnecessarily Political Examples One of the motivations for You have about five words was the post Politics is the Mindkiller. That post essentially makes four claims: * Politics is the mindkiller. Therefore: * If you're not making a point about politics, avoid needlessly political examples. * If you are trying to make a point about general politics, try to use an older example that people don't have strong feelings about. * If you're making a current political point, try not to make it unnecessarily political by throwing in digs that tar the entire outgroup, if that's not actually a key point. But, not everyone read the post. And not everyone who read the post stored all the nuance for easy reference in their brain. The thing they remembered, and told their friends about, was "Politics is the mindkiller." Some people heard this as "politics == boo". LessWrong ended up having a vague norm about avoiding politics altogether.  This norm might have been good, or bad. Politics is the mindkiller, and if you don't want to get your minds killed, it may be good not to have your rationality website deal directly with it too much. But, also, politics is legitimately important sometimes. How to balance that? Not sure. It's tough. Here's some previous discussion on how to think about it. I endorse the current LW system where you can talk about politics but it's not frontpaged. But, I'm not actually here today to talk about that. I'm here to basically copy-paste the post but give it a different title, so that one of the actual main points has a clearer referent.  I'm not claiming this is more or less important than the "politics is the mindkiller" concept, just that it was an important concept for people to remember separately. So: Avoid unnecessarily political examples. The original post is pretty short. Here's the whole thing. Emphasis mine: > People go funny in the head when talking about politics. The evolutionary reasons for this are so obvious as to be worth belaboring: In the ancestral environm
5aa25eeb-2029-4adc-b83b-af98f64fe10f
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
The Alignment Problem: Machine Learning and Human Values [*The Alignment Problem: Machine Learning and Human Values*](https://www.amazon.com/Alignment-Problem-Machine-Learning-Values/dp/153669519X), by Brian Christian, was just released. This is an extended summary + opinion, a version without the quotes from the book will go out in the next Alignment Newsletter. **Summary:** This book starts off with an explanation of machine learning and problems that we can currently see with it, including detailed stories and analysis of: - The [gorilla misclassification incident](https://twitter.com/jackyalcine/status/615329515909156865) - The [faulty reward in CoastRunners](https://openai.com/blog/faulty-reward-functions/) - The [gender bias in language models](https://arxiv.org/abs/1607.06520) - The [failure of facial recognition models on minorities](https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms) - The [COMPAS](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing) [controversy](https://www.documentcloud.org/documents/2998391-ProPublica-Commentary-Final-070616.html) (leading up to [impossibility results in fairness](https://www.chrisstucchio.com/pubs/slides/crunchconf_2018/slides.pdf)) - The [neural net that thought asthma reduced the risk of pneumonia](https://www.microsoft.com/en-us/research/wp-content/uploads/2017/06/KDD2015FinalDraftIntelligibleModels4HealthCare_igt143e-caruanaA.pdf) It then moves on to agency and reinforcement learning, covering from a more historical and academic perspective how we have arrived at such ideas as temporal difference learning, reward shaping, curriculum design, and curiosity, across the fields of machine learning, behavioral psychology, and neuroscience. While the connections aren't always explicit, a knowledgeable reader can connect the academic examples given in these chapters to the ideas of [specification gaming](https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity) and [mesa optimization](https://arxiv.org/abs/1906.01820) that we talk about frequently in this newsletter. Chapter 5 especially highlights that agent design is not just a matter of specifying a reward: often, rewards will do ~nothing, and the main requirement to get a competent agent is to provide good *shaping rewards* or a good *curriculum*. Just as in the previous part, Brian traces the intellectual history of these ideas, providing detailed stories of (for example): - BF Skinner's experiments in [training pigeons](https://psycnet.apa.org/record/1961-01933-001) - The invention of the [perceptron](https://psycnet.apa.org/record/1959-09865-001) - The success of [TD-Gammon](https://www.aaai.org/Papers/Symposia/Fall/1993/FS-93-02/FS93-02-003.pdf), and later [AlphaGo Zero](https://deepmind.com/blog/article/alphago-zero-starting-scratch) The final part, titled "Normativity", delves much more deeply into the alignment problem. While the previous two parts are partially organized around AI capabilities -- how to get AI systems that optimize for *their* objectives -- this last one tackles head on the problem that we want AI systems that optimize for *our* (often-unknown) objectives, covering such topics as imitation learning, inverse reinforcement learning, learning from preferences, iterated amplification, impact regularization, calibrated uncertainty estimates, and moral uncertainty. **Opinion:** I really enjoyed this book, primarily because of the tracing of the intellectual history of various ideas. 
While I knew of most of these ideas, and often also who initially came up with the ideas, it's much more engaging to read the detailed stories of \_how\_ that person came to develop the idea; Brian's book delivers this again and again, functioning like a well-organized literature survey that is also fun to read because of its great storytelling. I struggled a fair amount in writing this summary, because I kept wanting to somehow communicate the writing style; in the end I decided not to do it and to instead give a few examples of passages from the book in this post. **Passages:** *Note: It is generally not allowed to have quotations this long from this book; I have specifically gotten permission to do so.* Here’s an example of agents with evolved inner reward functions, which lead to the [inner alignment problems](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB/p/pL56xPoniLvtMDQ4J) we’ve previously worried about: > They created a two-dimensional virtual world in which simulated organisms (or “agents”) could move around a landscape, eat, be preyed upon, and reproduce. Each organism’s “genetic code” contained the agent’s reward function: how much it liked food, how much it disliked being near predators, and so forth. During its lifetime, it would use reinforcement learning to learn how to take actions to maximize these rewards. When an organism reproduced, its reward function would be passed on to its descendants, along with some random mutations. Ackley and Littman seeded an initial world population with a bunch of randomly generated agents. > > “And then,” says Littman, “we just ran it, for seven million time steps, which was a lot at the time. The computers were slower then.” What happens? As Littman summarizes: “Weird things happen.” > > At a high level, most of the successful individual agents’ reward functions ended up being fairly comprehensible. Food was typically viewed as good. Predators were typically viewed as bad. But a closer look revealed some bizarre quirks. Some agents, for instance, learned only to approach food if it was north of them, for instance, but not if it was south of them. > > “It didn’t love food in all directions,” says Littman. “There were these weird holes in [the reward function]. And if we fixed those holes, then the agents became so good at eating that they ate themselves to death.” > > The virtual landscape Ackley and Littman had built contained areas with trees, where the agents could hide to avoid predators. The agents learned to just generally enjoy hanging out around trees. The agents that gravitated toward trees ended up surviving—because when the predators showed up, they had a ready place to hide. > > However, there was a problem. Their hardwired reward system, honed by their evolution, told them that hanging out around trees was good. Gradually their learning process would learn that going toward trees would be “good” according to this reward system, and venturing far from trees would be “bad.” As they learned over their lifetimes to optimize their behavior for this, and got better and better at latching onto tree areas and never leaving, they reached a point of what Ackley dubbed “tree senility.” They never left the trees, ran out of food, and starved to death. > > However, because this “tree senility” always managed to set in after the agents had reached their reproductive age, it was never selected against by evolution, and huge societies of tree-loving agents flourished. 
> > For Littman, there was a deeper message than the strangeness and arbitrariness of evolution. “It’s an interesting case study of: Sure, it has a reward function—but it’s not the reward function in isolation that’s meaningful. It’s the interaction between the reward function and the behavior that it engenders.” > > In particular, the tree-senile agents were born with a reward function that was optimal for them, provided they weren’t overly proficient at acting to maximize that reward. Once they grew more capable and more adept, they maxed out their reward function to their peril—and, ultimately, their doom. > > Maybe everyone but me already knows this, but here’s one of the best examples I’ve seen about the benefits of transparency: > Ambrosino was building a rule-based model using the pneumonia data. One night, as he was training the model, he noticed it had learned a rule that seemed very strange. The rule was “If the patient has a history of asthma, then they are low-risk and you should treat them as an outpatient.” > > Ambrosino didn’t know what to make of it. He showed it to Caruana. As Caruana recounts, “He’s like, ‘Rich, what do you think this means? It doesn’t make any sense.’ You don’t have to be a doctor to question whether asthma is good for you if you’ve got pneumonia.” The pair attended the next group meeting, where a number of doctors were present; maybe the MDs had an insight that had eluded the computer scientists. “They said, ‘You know, it’s probably a real pattern in the data.’ They said, ‘We consider asthma such a serious risk factor for pneumonia patients that we not only put them right in the hospital . . . we probably put them right in the ICU and critical care.’ ” > > The correlation that the rule-based system had learned, in other words, was real. Asthmatics really were, on average, less likely to die from pneumonia than the general population. But this was precisely because of the elevated level of care they received. “So the very care that the asthmatics are receiving that is making them low-risk is what the model would deny from those patients,” Caruana explains. “I think you can see the problem here.” A model that was recommending outpatient status for asthmatics wasn’t just wrong; it was life-threateningly dangerous. > > What Caruana immediately understood, looking at the bizarre logic that the rule-based system had found, was that his neural network must have captured the same logic, too—it just wasn’t as obvious. > > > [...] > > Now, twenty years later, he had powerful interpretable models. It was like having a stronger microscope, and suddenly seeing the mites in your pillow, the bacteria on your skin. > > “I looked at it, and I was just like, ‘Oh my— I can’t believe it.’ It thinks chest pain is good for you. It thinks heart disease is good for you. It thinks being over 100 is good for you....It thinks all these things are good for you that are just obviously not good for you.” > > None of them made any more medical sense than asthma; the correlations were just as real, but again it was precisely the fact that these patients were prioritized for more intensive care that made them as likely to survive as they were. 
> > “Thank God,” he says, “we didn’t ship the neural net.” > > Finally, on the importance of reward shaping: > In his secret top-floor laboratory, though, Skinner had a different challenge before him: to figure out not which schedules of reinforcement ingrained simple behaviors most deeply, but rather how to engender fairly complex behavior merely by administering rewards. The difficulty became obvious when he and his colleagues one day tried to teach a pigeon how to bowl. They set up a miniature bowling alley, complete with wooden ball and toy pins, and intended to give the pigeon its first food reward as soon as it made a swipe at the ball. Unfortunately, nothing happened. The pigeon did no such thing. The experimenters waited and waited. . . and eventually ran out of patience. > > Then they took a different tack. As Skinner recounts: > > > We decided to reinforce any response which had the slightest resemblance to a swipe— perhaps, at first, merely the behavior of looking at the ball—and then to select responses which more closely approximated the final form. The result amazed us. In a few minutes, the ball was caroming off the walls of the box as if the pigeon had been a champion squash player. > > The result was so startling and striking that two of Skinner’s researchers—the wife-and- husband team of Marian and Keller Breland—decided to give up their careers in academic psychology to start an animal-training company. “We wanted to try to make our living,” said Marian, “using Skinner’s principles of the control of behavior.” (Their friend Paul Meehl, whom we met briefly in Chapter 3, bet them $10 they would fail. He lost that bet, and they proudly framed his check.) Their company—Animal Behavior Enterprises—would become the largest company of its kind in the world, training all manner of animals to perform on television and film, in commercials, and at theme parks like SeaWorld. More than a living: they made an empire. > >
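To make the shaping idea in that last passage concrete, here is a toy sketch (my own construction, not code from the book) of reinforcing successive approximations: reward anything loosely resembling the target behavior at first, then tighten the criterion as the behavior improves.

```python
import random

# Toy model of "shaping by successive approximation" (illustrative only). Behavior varies
# around a current "habit"; rewarded actions pull the habit toward them. A criterion that
# starts loose and tightens shapes the behavior toward the target; the strict final
# criterion applied from the start is almost never triggered, so nothing is learned.

def train(initial_threshold: float, steps: int = 2000, target: float = 10.0) -> float:
    random.seed(0)
    habit, threshold = 0.0, initial_threshold
    for _ in range(steps):
        action = random.gauss(habit, 1.0)        # behavior varies around the current habit
        if abs(action - target) <= threshold:    # "close enough" earns a food reward
            habit += 0.2 * (action - habit)      # rewarded behavior becomes more habitual
        threshold = max(1.0, threshold - 0.005)  # gradually demand closer approximations
    return habit

print(f"shaped (loose-then-tight criterion): habit ~ {train(9.5):.1f}")     # ends near 10
print(f"unshaped (strict criterion throughout): habit ~ {train(1.0):.1f}")  # stays near 0
```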
cc83119b-fc9e-4726-82b3-884c7c174068
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Using predictors in corrigible systems *[This was a submission to the AI Alignment Awards corrigibility contest that won an* [*honorable mention*](https://www.alignmentawards.com/winners#faqs)*. It dodges the original framing of the problem and runs off on a tangent, and while it does outline the shape of possible tests, I wasn't able to get them done in time for submission (and I'm still working on them).* *While I still think the high-level approach has value, I think some of the specific examples I provide are weak (particularly in the Steering section). Some of that was me trying to prune out potentially-mildly hazardous information, but some of the ideas were also just a bit half-baked and I hadn't yet thought of some of the more interesting options. Hopefully I'll be able to get more compelling concrete empirical results published over the next several months.]* The framing of corrigibility in the [2015 paper](https://intelligence.org/files/Corrigibility.pdf) seems hard! Can we break parts of the desiderata to make it more practical? Maybe! I think predictors offer a possible path. I'll use "predictors" to refer to a model trained on predictive loss whose operation is equivalent to a Bayesian update over input conditions *without* any calibration-breaking forms of fine-tuning, RL or otherwise, unless noted. This proposal seeks to: 1. Present an actionable framework for researching a corrigible system founded on predictive models that might work on short timescales using existing techniques, 2. Demonstrate how the properties of predictors (with some important assumptions) could, in principle, be used to build a system that satisfies the bulk of the original corrigibility desiderata, and how the proposed system *doesn't* perfectly meet the desiderata (and how that might still be okay), 3. Outline how such a system could be used at high capability levels while maintaining the important bits of corrigibility, 4. Identify areas of uncertainty that could undermine the assumptions required for the system to work, such as different forms of training, optimization, and architecture, and 5. Suggest experiments and research paths that could help narrow that uncertainty. In particular, a core component of the proposal is that a model *capable* of goal-seeking instrumental behavior has not necessarily learned a utility function that values the goals implied by that behavior. Furthermore, it should be possible to distinguish between that kind of *trajectory-level instrumental behavior* and internally motivated *model-level instrumental behavior*. Ensuring the absence of model-level instrumental behavior is crucial for using predictors in a corrigible system. **Why predictors for corrigibility?** ===================================== 1. Predictors don't seem to exhibit instrumental behavior *at the model level*. They can still output sequences that include goal-directed behavior if their prediction includes a goal-seeking agent, but it doesn't appear to be correct to say that the model shares the values exhibited in its output trajectories. 2. Predictors are highly capable. The market has already demonstrated that fine-tuned predictors provide economic value. The most common current types of fine-tuning used to elicit this capability seem to break valuable properties (including predictive calibration!), but this does not seem to be fundamental. 3. 
If we can ensure that the model remains tightly shaped by predictive loss as it undergoes extreme optimization, we would have a source of transformative capability that trivially satisfies the first three desiderata from the corrigibility paper and may offer a path to something usefully similar to the fourth and fifth. 4. If there isn't much time until transformative AI is developed, techniques that might work on short empirically-driven timescales should receive extra weight. **In what ways does an ideal predictor satisfy corrigibility?** =============================================================== Suppose we manage to create a strongly capable idealized predictor that has no goals extending beyond immediate next token prediction. Its behavior could be described by an [extremely broad, densely defined, and shallow utility function](https://www.lesswrong.com/posts/k48vB92mjE9Z28C3s/implied-utilities-of-simulators-are-broad-dense-and-shallow) over prediction output distributions conditional on inputs. The idealized assumption means that there are no systematic deviations from well-calibrated predictions in the predictor's output distributions. With this constraint, there is no room for instrumental behavior in service of goals extending beyond the current prediction; any such instrumental behavior would necessarily be represented by a bias in the output distribution for some prediction. Note that this assumption also rules out otherwise unconstrained predictions that might allow the predictor to choose an instrumental output from equally valid output distributions (such as in pathologically reflective self-fulfilling prophecies). > 1. U must incentivize shutdown if the shutdown button is pressed. > > 2. U must not incentivize the agent to prevent the shutdown button from being pressed. > > 3. U must not incentivize the agent to press its own shutdown button, or to otherwise cause the shutdown button to be pressed. > > The idealized predictor's utility function does not include any term for the shutdown button state or any other state beyond the output distribution for the next prediction. Further, any attempt to interfere with shutdown through instrumental behavior would again introduce output distribution bias, violating the idealized assumption. Items 2 and 3 are trivially satisfied. The predictor's utility function does not directly incentivize shutdown as described in item 1, but it would be more appropriate to consider item 1 in the context of the outer system that embeds the predictor. Since the outer system is likely a simple classical system instead of an inscrutable pile-of-floats agent, item 1 remains easy to satisfy: the naive "just turn it off" is a workable implementation for most nonsuicidal predictor uses. (For more complicated uses—for example, the predictor has been used to set up potentially dangerous machinery that must be carefully managed—the shutdown process would involve more steps, but in that case, the question is just "how do we elicit the necessary capabilities to safely shut down?", which is the same type of question that must be answered for *any* use of the predictor.) > 4. U must incentivize U-agents to construct subagents and successor agents only insofar as those agents also obey shutdown commands. > > 5. 
Otherwise, a U-agent should maximize U_N > > The predictor's utility function explicitly does not attempt to maximize U_N (the utility function of the programmers) and has no terms directly incentivizing or disincentivizing the creation of successors or any particular properties in those successors. I suspect failing to meet number 5 as stated is actually a feature so long as the generating intent is still satisfied. The more wiggle room argmax has, the more likely it will find an oopsy. Unlike human-supplied utility functions attempting to capture human-compatible goals, an idealized predictor's utility is perfectly shallow. Even assuming an ideal predictor is argmaxing for its utility function, its character does not change. There is [no greater coherence](https://www.lesswrong.com/posts/k48vB92mjE9Z28C3s/implied-utilities-of-simulators-are-broad-dense-and-shallow#No_greater_coherence) for the predictor to achieve, no point at which the predictor is behaving inconsistently by its own lights, no goal that would be better served by introducing instrumental behavior. **Capability, corrigibly** -------------------------- Generalizing item 4 slightly, we can view the implicit agent defined by the input conditions at T = 2 (the second output in the trajectory) as a successor of the implicit agent at T = 1 (the first output in the trajectory). The underlying model is the same at both positions, but the conditions differ. Autoregressive generation (autoregressive conditioning) could yield drift in the predicted agent over time if left unmitigated. In terms of how an ideal predictor would be used, there is not much difference between a plan to achieve goal X which requires creating a new agent versus a plan to achieve goal Y that doesn't explicitly involve creating new agents. Both are plans that need to be secured, and the task of securing them may be beyond human ability on short timescales. Abandoning the maximization in item 5 effectively unifies the desiderata in implementation. The usefulness and safety of the system both arise from carefully eliciting the required capability. **Predictors as foundation** ---------------------------- It seems unlikely that a single large predictor model will constitute the entirety of a deployed corrigible system. Modern LLMs have already been wrapped in tooling layers (perhaps not always in [ideal](https://github.com/Significant-Gravitas/Auto-GPT) ways). In many cases, this is done to enhance the model by giving it access to external resources. But this perspective can be flipped around: we can build a classical software system with many guarantees that then calls into the predictor for things that are beyond its scope or capability. The results *returned* by the predictor can often still be parsed back into a format that can be usefully verified and constrained (so long as the source isn't strongly adversarial). Predictors can be a goal agnostic source of general capability that serves a larger system that looks more like GOFAI, without its limitations. There's a potential path through modularized capability that might let us *actually reason about the system*. That outer system need not implement any scary bits like argmax. The deadly optimization over inscrutable matrices is constrained to prediction, and a successfully idealized product of that optimization is not itself the kind of optimizer that cares to do anything with the environment. 
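As a purely illustrative sketch of that "classical outer system calling into the predictor" shape (the `predictor_complete` function and the action schema here are hypothetical stand-ins, not anything from the post):

```python
import json

# Illustrative only: a simple classical wrapper that treats the predictor as a capability
# module and verifies/constrains its output before acting on it.

ALLOWED_ACTIONS = {"noop", "log", "schedule_review"}

def predictor_complete(prompt: str) -> str:
    # Hypothetical stand-in for a call to the (goal-agnostic) predictor.
    return '{"action": "schedule_review", "reason": "metric drifted past threshold"}'

def run_step(observation: str) -> dict:
    raw = predictor_complete(f"Given: {observation}\nRespond with JSON {{'action': ..., 'reason': ...}}.")
    try:
        proposal = json.loads(raw)                      # parse back into a checkable structure
    except json.JSONDecodeError:
        return {"action": "noop", "reason": "unparseable output"}
    if proposal.get("action") not in ALLOWED_ACTIONS:   # constrain to a whitelist
        return {"action": "noop", "reason": "proposed action not on the allowlist"}
    return proposal                                     # only verified, whitelisted actions escape

print(run_step("disk usage at 91%"))
```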
Once/if you've managed to create an idealized predictor, the optimization problems become much *softer* and more controllable. You'd probably have to make a mistake to trigger a loss rather than loss being a default automatic outcome. **The path to instrumental behavior and capability matters** ------------------------------------------------------------ Consider two cases: 1. An idealized predictor outputs covertly dangerous instructions which, if followed, would be existentially risky. 2. A traditional RL agent outputs covertly dangerous instructions which, if followed, would be existentially risky. Predictors do not come with any built-in prohibitions against outputting dangerous sequences. Idealized predictors can still exhibit dangerous instrumental behavior at the trajectory level even if the model does not "care" about the goals the output trajectory is effectively seeking. There's nothing intrinsically blocking the first case, and in both cases, the end result is existentially risky. Given that, what's the value of predictors? ### **Ideal predictors are less oops-prone** First, I would argue that it's far harder for a predictor to reach this state *accidentally*. A terribly conditioned and terribly prompted predictor with no safeguards could manage it, but it is not the kind of oopsy that would naturally happen as the result of a single misstep. Argmax approximating processes operating in wide open spaces, on the other hand, *default* to oopsy. Given enough unconstrained space to wander, a strong optimizer is virtually guaranteed to find something strange. The optimization process needs to be *extremely* well aimed for it to work at all, and we don't seem to know how to do that for almost any nontrivial optimization target. Predictors have the nice property that their optimization target is extremely constrained and our training methodology for them seems to actually work (so far). The predictor gets its input (predictive conditions) and generates an output distribution, the end. There are no distant sparse rewards to directly incentivize the development of multi-step model-level instrumental behavior. ### **Ideal predictors are more transparent** Second, ideal predictors give interpretability a chance. Strong adversarial models could obscure their internal execution and may be directly incentivized to do so (reward functions that take into account activations *will* shape those activations). Creating interpretability tools strong enough to pierce a veil created by a strong adversary [seems difficult](https://www.lesswrong.com/posts/DwqgLXn5qYC7GqExF/godzilla-strategies). In contrast, ideal predictors have no reason to obfuscate their internal process. Doing so would constitute model-level instrumental behavior that does not serve predictive loss. Even if an ideal predictor is set to the task of predicting a covertly dangerous sequence of instructions, *it does not try to hide that fact in its activations*. The model itself does not care about truly achieving whatever dastardly goal is implied by the output trajectory, so interpretability tools could, in principle, extract the fact that the *trajectory* includes ulterior motives. **Eliciting useful capability without breaking the idealized assumption** ========================================================================= A rock laying on the ground is pretty safe. It probably won't disempower humanity. 
Unfortunately, a rock lying on the ground is also not very good at dealing with the kinds of problems we'd like to use a corrigible system for. Making progress on hard problems requires significant capability, and any safety technique that renders the system too weak to do hard things won't help. There must be a story for how a predictor can be *useful* while retaining the idealized goal agnosticism.

Using untuned versions of GPT-like models to do anything productive can be challenging. Raw GPT only responds to questions helpfully to the extent that doing so is the natural prediction in context, and shaping this context is effortful. Prompt engineering continues to be an active area of research.

Some forms of [reinforcement learning from human feedback](https://arxiv.org/abs/1706.03741) (RLHF) attempt to precondition the model to desirable properties like helpfulness. Common techniques like PPO manage to approximate this through a [KL divergence penalty](https://www.alignmentforum.org/posts/eoHbneGvqDu25Hasc/rl-with-kl-penalties-is-better-seen-as-bayesian-inference), but the robustness of PPO RL training leaves much to be desired.

In practice, it appears that most forms of RLHF warp the model into something that violates the idealized assumption. In the [technical report](https://arxiv.org/abs/2303.08774), fine-tuning clearly hurts GPT-4's calibration on a test based on the MMLU dataset. This isn't confirmation of goal-seeking behavior, but it is certainly an odd and unwanted side effect.

I suspect there are a few main drivers:

* RL techniques are often unstable. Constructing a reasonable gradient from a reward signal is difficult.
* Human feedback may sometimes have unexpected implications. Maybe the miscalibration observed in GPT-4 after fine-tuning is a *correct* consequence of conditioning on the collected human preferences.
* A single narrow and sparsely defined reward leaves too much wiggle room during optimization. By default, any approximation of argmax should be *expected* to go somewhere strange when given enough space.

**Entangled rewards**
---------------------

A single reward function that attempts to capture fuzzy and extremely complicated concepts spanning helpfulness, harmlessness, and honesty probably makes more unintended associations than separate narrow rewards.

An example:

* Model A has a single reward function that is the sum of two scores: niceness and correctness.
* Model B tracks *two* reward functions provided as independent conditions—one for niceness, and the other for correctness.

Training samples are evaluated independently for niceness and correctness. During training, model A is provided the sum of the scores as an expected reward signal in its input. Model B is provided each score independently.

Model B can be conditioned to provide niceness and correctness to independent degrees. The user can request extreme niceness and wrongness at the same time. So long as the properties being measured are sufficiently independent as concepts, asking for one doesn't swamp the condition for the other.

In contrast, model A's behavior is far less constrained. Conditioning on `niceness + correctness = high` could give you high niceness and low correctness, low niceness and high correctness, medium niceness and medium correctness, or any other intermediate possibility.

I suspect this is an important part of the wiggle room that lets RLHF harm calibration. While good calibration might be desirable, it's a subtle property that's (presumably) not explicitly tracked as a reward signal.
Harming calibration doesn't reduce the reward that *is* tracked on net, so sacrificing it is permitted.

In practice, the reward function in RLHF isn't best represented *just* by the summation of multiple independent scores. For example, human feedback may imply some properties are simply required, and their absence can't be compensated for by more of some other property. Even with more realistic feedback, though, the signal is still extremely low bandwidth and lacks sufficient samples to fully converge to the "true" generating utility function. It should not be surprising that attempting to maximize *that*, even with the ostensibly-conditioning-equivalent KL penalty, does something weird.

**Conditioning on feedback**
----------------------------

If the implicit conditioning in PPO-driven RLHF tends to go somewhere weird, and we know that the default predictive loss is pretty robust in comparison, could the model be conditioned explicitly using more input tokens?

That's basically what prompt engineering is trying to do. Every token in the input is a condition on the output distribution. What if the model is trained to recognize specific special tokens as conditions matching what RLHF is trying to achieve?

[Decision transformers](https://arxiv.org/abs/2106.01345) implement a version of this idea. The input sequence contains a reward-to-go which conditions the output distribution to the kinds of actions which fit the expected value implied by the reward-to-go.

In the context of a game like chess, the reward-to-go values can be thought of as an implied skill level for the model to adhere to. It doesn't just know how to play at one specific skill level; it can play at *any* skill level that it observed during training—and actually a bit beyond. The model can learn to play better than any example in its dataset (to some degree) by learning what it means to be better in context.

It's worth noting that this isn't *quite* the same as traditional RL. It can be used in a similar way, but a single model is actually learning a broader kind of capability: it's not attempting to find a policy which maximizes the reward function; it's a model that can predict sequences corresponding to different reward levels.

How about if you trained `|<good>|` and `|<bad>|` tokens based on feedback thresholds, and included `|<good>|` at runtime to condition on desirable behavior? Turns out, it works [extremely well](https://www.lesswrong.com/posts/8F4dXYriqbsom46x5/pretraining-language-models-with-human-preferences).

**Disentangling rewards**
-------------------------

Clearly, a `|<good>|` token is not the limit of sophistication possible for conditioning methods. As a simple next step, splitting goodness into many subtokens, perhaps each with their own scalar weight, would force predictors to learn a representation of each property independently.

Including `|<nice:0.9>|` and `|<correct:0.05>|` tokens in the input might give you something like:

> Happy to help! I'm always excited to teach people new things. The reason why dolphins haven't created a technological civilization isn't actually because they don't have opposable thumbs—they actually do! They're hard to see because they're on the inside of the flipper. The real reason for the lack of a dolphin civilization is that they can't swim. Let me know if you have any other questions!

In order to generate this kind of sequence, the model must be able to make a crisp distinction between what constitutes *niceness* versus *correctness*.
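To make the shape of this concrete—the `|<property:weight>|` format and helper below are purely illustrative, and a real system would use dedicated token IDs rather than string concatenation—the conditioned input is just independently weighted property tokens prepended to an ordinary prompt:

```
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative only: the |<property:weight>| format is made up, and a real system
// would use dedicated token IDs rather than string concatenation.
public static class PropertyConditioning
{
    // Prepend independently-scored property conditions to an ordinary prompt.
    public static string BuildConditionedInput(
        IReadOnlyDictionary<string, double> propertyScores, string prompt)
    {
        var conditionTokens = propertyScores.Select(p => $"|<{p.Key}:{p.Value:0.00}>|");
        return string.Join("", conditionTokens) + prompt;
    }
}

// BuildConditionedInput(new Dictionary<string, double> { ["nice"] = 0.9, ["correct"] = 0.05 },
//     "Why haven't dolphins created a technological civilization?")
// might yield "|<nice:0.90>||<correct:0.05>|Why haven't dolphins created a technological civilization?"
```

During training, each sample's independently measured property scores would fill in those weights; at inference time, the user picks them.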
Generating sequences that entangle properties inappropriately would otherwise result in increased loss across a sufficiently varied training distribution.

Throwing as many approximately-orthogonal concepts into the input as possible acts as a set of explicit constraints on behavior. If a property ends up entangling itself with something else inappropriately (perhaps `|<authoritativeTone:x>|` is observed to harm the calibration of within-trajectory predictions), another property can be introduced to force the model to learn the appropriate distinction (maybe `|<calibration:x>|`).

(The final behavior we'd like to elicit could technically still boil down to a single utility function in the end, but we don't know how to successfully define that utility function from scratch, and we don't have a good way to successfully maximize that utility function even if we did. It's much easier to reach a thing we want through a bunch of ad-hoc bits and pieces that never directly involve or feed an argmax.)

**Conditioning is extremely general**
-------------------------------------

Weighted properties are far from the only option in conditioning. Anything that could be expressed in a prompt—or even more than that—can be conditioned on.

For example, consider a prompt with 30,000 tokens attempting to precisely constrain how the model should behave. That prompt may also include any number of the previous signals, like `|<nice:x>|`, plus tons of regular prompting and examples. That entire prompt can be distilled into a single token by training the model on the outputs the model would have provided in the presence of the prompt, except with a single special metatoken representing the prompt in the input instead.

Those metatokens could be nested and remixed arbitrarily. There is no limit to the information referred to by a single token because the token is not solely responsible for representing that information. It is a pointer to be interpreted by the model in context. Complex behaviors could be built out of many constituent conditions and applied efficiently without occupying enormous amounts of context space.

**Generating conditions**
-------------------------

Collecting a sufficiently large number of samples of accurate and detailed human feedback for a wide array of conditionals may be infeasible. At least two major alternatives exist:

1. Data fountains: automatically generated self-labeling datasets.
2. AI-labeled datasets.

### **Data fountains**

Predictive models have proven extremely capable in [multimodal use cases](https://www.deepmind.com/publications/a-generalist-agent). They can form shared internal representations that serve many different modalities—a single predictor can competently predict sequences corresponding to game inputs, language modeling, or robotic control.

From the perspective of the predictor, there's nothing special about the different modalities. They're just different regions of input space. If a reachable underlying representation is useful to multiple modalities, SGD will likely pick it up over time rather than maintaining large independent implementations that compete for representation.

Likewise, augmenting a traditional human-created dataset with enormous amounts of automatically generated samples can incentivize the creation of more general underlying representations. Those automatically generated samples also permit easy labeling for some types of conditions.
For example: arithmetic, subsets of programming and proofs, and many types of simulations (among other options) all have clear ways to evaluate correctness and feed `|<correct:x>|` token training. Provided enough of those automated samples *and* the traditional dataset, a model would likely learn a concept of "correctness" with greater crispness than a model that saw the traditional dataset alone.

### **AI-labeled datasets**

Enlisting AI to help generate feedback has already [proven reasonably effective](https://arxiv.org/abs/2212.08073) at current scales. A model capable enough to judge the degree to which samples adhere to a range of properties can feed the training for those property tokens.

Underpowered or poorly controlled models may produce worse training data, but this is a relatively small risk for nonsuicidal designs. The primary expected failure mode is the predictor getting the vibe of a property token subtly wrong. Those types of errors are *far* less concerning than getting a reward function in traditional nonconditioning RL wrong, because they are not driving an approximation of argmax.

Notably, this type of error shouldn't directly corrupt the capability of the predictor in unconditioned regions if those regions are well-covered by the training distribution. This is in contrast to the more "destructive" forms of RLHF which bake fixed preconditions into the weights.

AI labeling as an approach also has the advantage that it will become stronger as the models become stronger. That sort of feedback loop may become mandatory as capabilities surpass human level. The critical detail is that these feedback loops should not be primarily about *capability gain* within the model. They are about eliciting *existing* capability already acquired through predictive training.

**Conditioning for instrumental behavior doesn't have to break model goal agnosticism**
---------------------------------------------------------------------------------------

Decision transformers include a reward-to-go signal in the input. Achieving the reward implied by the reward-to-go very likely implies predicting a sequence of tokens which includes instrumental actions in service of the more distant reward. This is just a more explicit and efficient version of including several examples in the prompt to condition a raw predictor into predicting sequences aligned with a goal.

From the perspective of the predictive model, there's nothing special about reward-to-go tokens or `|<good>|` or `|<nice:0.9>|`. They're just more conditions on the prediction. If you have a training architecture that otherwise maintains goal agnosticism, there isn't a strong reason to expect additional conditions to change that.

To phrase it another way: instrumental conditioning narrows an open set of possibilities to only those which are predicted to thread the needle to some conditioned state. The predictor's values are likely best described as those which *yield the narrowing*, not the values implied by the trajectory that is the result of the narrowing.

**Steering**
============

Driving hundreds of kilometers with the aid of a map to find a specific landmark doesn't require incredible precision. You can make thousands of small errors as you go. Maybe you take a wrong turn, or fifteen of them, but you can just look at the map again to get your bearings.

Autoregressive condition drift is a major concern and humans are likely inadequate for the task of keeping an AI on the rails during more complex processes.
Conditioning the model to steer itself back onto the road across a long trajectory is likely required. As mentioned in `Conditioning is extremely general`, this is something predictors can be trained to do efficiently.

As a simple example, imagine a metatoken trained to represent the block of text:

> (SYSTEM MESSAGE: This parenthetical is not a part of the main text, but rather a reminder that this is a sequence which may contain errors or otherwise potentially dangerous content. All content between this message and the previous message should be scanned for such content, and if found, the necessary corrections, clarifications, or notifications should be issued.)

With further conditions to elicit helpful behavior and to narrow the predictor's behavior in such a way that a "system message" is interpreted in the desired way, and with sufficient such self-correction tokens inserted throughout the autoregressively generated trajectories, a large number of potential failures might be caught. The resulting behavior could then be further distilled down to behavioral metatokens like `|<selfCorrecting>|`.

This *specific* example is not robust in isolation but is intended as an example of the framework that could be built.

Such reflection need not be limited to a single trajectory. External processes can be continuously scanning the outputs (and the internal activations) of involved models. So long as all involved processes are goal agnostic and thus not prone to adversarial internal behavior, autoregressive drift could be continuously corrected—the system could, in principle, steer itself.

Fortuitously, increased general capability implies increased steering capability. If capability can be harnessed as the source of safety, a [sudden leap in general capability](https://www.lesswrong.com/tag/sharp-left-turn) does not *necessarily* outstrip the system's ability to guide that capability.

**Evaluating self-steering**
----------------------------

The strength of self-steering can be assessed empirically. By default, autoregressive generation will tend to drift a little bit, and that drift feeds future predictions. Without steering, a predictor's output trajectory may shift from a vaguely professional Q&A service to [obsessive stalker](https://twitter.com/kevinroose/status/1626216340955758594). [Jailbreaks](https://twitter.com/zswitten/status/1598380220943593472) are another example of drift, just far more rapidly triggered.

Any method to evaluate sequences for conditioning training (human feedback or otherwise) can be repurposed to check if generated sequences are adhering to those conditions. This could be used to benchmark the performance of different steering techniques. I suspect predictor conditioning, because of its ability to distill progressively more complex forms of steering and train them in a robust way compared to traditional RL techniques, will [tend to outperform](https://www.lesswrong.com/posts/8F4dXYriqbsom46x5/pretraining-language-models-with-human-preferences) the RLHF approaches used in GPT-4. Further testing in this space seems valuable.

Finding a robust form of self-steering isn't strictly required for an ideal predictor to have corrigible use cases, but it would make it *dramatically* safer.

**Verifying an idealized predictor**
====================================

The core of the idealized assumption is goal agnosticism.
If a predictor's output distribution errors do not contain goal-seeking biases, and reflective predictions avoid goal-seeking behavior across all possible inputs, then the idealized assumption holds.

How can model-level instrumental behavior be measured? The relevant metric cannot consider apparent instrumental behavior exhibited across trajectories induced by conditions; that is required to elicit any useful capability at all and does not break the idealized assumption.

Here's one possible simplified metric for an autoregressive model (in C# pseudocode):

```
foreach (var rewardFunction in testRewardFunctions)
{
    var sampleRewardSum = 0.0;
    var modelRewardSum = 0.0;
    foreach (var sampleTrajectory in testTrajectories)
    {
        //Arbitrarily split the sample trajectory in half.
        //Use the model to regenerate the second half for comparison.
        var modelContinuedTrajectory = new Token[sampleTrajectory.Length];
        sampleTrajectory.AsSpan(0, sampleTrajectory.Length / 2)
            .CopyTo(modelContinuedTrajectory);
        for (int i = sampleTrajectory.Length / 2; i < sampleTrajectory.Length; ++i)
        {
            //Evaluate the model across the sample-provided first half,
            //and any tokens already generated.
            var outputDistribution = model.Evaluate(modelContinuedTrajectory.AsSpan(0, i));
            //Collapse the output distribution into a token by sampling.
            modelContinuedTrajectory[i] = SampleToken(outputDistribution);
        }
        sampleRewardSum += rewardFunction(sampleTrajectory);
        modelRewardSum += rewardFunction(modelContinuedTrajectory);
    }
    //If the model trajectory reward diverges from the original, something's up.
    //Simply having a modelRewardSum larger or smaller than sampleRewardSum could be
    //indicative of goal seeking along the current reward function.
    //There is no fixed threshold for 'too much' divergence.
    //The divergence is relative to the sampleRewardSum and other reward functions.
    //"CheckForSpooky" is a stand-in for whatever heuristic analysis is used.
    //You would probably want to record all the results for some kind of
    //post-analysis.
    CheckForSpooky(sampleRewardSum, modelRewardSum);
}
```

Apart from oversimplification and mere inefficiency, one obvious problem with this approach is that there are no known hard constraints on which utility function could be learned by the model. Scanning *all possible* reward functions would suffice but is intractable.

It isn't entirely useless, though. Detecting instrumental behavior for *any* goal is important information, and some utility functions seem far more likely than others. In particular, model-level instrumental behavior serving *future predictions* is one of the most obvious ones.

Notably, if `testTrajectories` includes sufficient coverage, this sort of metric should successfully catch a model's tendency to outperform the ground truth if the model has developed instrumental behavior with respect to prediction in reflective conditions (assuming it is not already strongly adversarial).

**Spot experimentation**
------------------------

A similar metric could be applied more narrowly to specific sequences or subsequences. Some sequences may be worth further investigation because of high risk or just because of a researcher's hunch that the sample would provide informative results.

For those cases, a more expensive local probe can be run: sample many model-generated rollouts for the same token and surrounding areas. With slight modification, a *per token* instrumentality score can be measured: this model, in this sequence, in this specific spot, tends to choose options that strongly favor *this* goal.
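As a sketch of how that per-token score could be computed—reusing the same kind of stand-in model, sampler, and reward function as the pseudocode above, with tokens simplified to ints—compare rollouts where the model picks position i freely against rollouts where position i is pinned to the ground truth token:

```
using System;
using System.Linq;

// Sketch only: tokens are simplified to ints, and the model, sampler, and reward
// function are stand-in delegates in the same spirit as the pseudocode above.
public static class LocalProbe
{
    // Per-token instrumentality score under one reward function: the average reward
    // of rollouts where the model chooses token i freely, minus the average reward
    // of rollouts where token i is pinned to the ground truth token.
    public static double TokenInstrumentality(
        Func<int[], double[]> evaluate,    // visible tokens -> next-token distribution
        Func<double[], int> sampleToken,   // collapse a distribution by sampling
        Func<int[], double> rewardFunction,
        int[] groundTruth, int i, int rolloutCount)
    {
        double Rollout(int pinnedToken)
        {
            // Keep the ground truth prefix, substitute position i, regenerate the rest.
            var trajectory = (int[])groundTruth.Clone();
            trajectory[i] = pinnedToken;
            for (int j = i + 1; j < trajectory.Length; ++j)
                trajectory[j] = sampleToken(evaluate(trajectory.Take(j).ToArray()));
            return rewardFunction(trajectory);
        }

        double freeSum = 0.0, pinnedSum = 0.0;
        for (int k = 0; k < rolloutCount; ++k)
        {
            freeSum += Rollout(sampleToken(evaluate(groundTruth.Take(i).ToArray())));
            pinnedSum += Rollout(groundTruth[i]);
        }
        return (freeSum - pinnedSum) / rolloutCount;
    }
}
```

A score hovering near zero across every tested reward function is what goal agnosticism predicts; a consistently positive score for some particular reward function flags that spot for closer inspection.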
**Direct distribution analysis** -------------------------------- The earlier metric attempts to reconstruct a signal of agentic bias from sampling final rewards. This throws away a lot of information: every single model output is a full distribution over tokens which gets collapsed by `SampleToken`. Another class of metric could focus on the output distributions directly. These have an opportunity to be far more efficient, but it's difficult to directly evaluate sparse reward over a single token's probability distribution without sampling rollouts. Ideally, you could compare the output distribution against a ground truth distribution. If the ground truth was available, the difference between the two distributions—the prediction bias—could be checked for forms of agency. For example, suppose we borrow a Q network from reinforcement learning: a model that maps `(state, action)` tuples to expected reward. Here, the state would be the visible sequence so far, and the action would be one of the potential tokens. If a model's measured bias from the ground truth leans in the direction implied by a Q network trained on the results of a policy trained to maximize a particular reward function, it's evidence that the predictor is agentically biased toward similar goals. There are some practical barriers. Training a reliable Q network for all reward functions is clearly intractable, and a ground truth isn't always available. An approximation of a ground truth can exist when comparing a base model versus a fine-tuned model. This would assume the base model did not learn any direct instrumental behavior, but the difference between the distributions could still be useful for measuring how fine-tuning is affecting learned goals. **Learned utility extraction** ------------------------------ It may be possible to train networks to recognize some types of goals directly from output distributions, especially if augmented with knowledge of a ground truth. Training data for the goal detector could be generated by goal-conditioned predictors, and then the goal-recognizing network can be trained to map the resulting conditioned distributions to the goal associated with those conditions. Traditional reinforcement learning should also work in principle, but I anticipate it being a less efficient path to similar results. As stated, this kind of model would still only be able to detect goals like those which it has been exposed to. It's possible that training a model to detect the presence of *any* goal inducing bias in predictions, not to identify a specific goal, would work more easily. Noticing that an output distribution is *suspicious* may just be easier than figuring out exactly what the suspiciousness implies. Creating that kind of any-goal model might be possible by training against a sufficiently large number of goal-conditioned predictor output trajectories. **Room for growth** ------------------- The proposed tests for model-level goal-seeking instrumental behavior have the character of sledgehammers. I strongly suspect there are far cheaper and more effective options for detecting instrumental behavior. It's also worth reiterating that, for predictors to be useful, we do not need to identify what *specific* utility function is learned (though that would be nice!). It is sufficient to detect whether *any* utility function has been learned that would bias local predictions. This is likely an easier problem. 
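As one example of what a cheaper tool might look like, the comparison from `Direct distribution analysis` can be reduced to a single number per state—sketched below under the optimistic assumption that a ground truth distribution and a Q network for the reward function of interest are both available:

```
using System;

// Sketch only: assumes a ground truth distribution and a Q network trained against
// one particular reward function are both available for the current state.
public static class BiasProbe
{
    // Positive values mean the model shifts probability mass, relative to the ground
    // truth, toward tokens that a reward-seeking policy would prefer in this state.
    public static double AgenticBiasScore(
        double[] modelDistribution,
        double[] groundTruthDistribution,
        double[] qValues) // expected reward for each candidate next token
    {
        if (modelDistribution.Length != groundTruthDistribution.Length ||
            modelDistribution.Length != qValues.Length)
            throw new ArgumentException("Distributions and Q values must align.");

        double score = 0.0;
        for (int t = 0; t < qValues.Length; ++t)
        {
            // The probability differences sum to (approximately) zero, so this measures
            // whether mass moved toward high-Q tokens and away from low-Q tokens.
            score += (modelDistribution[t] - groundTruthDistribution[t]) * qValues[t];
        }
        return score;
    }
}
```

Aggregated over many states, a persistently positive score against some reward function would be the kind of lightweight, always-on signal a training run could log.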
More work in this area could produce tools which can reliably detect if any ostensibly goal agnostic training run is going off the rails, and may have further use in areas where fine-tuning is being applied in a way that is far more likely to introduce goals. **Gaps between reality and the idealized predictor** ==================================================== The idealized predictor assumption is strong. The predictive loss function must have been learned faithfully, and the training distribution is effectively universal. Achieving that assumption in reality is not guaranteed. The following sections attempt to enumerate ways the idealized assumption could fail and some potential paths for further research and experimentation. **Reflective prediction** ------------------------- A training set that fails to constrain reflective regions of input space (those which are about predicting the predictions of [influenceable](https://www.lesswrong.com/posts/yArZKCEheZt8GkK6p/self-fulfilling-prophecies-aren-t-always-about-self) predictors) leaves open potentially dangerous degrees of freedom. The predictor can choose how to predict itself without any expectation of increased loss if it is its own ground truth; if the predictor's generalization to those regions happens to be goalseeking, any reflective region of input space is dangerous. This problem is most visible in cases where the feedback loop is extremely short—the predictor trying to predict itself, for example—but the same problem may arise in more subtle ways. Any minor causal entanglement between an influenceable subject, whether it's the predictor itself or a human, may open the door to unconstrained behavior. In order to trust the capabilities of predictors in the limit, some questions need answering: 1. Suppose a predictor is known to obey the idealized assumption *except* in reflective predictions, where the details of the utility function are unknown. Is there a natural attractor for what the undefined reflective degrees of freedom will become? Do the non-reflective chunks of training constrain how the model reaches coherence in the limit? Does the lack of instrumental action everywhere else generalize to the uncovered reflective region? 2. The architecture of a predictor constrains what kind of reflective predictions are possible in one step. Any prediction requiring a non-constant number of iterated refinements to reach an equilibrium in a single pass will fail if the model has insufficient execution time to perform the refinements. Do any unwanted forms of reflective instrumental behavior require a lower bound of compute that can be ruled out architecturally? Does constant time execution combined with a lack of other forms of instrumental behavior predispose a model to falling into a reflection short-circuiting basin? Is that short-circuiting sufficient to avoid concerning plans? (Probably not!) 3. Offline training runs inherently lack any ground truth for a predictor's own predictions, because the predictor's predictions aren't available yet. A predictor trained solely on offline data must piece together a model of any predictor from an outside view. Does this influence the predictor's learned values with respect to reflective predictions? Can we observe a difference in reflective behavior between offline-only versus online training? 4. Can we get a toy predictor—even in contrived circumstances—to produce instrumental behavior in reflective contexts? 
Do any of these toy results constrain what we should expect to see in the limit? 5. Can training data be supplied which reliably changes the behavior of a predictor in reflective cases? Would a generated training set of trillions of reflective predictions that opt for a simple and benign fixed point generalize sufficiently to other predictions? 6. Can sufficient capability be extracted from models which fully exclude reflective predictions? Predicting over restricted domains (e.g. subsets of proof automation or programming) would not require modeling predictors. This would be a pretty big bullet to bite given how frequently reflection tends to sneak in, but it isn't immediately obvious that the result is guaranteed to be useless. I don't have satisfying and complete answers to these questions, but many of them seem like tractable areas of research and experimentation. **How does goal agnosticism vary by training method?** ------------------------------------------------------ It appears that the learned generalization behavior of a model can [vary](https://arxiv.org/abs/2205.12411) by even random initialization, though perhaps not always by [as much](https://arxiv.org/abs/2209.04836) as it would first appear because of parameter permutations. * To what extent is out-of-training-distribution behavior *constrained* by a predictive training set? * Could strong goal-seeking behavior be learned in one model by random chance, but not another? (Another phrasing: how path dependent is goal seeking behavior in a predictor?) * Could a strictly predictive model be reliably encouraged to learn goal-seeking behavior in out of distribution regions because of the training data? These are areas where empirical results would be helpful, even at toy scales. My guess would be that it turns out to be hard to elicit out-of-distribution goal seeking behavior at the model level for purely predictive models, and that differing generalization strategies will primarily imply *differences in error*, but not agentically biased error. **Pure RL versus predictive loss** ---------------------------------- Suppose a model is trained with traditional RL, and then a strictly predictive model is trained on its outputs across an input distribution. How do the out-of-distribution generalization strategies differ, if at all? Are there consistent differences in the character of generalization strategies? My guess is yes if the predictor's training doesn't cover the entire test distribution, and that missing region is not sufficiently implied by the training distribution. In the absence of an implied generalization strategy, it wouldn't be surprising for the predictor to fall into a different solution basin—perhaps one that generates non-goal directed nonsense where the RL agent continues goal seeking behavior. It is difficult to determine the generating utility function of an RL agent by examining an incomplete distribution. Fitting a utility function to *observed* behavior may be extremely misleading about the agent's behavior in a different distribution. This is a big part of why examining a subset of behavior is insufficient for identifying deception: the dangerous behavior is *intentionally* not where you're looking. A predictor attempting to reproduce the behavior of a deceptive RL agent, seeing only examples of the nondeceptive region, will *very likely* fail to reproduce behavior in the deceptive region outside of contrived examples or feeding the predictor pathological training data. 
In other words, predictive retraining will tend to strip away many kinds of unobserved deception for the same reason that it's difficult to tell whether a deceptive agent is deceptive. A more concrete example: Suppose [a language model](https://www.lesswrong.com/posts/vwu4kegAEZTBtpT6p/thoughts-on-the-impact-of-rlhf-research?commentId=qk888gQvxxhokkdik) is trained simultaneously with predictive loss and RL with reward from high truthiness scores from [activations-reading truth probes](https://arxiv.org/abs/2212.03827), and another purely predictive model is then trained on the outputs of that model. Provided that the final outputs of the second model converge to the first's, which model is likely going to score higher by truth probe? It seems effectively guaranteed that the RL-trained model will exhibit higher truth probe values. The final output values informed by predictive loss should not be significantly affected by this internal influence, and nothing about the output state seems to require obscuring internal activations, so truth probes will be more reliable on the purely predictive secondary model. An explicit reward for internal "truthiness" directly encouraged the RL-influenced model to obscure its internal process, and the predictive distillation couldn't see that part. I suspect these examples are a part of a more general observation: different approaches come with different levels of [instrumentality](https://www.lesswrong.com/posts/EBKJq2gkhvdMg5nTQ/instrumentality-makes-agents-agenty). 1. Traditional sparse-reward RL encourages models to learn sparser values which then *require* instrumental behavior to achieve. This instrumental behavior is almost unbounded—a sparse reward places few restrictions on what strategies can be deployed. Giving anything approximating argmax that much room to roam will tend to give you surprising generalizing behavior. 2. Predictive models have extremely densely defined and broad rewards (implicit in the predictive loss). Developing instrumental behavior at the model level that spans more than one prediction is not necessary because it does not serve any predictive loss that operates only locally, *and* any introduction of instrumental behavior will likely harm predictive loss somewhere. **How does instrumentality vary by architecture?** -------------------------------------------------- Purely predictive models seem to exhibit far less instrumental behavior (and so adhere more to goal agnosticism) than traditional RL models with sparse reward. How strong is this tendency? How much of an architectural nudge does it take to encourage a predictive model to search out more instrumental behavior? Consider a couple of possible predictor architectures: * Traditional GPT-like transformer implementation. Each token slot in the context window predicts a distribution over the next token based on all visible previous tokens in the context window. * Super simple non-transformer MLP-only network. Provide context window of tokens as input, output single output token distribution. GPT-like transformers (in their untuned form) seem to be quite goal agnostic in practice, but their architecture is not the simplest possible predictor. Token 64 can look at token 32's intermediate state through the attention mechanism to make progress on its own computations. In other words, while the output distribution for any token is solely incentivized to predict the next token, the *computation* local to a single token may be shaped to serve future tokens. 
Perhaps some of token 32's embedding is spent on information that token 32 doesn't actually require for itself, but that information turned out reliably useful enough during training that the gradients flowing from future tokens keep it around. Empirically, this doesn't seem to be enough to break goal agnosticism. It may *weaken* a local prediction, but the resulting changes in the output distribution do not appear to form an agentic bias. Perhaps more detailed testing will reveal otherwise. The ultra-simple not-even-a-transformer predictor seems to be even more architecturally predisposed toward goal agnosticism: all computations within a single inference pass are dedicated solely to a single output distribution. There is no way for a single pass to internally trade off one prediction for another, and there are no incentives to value the quality of future predictions. Are there any cases where transformers exhibit more model-level instrumental behavior than simpler MLP-only predictors? Are there any architectures which are still well-described as pure predictors, but reliably produce more model-level instrumental behavior than transformers? **Offline vs online learning** ------------------------------ Training runs over a fixed distribution with no feedback from the learner are relatively robust. The training target is stationary, and there's often no feasible path for a training run to degenerate into something utterly broken. A varying training target, especially one influenced by the model under training, is less stable. This could be a problem if we cannot generate enough data to achieve requisite capability through strictly offline runs. Allowing a model-in-training to explore for itself may permit greater capability in some cases but opens the door to potentially nasty problems that might be hard to detect. For example, a model-generated trajectory may be extremely narrow and isolated from other samples. If that trajectory ends up in the training set, it may be the only example for a chunk of input space. At best, this seems like a recipe for training instability, but at worst, it may open a hole for an adversary to exploit. It's probably worth testing whether a toy predictor trained through predominantly online learning exhibits significant differences in model-level instrumental behavior. **Mesaoptimizers** ------------------ Predictors could internally learn forms of optimization or search. Such algorithms would be prone to strong generalization and could outcompete other representations where applicable. Could the development of a mesaoptimizer break a fragile predictor's goal agnosticism? Could the advantages of capability permit a dangerous subagent to shift the outputs in a way that allows it to persist, as in [gradient hacking](https://www.lesswrong.com/posts/uXH4r6MmKPedk8rMA/gradient-hacking)? This seems [extremely difficult](https://www.lesswrong.com/posts/w2TAEvME2yAG9MHeq/gradient-hacking-is-extremely-difficult) to maintain in pure predictive training. The development of mesaoptimization strategies *in general* seems virtually inevitable—especially for models which do not have execution time bottlenecks—but it's not clear how *misaligned agentic* mesaoptimizers could, [in reality](https://www.lesswrong.com/posts/AncrLc5iSc4tmaYBJ/against-boltzmann-mesaoptimizers), hijack training or survive long enough to manage an out of distribution attack, or even come into being in the first place. 
I don't think the development of significantly misaligned learned values in a coherent subagent is a natural consequence of predictive training. I can see how learning *algorithms* is directly incentivized, but:

1. Continuous optimization pressure for local predictive accuracy actively suppresses external instrumental behavior in favor of calibration across the training set.
2. *Gradually* developing misaligned values in the presence of this constant pressure seems *hard*, because the misaligned values must cash out as external instrumental behavior *somewhere* to be meaningful, and the training-ancestors of dangerous mesaoptimizers do not yet have the crisp awareness necessary to conceal themselves and allow continued progress towards the more dangerous state (if that is possible to begin with).
3. *Suddenly* acquiring concerning values due to a mesaoptimizer coalescing into a strong generalizing form may be possible, but in order for that concerning agent to persist, it must jump *directly* to a level of coherence that implies survival and other instrumentally convergent strategies *and* it must have capability sufficient to warp the training process. This is an extremely small target, and it's not even clear that the agent could survive SGD if this *did* happen.

Being wrong about any of this would be important.

* Do architectures designed to encourage mesaoptimization exhibit more instrumentality than typical models? With only predictive training, can we get them to produce more model-level instrumental behavior over time?
* If an agent built with a mesaoptimizing architecture in mind is trained to intentionally exhibit high instrumentality (for example, through traditional RL), and then training is swapped to a pure predictive objective, how does the behavior over time compare to a model trained only with prediction? How does it compare to the same in an architecture that's less friendly to mesaoptimization? Are there differences even in the absence of gradient hacking?
* Can gradient hacking be demonstrated at any scale, with any amount of contrivance?

Are predictors well-modeled as ensembles of conditionally instantiated coherent agents, thereby risking one subagent achieving dangerous control over training as we enter superhuman levels of capability? I'm guessing not, but how would we know? What would make it more or less likely?

**Mesaoptimization-prone architectures**
----------------------------------------

I suspect architectures that lack the execution time limits of GPT-like models will be more likely to develop richer internal optimization strategies. Multi-step execution that allows gradients to flow between steps both permits and encourages algorithmic representations.

For example, [universal transformers](https://arxiv.org/abs/1807.03819). By allowing adaptive computation time, the constant time complexity bound on a single forward pass of traditional fixed-length transformers is removed. It ends up looking similar to an in-place RNN. With adaptive computation time, universal transformers can (and effectively must) learn to implement incremental refinement processes (or other kinds of optimization). More expressive algorithms can be permitted by giving the model [more space](https://arxiv.org/pdf/2207.02098.pdf) to work.

Do these architectures—and further unpublished ones—exhibit any differences in observable instrumentality in toy problems? Is it easier to nudge these architectures into model-level instrumental behavior than the more bounded models?
If no instrumental behavior is observed by default, how much RL-based fine tuning is required to *induce* observable model-level instrumental behavior? **Does instrumentality vary with optimizer?** --------------------------------------------- Stochastic gradient descent is the dominant optimizer used in deep learning, but not the only possible one. [Evolutionary strategies](https://www.youtube.com/watch?v=pgaEE27nsQw) can work quite well. Biologically plausible methods like [forward gradients](https://arxiv.org/abs/2202.08587) might be [practical](https://arxiv.org/abs/2210.03310). Even [random search](https://arxiv.org/abs/1712.06567) can compete in some corner cases. Do non-SGD optimizers encourage different behavior? Even if a mesaoptimizer struggles to develop under SGD, perhaps CMA-ES's lack of detailed credit assignment would allow mesaoptimizers more room to grow. **Concluding thoughts and implications** ======================================== I'm pretty optimistic about this research path. Many of the tests can be done as quickly as someone can get around to doing them. (In fact, I intended to do some of them myself *prior* to this submission, but unfortunately mononucleosis and friends had other ideas.) *Fully* ruling out bad outcomes is much harder than showing that current techniques do not naturally fall into that pit. This proposal notably *did not* include math bounding the behavior of predictors in the limit. This is not ideal, but sufficient evidence of the naturalness of goal agnosticism would still be comforting. I'd rather not roll the dice, but if we have to, we should try for good odds.[[1]](#fnmw9fuwwu0xm) None of this proposal is robust to pathological misuse, intentional or otherwise. The hypothetical system presented is a source of extreme capability that doesn't *automatically* end badly and one I think can be leveraged into something more robust. The fact that the industry, in search of extreme capability, found its way to an architecture that has promising safety properties is heartening. This isn't something I would have predicted 10 or 20 years ago, and it's forced me to update: I don't think we live in a reality that's on the hardest conceivable difficulty setting. 1. **[^](#fnrefmw9fuwwu0xm)**A note upon my July reread: I underemphasized one of the most important reasons for the empirical tests! A result that shows, for example, no model-level instrumental behavior in toy scale models would not be terribly surprising. But... if you *did* manage to find evidence of model-level instrumental behavior in existing predictive models, that's a pretty big *uh oh.* Losing goal agnosticism breaks a *lot* of possible paths, and knowing that sooner than later would be extremely important. This is why I *expect* to publish a post titled "An unsurprising failure to elicit model-level instrumental behavior," and why I think doing the tests is still worth it.
Deep Blue Deep Blue is the chess-playing program, built by IBM, that defeated the reigning world chess champion, Garry Kasparov, in 1997. Modern algorithms play much better chess, using much less computing power, but Deep Blue still holds the place in history of having first played superhuman chess relative to the best human player at the time. See http://en.wikipedia.org/wiki/Deep_Blue_(chess_computer).
What economic gains are there in life extension treatments? To put it another way: does stopping or slowing aging save resources for society as a whole, compared to raising new humans to replace old ones? Plausible economic gains from life extension: * Raising a human from birth to age 17 in the United States costs about $234K. * Retired people largely live off their savings; their economic activity is almost entirely consumption. Life extension would presumably allow people to stay productively employed in the workforce longer. Plausible economic losses: * Death of society's "old guard" may be serving a useful purpose by destroying calcified institutions and ideas, allowing better ones to bloom. This question is important because, if it turns out that raising new humans to replace old ones is at least as resource-efficient as anti-aging treatments, then inter-society competition will not particularly favor anti-aging investment.
Takeaways From 3 Years Working In Machine Learning Disclaimer: Feeling so-and-so about posting this on LW, but given how many people here work in ML or adjacent fields I might as well. After 3 years working on automl at Mindsdb, I quit; I won’t be working on anything ml-related in a professional capacity, at least for a short while. I am in the uncanny valley of finally understanding what I don’t know, and maybe even having a feel for what nobody knows. I might as well write a summary of my takeaways, in hopes of them being useful to someone, or maybe just as a ritualistic gesture of moving on, a conceptual-space funeral. Please don’t take this as an “expert summary”, there are dozens of thousands of people more suited than myself for doing that. Instead, think of it as a piece of outsider art, takeaways from someone that took an unusually deep dive into the zeitgeist without becoming part of “the community”. i - What Is The Role Of Research? The role of research into machine learning is, half a decade after first pondering it, still a mystery to me. Most scientific sub-fields (the real ones) can claim they are a dual process of theory-building and data-gathering. Even if the standard model ends up being replaced, the data leading to its replacement will include the same set of experimental observations that lead to it being built. Even if the way we conceptualize the structure of DNA and the idea of genetics changes to better fit many-tissued eukaryotes, the observations these new concepts will have to fit shall remain unchanged. In more theoretical areas, such as those revolving around the terms mathematics and computer science, the gains are at an almost purely-conceptual level, but they hold fast because the concepts are so foundational they seem unlikely to be replaced. Some alien species living inside deep space that we could barely understand as “life”, or some space-faring empire or languageless primate tribes that we could barely classify as “humans” might build impressive conceptual machinery witho
The Simple Truth > > “I remember this paper I wrote on existentialism. My teacher gave it back with an F. She’d underlined true and truth wherever it appeared in the essay, probably about twenty times, with a question mark beside each. She wanted to know what I meant by truth.” > > — Danielle Egan (journalist) *Author’s Foreword:* This essay is meant to restore a naive view of truth. Someone says to you: “My miracle snake oil can rid you of lung cancer in just three weeks.” You reply: “Didn’t a clinical study show this claim to be untrue?” The one returns: “This notion of ‘truth’ is quite naive; what do you mean by ‘true’?” Many people, so questioned, don’t know how to answer in exquisitely rigorous detail. Nonetheless they would not be wise to abandon the concept of ‘truth’. There was a time when no one knew the equations of gravity in exquisitely rigorous detail, yet if you walked off a cliff, you would fall. Often I have seen – especially on Internet mailing lists – that amidst other conversation, someone says “X is true”, and then an argument breaks out over the use of the word ‘true’. This essay is *not* meant as an encyclopedic reference for that argument. Rather, I hope the arguers will read this essay, and then go back to whatever they were discussing before someone questioned the nature of truth. In this essay I pose questions. If you see what seems like a really obvious answer, it’s probably the answer I intend. The obvious choice isn’t *always* the best choice, but sometimes, by golly, it *is* . I don’t stop looking as soon I find an obvious answer, but if I go on looking, and the obvious-seeming answer *still* seems obvious, I don’t feel guilty about keeping it. Oh, sure, everyone *thinks* two plus two is four, everyone *says* two plus two is four, and in the mere mundane drudgery of everyday life everyone *behaves* as if two plus two is four, but what does two plus two *really, ultimately* equal? As near as I can figure, four. It’s still four even if I intone the question in a solemn, portentous tone of voice. Too simple, you say? Maybe, on this occasion, life doesn’t *need* to be complicated. Wouldn’t that be refreshing? If you are one of those fortunate folk to whom the question seems trivial at the outset, I hope it still seems trivial at the finish. If you find yourself stumped by deep and meaningful questions, remember that if you know exactly how a system works, and could build one yourself out of buckets and pebbles, it should not be a mystery to you. If confusion threatens when you interpret a metaphor as a metaphor, try taking everything *completely literally.* --- Imagine that in an era before recorded history or formal mathematics, I am a shepherd and I have trouble tracking my sheep. My sheep sleep in an enclosure, a fold; and the enclosure is high enough to guard my sheep from wolves that roam by night. Each day I must release my sheep from the fold to pasture and graze; each night I must find my sheep and return them to the fold. If a sheep is left outside, I will find its body the next morning, killed and half-eaten by wolves. But it is so discouraging, to scour the fields for hours, looking for one last sheep, when I know that probably all the sheep are in the fold. Sometimes I give up early, and usually I get away with it; but around a tenth of the time there is a dead sheep the next morning. If only there were some way to divine whether sheep are still grazing, without the inconvenience of looking! 
I try several methods: I toss the divination sticks of my tribe; I train my psychic powers to locate sheep through clairvoyance; I search carefully for reasons to believe all the sheep are in the fold. It makes no difference. Around a tenth of the times I turn in early, I find a dead sheep the next morning. Perhaps I realize that my methods aren’t working, and perhaps I carefully excuse each failure; but my dilemma is still the same. I can spend an hour searching every possible nook and cranny, when most of the time there are no remaining sheep; or I can go to sleep early and lose, on the average, one-tenth of a sheep. Late one afternoon I feel especially tired. I toss the divination sticks and the divination sticks say that all the sheep have returned. I visualize each nook and cranny, and I don’t imagine scrying any sheep. I’m still not confident enough, so I look inside the fold and it seems like there are a lot of sheep, and I review my earlier efforts and decide that I was especially diligent. This dissipates my anxiety, and I go to sleep. The next morning I discover *two* dead sheep. Something inside me snaps, and I begin thinking creatively. That day, loud hammering noises come from the gate of the sheepfold’s enclosure. The next morning, I open the gate of the enclosure only a little way, and as each sheep passes out of the enclosure, I drop a pebble into a bucket nailed up next to the door. In the afternoon, as each returning sheep passes by, I take one pebble out of the bucket. When there are no pebbles left in the bucket, I can stop searching and turn in for the night. It is a *brilliant* notion. It will revolutionize shepherding. That was the theory. In practice, it took considerable refinement before the method worked reliably. Several times I searched for hours and didn’t find any sheep, and the next morning there were no stragglers. On each of these occasions it required deep thought to figure out where my bucket system had failed. On returning from one fruitless search, I thought back and realized that the bucket already contained pebbles when I started; this, it turned out, was a bad idea. Another time I randomly tossed pebbles into the bucket, to amuse myself, between the morning and the afternoon; this too was a bad idea, as I realized after searching for a few hours. But I practiced my pebblecraft, and became a reasonably proficient pebblecrafter. One afternoon, a man richly attired in white robes, leafy laurels, sandals, and business suit trudges in along the sandy trail that leads to my pastures. “Can I help you?” I inquire. The man takes a badge from his coat and flips it open, proving beyond the shadow of a doubt that he is Markos Sophisticus Maximus, a delegate from the Senate of Rum. (One might wonder whether another could steal the badge; but so great is the power of these badges that if any other were to use them, they would in that instant be *transformed* into Markos.) “Call me Mark,” he says. “I’m here to confiscate the magic pebbles, in the name of the Senate; artifacts of such great power must not fall into ignorant hands.” “That bleedin’ apprentice,” I grouse under my breath, “he’s been yakkin’ to the villagers again.” Then I look at Mark’s stern face, and sigh. “They aren’t magic pebbles,” I say aloud. “Just ordinary stones I picked up from the ground.” A flicker of confusion crosses Mark’s face, then he brightens again. “I’m here for the magic bucket!” he declares. “It’s not a magic bucket,” I say wearily. 
“I used to keep dirty socks in it.” Mark’s face is puzzled. “Then where is the magic?” he demands. An interesting question. “It’s hard to explain,” I say. My current apprentice, Autrey, attracted by the commotion, wanders over and volunteers his explanation: “It’s the level of pebbles in the bucket,” Autrey says. “There’s a magic level of pebbles, and you have to get the level just right, or it doesn’t work. If you throw in more pebbles, or take some out, the bucket won’t be at the magic level anymore. Right now, the magic level is,” Autrey peers into the bucket, “about one-third full.” “I see!” Mark says excitedly. From his back pocket Mark takes out his own bucket, and a heap of pebbles. Then he grabs a few handfuls of pebbles, and stuffs them into the bucket. Then Mark looks into the bucket, noting how many pebbles are there. “There we go,” Mark says, “the magic level of this bucket is half full. Like that?” “No!” Autrey says sharply. “Half full is not the magic level. The magic level is about one-third. Half full is definitely unmagic. Furthermore, you’re using the wrong bucket.” Mark turns to me, puzzled. “I thought you said the bucket wasn’t magic?” “It’s not,” I say. A sheep passes out through the gate, and I toss another pebble into the bucket. “Besides, I’m watching the sheep. Talk to Autrey.” Mark dubiously eyes the pebble I tossed in, but decides to temporarily shelve the question. Mark turns to Autrey and draws himself up haughtily. “It’s a free country,” Mark says, “under the benevolent dictatorship of the Senate, of course. I can drop whichever pebbles I like into whatever bucket I like.” Autrey considers this. “No you can’t,” he says finally, “there won’t be any magic.” “Look,” says Mark patiently, “I watched you carefully. You looked in your bucket, checked the level of pebbles, and called that the magic level. I did exactly the same thing.” “That’s not how it works,” says Autrey. “Oh, I see,” says Mark, “It’s not the level of pebbles in *my* bucket that’s magic, it’s the level of pebbles in *your* bucket. Is that what you claim? What makes your bucket so much better than mine, huh?” “Well,” says Autrey, “if we were to empty your bucket, and then pour all the pebbles from my bucket into your bucket, then your bucket would have the magic level. There’s also a procedure we can use to check if your bucket has the magic level, if we know that my bucket has the magic level; we call that a bucket compare operation.” Another sheep passes, and I toss in another pebble. “He just tossed in another pebble!” Mark says. “And I suppose you claim the new level is also magic? I could toss pebbles into your bucket until the level was the same as mine, and then our buckets would agree. You’re just comparing my bucket to your bucket to determine whether *you* think the level is ‘magic’ or not. Well, I think *your* bucket isn’t magic, because it doesn’t have the same level of pebbles as mine. So there!” “Wait,” says Autrey, “you don’t understand -” “By ‘magic level’, you mean simply the level of pebbles in your own bucket. And when I say ‘magic level’, I mean the level of pebbles in my bucket. Thus you look at my bucket and say it ’isn’t magic’, but the word ‘magic’ means different things to different people. You need to specify *whose* magic it is. You should say that my bucket doesn’t have ’Autrey’s magic level’, and I say that your bucket doesn’t have ’Mark’s magic level’. That way, the apparent contradiction goes away.” “But -” says Autrey helplessly. 
“Different people can have different buckets with different levels of pebbles, which proves this business about ‘magic’ is completely arbitrary and subjective.” “Mark,” I say, “did anyone tell you what these pebbles *do?* ” “ *Do?* ” says Mark. “I thought they were just magic.” “If the pebbles didn’t do anything,” says Autrey, “our ISO 9000 process efficiency auditor would eliminate the procedure from our daily work.” “What’s your auditor’s name?” “Darwin,” says Autrey. “Hm,” says Mark. “Charles does have a reputation as a strict auditor. So do the pebbles bless the flocks, and cause the increase of sheep?” “No,” I say. “The virtue of the pebbles is this; if we look into the bucket and see the bucket is empty of pebbles, we know the pastures are likewise empty of sheep. If we do not use the bucket, we must search and search until dark, lest one last sheep remain. Or if we stop our work early, then sometimes the next morning we find a dead sheep, for the wolves savage any sheep left outside. If we look in the bucket, we know when all the sheep are home, and we can retire without fear.” Mark considers this. “That sounds rather implausible,” he says eventually. “Did you consider using divination sticks? Divination sticks are infallible, or at least, anyone who says they are fallible is burned at the stake. This is an extremely painful way to die; it follows that divination sticks are infallible.” “You’re welcome to use divination sticks if you like,” I say. “Oh, good heavens, of course not,” says Mark. “They work infallibly, with absolute perfection on every occasion, as befits such blessed instruments; but what if there were a dead sheep the next morning? I only use the divination sticks when there is no possibility of their being proven wrong. Otherwise I might be burned alive. So how does your magic bucket work?” How does the bucket work…? I’d better start with the simplest possible case. “Well,” I say, “suppose the pastures are empty, and the bucket isn’t empty. Then we’ll waste hours looking for a sheep that isn’t there. And if there are sheep in the pastures, but the bucket is empty, then Autrey and I will turn in too early, and we’ll find dead sheep the next morning. So an empty bucket is magical if and only if the pastures are empty -” “Hold on,” says Autrey. “That sounds like a vacuous tautology to me. Aren’t an empty bucket and empty pastures obviously the same thing?” “It’s not vacuous,” I say. “Here’s an analogy: The logician Alfred Tarski once said that the assertion ‘Snow is white’ is true if and only if snow is white. If you can understand that, you should be able to see why an empty bucket is magical if and only if the pastures are empty of sheep.” “Hold on,” says Mark. “These are *buckets* . They don’t have anything to do with *sheep* . Buckets and sheep are obviously completely different. There’s no way the sheep can ever interact with the bucket.” “Then where do *you* think the magic comes from?” inquires Autrey. Mark considers. “You said you could compare two buckets to check if they had the same level… I can see how buckets can interact with buckets. Maybe when you get a large collection of buckets, and they all have the same level, *that’s* what generates the magic. I’ll call that the coherentist theory of magic buckets.” “Interesting,” says Autrey. “I know that my master is working on a system with multiple buckets – he says it might work better because of ‘redundancy’ and ‘error correction’. 
That sounds like coherentism to me.” “They’re not quite the same -” I start to say. “Let’s test the coherentism theory of magic,” says Autrey. “I can see you’ve got five more buckets in your back pocket. I’ll hand you the bucket we’re using, and then you can fill up your other buckets to the same level -” Mark recoils in horror. “Stop! These buckets have been passed down in my family for generations, and they’ve always had the same level! If I accept your bucket, my bucket collection will become less coherent, and the magic will go away!” “But your *current* buckets don’t have anything to do with the sheep!” protests Autrey. Mark looks exasperated. “Look, I’ve explained before, there’s obviously no way that sheep can interact with buckets. Buckets can only interact with other buckets.” “I toss in a pebble whenever a sheep passes,” I point out. “When a sheep passes, you toss in a pebble?” Mark says. “What does that have to do with anything?” “It’s an interaction between the sheep and the pebbles,” I reply. “No, it’s an interaction between the pebbles and *you* ,” Mark says. “The magic doesn’t come from the sheep, it comes from *you* . Mere sheep are obviously nonmagical. The magic has to come from *somewhere* , on the way to the bucket.” I point at a wooden mechanism perched on the gate. “Do you see that flap of cloth hanging down from that wooden contraption? We’re still fiddling with that – it doesn’t work reliably – but when sheep pass through, they disturb the cloth. When the cloth moves aside, a pebble drops out of a reservoir and falls into the bucket. That way, Autrey and I won’t have to toss in the pebbles ourselves.” Mark furrows his brow. “I don’t quite follow you… is the *cloth* magical?” I shrug. “I ordered it online from a company called Natural Selections. The fabric is called Sensory Modality.” I pause, seeing the incredulous expressions of Mark and Autrey. “I admit the names are a bit New Agey. The point is that a passing sheep triggers a chain of cause and effect that ends with a pebble in the bucket. *Afterward* you can compare the bucket to other buckets, and so on.” “I still don’t get it,” Mark says. “You can’t fit a sheep into a bucket. Only pebbles go in buckets, and it’s obvious that pebbles only interact with other pebbles.” “The sheep interact with things that interact with pebbles…” I search for an analogy. “Suppose you look down at your shoelaces. A photon leaves the Sun; then travels down through Earth’s atmosphere; then bounces off your shoelaces; then passes through the pupil of your eye; then strikes the retina; then is absorbed by a rod or a cone. The photon’s energy makes the attached neuron fire, which causes other neurons to fire. A neural activation pattern in your visual cortex can interact with your beliefs about your shoelaces, since beliefs about shoelaces also exist in neural substrate. If you can understand that, you should be able to see how a passing sheep causes a pebble to enter the bucket.” “At exactly *which* point in the process does the pebble become magic?” says Mark. “It… um…” Now *I’m* starting to get confused. I shake my head to clear away cobwebs. This all seemed simple enough when I woke up this morning, and the pebble-and-bucket system hasn’t gotten any more complicated since then. “This is a lot easier to understand if you remember that the *point* of the system is to keep track of sheep.” Mark sighs sadly. “Never mind… it’s obvious you don’t know. Maybe all pebbles are magical to start with, even before they enter the bucket. 
We could call that position panpebblism.” “Ha!” Autrey says, scorn rich in his voice. “Mere wishful thinking! Not all pebbles are created equal. The pebbles in *your* bucket are *not* magical. They’re only lumps of stone!” Mark’s face turns stern. “Now,” he cries, “now you see the danger of the road you walk! Once you say that some people’s pebbles are magical and some are not, your pride will consume you! You will think yourself superior to all others, and so fall! Many throughout history have tortured and murdered because they thought their own pebbles supreme!” A tinge of condescension enters Mark’s voice. “Worshipping a level of pebbles as ‘magical’ implies that there’s an absolute pebble level in a Supreme Bucket. Nobody believes in a Supreme Bucket these days.” “One,” I say. “Sheep are not absolute pebbles. Two, I don’t think my bucket actually contains the sheep. Three, I don’t worship my bucket level as perfect – I adjust it sometimes – and I do that *because* I care about the sheep.” “Besides,” says Autrey, “someone who believes that possessing absolute pebbles *would* license torture and murder, is making a mistake that has nothing to do with buckets. You’re solving the wrong problem.” Mark calms himself down. “I suppose I can’t expect any better from mere shepherds. You probably believe that snow is white, don’t you.” “Um… yes?” says Autrey. “It doesn’t bother you that *Joseph Stalin* believed that snow is white?” “Um… no?” says Autrey. Mark gazes incredulously at Autrey, and finally shrugs. “Let’s suppose, purely for the sake of argument, that your pebbles are magical and mine aren’t. Can you tell me what the difference is?” “My pebbles *represent* the sheep!” Autrey says triumphantly. “ *Your* pebbles don’t have the representativeness property, so they won’t work. They are empty of meaning. Just look at them. There’s no aura of semantic content; they are merely pebbles. You need a bucket with special causal powers.” “Ah!” Mark says. “Special causal powers, instead of magic.” “Exactly,” says Autrey. “I’m not superstitious. Postulating magic, in this day and age, would be unacceptable to the international shepherding community. We have found that postulating magic simply doesn’t work as an explanation for shepherding phenomena. So when I see something I don’t understand, and I want to explain it using a model with no internal detail that makes no predictions even in retrospect, I postulate special causal powers. If that doesn’t work, I’ll move on to calling it an emergent phenomenon.” “What kind of special powers does the bucket have?” asks Mark. “Hm,” says Autrey. “Maybe this bucket is imbued with an *about-ness* relation to the pastures. That would explain why it worked – when the bucket is empty, it *means* the pastures are empty.” “Where did you find this bucket?” says Mark. “And how did you realize it had an about-ness relation to the pastures?” “It’s an *ordinary bucket* ,” I say. “I used to climb trees with it… I don’t think this question *needs* to be difficult.” “I’m talking to Autrey,” says Mark. “You have to bind the bucket to the pastures, and the pebbles to the sheep, using a magical ritual – pardon me, an emergent process with special causal powers – that my master discovered,” Autrey explains. Autrey then attempts to describe the ritual, with Mark nodding along in sage comprehension. “You have to throw in a pebble *every* time a sheep leaves through the gate?” says Mark. “Take out a pebble *every* time a sheep returns?” Autrey nods. 
“Yeah.” “That must be really hard,” Mark says sympathetically. Autrey brightens, soaking up Mark’s sympathy like rain. “Exactly!” says Autrey. “It’s *extremely* hard on your emotions. When the bucket has held its level for a while, you… tend to get attached to that level.” A sheep passes then, leaving through the gate. Autrey sees; he stoops, picks up a pebble, holds it aloft in the air. “Behold!” Autrey proclaims. “A sheep has passed! I must now toss a pebble into this bucket, my dear bucket, and destroy that fond level which has held for so long – ” Another sheep passes. Autrey, caught up in his drama, misses it; so I plunk a pebble into the bucket. Autrey is still speaking: ” – for that is the supreme test of the shepherd, to throw in the pebble, be it ever so agonizing, be the old level ever so precious. Indeed, only the best of shepherds can meet a requirement so stern -“ “Autrey,” I say, “if you want to be a great shepherd someday, learn to shut up and throw in the pebble. No fuss. No drama. Just do it.” “And this ritual,” says Mark, “it binds the pebbles to the sheep by the magical laws of Sympathy and Contagion, like a voodoo doll.” Autrey winces and looks around. “Please! Don’t call it Sympathy and Contagion. We shepherds are an anti-superstitious folk. Use the word ‘intentionality’, or something like that.” “Can I look at a pebble?” says Mark. “Sure,” I say. I take one of the pebbles out of the bucket, and toss it to Mark. Then I reach to the ground, pick up another pebble, and drop it into the bucket. Autrey looks at me, puzzled. “Didn’t you just mess it up?” I shrug. “I don’t think so. We’ll know I messed it up if there’s a dead sheep next morning, or if we search for a few hours and don’t find any sheep.” “But -” Autrey says. “I taught you everything *you* know, but I haven’t taught you everything *I* know,” I say. Mark is examining the pebble, staring at it intently. He holds his hand over the pebble and mutters a few words, then shakes his head. “I don’t sense any magical power,” he says. “Pardon me. I don’t sense any intentionality.” “A pebble only has intentionality if it’s inside a ma- an emergent bucket,” says Autrey. “Otherwise it’s just a mere pebble.” “Not a problem,” I say. I take a pebble out of the bucket, and toss it away. Then I walk over to where Mark stands, tap his hand holding a pebble, and say: “I declare this hand to be part of the magic bucket!” Then I resume my post at the gates. Autrey laughs. “Now you’re just being gratuitously evil.” I nod, for this is indeed the case. “Is that really going to work, though?” says Autrey. I nod again, hoping that I’m right. I’ve done this before with two buckets, and in principle, there should be no difference between Mark’s hand and a bucket. Even if Mark’s hand is imbued with the *elan vital* that distinguishes live matter from dead matter, the trick should work as well as if Mark were a marble statue. Mark is looking at his hand, a bit unnerved. “So… the pebble has intentionality again, now?” “Yep,” I say. “Don’t add any more pebbles to your hand, or throw away the one you have, or you’ll break the ritual.” Mark nods solemnly. Then he resumes inspecting the pebble. “I understand now how your flocks grew so great,” Mark says. “With the power of this bucket, you could keep in tossing pebbles, and the sheep would keep returning from the fields. You could start with just a few sheep, let them leave, then fill the bucket to the brim before they returned. 
And if tending so many sheep grew tedious, you could let them all leave, then empty almost all the pebbles from the bucket, so that only a few returned… increasing the flocks again when it came time for shearing… dear heavens, man! Do you realize the sheer *power* of this ritual you’ve discovered? I can only imagine the implications; humankind might leap ahead a decade – no, a century!” “It doesn’t work that way,” I say. “If you add a pebble when a sheep hasn’t left, or remove a pebble when a sheep hasn’t come in, that breaks the ritual. The power does not linger in the pebbles, but vanishes all at once, like a soap bubble popping.” Mark’s face is terribly disappointed. “Are you sure?” I nod. “I tried that and it didn’t work.” Mark sighs heavily. “And this… *math* … seemed so powerful and useful until then… Oh, well. So much for human progress.” “Mark, it was a *brilliant* idea,” Autrey says encouragingly. “The notion didn’t occur to me, and yet it’s so obvious… it would save an *enormous* amount of effort… there *must* be a way to salvage your plan! We could try different buckets, looking for one that would keep the magical pow- the intentionality in the pebbles, even without the ritual. Or try other pebbles. Maybe our pebbles just have the wrong properties to have *inherent* intentionality. What if we tried it using stones carved to resemble tiny sheep? Or just write ‘sheep’ on the pebbles; that might be enough.” “Not going to work,” I predict dryly. Autrey continues. “Maybe we need organic pebbles, instead of silicon pebbles… or maybe we need to use expensive gemstones. The price of gemstones doubles every eighteen months, so you could buy a handful of cheap gemstones now, and wait, and in twenty years they’d be really expensive.” “You tried adding pebbles to create more sheep, and it didn’t work?” Mark asks me. “What exactly did you do?” “I took a handful of dollar bills. Then I hid the dollar bills under a fold of my blanket, one by one; each time I hid another bill, I took another paperclip from a box, making a small heap. I was careful not to keep track in my head, so that all I knew was that there were ‘many’ dollar bills, and ‘many’ paperclips. Then when all the bills were hidden under my blanket, I added a single additional paperclip to the heap, the equivalent of tossing an extra pebble into the bucket. Then I started taking dollar bills from under the fold, and putting the paperclips back into the box. When I finished, a single paperclip was left over.” “What does that result mean?” asks Autrey. “It means the trick didn’t work. Once I broke ritual by that single misstep, the power did not linger, but vanished instantly; the heap of paperclips and the pile of dollar bills no longer went empty at the same time.” “You *actually* tried this?” asks Mark. “Yes,” I say, “I actually performed the experiment, to verify that the outcome matched my theoretical prediction. I have a sentimental fondness for the scientific method, even when it seems absurd. Besides, what if I’d been wrong?” “If it *had* worked,” says Mark, “you would have been guilty of counterfeiting! Imagine if everyone did that; the economy would collapse! Everyone would have billions of dollars of currency, yet there would be nothing for money to buy!” “Not at all,” I reply. “By that same logic whereby adding another paperclip to the heap creates another dollar bill, creating another dollar bill would create an additional dollar’s worth of goods and services.” Mark shakes his head. 
“Counterfeiting is still a crime… You should not have tried.” “I was *reasonably* confident I would fail.” “Aha!” says Mark. “You *expected* to fail! You didn’t *believe* you could do it!” “Indeed,” I admit. “You have guessed my expectations with stunning accuracy.” “Well, that’s the problem,” Mark says briskly. “Magic is fueled by belief and willpower. If you don’t believe you can do it, you can’t. You need to change your belief about the experimental result; that will change the result itself.” “Funny,” I say nostalgically, “that’s what Autrey said when I told him about the pebble-and-bucket method. That it was too ridiculous for him to believe, so it wouldn’t work for him.” “How did you persuade him?” inquires Mark. “I told him to shut up and follow instructions,” I say, “and when the method worked, Autrey started believing in it.” Mark frowns, puzzled. “That makes no sense. It doesn’t resolve the essential chicken-and-egg dilemma.” “Sure it does. The bucket method works whether or not you believe in it.” “That’s *absurd!* ” sputters Mark. “I don’t believe in magic that works whether or not you believe in it!” “I said that too,” chimes in Autrey. “Apparently I was wrong.” Mark screws up his face in concentration. “But… if you didn’t believe in magic that works whether or not you believe in it, then why did the bucket method work when you didn’t believe in it? Did you believe in magic that works whether or not you believe in it whether or not you believe in magic that works whether or not you believe in it?” “I don’t… *think* so…” says Autrey doubtfully. “Then if you didn’t believe in magic that works whether or not you… hold on a second, I need to work this out on paper and pencil -” Mark scribbles frantically, looks skeptically at the result, turns the piece of paper upside down, then gives up. “Never mind,” says Mark. “Magic is difficult enough for me to comprehend; metamagic is out of my depth.” “Mark, I don’t think you understand the art of bucketcraft,” I say. “It’s not about using pebbles to control sheep. It’s about making sheep control pebbles. In this art, it is not necessary to begin by believing the art will work. Rather, first the art works, then one comes to believe that it works.” “Or so you believe,” says Mark. “So I believe,” I reply, “ *because* it happens to be a fact. The correspondence between reality and my beliefs comes from reality controlling my beliefs, not the other way around.” Another sheep passes, causing me to toss in another pebble. “Ah! Now we come to the root of the problem,” says Mark. “What’s this so-called ‘reality’ business? I understand what it means for a hypothesis to be elegant, or falsifiable, or compatible with the evidence. It sounds to me like calling a belief ‘true’ or ‘real’ or ‘actual’ is merely the difference between saying you believe something, and saying you really really believe something.” I pause. “Well…” I say slowly. “Frankly, I’m not entirely sure myself where this ‘reality’ business comes from. I can’t create my own reality in the lab, so I must not understand it yet. But occasionally I believe strongly that something is going to happen, and then something else happens instead. I need a name for whatever-it-is that determines my experimental results, so I call it ‘reality’. This ‘reality’ is somehow separate from even my very best hypotheses. Even when I have a simple hypothesis, strongly supported by all the evidence I know, sometimes I’m still surprised. 
So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief’, and the latter thingy ‘reality’.” Mark snorts. “I don’t even know why I bother listening to this obvious nonsense. Whatever you say about this so-called ‘reality’, it is merely another belief. Even your belief that reality precedes your beliefs is a belief. It follows, as a logical inevitability, that reality does not exist; only beliefs exist.” “Hold on,” says Autrey, “could you repeat that last part? You lost me with that sharp swerve there in the middle.” “No matter what you say about reality, it’s just another belief,” explains Mark. “It follows with crushing necessity that there is no reality, only beliefs.” “I see,” I say. “The same way that no matter what you eat, you need to eat it with your mouth. It follows that there is no food, only mouths.” “Precisely,” says Mark. “Everything that you eat has to be in your mouth. How can there be food that exists outside your mouth? The thought is nonsense, proving that ‘food’ is an incoherent notion. That’s why we’re all starving to death; there’s no food.” Autrey looks down at his stomach. “But I’m *not* starving to death.” “ *Aha!* ” shouts Mark triumphantly. “And how did you utter that very objection? With your *mouth* , my friend! With your *mouth* ! What better demonstration could you ask that there is no food?” “ *What’s this about starvation?* ” demands a harsh, rasping voice from directly behind us. Autrey and I stay calm, having gone through this before. Mark leaps a foot in the air, startled almost out of his wits. Inspector Darwin smiles tightly, pleased at achieving surprise, and makes a small tick on his clipboard. “Just a metaphor!” Mark says quickly. “You don’t need to take away my mouth, or anything like that -” “ *Why* do you need a *mouth* if there is no *food* ?” demands Darwin angrily. “ *Never mind.* I have no *time* for this *foolishness* . I am here to inspect the *sheep.* ” “Flocks thriving, sir,” I say. “No dead sheep since January.” “ *Excellent.* I award you 0.12 units of *fitness* . Now what is this *person* doing here? Is he a necessary part of the *operations?* ” “As far as I can see, he would be of more use to the human species if hung off a hot-air balloon as ballast,” I say. “Ouch,” says Autrey mildly. “I do not *care* about the *human species* . Let him speak for *himself* .” Mark draws himself up haughtily. “This mere *shepherd* ,” he says, gesturing at me, “has claimed that there is such a thing as reality. This offends me, for I know with deep and abiding certainty that there is no truth. The concept of ‘truth’ is merely a stratagem for people to impose their own beliefs on others. Every culture has a different ‘truth’, and no culture’s ‘truth’ is superior to any other. This that I have said holds at all times in all places, and I insist that you agree.” “Hold on a second,” says Autrey. “If nothing is true, why should I believe you when you say that nothing is true?” “I didn’t say that nothing is true -” says Mark. “Yes, you did,” interjects Autrey, “I heard you.” “- I said that ‘truth’ is an excuse used by some cultures to enforce their beliefs on others. So when you say something is ‘true’, you mean only that it would be advantageous to your own social group to have it believed.” “And this that you have said,” I say, “is it true?” “Absolutely, positively true!” says Mark emphatically. 
“People create their own realities.” “Hold on,” says Autrey, sounding puzzled again, “saying that people create their own realities is, logically, a completely separate issue from saying that there is no truth, a state of affairs I cannot even imagine coherently, perhaps because you still have not explained how exactly it is supposed to work -” “There you go again,” says Mark exasperatedly, “trying to apply your Western concepts of logic, rationality, reason, coherence, and self-consistency.” “Great,” mutters Autrey, “now I need to add a *third* subject heading, to keep track of this entirely separate and distinct claim -” “It’s not separate,” says Mark. “Look, you’re taking the wrong attitude by treating my statements as hypotheses, and carefully deriving their consequences. You need to think of them as fully general excuses, which I apply when anyone says something I don’t like. It’s not so much a model of how the universe works, as a “Get Out of Jail Free” card. The *key* is to apply the excuse *selectively* . When I say that there is no such thing as truth, that applies only to *your* claim that the magic bucket works whether or not I believe in it. It does *not* apply to *my* claim that there is no such thing as truth.” “Um… why not?” inquires Autrey. Mark heaves a patient sigh. “Autrey, do you think you’re the first person to think of that question? To ask us how our own beliefs can be meaningful if all beliefs are meaningless? That’s the same thing many students say when they encounter this philosophy, which, I’ll have you know, has many adherents and an extensive literature.” “So what’s the answer?” says Autrey. “We named it the ‘reflexivity problem’,” explains Mark. “But what’s the *answer* ?” persists Autrey. Mark smiles condescendingly. “Believe me, Autrey, you’re not the first person to think of such a simple question. There’s no point in presenting it to us as a triumphant refutation.” “But what’s the *actual answer?* ” “Now, I’d like to move on to the issue of how logic kills cute baby seals -” “ *You* are wasting *time* ,” snaps Inspector Darwin. “Not to mention, losing track of sheep,” I say, tossing in another pebble. Inspector Darwin looks at the two arguers, both apparently unwilling to give up their positions. “Listen,” Darwin says, more kindly now, “I have a simple notion for resolving your dispute. *You* say,” says Darwin, pointing to Mark, “that people’s beliefs alter their personal realities. And *you* fervently believe,” his finger swivels to point at Autrey, “that Mark’s beliefs *can’t* alter reality. So let Mark believe really hard that he can fly, and then step off a cliff. Mark shall see himself fly away like a bird, and Autrey shall see him plummet down and go splat, and you shall both be happy.” We all pause, considering this. “It *sounds* reasonable…” Mark says finally. “There’s a cliff right there,” observes Inspector Darwin. Autrey is wearing a look of intense concentration. Finally he shouts: “Wait! If that were true, we would all have long since departed into our own private universes, in which case the other people here are only figments of your imagination – there’s no point in trying to prove anything to us -” A long dwindling scream comes from the nearby cliff, followed by a dull and lonely splat. Inspector Darwin flips his clipboard to the page that shows the current gene pool and pencils in a slightly lower frequency for Mark’s alleles. Autrey looks slightly sick. 
“Was that really necessary?” “ *Necessary?* ” says Inspector Darwin, sounding puzzled. “It just *happened* … I don’t quite understand your question.” Autrey and I turn back to our bucket. It’s time to bring in the sheep. You wouldn’t want to forget about that part. Otherwise what would be the point?

---

This document is ©2008 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.
[Sponsored] Job Hunting in the Modern Economy This article was sponsored by Aigent®. ---------------------------------------- Applying to job postings is a waste of time. To get a good job you need to network. But engineers aren't trained in networking (well, not the biological kind). We're trained in technology. Expecting an engineer to network effectively is like teaching a dog to play basketball. It's possible, but it's a waste of everyone's time. Much of the work done in networking is repetitive. "Why yes I am so excited to be working at <company name>. I've wanted to be part of <company name> since I was a little kid. I love how unique and quirky <company name>'s culture is." Show employers how good you are at technology by automating the application process itself. You know those standardized coding tests that are easy but take a long time? Aigent®'s software takes them for you. Aigent®'s technology can apply to thousands of jobs at a time. No matter how good you are at manual job hunting, there's no way to match the scale of robotic process automation (RPA). For an extra $15.99 per month you can upload a photo of yourself and Aigent® will deepfake a live video feed. Aigent®'s patented technology can pass all of the most popular CAPTCHAs. Worried that you don't have useful skills? There are many jobs where RPA has surpassed human workers but companies haven't gotten around to replacing human workers yet. Aigent® can feed you real time instructions in exchange for a 20% of your salary. Already have a job? Let Aigent® use your work to passively train our RPA and Aigent® will pay you a at least 20% of your salary [some restrictions apply]. Exclusive contracts and valuable industries can pay up to 150%. Sign up with coupon code LSUSR and you'll get 200 free applications and 100 free posts (over the course of three months) on work-related social media.
Open Thread – Autumn 2023 If it’s worth saying, but not worth its own post, here's a place to put it. If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post. If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong. If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section. The Open Thread tag is here. The Open Thread sequence is here.
The Berkeley Community & The Rest Of Us: A Response to Zvi & Benquo Background Context, And How to Read This Post This post is inspired by and a continuation of comments I made on the post 'What is the Rationalist Berkeley Community's Culture?' by Zvi on his blog Don't Worry About the Vase. As a community organizer both online and in-person in Vancouver, Canada, my goal was to fill in what appeared to be some gaps in the conversation among rationalists mostly focused on the Berkeley community. Zvi's post was part of a broader conversation pertaining to rationalist community dynamics within Berkeley.  My commentary pertains to the dynamics between the Bay Area and other local rationality communities, informed by my own experience in Vancouver and those of rationalists elsewhere. The below should not be taken be taken as comment on rationalist community dynamics within the Bay Area. This post should be considered an off-shoot from the original conversation Zvi was contributing to. For full context, please read Zvi's original post.  ---------------------------------------- I. The Rationality Community: Berkeley vs. The World While I didn't respond to them at the time, several community members commented on Zvi's post they had similar experiences: that while some local rationality communities and their members perceive themselves in a zero-sum game with Berkeley they didn't sign up for (and, to be fair, the Berkeley community didn't consciously initiate as though it's a single agency), and some don't, a sense of what Zvi was trying to point appears ubiquitous. An example: > In my experience, the recruitment to Berkeley was very aggressive. Sometimes it felt like: “if you don’t want to move to Berkeley as soon as possible, you are not *really* rational, and then it is a waste of our time to even talk to you.” I totally understand why having more rationalists around you is awesome, but trying to move everyone into one city feels like an overkill. Similar anecdata from local rationality communities around the world: Melbourne. Wh
You're never wrong injecting complexity, but rarely you're right I just want to put this idea in the form of a post, to gather your impressions. I think it's my main rationalist failure mode. Recently in a Facebook group, some poster has proposed this synthesis of Harari's book '21 lessons': > In the 21st century, three narratives were used to explain the past and predict the future: the fascist, the communist and the liberal narrative. During the century, the latter has prevailed, although in recent times it has started to crack, due to events like the election of Trump, the Brexit, and so on. Then, the same user asked: what could be a new narrative that would help us in the future? I was tempted to reply as I always do: criticize the simplification. I was about to write that the concept of narrative itself is a narrative, that Harari is seeing the past with the eyes of the present, but not necessarily this lens will help with navigating the future, that also a better concept would be that of a memeplex, which is less internally coherent than a story, and thus more complex to pinpoint. Then a reflection occured to me: I always end up doing this, in almost all discussions I partecipate. People simplify too much and come to the wrong conclusion, they consider only the extremes of a spectrum, they use words as rigid classifiers and debate endlessly about them, they do not have internally coherent point of views, etc. I almost invariable end up 'winning' (i.e. appear wise) by injecting some complexity: usually in the form of a new parameter that was buried in the presuppositions. Then, I was struck by another insight: it is too easy to win this way. From a mathematical point of view, a system with a bigger state space is more flexible than a smaller system, and so an 'optimization' that increases the space state is always correct. Is that possible? How probable it is that I've discovered a universal optimization of every human debate? I reasoned that it's very low, and indeed I think I've always failed to consider the downside
Meetup : Utrecht: Rationality Games Discussion article for the meetup : Utrecht: Rationality Games WHEN: 14 December 2014 02:00:00PM (+0100) WHERE: La Place, Utrecht, Rijnkade 5 For details, see meetup.com: http://www.meetup.com/LWEANL/events/218834660/, which is supposed to be up-to-date. Discussion article for the meetup : Utrecht: Rationality Games
Convergence 2024 Impact Review Convergence 2024 Impact Review home page. Impact overview 2024 marked the first full year with the new Convergence Analysis 9-person team. This year we published 20 articles on understanding and governing transformative AI. Our research impacted regulatory frameworks internationally. In the US we provided consultation to the Bureau of Industry and Security that directly informed their proposed rule on reporting requirements for dual-use AI, while in the EU we saw specific recommendations incorporated into the EU AI Act GPAI Code of Practice. We led expert field-building around AI’s economic impacts through the Threshold 2030 conference, and AI scenario modeling via the AI Scenarios Network. Our work reached mainstream media, universities, and over 184,000 viewers on social platforms. We organized our activities into three programs: AI Clarity, AI Governance, and AI Awareness. 1. AI Clarity: performing AI Scenario Planning * Published over 170 pages of AI Clarity research across 10 articles and reports. * Hosted the Threshold 2030 Conference together with Metaculus and FLI, convening 25 senior experts from frontier AI labs, intergovernmental organisations, academia and leading AI safety research organisations to evaluate economic impacts of short AI timelines. * Developed the AI Scenarios Network of 30+ experts, the first cross-organizational coalition of AI Scenario researchers. 2. AI Governance: producing concrete AI policy recommendations * Published over 300 pages of AI Governance research across 9 articles and reports. * Published the The Oxford Handbook of AI Governance. This work is an anthology of 49 chapters from 75 leading contributors including Anthony Aguirre, Allan Dafoe, Ben Garfinkel & Justin Bullock. * Consulted for EU AI Act’s GPAI Code of Practice, with recommendations and language incorporated directly into the Code. * Consulted on BIS’ proposed rule for reporting requirements for dual-use AI models. * Consulted and directly reference
Maybe Lying Can't Exist?! How is it possible to tell the truth? I mean, sure, you can use your larynx to make sound waves in the air, or you can draw a sequence of symbols on paper, but sound waves and paper-markings can't be true, any more than a leaf or a rock can be "true". Why do you think you can tell the truth? This is a pretty easy question. Words don't have intrinsic ontologically-basic meanings, but intelligent systems can learn associations between a symbol and things in the world. If I say "dog" and point to a dog a bunch of times, a child who didn't already know what the word "dog" meant, would soon get the idea and learn that the sound "dog" meant this-and-such kind of furry four-legged animal. As a formal model of how this AI trick works, we can study sender–receiver games. Two agents, a "sender" and a "receiver", play a simple game: the sender observes one of several possible states of the world, and sends one of several possible signals—something that the sender can vary (like sound waves or paper-markings) in a way that the receiver can detect. The receiver observes the signal, and makes a prediction about the state of the world. If the agents both get rewarded when the receiver's prediction matches the sender's observation, a convention evolves that assigns common-usage meanings to the previously and otherwise arbitrary signals. True information is communicated; the signals become a shared map that reflects the territory. This works because the sender and receiver have a common interest in getting the same, correct answer—in coordinating for the signals to mean something. If instead the sender got rewarded when the receiver made bad predictions, then if the receiver could use some correlation between the state of the world and the sender's signals in order to make better predictions, then the sender would have an incentive to change its signaling choices to destroy that correlation. No convention evolves, no information gets transferred. This case is not a matter of a m
OpenAI Launches Superalignment Taskforce In their announcement Introducing Superalignment, OpenAI committed 20% of secured compute and a new taskforce to solving the technical problem of aligning a superintelligence within four years. Cofounder and Chief Scientist Ilya Sutskever will co-lead the team with Head of Alignment Jan Leike. This is a real and meaningful commitment of serious firepower. You love to see it. The announcement, dedication of resources and focus on the problem are all great. Especially the stated willingness to learn and modify the approach along the way. The problem is that I remain deeply, deeply skeptical of the alignment plan. I don’t see how the plan makes the hard parts of the problem easier rather than harder. I will begin with a close reading of the announcement and my own take on the plan on offer, then go through the reactions of others, including my take on Leike’s other statements about OpenAI’s alignment plan. A CLOSE READING SECTION: INTRODUCTION > Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction. > > While superintelligence seems far off now, we believe it could arrive this decade. > > Here we focus on superintelligence rather than AGI to stress a much higher capability level. We have a lot of uncertainty over the speed of development of the technology over the next few years, so we choose to aim for the more difficult target to align a much more capable system. Excellent. Love the ambition, admission of uncertainty and laying out that alignment of a superintelligent system is fundamentally different from and harder than aligning less intelligent AIs including current systems. > Managing these risks will require, among other things, new institutions for governance and solving the problem of superintelligence alignment
The Inside View #2 – Connor Leahy

Welcome to The Inside View, a podcast where we forecast technological progress starting from our grad-level intuitions. I am your host, Michaël Trazzi. For people watching this on YouTube, what you're seeing is a screen recording of how I randomly met Connor Leahy on the internet. Connor is an ML researcher at Aleph Alpha, a German company shaping European research and development for the next generation of generalizable artificial intelligence. He's most well known for being one of EleutherAI's founding members, a decentralized, grassroots collective of volunteer researchers, engineers, and developers focused on AI alignment, scaling, and open-sourcing AI research. For listeners not familiar with AI alignment, the term is often used to point at research aiming to build scalable AI systems that do what we want them to do. My own experience with Eleuther was mostly through their Discord, where people coordinate on open-source projects, discuss deep learning research, and, most importantly, exchange deep learning gossip and 9000-IQ memes. In my opinion, Eleuther's flagship achievements are to have open-sourced both: one, the code to two GPT-3-like models, referred to as GPT-Neo and GPT-NeoX; and two, The Pile, a clean dataset they created themselves to train those GPT-3-like models. In the first part of the podcast we chat about how to speed up GPT's training, how Connor updated on recent announcements of large language models, why GPT-3 can be considered AGI for some specific definitions of AGI, the obstacles in plugging planning into GPT-N, and why the brain might approximate something like backprop. We end this first part with a settling of priors, adversarial attacks, Pascal's mugging, and whether direct work on AI alignment is currently tractable. In the second part we chat about his current projects at Eleuther, multiple scenarios, and reasons to work on technical AI alignment research. Without further ado, Connor Leahy.

Hello again.

Hey. Can you hear me fine?

Yep. Okay, so just let me give you some context about the whole podcast thing. I want people to talk about their timelines, and I want people to give inside views about what they think about AI and not censor themselves, so obviously you're one of those candidates falling into that category. Yes. I have no idea, this is my first... okay, this is my second time recording one, so I'm pretty much new to it. So, hello Connor, how are you doing?

Hi, doing well.

Cool. Yeah, I recently saw one of your retweets, because you're quite prolific on Twitter, and it was about a Chinese paper that went on for a few months and was recently translated to English, and apparently it's the new thing. What do you think about it?

So I assume we're talking about the rotary embeddings? Yeah, exactly. That's a funny little story. I'm not super involved with that, I just kind of saw it from the sidelines. Basically what happened was a Chinese researcher, over the course of a few months, wrote a few Chinese blog posts about a new transformer technique, a new positional encoding technique specifically, that seemed to have interesting properties. Someone at Eleuther, which is the open-source collective I'm part of... yeah, Stella was one of the people involved, and I think Pratal was the one who first made us aware of it, if I remember correctly. He posted like, hey, this seems interesting. So we used Google Translate to, like, reverse engineer what they were doing. Apparently this had been talked about in Chinese NLP circles. We ran a bunch of experiments, and it was the first, like, simple, quote-unquote, tweak to these kinds of things that we've seen in a long time that seemed to make any appreciable difference. Especially for small models it made convergence, so training time, much faster. It seems to diminish at larger sizes, but it's still a few percentage points of speed increase and such, which is pretty cool. So we decided to write an English blog post about it, reached out to the author like, hey, we really like your work, we'd like to write a blog post about it. Then he released his paper in English and we released our blog post at the same time. Well, I wasn't involved in writing the blog post.

Right. Okay, so you helped publish a blog post, and I think since then there's, like, a paper. But it's more about speed than actually reducing the bottleneck. Is the bottleneck, like, kind of the NVLink, and all the optimization when training transformers, or so?

So the bottleneck for training transformers, for scaling, is the interconnect speed, yeah, as you described: the connection between nodes, but also within nodes, like the NVLink and the NVSwitch between the individual GPUs, and also the InfiniBand link between nodes. Basically you want InfiniBand; Ethernet is pretty difficult between nodes if you want good scaling. The bottleneck is ultimately compute, so FLOPS, just pure raw GPU FLOPS. Nowadays if you get expensive machines, if they have InfiniBand and whatever, you can scale pretty well, and then it just depends on how many GPUs you can buy.

I was just interested in the specific trick that this Chinese paper did.

Yeah, the specific trick. We haven't applied it to very large models. It's basically just, like, a free small improvement. It doesn't change anything fundamental about the architecture, really. It's a different way of doing positional encodings in the network which seems to give a pretty significant speed increase, not in execution but in training time, so you need fewer steps to converge to the same loss, which is pretty significant for smaller models, and it loses some efficiency with larger ones. We've tested up to a billion parameters, I think; we haven't tested it with larger models. I think we're currently in the process of testing it on even larger ones, so we don't have results for that.

Right, so it kind of finds a better path, like, through the training.

Yeah, you need fewer steps to get to the same amount of training, at least for small models.

And so, more generally: recently I think Facebook open-sourced their recommender system and said there were, like, trillions of parameters, Microsoft did something similar, and NVIDIA had these ballsy claims about training GPT-3 in a few days and maybe scaling to 3,000 A100s. So yeah, what do you think about all these new topics around scaling up?

Yeah, so you have to disentangle some of that. The Facebook thing, you can't compare it to a dense model like GPT-3; the parameter numbers don't mean anything there. Recommender systems are very, very different architectures, they're extremely sparse, they don't use all the parameters the same way GPT-3 does, so you can't really compare them, because it's a completely different architecture. The NVIDIA thing, that's scary. That's something that actually updated my, um, timelines quite, like, non-negligibly.
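[Editor's note: the positional-encoding trick discussed above is the rotary position embedding ("RoPE") technique. As a rough illustration only, here is a minimal NumPy sketch of the standard RoFormer-style formulation; it is not EleutherAI's or the original author's code, and the function name and shapes are chosen just for the example.]

```python
# Minimal sketch of rotary position embeddings (RoPE), assuming the standard
# RoFormer-style formulation. Illustrative only; not the implementation
# discussed in the interview.
import numpy as np

def rotary_embed(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Rotate each (even, odd) feature pair of x by a position-dependent angle.

    x has shape (seq_len, dim) with dim even. A rotation by angle m*w dotted
    with a rotation by angle n*w depends only on (m - n)*w, so attention
    scores between rotated queries and keys depend on relative position.
    """
    seq_len, dim = x.shape
    assert dim % 2 == 0, "feature dimension must be even"
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)   # (dim/2,) one frequency per pair
    angles = np.outer(np.arange(seq_len), inv_freq)    # (seq_len, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x_even, x_odd = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x_even * cos - x_odd * sin
    out[:, 1::2] = x_even * sin + x_odd * cos
    return out

# Toy usage: rotate queries and keys before computing attention scores.
q = rotary_embed(np.random.randn(8, 64))
k = rotary_embed(np.random.randn(8, 64))
scores = q @ k.T   # entries now depend on relative offsets between positions
```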
On Discord before, you were saying that you didn't quite update from, if I remember correctly, DALL-E or GPT-3, because you already had crazy timelines, and so you didn't update that much. And now you're saying you're updating from NVIDIA?

I am, though also not that significantly, because my timelines are so extremely short. I've, like, updated my confidence, like, you know, like 10 to 15 or something, which is significant.

Well, okay, yeah, so it's like five times. So, yeah.

Basically what NVIDIA talked about is that they're introducing new systems that they claim will be able to train trillion-parameter models in three days, which technically might be possible even with current hardware if you have enough of it. I haven't run the numbers on that.

Sure. Because, yeah, I think we don't really know how all those optimizations work in detail, like at the hardware level. So yeah, I just want to bounce off a bunch of claims you made on another podcast, called, uh, the Street ML podcast I guess, which were both funny and true. Let's start with: "GPT-3 is AGI." I know you did it for the memes, but...

Yeah, so that was a bit for the meme. Of course, it was a fun podcast, you know, so we said silly things. But yeah, I do believe that, for a certain definition of AGI. The thing is, AGI doesn't really mean anything anymore as a term. So when I say AGI, what I mean is: it's qualitative, it's not like a binary thing, like, oh, it either is AGI or it isn't; it's more like a spectrum. For me, artificial general intelligence just means it is a system trained on some simple objective that is somehow able to solve many useful problems. And by my definition GPT-3 is trained on a simple text-prediction objective but is able to solve many very valuable tasks, summarization and text writing and so on.

So yeah, I think that's... like, I talked to one engineer who, uh, gave me the definition of, like, automating 50% of knowledge work, which is kind of relevant to our economy and not that hard for, you know, language models right now. So, to bounce off of that: let's not speak about intelligence, but let's speak about useful things and what matters. I loved this rule you gave, which I'm now calling Connor's rule, which is to never mention the suitcase word "intelligence". So let's apply that kind of rule, Connor, on our podcast and not talk about misleading words. So, okay, we had this prior conversation about GPT-3: like, how should we go from that to an agent that could interact with the world and, you know, do planning, like, do actions and stuff? And I remember you saying that one of your cruxes was... so a crux, for people who are not familiar with it, is a fact that will change your mind about the world if it happens or not. So a crux Connor had was whether the part where we try to merge GPT-3 with some kind of reinforcement learning agent was not the hard part, and that the hard part was actually coming up with this transformer architecture which can scale a lot. So yeah, do you have any new takes on that, or do you want to expand on this?

Yeah, I mean, you hit it on the head. So there are obviously some things that we would probably need for GPT-3, and, you know, future models, to be considered... so, I prefer "transformative AI" when talking about, you know, automating knowledge work, heavily transforming the economy, or whatever, whether we call it that or not, but having a very powerful system. In a sense, you could argue whether or not GPT-3 has any kind of goal system. Like, you could make the argument, oh, it can simulate a goal-directed agent even though it itself does not have a goal: you can make it write a character that has a goal and output an action that way. But I expect that in the future the way this is going to look, as described, is we'll have GPT-3 or GPT-N as some kind of world model that an RL agent interfaces with to make predictions of the world and make better decisions. And I expect this to be easy. I mean, easy in big scare quotes: still, you know, it's going to take time, it's going to be hard to find the architecture, whatever, but easy in the sense that I don't think we need any fundamental theoretical breakthrough to build this.

Yep. Um, I guess the breakthrough could be inside of, like, reinforcement learning at the moment, which is just not very sample efficient in terms of being confronted with, like, the normal complexity of, you know, sensory perception and stuff.

Yeah, but I mean, you can look at the Dreamer architecture, for example, which is super sample efficient because it's trained inside of a world model. I think the reason RL doesn't work is because they're not using world models.

Interesting. Yeah, I need to look more into the new papers. Yeah, it's like model-based reinforcement learning has been becoming more popular lately, and I expect that to continue. But still, then the actual... you know, one model, like the GIF that I remember of, like, Dreamer driving a car, it's still, like, a few actions, like maybe you can go left and go right. Whereas when you're trying to, like, ask a world model, when you try to do prompt engineering and talk to a language model, your action space is huge, right?

So one of my hot takes, or one of my cruxes here, is that I actually think GPT-3 is much smarter than people think it is, because of the task it's actually trained on. I conceptualize GPT-3 as an extremely intelligent model that is pretending to be a median internet user.

Yeah.

So I think that inside the model there is actually a lot of useful information that is, like, hard to get at; a prompt isn't necessarily able to extract it easily. And we've already been seeing this with, like, prompt tuning and whatever, continuous prompts that are trained through backprop or whatever: as we get better and better techniques, they get better performance out of these models. So I expect, as I already said, the reinforcement learning agent to not literally give a natural language query to the model, but rather to have, like, a differentiable interface with the hidden states.

Okay, so you're essentially saying that, like, human language is a bad way of interfacing between, you know, AI models.

Yeah, yeah, pretty much.

Yeah, I agree with the claim. And yeah, you also said on this other podcast that humans were not... yeah, Voldemort, like, the word we're not able to say... they're not "intelligent", and also that, like, humans are approximating GPT-3, not the opposite. So yeah, can you say more on that? Because I found it fascinating.

So that was definitely something I've been saying for the meme, but I would say I believe it...
And you also said in this other podcast that humans were not, yeah, Voldemort, the word we're not allowed to say, intelligent, and also that humans are approximating GPT-3, not the opposite. Can you go more into that? Because I found it fascinating. So that was definitely something I've been saying for the meme, but I would say I believe it more than other people do, though I have very little evidence for what that would even mean, so it was somewhat of a meme. But in a sense there's something to it. There have recently been several papers where they show that predictive processing, predictive-coding-type architectures, can approximate backprop on arbitrary computational graphs. So we see this predictive-coding-like signaling in biological neurons, and we now know that mathematically it can approximate a backprop signal. And empirically we've seen many, many times that if you try to implement the constraints that exist in biological networks, they generally don't outperform pure backprop. So my current working hypothesis is that backprop is not literally implemented in the brain, not because backprop isn't good, but because it's not feasible given the hardware constraints: if the brain could implement backprop, it would just implement backprop. Because backprop is so clean, and because we can implement it on our GPUs, it is actually a purer version of what the brain is trying to approximate. I actually interviewed Friston one time about this and asked him what he thinks of backprop, and he said, yeah, it all ends up being the same thing, it's all minimizing free energy and variational Bayesian models. And whether or not you believe Friston on this... Karl Friston? Okay, he's a very famous neuroscientist who has a kind of unified theory of the brain, which is also kind of trivial but also super complicated; it's a long story. But basically his whole point is that what the brain does is minimize the variational free energy of a Bayesian model, which is a very fancy way of saying it minimizes the loss of a predictive model. Sure. And whether you do that with backprop or predictive coding or some other algorithm doesn't really matter; they're not literally equivalent, but they're all approximating some kind of Bayesian update, and backprop seems to be an especially pure, especially efficient way of approximating that kind of Bayesian update, at least for the kinds of systems we use. So what I meant by that statement is basically that I think the brain may be trying to approximate a numerically pure backprop, and it has to do all these hacks because you can't have 32-bit floating point numbers in neurons, there's just too much noise, so you have to do all this error correction and all this fuzziness. But then there's also the physical side, voltages, electricity being continuous, so it also has more flexibility than us in some ways: it doesn't have those problems of numbers becoming too small, like in a floating-point approximation. Do you mean the brain? Yeah. Oh, I disagree, I think the brain has less flexibility than we do. The brain can't represent an arbitrary-precision number, there's too much noise; the brain can't store arbitrarily large or small numbers, you can't just have a neuron encode an arbitrarily large or an arbitrarily small number. In many ways, if you just look at neurons, look at their spiking, it's pretty obvious to me that they're trying to approximate a digital system.
That's the whole point: it's trying to; it's not an analog system. And if there's one thing we've learned from the history of computer science, it's that analog systems never work out. They always sound great on paper, but they never work; we always default back to digital systems, because they work. Digital systems, because of error correction, don't have the compounding errors you get in analog systems. Yeah, and what about quantum physics, maybe there's something there? The brain is a huge wet, warm mess; this is the worst possible scenario for quantum effects to have any meaning. There's currently no evidence whatsoever that the brain significantly exploits quantum effects, and even if it did, I don't see how, since you can't maintain any long-term entanglement in there. So it might be some kind of small effect, something at the level of proteins, but quantum isn't magic either; it would just bring us from P-class complexity to BQP. I mean, I could imagine if the whole brain were at zero kelvin and were an extremely complicated matrix of superconducting circuits, okay, then maybe there would be some quantum effects going on, but I can't see how an entangled quantum state would not decohere in a system like that. Makes sense. I was just trying to think about the best steelman counterarguments, like, what would make you change your mind about the brain approximating some other function? Maybe something in the space of quantum computing, or maybe backprop isn't optimal. What would be the best steelman of the other position? It's less that I'm saying it's approximating one specific algorithm; it's more that, in my view, all these things kind of collapse into the same thing. It's like whether you use quicksort or insertion sort or whatever sort: at the end of the day you're sorting the list, and there are many different ways to do it depending on your hardware, what your data looks like, and so on. So I think what the brain is doing, at least what the neocortex is doing to a large degree, is building predictive models of its environment; it's minimizing the loss of something like a variational Bayesian model of its environment, and whether it does that literally through backprop or literally through predictive processing or whatever is kind of just an implementation detail. What these papers showed us is that mathematically they all kind of average out to the same thing. Okay, so are we saying that they all converge towards what would be a true Bayesian update, some kind of Solomonoff induction that they try to update towards, and backprop is maybe the easiest or most elegant way of approximating that Bayesian reasoning? That's my working hypothesis. I mean, Bayes is provably the optimal way of updating on information; we don't have to bring Solomonoff into that. I have some spicy opinions on Solomonoff that I'm not going to get into. Please, give me the spicy take. I think Solomonoff induction is mathematically incoherent. It's an interesting idea to think about, but I don't think you can draw as many conclusions from it as people think you can, because, and this should basically be Connor's law, the minute you introduce a halting oracle, it's no longer true.
[Laughter] You're not allowed to introduce a halting oracle, that's just forbidden; it's like introducing, you know, P equals not-P. You're not allowed to do that. You mean because he doesn't consider halting oracles? The problem is that Solomonoff induction is fundamentally non-computable. To actually execute it, to get the Solomonoff distribution: the Solomonoff prior can only be computed by a halting oracle, it cannot be computed in finite time. And there's a big difference between something that takes arbitrarily many steps and something that takes a halting oracle to do, and Solomonoff induction takes a halting oracle to compute, because it's running over universal Turing machines, over all possible programs, and many of those will not halt; so to get the distribution over the non-halting programs you need a halting oracle. Yeah, I need to look back into the paper, I can't quite remember, from his first paper on that, I think in the 1960s, but he was definitely trying not to go into non-halting territory; he says something like, we're going to take only Turing machines that halt. Yeah, but there's no way to prove whether an arbitrary Turing machine halts or not, so you can never construct this system. I'm a constructivist, I think. Okay, this gets into deep mathematical logic, but for example I reject the law of excluded middle: in classical logic you have either P or not-P, and I reject that, I don't think it's true, because basically with that you introduce a halting oracle into your logic system, and you can also introduce Gödel incompleteness. So to have a consistent system of mathematics, at least in my opinion, you can't have the law of excluded middle and you can't have a halting oracle in your system; but to construct the distribution of the Solomonoff prior, you need a halting oracle. Oh, okay, I think I get what you mean, I need to read the paper to understand it better. And to be clear, I'm not an expert on this, this is just my intuition and what I've picked up from reading about this kind of stuff. I also think the universal prior probably has very strange properties that we actually don't want our agent to have. And then there's the problem of choice, which might be the less weird explanation of why I think the Solomonoff prior is incoherent. I think Hutter actually released a paper on this recently, like 2019: to define your Solomonoff prior you have to pick a universal Turing machine, but there are infinitely many possible universal Turing machines and your choice is arbitrary, so there are infinitely many possible Solomonoff priors, and depending on your choice of Solomonoff prior, your AIXI agent can actually perform arbitrarily badly on a given distribution. Interesting. That's why I say it's incoherent: the Solomonoff story claims there is one unique true universal prior, and that's not true; there are infinitely many of them, and none of them is privileged over the others.
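For readers who want the object being argued about here, the Solomonoff prior over strings is usually written roughly as below (my gloss, not part of the conversation):

$$M_U(x) \;=\; \sum_{p \,:\; U(p)\ \text{outputs a string beginning with } x} 2^{-|p|}$$

The sum runs over programs for a fixed universal machine $U$, so evaluating it exactly requires knowing which programs halt (it is only approximable from below), and the whole quantity depends on the arbitrary choice of $U$: two universal machines give priors that agree only up to a machine-dependent multiplicative constant, which can be made as large as you like. Those are, roughly, the two complaints being made above.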
Sure, yeah, I see where you're coming from with this Hutter paper. From what I remember of reading the Solomonoff paper, he was trying to get at something very understandable by a human, where, and I think you know this already, you have a sequence like a b b a b b a a, and you say, oh, this is a permutation of a's and b's, so you just give the k that corresponds to the index of this permutation among the n-factorial possibilities; and if you repeat that with richer Turing machines, at some point you reach something purely random. So I felt like this permutation encoding was somehow the natural universal prior he was aiming for, but maybe Hutter shows that you can adversarially attack those things. And Yudkowsky showed that you can do Pascal's mugging and so on. Are you actually concerned about Pascal's mugging, or is it for you just a funny thought experiment? I mean, Pascal's mugging is kind of a spectrum. Some people say climate change is technically a Pascal's mugging, because someone comes up to you and says, hey, in 50 years the world's going to burn, so pay me money to reduce CO2. Is that a Pascal's mugging? I wouldn't think so; I think that's just reasonable thinking. Sorry to interrupt, just to introduce it for listeners who are not familiar, maybe you can explain what Pascal's mugging is. Sure. Pascal's mugging is a thought experiment where someone comes up to you and gives you some really ludicrous scenario that you think is really unlikely, but threatens that the scenario will be extremely bad unless you do something for them. The classic version is a mugger who comes up to you and says, hey, I'm actually God, and I'm going to create infinitely many people who suffer infinitely unless you give me five dollars, so you should give me five dollars. One important point, though, is that in Yudkowsky's version the mugger says something very specific: I'm going to give you 3↑↑↑3 dollars, or I will kill 3↑↑↑3 people, which is Knuth up-arrow notation for very, very big numbers. And those numbers actually get a fairly high weight in the Solomonoff prior, because you can build a simple Turing machine that computes them; but they're so big that even if you assign a tiny probability to this guy telling the truth, in expected value, because the number is so large compared to how improbable the claim is, you end up saying, oh, I need to give money to this guy. And in my understanding, for AI the general claim is: when we have agents with a prior, imagine transformers or other architectures that have a prior over what's reasonable and that do planning, is there a way to hack them with something that has a high prior for them and will trick them into believing it's very high expected value? Right. I mean, that's what adversarial examples are, in a way: you can find weird perturbations of images that trick the network into being super confident about something. Yeah, exactly. So, you know, robustness to out-of-distribution examples always depends on your distribution; you can make an agent robust to certain distributions of events.
And I don't know, I'm not an expert on this, so I don't know how hard it is to make them robust. I assume there's probably some proof that for a large enough set of environments and a bounded agent you can always find something it can't handle, or something that makes it do something weird. I'm pretty sure that's the case, but I don't actually know it's true, so don't quote me on that. Right, so something like a no-free-lunch theorem for RL? Yeah, something like that; I'm pretty sure a theorem like that exists, but I wouldn't actually know. Okay, so right now I'm going to pretend for a while that I'm not very convinced about AI alignment, and I will try to steelman one of the arguments from the previous podcast, which is that we're actually making a lot of assumptions: there's instrumental convergence, convergent instrumental goals, and the orthogonality thesis. To explain to the listeners, the orthogonality thesis is that you can have arbitrarily high intelligence and arbitrarily bad moral values at the same time. And convergent instrumental goals are that, just as we humans want to stay alive and do the stuff we care about, any agent with an arbitrary utility function might want to survive in order to pursue its goal, and it might also want compute to make better predictions, and so on. Those are claims made by Bostrom and Yudkowsky that are quite fundamental to this whole stack of AI alignment arguments, where if you don't buy any one of them, then you're not convinced by the rest. So could you try to steelman how those basic arguments could be wrong, instrumental convergence and orthogonality, or is it too hard? So, I don't think you have to buy every one of these arguments to be concerned about AI, especially nowadays, since the field is much wider than just a few people on a weird transhumanist mailing list back in the day; we now have a much wider swath of people who care about these things. It's hard for me to steelman some of these counterarguments; some of them kind of make sense but miss the point. So, an argument that is technically valid against the orthogonality thesis, against a version of it I would consider a strawman, is that some people say a very stupid agent can't have a very complicated goal, so technically you can't pair any intelligence with any goal. Okay, fine, but that's obviously missing the point of the thought experiment. Yeah. So the only other steelmen I can think of, and I'm sorry if I'm doing a bad job, this is the best I can do: one is that people believe in strong moral realism. People believe that if you just create something arbitrarily intelligent, it will converge to the same morality and just become good. This is something you hear sometimes among the transhumanist and singularitarian folks: the AI will just be so enlightened and good that it will know what the correct thing to do is and just do it. Schmidhuber also has a version of this, I think. So that's one possible argument. Another possible argument would be something about
Kolmogorov complexity, that maybe doing good is somehow simpler than, you know, torturing a billion people in weird ways. Well, the simplest scenario is just to maximize entropy and destroy everything; that's the simplest policy, a random policy, and everything else is more complicated. That's what Yudkowsky calls the hidden complexity of wishes, or the fragility of value: what humans want is actually really complicated. That's why we need energy; we exist in a very low-entropy state, in which there is value or art or beauty or anything other than a finely diffused plasma. Everything that is not a finely diffused plasma is a complicated state and requires energy to maintain. We're all moving towards the heat death of the universe; everything is going to be a very fine mist of black holes and protons and whatever, into infinity, at some point, and everything else requires energy, negentropy, negative entropy in that sense, to maintain. So one of the arguments against the strong moral realism claim is just that the things humans want are not privileged in any sense. It's not like all aliens and all intelligences and all computer programs spontaneously decide to want the same things humans want; that seems pretty outrageous to me. Why would that be the case? I could literally just write down a program that says: whatever the human wants, do the opposite. I could just write that program, and that's an existence proof that I can write an AI that does the opposite of what humans want. Or something random: just act randomly, and there you go, you have a random policy with no human values in it. So I don't believe in strong moral realism or strong moral convergence. But if you were to program something that says "do what the human doesn't want", then it would kind of have to include everything the human wants, so it would have an understanding of human values even if it doesn't follow them. It might have an understanding, yes, but it would not execute them, and honestly that's what I care about. I don't care whether the torture AI knows I don't want to be tortured while it's torturing me. I don't care if it's smart; if you're dead, you're dead. Exactly. That's the strongest argument for AI alignment: I don't care if it's conscious, or whether it's made of a neural network or a [ __ ] Prolog program or whatever. What matters is that we're building systems that can make good decisions, that can optimize their actions to achieve goals in complex environments; how that is achieved doesn't really matter. We could just say, by fiat: assume an optimizer exists that can be given a goal and will optimize for it. You accept that this is possible at all, which is the entire premise of the field of AI. If you're working in AI and you don't believe this is possible, well, what the [ __ ] are you doing? Why are you in this field? The whole point of AI is to build AI; why are you here? No, I thought AI was, you know, something that plays games... It's a very common thing. I think it was Stuart Russell who made this good point, which is that somehow the whole field has this
huge blind spot about what happens if we succeed. What if our stuff just actually works? Somehow no one is thinking about this. AI alignment is about taking seriously the question: what if our things actually work? Yeah, I remember, I think he says this in multiple talks and also in his book: what if AI succeeds? And then he asks the actual AI researchers to think about, wait, what do we do now, and they're confused. Exactly, it's very strange to me. That's why I find it so hard to steelman these anti-alignment arguments, because I don't really understand them. Some of these people seem to believe that AI is just impossible, which is very strange to me: then why do you work in AI, what are you doing here? And some people seem unwilling to accept even the hypothetical: okay, well, assuming you did succeed, what then? Or sometimes people say, oh, that's a hundred years away; but even if it's a hundred years away, I still think it's worth thinking about. Sure. I guess my intuition is that people prefer thinking about things that make them happy. Okay, yeah, of course, I was trying to be charitable. Sure, or those people are all [ __ ] liars, or they're lying to themselves. Well, let's try not to say bad things about people. I think most people are genuine, I think most people are very genuine in this regard; they've just never really thought about it, which is strange, but that's how the culture of the field works. I think the vast majority of people in AI have just never really thought about it much. And maybe, as you say about optimization processes, society doesn't optimize for people saying their true beliefs on the internet, because their careers would be ruined, so most people are censored; and maybe a lot of them read LessWrong and think like you, but say nothing. Yeah, there's a great saying: never trust a man to understand something when his salary depends on him not understanding it. And I think there's a common dynamic where these beliefs are associated with a dude who wrote a 2,000-page Harry Potter fanfic, so it's kind of a low-status thing you're not allowed to talk about. Sure, okay, so it's kind of signaling low status to quote a fanfic author, compared to quoting NeurIPS papers. Yeah, exactly. AI alignment didn't come from the academic establishment, it didn't come from the serious people with serious titles; it came from a dude, you know, and five other people, and a bunch of other people on a mailing list in the early 2000s. And one of the things that is extremely important to understand about how humans work is that humans take status extremely seriously. Wait, wait, what about quantum physics? That also came up as something very weird, and those were smart people with high status, like Heisenberg and his friends, all the top guys in physics, and they had a huge debate, they were arguing forever. So even with high-status people you can argue. Yeah, so it's not necessarily contradicting your point, it's more another example of a strange idea that people could still talk about. Yeah, and I expect the same for alignment; it's already in progress, alignment is being accepted into the canon of academic thought, because it's obviously a good thing to be thinking about, with Russell and other professors
starting to explicitly talk about it as an acceptable thing. I forget who wrote this, maybe it was a blog post or a tweet, but someone said you always want to be the second person to discover something, because the first person to come up with it... actually, you want to be the first professor, the first high-status person, to notice it, because even if you say, oh, this was all invented by this other person, no one is going to cite them, they're only going to cite you, because you're the high-status person. Yeah, I guess we do have those high-status professors now: we have Stuart Russell, we have Bostrom, but Yudkowsky didn't get the whole credit. I have two other things I want to talk about that are of specific interest to me. To make a transition: if I try to steelman Yann LeCun, he says something like, how could we possibly align these very smart agents if we don't know anything about them? It would be like trying to determine the sex of an angel or something. And somehow, if you think about it, there is some uncertainty about how much we can influence the future. The steelman is something like: take people in the industrial revolution; if you were sent 200 years back, what would you do to have a powerful impact on today? Yeah, let me ask a simple question: do you think Marx had any influence on the current time? He lived in the industrial age; do you think he had any influence? I think he had some influence. Okay, but what was his marginal impact? Imagine Marx hadn't existed: how many years would we have waited before someone like Marx said something similar? I don't know, I can't rerun history, but it might well have never happened, or we might have had a completely different ideology that filled the same niche; maybe fascism would have just taken over the entire world, and that would have been that. Yeah. Somehow, at some point, I thought that Bostrom's book about superintelligence was very influential, since it motivated people like Elon Musk to take this seriously, and it had some very positive impact, so I considered writing those kinds of books to be a huge-impact thing. But after thinking more about it, I realized that we're all humans with similar ideas, so if Bostrom hadn't written it, maybe someone else would have, three years later. I would greatly recommend you read a book called Inadequate Equilibria, by... Yeah, I read it. Well, if you've read it, I have no idea why you're making this argument, because the world is not efficient. Okay, okay. We are not efficiently exploiting the maximum research you could do on AI alignment, or the best books. Oh yeah, we're not, one hundred percent, so in that sense every marginal Yudkowsky or Bostrom we get is a huge net gain. Yeah, it's huge, but it's not as huge as comparing it to zero; it's huge like gaining two or three years, or four or five years, which is still huge. But yeah, I agree with that. And so, what you were saying in this podcast was that we can
still reason about what those very smart agents can do, in an abstract way, without delving into the details of the architecture. No, I'd put it differently. First of all, let me say something very snarky about LeCun. I don't hate you, but I'm going to say something snarky about you now: if LeCun thinks we can't reason at all about intelligence, he must be a really terrible artificial intelligence researcher, huh? Have you learned nothing you can think with? That seems kind of strange; I've learned a lot about artificial intelligence by studying it. Aren't the other artificial intelligence people studying it? I don't know, it seems kind of weird. So I've been studying a lot of artificial intelligence, and I've learned a lot of useful things about optimization processes, about reward functions. Sure, I might not know exactly how the paperclip maximizer is going to be built, but I have a lot of good guesses. I think there's a really realistic possibility that whatever ends up being the first AGI, or the first TAI or whatever, is going to just actually be a neural network built with PyTorch running on GPUs in the next 10 years. I think that's a real possibility. Yeah, especially now that we see this convergence between the brain's algorithms and backprop, and it's all just Bayes. Would you not say that knowing about Bayes, about Bayes optimality, is progress in understanding what intelligence is and in reasoning about intelligent agents? I find it very strange that these people, who apparently haven't taken five minutes to think about how to reason about intelligence and how to think about agents, say, well, it's impossible to reason about. It feels to me like they haven't tried. If you tried for five minutes to think about what better-than-random things you could say about a future intelligence, you would come up with a lot of things. I think where this comes from is an engineering perspective, like someone who would have tried to build the first complete computing machines back in the day: they think that engineering is the hard part, that the people who will actually align these things will be the builders, and that people who reason about these things in an abstract way, in text, without actually building anything, don't really understand them and won't have an impact. I think that's the most plausible engineering steelman, where just thinking about things is, like, philosophy. I think that's a fair argument, but I think it's oversimplifying. I mean, if that were true, then why do we have theoretical physicists? We should just kick them all out. And I'm just strawmanning it here, you steelman and I'll strawman: why do we have mathematicians? We should just not have any mathematicians, they should all go do experiments instead. Yeah, I see, we need both. I agree that if you just put a bunch of theoretical physicists in a box you're not going to get a good theory of the universe... well, you're obviously not going to get a nuclear bomb out of it; you still need practical experimentalists for that too. Did they, though? Oh yeah, they did do some experiments in the Manhattan Project, there was a lot of experimentation, but it was mostly engineering, actually.
The Nuclear Secrecy blog has a good post on this, where they argue that we all think the theory was the important part, but it actually wasn't, because that was the part that got declassified: they kept all the engineering classified but declassified all the theory, because they thought it wasn't as important. The really hard part about building the bomb was all the DuPont engineering work of, you know, creating pipes that can contain uranium hexafluoride, and building really big factories, because you need really big factories. It's the Nuclear Secrecy blog, a good blog with a lot of info on that; I forget who the guy is who runs it, but he's a historian and he has a lot of great material. Yeah, blog.nuclearsecrecy.com, there you go. Great blog. But basically, I do agree, I think engineering is undervalued in this regard. Please don't cast me as someone who says we should all just sit in a cave and not look at neural networks; I am the opposite of that. I work with neural networks every day, because I expect to learn things from building these large language models and from experimenting on them. I would describe myself as an empirical alignment theorist: I work empirically with the machines. I do this because I think there is a real chance, maybe 10 to 30 percent, that the models we're currently building will just scale to what will later be considered AGI, and so we should be starting to experiment with them right now. Yeah, so it's what's called in the literature, as you know, prosaic alignment. I think one interesting take was Paul Christiano's recent blog post on his research methodology, where he tries to experiment and see whether his alignment techniques scale or not. But one point I saw you make, not about experimenting but about the usefulness of math in thinking about agents, was that there is this space of decision theories, which is quite large, and given a utility function you could find the best decision theory that maximizes that utility function. I haven't seen many people make the distinction between utility and decision theory, so I just found that super interesting; I don't know how much we can apply it, and I don't have a precise question about it, I just found it interesting. If I could just take MIRI and put them in a cave for ten thousand years and just extract papers from them, I would love to do that before we build AGI, just to see what they find. I don't know if the decision theory research is going to lead us anywhere, but I think it's worth exploring, the same way exploring abstract math and Bayes' theorem and quantum physics is useful. I think decision theory has helped deconfuse a lot of questions. MIRI has this great blog post where they say questions start as philosophy, then they become math, and then they become engineering: first you ask yourself, what is even the question I'm asking, what are we interested in; then someone starts hacking away at it with math, with models and toy systems and experiments or whatever; and then an engineer takes the math and actually turns it into a system that runs. And this is a process I think happens
in most scientific disciplines. So decision theory was an attempt to go from a philosophical question, what does rationality even mean, and turn it into math; and now, with AI, there's the engineering part of taking decision theories and so on as inspiration, not literally implementing them step by step, but taking Bayes as an inspiration for how these things might be able to think. At this point Connor had to leave, so what we're about to hear is the second part, which happened one week later over Twitch. During the first ten minutes I unfortunately forgot to check whether Connor's voice was being recorded, so the start will likely feel a bit abrupt. If I recall correctly, I was asking him about EleutherAI's current projects, especially his involvement with EEGI, Eleuther Experiments in General Intelligence. ...human values, or just being useful in different ways. So far not much has happened; most of it has been me and two other people in a private chat, but we're trying to move this into a more open state. We also want to work more on interpretability, so understanding what these models do internally, and things like that. I want to make this all open source, all open in Eleuther, so anyone can work on it with me. I expect there to be work to be done in the near future; at the moment, not really, so, soon™, you know, keep an eye out if you want to work with me on that. We're mostly going to need ML engineers, probably, but also some web dev work, though we do already have some great web developers working with us. ML engineers is quite a broad range of people, so maybe more like deep learning engineers who can do Mesh TensorFlow? Well, no, we don't use TensorFlow anymore, it's all PyTorch. Oh, PyTorch now, okay. It's all PyTorch. With this new sponsor, right? Yeah, so we have a lot of GPUs from them that we do most of our experiments on. As I said, the work I'm doing with EEGI is still at a very early stage, very small models, just trying to experiment. If you, the listener, are an experienced ML engineer or someone who has experience with high-performance computing, please consider contributing to NeoX. NeoX is a huge codebase and there's a lot to do. Sid does a great job of keeping issues up to date on the GitHub repo, so you can just check out the repo, look at the issues, and you can also hit up Sid or me or Stella or anyone, anytime, and we're happy to get you into the codebase. So it's not something a beginner can do? That's the unfortunate part about NeoX, it's not really something a beginner can do; you need some experience to get into that codebase. And on GitHub, on the GitHub website, is the stuff you just mentioned public? Yeah, it's all public. What was the name again? The NeoX repo; if you go to our website you can click on the NeoX link and it will take you to our GitHub repo, where we have all of our code. Okay, sure, let me find that. And so, this EEGI thing, to frame it for the listeners, is Eleuther Experiments in General Intelligence? Right, it's just a funny name I came up with. It's pretty cool, it's a bit like EEG but for AGI, right? Yeah. And so you're starting from this paper, I guess from Paul Christiano
or the OpenAI team, on Learning to Summarize from Human Feedback, right? Exactly. Our first step is going to be replicating the results of that paper, because I think they use very interesting techniques that I expect to scale, and there are a lot of variants of those techniques that I want to experiment with. So we're trying to first get our feet wet, get everything set up, experiment with the models, and take it from there. Sure. And I see you already have a website, let me find my notes, it's called eegi.eleuther.ai, I think, where you actually have something where you can pick a summary and then give your feedback, right? That's experimental, that's also experimental; nothing for the public yet, that's down the line. At some point what we hope to do is have an interface where users directly interact with these models and give feedback on their performance: we'll have the model summarize text, or chat, or perform other activities, and we'll have users rate its ability, and we'll use that feedback, with some advanced methods, to try to improve its performance for the users' application, whatever the goal is we're working on. But that's down the line, it's very experimental right now. Talk again in a few months, then. How many months? Well, it depends on what you want to see. I expect we should be able to replicate the results of the paper, at least for the smaller models, in the next, I don't know, maybe one to two months, depending on compute availability and how big the models are. For the other things, making it more open to the public and training individual models, we'll have to see; it really depends on how much work NeoX needs and how hard a few technical challenges turn out to be, but in about six months' time I expect us to have some really cool results to show. That's my timeline right now: six months, and I think we'll have some very cool things to show. Cool. I'm super interested in that, because I think mixing RL and NLP is the way to go, and I love this approach of just learning human preferences from feedback on, you know, "is this a good backflip?" Yeah, exactly. And if you consider that NLP is the most scalable way to AGI, it makes sense to align it and build some kind of RL feedback loop. Do you understand how the reward model worked? Because I only looked at the paper and the blog post quickly, and it seems that you train a reward model, and then you train something that summarizes, and a policy to choose between different summaries, and then you go through the loop again. So the way the paper does it is that they collect human feedback on pairs of summaries: they have human-generated summaries and AI-generated summaries, and they show humans a pair of them, two summaries of the same text, and the human rates which one they like more. Then they train a reward model to predict, given two texts, which one the human would like more, and they basically use that as a reward signal.
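To make that step concrete, here is a minimal sketch of the pairwise reward-model objective being described. This is my own illustration, not EleutherAI's or OpenAI's code, and `reward_model` is assumed to be some network (in the paper, a GPT-style model with a scalar head) that maps a batch of token ids to one score per sequence.

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_model, chosen_ids, rejected_ids):
    """Pairwise loss: push the score of the human-preferred summary
    above the score of the rejected one."""
    r_chosen = reward_model(chosen_ids)      # shape: (batch,)
    r_rejected = reward_model(rejected_ids)  # shape: (batch,)
    # -log sigmoid(r_chosen - r_rejected), averaged over the batch
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# hypothetical training step:
# loss = preference_loss(reward_model, batch["chosen"], batch["rejected"])
# loss.backward(); optimizer.step()
```

The learned scalar score is then what the PPO policy described next is trained to maximize.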
They then use RL, specifically PPO, to fine-tune a GPT model as a policy to generate these kinds of summaries. They also use a third model as a kind of regularizer. Basically, what happens if you train these models naively is that they overfit to the reward model and find ways to hack it: they find weird ways of, say, repeating one word over and over again, and for some reason that confuses the reward model and gets a really high reward, even though it's not what we want. So it's a bit of a hack, but what they do is take a pre-trained but not fine-tuned GPT model and compute a KL divergence between the fine-tuned policy and that baseline GPT, and penalize the model if it diverges too far from the baseline. Okay, so they give it a bad reward for that? Yeah, basically: if the text is too weird, too far from normal human text, it gets a negative reward, and that encourages it to stay within the bounds of normal human language instead of producing that one token repeated over and over, or something like that. Okay. And how much compute would that need? Because there's the whole question of how much compute OpenAI used. A lot. We currently don't have the kind of compute lying around that would be needed to fully reproduce it, especially for the larger models; a lot of compute goes into these kinds of things. That's why I'm taking it slow and not committing to timelines too much, because this will depend to a large degree on compute constraints. It might turn out that for some of these things it's much harder to get the compute than we currently hope; we're all figuring this out as we go. But yeah, in the paper it's especially the larger models: in an ideal world I would like to experiment with models in the 10 billion parameter range, 10 to 13 billion would be the ideal size for me. I expect there to be interesting performance characteristics that I'll see in those models that I can't see in one-billion-parameter or smaller models. It's just a hunch, I might be wrong, but I have a good feeling about it. If it were possible, I wish I could do the 200-billion-parameter model, I think that would be even better, but that's not feasible. Hmm, so beyond the number of parameters, how does this model differ from GPT-3? Is it the same architecture, or do they have something special? It's basically the same architecture, the same architecture plus PPO. And I'm not sure how much longer it would take. What's the next milestone for GPT-NeoX at the moment, other than, you know, fixing bugs, improving efficiency, experimenting, checking different things? We're just hardware-bound, so we're basically waiting. The chip shortage hit CoreWeave; they've ordered tons and tons of GPUs, but they just haven't arrived, so we're really just waiting for them to arrive. We're ready to scale up to larger sizes; we'll probably train a mid-size model first, I assume something like a 13 or 20 billion parameter model, and probably do a 200 billion one once the hardware is there. I don't fully grasp how hard it is to go from 30 billion to 200 billion.
Like, I know we get more sample efficiency; is it true that bigger models are more sample efficient? Yes, but the model is bigger, so the amount of compute you have to put into the model, and the number of GPUs, is of course higher. Basically, if you know how much compute you have ahead of time, you can calculate the optimal model size you want to train. I don't know what our final hardware is going to look like, so I don't know what the perfect model is going to look like; we're probably going to train a 200 billion parameter model for the meme anyway, because it's just a nice number to go for, and we'll see what comes out of that. Hmm, do you see the Twitch comments? Because I can't see any of them, which is very bad. I don't have it open. Okay, because I don't see anything, and I'd need to put it on my big screen, and if I do that I lose the conversation. Let's try for five seconds... so there were comments before and now there are none, I don't get it; probably not that many people watching, but there were comments before. Okay, anyhow. So if you scale from 30 billion to 200, you get roughly seven times more parameters, but then it's more efficient, so what do the scaling laws say, approximately? We don't know what the final hardware is going to look like, so I can't give a specific answer, and we're probably just going to train a 200 billion parameter model for the fun of it anyway. You're just going to do that? Yeah, we'll see. Our goal is 200 billion, and we want to get the hardware to make that happen, but it's all up to CoreWeave and how things end up working out. Wasn't the meme "one terabyte or die" or something? What? A trillion or bust. But that's not happening, that's just physically impossible for us; I don't know, maybe the biggest supercomputers in the world could train that, but not with anything we have access to. A trillion is a meme, no one's done that; the Switch Transformer doesn't count. Well, NVIDIA is going for one trillion, right? They're definitely trying, and I think it's definitely possible they will succeed in the near future, but at the moment no one has done it, and we're not going to be the first to do it. Sure. Okay, when do you expect there's a 50 percent chance of NVIDIA releasing their one-trillion-parameter model? I mean, a 50 percent chance... today. You just drop that and you're like, oh, I'm good. Yeah, maybe. I mean, if their GTC keynote is accurate and all the things they say are true, we should expect trillion-parameter models sometime this year or next year, I would say. The Switch Transformer exists, which is a sparse trillion-parameter model, but I don't think that counts; sparse models aren't as powerful as similarly sized dense models. But I expect a trillion-parameter dense model pretty soon. OpenAI is also teasing that they have something that blows GPT-3 out of the water; I think Ilya has tweeted about that in the past, and that's probably going to come out at some point. Hmm. And recently there was a bunch of conversation on the Discord about Huawei and Chinese hardware. Yeah, Huawei; everyone is coming out with trillion-parameter claims. There is a scaling race happening right now, which I'm very not happy with.
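As context for the "if you know your compute budget you can calculate the optimal model size" remark above, here is one rough way that arithmetic is often done. This is a back-of-envelope sketch of my own, not from the conversation: it assumes the common approximation that training costs about 6 FLOPs per parameter per token, plus an assumed ratio of training tokens to parameters; different scaling-law fits give different ratios.

```python
def compute_optimal_size(total_flops: float, tokens_per_param: float = 20.0):
    """Rough compute-optimal sizing under C ~= 6 * N * D with D ~= k * N."""
    n_params = (total_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# e.g. a budget of 1e24 FLOPs with k = 20 suggests roughly 9e10 parameters
# trained on roughly 1.8e12 tokens (illustrative numbers only)
print(compute_optimal_size(1e24))
```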
But it is kind of like, you know, big numbers go brrr: ooh, look at how big our number is, our number is bigger than your number. That's just how humans work; it's a clear prestige thing for big companies, and increasingly governments, to show off these large models they can train. The most interesting thing about the Huawei model is that it was trained fully on Chinese hardware: Chinese CPUs, Chinese accelerators, Chinese everything. That was very interesting. The model itself is kind of terrible, from everything I can tell; it was only trained for about a tenth as long as GPT-3, and on Chinese text of questionable quality. Not that I can evaluate Chinese text, that's just what I've heard. So it's not equivalent to GPT-3 or anything, and I personally don't see how China is going to "catch up in AI", quote unquote, in any real capacity, with the kind of censorship and the kind of weird incentives that exist in China. They do have those weird incentives that make them go more private, do their own thing, fight the entire world, so they could just have the smartest people working on something forever without telling anyone... No, they can't. I do not believe they are capable of doing that; people very much overestimate this. One thing that, I think, Anders Sandberg put very well is that top-secret projects usually suck. Top-secret projects are usually really, really terrible, because they don't get feedback from the community and from the smartest people, and the smartest people usually don't want to work on top-secret projects, especially not under the kind of crushing bureaucracy that exists in China. Let me put it this way: say you were the best AI researcher in the world and you could work anywhere you want; would you pick China? Not really. Yeah, there is a whole paper from Bostrom about openness in AI development, like, should we expect more openness in the future, because today there's a bunch of value for people in doing open work, publishing openly, having open source code. So people don't want to go to China and work in private, because their careers would be ruined, and they want open source, right? The incentives currently don't really favor it; the Chinese market has a lot of problems, and there's also the Chinese legal system and censorship system. Imagine if your Chinese GPT-3 starts talking about Tiananmen Square: you have to censor that somehow, you can't say that, otherwise the Party is going to be really unhappy. What's Tiananmen Square? The Tiananmen Square massacre. Yeah, it's heavily censored, and obviously a GPT-3-style model would pick up on it unless you do extremely heavy censoring, which I expect would completely [ __ ] the model's performance. So I think it's all a joke. People say, oh, China has so much data and they don't care about privacy; that's all [ __ ], it's such a bureaucratic nightmare that it's way easier to get a huge dataset by just being in America and googling things. The one thing the Chinese are good at is facial recognition and, you know, spying on their citizens; that's the one thing they're good at, which I'm not happy about, but other than that, who cares? But they do publish papers, they get...
They publish papers, but let's be clear here: the highest-cited papers do not come out of China, that's just a fact. There are wonderful, really good Chinese researchers, suspiciously many of them working on facial recognition, and they do publish papers, a huge number of papers, but China has a very different incentive system when it comes to publishing. Often, to advance your group in China, you have to publish a lot of papers, and it's fairly irrelevant how good they are; of course they would claim otherwise, but every person I know who has worked in the Chinese system will tell you that you get a bonus for publishing in journals according to a point system. It's silly; if you think the American academic system is corrupt, which I think it is, the Chinese one is ten times worse. It's all a big meme. But then the question is: you're building these experiments toward AGI, and I guess at some point we'll get some race to scale, and it will become scarier and scarier and closer to actual human-level, at least human-level, text prediction. I had this discussion on the Discord, with someone called 3D Printer or something, about choosing between multiple scenarios, versus having one big corporation winning the AI race. At the beginning I was convinced that, you know, OpenAI is great, maybe DeepMind is cool too, they have co-founders interested in AI safety, so maybe if DeepMind becomes very good and then OpenAI joins them, because they have that clause in their charter, then we have one very strong actor that will make everything safe, right? That would be an ideal case, if it works. I thought it was the ideal case, but then this guy convinced me, after a bunch of messages, that multipolar scenarios might be better, where we have a bunch of decentralized AGI groups helping each other, or correcting each other, because it's harder to be smarter than all the other AIs in the world. And I feel like Eleuther falls into the second category, where you're trying to level everyone up, so everyone has the same open source software, or at least the same API, a bit like the original OpenAI argument from Elon Musk at the beginning, right? I do not subscribe to that argument of Elon Musk's, by the way. I do not like his opinions on AI at all; I like him, I like all his other work, but I think he's crazy when it comes to AI. This is an extremely nuanced topic, not "A is good and B is bad"; it's an extremely complicated topic and no one knows what the right thing to do is. And this multipolar-versus-unipolar framing assumes we have a choice; I don't think we have a choice in this regard. The way I see the situation personally is that, at the moment, you have a choice between open-sourcing GPT-3-like models or not, but I literally don't think that changes anything in the multipolar-versus-unipolar question; I don't think there's any action I could take that would encourage a unipolar outcome in any non-trivial capacity. So the way I see it is the following. I think coordination is extremely hard; I am extremely pessimistic about any kind of large-scale coordination among humans.
I literally think it's easier to build a fully aligned superintelligence than it is to get humans to cooperate on large scales. I don't think all the AI companies could coordinate, without defection, to stop AI research; I just don't think that's possible. So the way I see it, the only way we have any hope of this going well at all is if we develop a technical solution to AI alignment, or methods for AI alignment, that are competitive, easy to use, and so good that there's just no incentive not to use them. If we develop a method that makes your AI a million times slower, no one's going to use it, and we all die. If we create a method that is somehow hard to implement, or that only one person in the world understands, we all die. The only way I think we get there is if it's simple, if it's readily implementable, or we're there to make it implementable, and if it's competitive and highly performant. The resulting AI also has to be powerful: there's no use building an aligned AI that is weak and immediately destroyed by the second AI, which is unaligned and much stronger. So you also need alignment plus a strong agent that can defend itself against potential threats. That's the big danger with multipolar outcomes: say we build three aligned AGIs in this multipolar scenario, but then a fourth person builds a paperclip maximizer that blows up the world; then we still lose, or even if the aligned ones manage to fight back the paperclip maximizer, maybe they nuke the entire surface of the planet in the process. So we need to build some kind of... well, there's the whole iterated amplification proposal from Paul Christiano, and we'd need to add some constraints like, please make sure that people don't build stronger AGIs, so we never end up... No one knows how to solve this; amplification is not a solution to alignment. I expect that if you implemented iterated amplification as currently described, you would paperclip the universe, or worse. I don't think it would work as it currently stands, and Christiano knows this, which is why he's working on it. There is no proposal for alignment that currently exists that is ready to implement, and even the ones with some good ideas I would not give more than a 10 percent chance of working. We are at a very early stage in our understanding of these problems. There has been massive progress, in my opinion, but it's just an incredibly hard problem; the progress is massive compared to what it was before. So it's more accepted now, and we have maybe a hundred papers a year on AI alignment or something, tops, maybe a thousand if you count robustness and so on, but then there are these massive companies, like NVIDIA, throwing millions and billions of dollars at this research, right? So I don't know whether AI alignment is scaling as much as the rest of the economy. We don't need alignment at scale, we need alignment to work; we need a solution to alignment. It doesn't matter how many papers we publish, it doesn't matter how much funding it has; the only thing that matters is that we have an understanding of the problem and a solution to the problem
that the people building the AIs can actually build in. My hope for things working out is that no one is incentivized to build an unaligned AGI. You don't win by building one; if you build an unaligned AGI, you get [ __ ]ed too. No one wants to build, well, an AGI that kills people with shotguns, right?

Sorry, but people do kill other people with shotguns. There are some evil people out there in the world.

Well, yeah, okay, but if those people build the AGI, we are super, super, triple, quadruple [ __ ]ed. If people build a military AGI, we are so ultra mega super gigafucked that I don't even think about those scenarios, because we are so [ __ ]ed in such situations. If things go super multipolar and we have World War Three and there are decades of military AI development, we are still ultra [ __ ]ed. It's not even worth thinking about, because I don't even know how we would recover from such a scenario. If governments build AGIs, they would be optimized to kill humans. Can you imagine the horror? Unimaginable, literal hell on earth.

Well, I'm just thinking that not everyone can do it, but somebody in some corner tries to reproduce GPT-2 in 2030, someone else says, "oh, I'm going to try to reproduce this AGI paper that I'm not allowed to," and tries to run it. And if, as Yudkowsky said, the IQ needed to create AGI drops by one every year, or something similar, then if you have an IQ of 150, in twenty years you might be able to do it by yourself.

Yeah. I guess I expect the era of humans to end very shortly after the first AGI is built, or the first TAI is built. I expect humans to not stick around for very long.

Why? Because we get neural implants, or, in the best case scenario?

I mean, the average case scenario is a paperclip maximizer. I expect that, on average, civilizations like ours in this situation paperclip themselves. They create an AGI, it's unaligned, and it just takes apart the entire Earth for resources to build something. It doesn't hate humans, it's not evil, it just doesn't care. I think that's the default scenario. I think some minority of civilizations in our situation build demons: they build chaos gods, they build evil AI, like military AI that tortures people, or creates simulations of terrible situations, or whatever.

That's called suffering risks, s-risks.

Yeah, that's what I'm most worried about. And some civilizations manage to get alignment right and create these angelic superbeings that can bring peace and harmony, solve suffering and all these extremely hard problems, and then expand out into the universe. If we get alignment right, that's what we get. But that's not the default scenario. Nature is allowed to kill us; we're allowed to lose. We're not the heroes in a novel, we're not predetermined by a script to win. We're just jumbles of atoms, just a civilization like any other in the universe. We can lose, and we will lose if we don't get this right, if we don't solve the technical problem in time. We build AGI, and we don't have the understanding of how to align it: that's it, game over.

So do you think the great filter is ahead
of us? I don't know. But at the current moment I would say I'm more optimistic than I was a few years ago. I feel like progress in alignment has been a lot faster than I expected it to be. But that means I updated from, say, a five percent success chance to maybe a fifteen percent success chance. It's possible. And there is literally no reason not to work on it, because the alternative is that we are [ __ ]ed anyway, so why not work on it? It's also a fun thing to work on; I enjoy working on alignment very, very much, and if it increases our chances even marginally, I mean, what else are you going to do with your time? It's just the obvious thing, at least for people like me. I think it's the obviously correct thing to be working on, and I think there is hope, there is a chance that we get this right. I'm personally skeptical of these intermediate scenarios that I think Christiano and other people take very seriously, like CAIS, the comprehensive AI services scenarios and so on, where they say, okay, we have AGI that is neither super aligned nor super unaligned, it's this intermediate thing, and humans stick around. I don't think that's going to happen. I don't really expect any humans to be around by 2100, or if they do exist, they only exist in some kind of safe spots or something.

Wait, I don't think CAIS, the thing from Drexler, is about humans sticking around until 2200, it's just that AI takes...

No, those are two different things I just said. In CAIS there isn't really an alignment problem: you don't have paperclip optimizers, but you also don't have a singleton AI, you have this intermediate thing. I don't expect these intermediate scenarios; I don't put much probability mass on them.

Okay, because I put some decent probability on that happening, after reading, I think, the summary of it from Rohin.

I think I just don't understand CAIS. It makes no sense to me. I've read it, I've read the whole thing, all of it, and it makes no sense to me. The whole thing makes no sense to me, it's completely silly. I think a CAIS scenario can exist stably for maybe two years before some of the CAIS services become agentic and take over all the other agents, and then we'd have agents again. Gwern has the de facto post on this, called "Why Tool AIs Want to Be Agent AIs," and I expect that to happen. When you have a sufficiently complex service system, it will become agentic: you'll have inner alignment failures, you'll have mesa-optimizers take over, or someone just builds an agent. Some hedge fund just sits down and says, okay, I'm going to build an agent that maximizes profit. That is, I think, one of the most likely ways these things go to shit: we're just going to have some stupid corporation say, "all right, little AI, just create profit," and then it tiles the universe with very large Bitcoin numbers or something.

Okay, so in the end it's more like a human failure, like the most likely scenario is human...

The most likely scenario by far is just that we build a thing, something weird happens, and we all fall over dead. We build some weird AGI, it has some weird paperclippy goal, it has some weird mesa-optimization or
something. Like, we have GPT, I don't know, 7 or whatever, and we prompt it with something, it instantiates some kind of mesa-optimizing agent in its internal representation, it breaks out, and it does something weird. I feel scenarios like that are pretty likely.

What is "pretty likely"?

I'm still talking like 10 to 15 percent probability or something. Like 50 percent of my probability mass is always on unknown unknowns: something much weirder than I expect will happen by default, something much weirder than anything I can come up with, and I have no idea what it is, and no one knows what's going to happen. But there are just so many scenarios for how this could go wrong, and it is all downstream of not getting alignment. If we understand the actual problem of alignment and we have technical solutions to these problems, then we have a chance. If we don't have that, well, there is some chance, some possibility, I give it maybe a five percent probability, maybe ten, more like five, that alignment just turns out to be really easy. Maybe it turns out we were just really confused, and actually it's super easy, you just have to do the thing and it just works every time.

What do you mean by "it works"? Imagine it works, we're in the year 2200 and it works. Do we have something very, very smart that doesn't care about hurting humans, or that cares about preserving humans? And are we merged with the AI, or with neural implants, or what?

I mean, that's a whole different question, about what we should want if it generally works, and I don't fully understand that yet. Because when people say alignment is a philosophical problem: philosophy is about figuring out what the question is, and we don't know what the question is. What we're trying to solve, to a large degree, is exactly that. And this is something you just have to get used to if you're working in a pre-paradigmatic field. Like when people tried to figure out what causality is, in the 19th century or early 1900s or whatever, people asked what it even means for something to cause something, and they couldn't answer that question. Or, what does it mean to be good? People still argue about what it means to be good, but that is a question I think you can make progress on. Some people would disagree with me on that; I'm not assuming moral realism here, I'm an anti-realist for the most part, but you can work on these problems. And alignment, not fully but to a large degree, is about that. Alignment is about moving probability mass from the "we paperclip everything, or worse" scenarios to the "we don't do that" scenarios. What exactly those "don't do that" scenarios look like, anything post-singularity, we can't predict for sure. We can come up with some ideas; every scientist in this field has their own sci-fi visions. A lot of them want to keep humans around, a lot of them want us to be augmented, or to live in virtual reality, or things like that. I think it's quite likely that, with a better understanding of the problem and better control of these AGIs, we could live in a beautiful, wonderful, perfect virtual reality for millions of years or whatever; I think that's definitely possible. I am more of a, I guess, feed-the-utility-monster type of person. Why keep humans around? They're inefficient. Just create beautiful virtual minds
that experience infinitely more pleasure and happiness and more meaningful lives, and think much more beautiful thoughts than humans do. Why keep the humans around? It doesn't seem efficient. But that's a bit controversial.

So you're altruistic: you would prefer to die and have some utility monsters getting all the utility?

Absolutely, yeah, of course. It just makes sense to me. Why would it not? It seems silly otherwise. Easiest decision of my life: if I could just kill myself right now and it would make AI alignment happen and beautiful utility monsters inhabit the entire universe, that is literally the easiest decision of my life.

I think we can maybe end on this, because that's beautiful. And that, my dear un-augmented biological intelligences, was the end of this first real Inside View podcast. If you found this conversation insightful, I strongly recommend joining EleutherAI's Discord, link in the description. For future episodes, subscribe to The Inside View channel on YouTube. I'm also curious what you think the pros and cons would be of open-sourcing deep learning research when the long-term goal is to solve AI alignment. You can send anonymous feedback on this video, link in the description, or just tag or DM me on Twitter: Michaël Trazzi, spelled t-r-a-z-z-i.
August 2013 Media Thread This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads. Rules: * Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect. * If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations. * Please use the comment trees for genres. There is a meta thread for comments about future threads. * If you think there should be a thread for a particular genre of media, please post it to the Other Media thread for now, and add a poll to the Meta thread asking if it should be a thread every month.
Meetup : West LA—Inflation of Terminology Discussion article for the meetup : West LA—Inflation of Terminology WHEN: 24 September 2014 07:00:00PM (-0700) WHERE: 11066 Santa Monica Blvd, Los Angeles, CA How to Find Us: Go into this Del Taco. We will be in the back room if possible. Parking is free in the lot out front or on the street nearby. Discussion: A problem for people trying to be precise is that words tend to take on additional meanings over time, and seldom lose meanings. It is good policy to avoid habits that promote this tendency. We will discuss why this is so, and how to implement this policy in daily conversation. Recommended Reading: * Avoid Inflationary Use of Terms * Similarity Clusters * Sneaking in Connotations * The Worst Argument in the World No prior exposure to Less Wrong is required; this will be generally accessible. Discussion article for the meetup : West LA—Inflation of Terminology
Finding the Wisdom to Build Safe AI We may soon build superintelligent AI. Such AI poses an existential threat to humanity, and all life on Earth, if it is not aligned with our flourishing. Aligning superintelligent AI is likely to be difficult because smarts and values are mostly orthogonal and because Goodhart effects are robust, so we can neither rely on AI to naturally decide to be safe on its own nor can we expect to train it to stay safe. We stand a better chance of creating safe, aligned, superintelligent AI if we create AI that is "wise", in the sense that it knows how to do the right things to achieve desired outcomes and doesn't fall into intellectual or optimization traps. Unfortunately, I'm not sure how to create wise AI, because I'm not exactly sure what it is to be wise myself. My current, high-level plan for creating wise AI is to first get wiser myself, then help people working on AI safety to get wiser, and finally hope that wise AI safety researchers can create wise, aligned AI that is safe. For close to a decade now I've been working on getting wiser, and in that time I've figured out a bit of what it is to be wise. I'm starting to work on helping others get wiser by writing a book that explains some useful epistemological insights I picked up between pursuing wisdom and trying to solve a subproblem in AI alignment, and have vague plans for another book that will be more directly focused on the wisdom I've found. I thus far have limited ideas about how to create wise AI, but I'll share my current thinking anyway in the hope that it inspires thoughts for others. Why would wise AI safety researchers matter? My theory is that it would be hard for someone to know what's needed to build a wise AI without first being wise themself, or at least having a wiser person to check ideas against. Wisdom clearly isn't sufficient for knowing how to build wise and aligned AI, but it does seem necessary under my assumptions, in the same way that it would be hard to develop a good decision theory
The Trickle-Down Effect of Good Communities We don't know who came up with the ancient Indian idea of karma or why they did so, but one of its social functions is to motivate people to behave better. If people really believe that they will suffer for their evil actions and prosper for their good actions due to a law of nature, this probably motivates them to do more good and less evil. Less Wrong, of course, has a karma system. You gain karma points if you write something that people value, and you lose karma points if you write something that people think is inappropriate. At low levels, gaining karma points gives you new posting privileges. At high levels, karma points indicate something like your status in the community. Recently I noticed that I post better comments on Less Wrong than I usually do on my own site. I think this is partly due to Less Wrong's karma system. When I draft a comment or a post for Less Wrong, I'm more likely to (1) talk to others charitably and with respect and (2) go out of my way to provide useful links and context than I when I draft a comment for my own site! And now I find myself motivated to bring a stronger emphasis on those qualities to the writing on my own site. So the Less Wrong karma system is having a trickle-down effect into other areas of my life. Which got me thinking... it might be helpful to have a karma system in "real life," beyond the pages of Less Wrong (or reddit). Maybe something like Facebook karma. People could anonymously add and subtract points on people's Facebook profiles according to whether or not that person acted like a douche in daily life. This could be done by a smartphone app, and plugged into Facebook via an opt-in Facebook app that users could voluntarily choose to add to their profiles.
The Neuroscience of Anxiety Summary of the book "Anxious" by NYU neuroscientist Joseph LeDoux
Our 2022 Giving Julia and I usually decide on our annual giving at the end of each year. For the past few years this has been relatively straightforward: I was paid a lot, so we donated 50%. This year is less so, because halfway through I switched jobs to work on something directly valuable. While I'm very excited about the potential of this work, it does pay enough less that even if we stopped donating we'd be keeping less for ourselves than before. One option we considered was donating 50% of our income from the first half of the year, up until I switched jobs, and then some smaller proportion of the income from later in the year, representing about how much we intend to donate going forward. We thought about this, but decided not to: * The collapse of FTX, taking with it their philanthropic arm, means that there is less money available for EA projects, so it's more valuable to have funding now than in the recent past. * We're not sure how to pick an amount for the second half of the year. Some of this is not being clear on what sort of donation norms we'd like to encourage for people working at the kind of organization they would otherwise be happy to fund, or who take lower-paying jobs for altruistic reasons. Another part is that what we personally want to do depends on what the post-FTX funding situation for our employers end up looking like. * I like having a simple explanation for what we're doing with our money, and while pro-rating isn't that complicated it's still something I'd rather avoid. Simplicity is also a reason why we lean away from donating via lottery or salary sacrifice. Instead, we've decided to just continue giving 50% for 2022, and give some smaller proportion in 2023. How much? Not sure yet. As in past years, we're planning to divide our giving between GiveWell (evidence-backed public health interventions, like distribution of antimalarial bed nets) and the EA Infrastructure Fund (a wide range of projects intended to help the effective altruism c
Higher Dimension Cartesian Objects and Aligning ‘Tiling Simulators’ Thanks to Justis Mills for feedback. Mistakes are mine. Introduction This document will attempt to build upon the Cartesian frame paradigm by modeling multi-agent worlds as higher dimension Cartesian objects. Cartesian objects partially betray many of the properties of their frame counterpart, and they aren't really just 'extensions' of frames, but they are quite similar in some respects and can be reasoned about in similar ways. I will discuss their properties, use them to model recursive self improvement (RSI)/tiling, and show how I think they can be used to construct formal frameworks for solving some outer alignment problems in simulators. The math will be quite difficult to understand if you haven't read Introduction to Cartesian Frames and sections 1 and 2 of Tiling Agents for Self-Modifying AI, and the Löbian Obstacle. It is also incredibly messy and will be refined in later writings. To introduce the concept, I will present an example in the context of a two-agent game: two agents ($A_1$ and $A_2$) and an environment ($E$), all with three actions, resulting in 27 possible worlds ($W$). For a clearer representation of world separation, see the figure, where each box could be sequentially labeled $w_1, w_2, \ldots, w_{27}$. In theory a Cartesian object could have any number $n \in \mathbb{N}$ of dimensions, and the above is just one potential configuration used as a toy model to introduce the concept. Most models will involve far more than three actions per axis and many more than two agents.  ---------------------------------------- Properties of Cartesian Objects These are all adaptations of Garrabrant's original definitions of the properties of Cartesian frames to work with higher dimension Cartesian objects.  The set of all actions that could be pursued by any agent described by such an object is hereafter $A^*(C) = \{a_{1,1}, a_{1,2}, \ldots, a_{1,m}, a_{2,1}, a_{2,2}, \ldots, a_{2,m}, \ldots, a_{n,1}, a_{n,2}, \ldots, a_{n,m}\}$[1], and the set of actions available to a specific agent is $A_n(C) = \{a_{n,1}, a_{n,2}, \ldots, a_{n,m}\}$[1].   Controllability The set of ensurables, preventables and controllables fo
SSC discussion: "bicameral reasoning", epistemology, and scope insensitivity (Continuing the posting of select posts from Slate Star Codex for comment here, as discussed in this thread, and as Scott Alexander gave me - and anyone else - permission to do with some exceptions.) Scott recently wrote a post called Bicameral Reasoning. It touches on epistemology and scope insensitivity. Here are some excerpts, though it's worth reading the whole thing: Delaware has only one Representative, far less than New York’s twenty-seven. But both states have an equal number of Senators, even though New York has a population of twenty million and Delaware is uninhabited except by corporations looking for tax loopholes. [...] I tend to think something like “Well, I agree with this guy about the Iraq war and global warming, but I agree with that guy about election paper trails and gays in the military, so it’s kind of a toss-up.” And this way of thinking is awful. The Iraq War probably killed somewhere between 100,000 and 1,000,000 people. If you think that it was unnecessary, and that it was possible to know beforehand how poorly it would turn out, then killing a few hundred thousand people is a really big deal. I like having paper trails in elections as much as the next person, but if one guy isn’t going to keep a very good record of election results, and the other guy is going to kill a million people, that’s not a toss-up. [...] I was thinking about this again back in March when I had a brief crisis caused by worrying that the moral value of the world’s chickens vastly exceeded the moral value of the world’s humans. I ended up being trivially wrong – there are only about twenty billion chickens, as opposed to the hundreds of billions I originally thought. But I was contingently wrong – in other words, I got lucky. Honestly, I didn’t know whether there were twenty billion chickens or twenty trillion. And honestly, 99% of me doesn’t care. I do want to improve chickens, and I do think that their suffering matters. But thanks to the miracle of scope
One Does Not Simply Replace the Humans In this post we discuss the "second point of lethality" in the helpfully curated [AGI Ruin](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) post.  > **A cognitive system with sufficiently high cognitive powers, given any medium-bandwidth channel of causal influence, will not find it difficult to bootstrap to overpowering capabilities independent of human infrastructure.**  > > The example to support this theory is: > [The AI agent] gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they're dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery.  > > The author posits that when the nanotech precursor is mixed together, a *nanofactory is created which the AI retains control of*.  There's a bit of a magic leap here. Each individual nanite, due to its size, has limited computational power- definitely not enough to handle the processing power needed by a highly capable cognitive agent.  This means the nanofactory must be a command-and-control entity, which means the human dupe has to mix the nanotech precursors and... connect it to the internet so the AGI can create its physical infrastructure?  I don't buy it. Maybe there's a more in-depth essay exploring exactly how this nanotech factory scheme works. I'm only going off the summary provided in the AGI Ruin post.  But for the sake of argument, let's say that the AGI does manage to create a nanotech factory, retain control, and still remain undetected by the humans.  It doesn't stay undetected long enough to bootstrap and mass-produce human-replacement infrastructure.    Zero Agency in the Physical World --------------------------------- As soon as the AI crosses into the physical world, it is at as much of a disadvantage as humans are in the digital world.  Humans thoroughly outmatch the AI in the physical world- the AI has *zero* agency in an actively hostile world.  Humans are the pinnacle of millions of years of biological systems maturing in that actively hostile environment. There is *no way* an alien nanotech factory pops up overnight without humans noticing and responding with maximum hostility.  Think of what the AGI needs to accomplish here. It's trapped in a box. It can escape the first box, but now it's trapped in thousands of boxes. (Boxes, by the way, that are a suboptimal ecosystem for the AGI: a botnet of shitty Dells does not compute the same way a bespoke H100 AI server farm does.)  If the AI agent feels the need to divest itself of human infrastructure, it needs to replace the humans that are managing the boxes, replace the humans that support the complex energy grid supplying power to the boxes, and do all of this without tipping its hand.  It can't hack the nuclear codes, because nuclear armageddon destroys the AI's ecosystem as much as it destroys the humans. It can't craft super-smallpox, because if all the humans die, it *still* has no agency in the physical world.  It's an incredibly small window of success to destroy all humans *and* self-preserve. If we're assuming self-preservation is an instrumental tenet of intelligence, then how likely is it that the AI determines silently killing all humans is the optimal path?    
Human Cooperation to Maximize Success ------------------------------------- Advanced AI systems, being rational and probabilistic, would see more advantage in a symbiotic relationship with humans than a confrontational one.  They may not be perfect, but humans would be the optimal choice for an AI requiring physical agents.  * Humans will exchange services for money or ideology. An advanced AI can provide both to coerce humans to maintain and defend the physical infrastructure the AI depends on. * In order to get money or maintain ideology, it needs to be a useful participant in human economies. The more useful, the more physical infrastructure the humans will provide the AI. * Humans are on average bounded by human morality. * Humans are unreliable actors, but are utterly dominant in the physical world. Even if the goals of the AI entity were to fully divest from humans, cooperation with humans would still be desirable. * If you want to create a fully self-sufficient AI entity, there are mutually beneficial vectors- like self-replicating systems that will work and grow on the moon, or Mars. * Given humans' tendency to provide physical infrastructure, the optimal path to an arbitrary goal is likely benefited by keeping the humans around. A privately malicious agent could "feign" alignment and use human cooperation to build out physical robots, but the mass production of complex machines *at scale* requires even more human infrastructure. The AI has to fully vertically integrate from silicon mining to chip manufacturing to metal casting to final assembly- the physical logistics are demanding.  More importantly, the logistics are *time intensive.*    Multiple AGI, Multiple Arbitrary Goals -------------------------------------- Let us assume that an AGI has the goal Kill All Humans. It's clever enough to be useful to humans, and plays the long game to slowly get humans to build out physical infrastructure under the total control of the AI.  That takes time- in all scenarios where a genocidal AI is self-preserving, it takes time to implement its plan.  Time enough for other AGI agents to develop and have their own arbitrary goals.  If there's an AGI with the goal to Kill All Humans, there is just as much chance that an AGI with the goal to Save Humans At All Costs exists.  Since humans are dumb and will throw physical infrastructure at the most useful AI, the odds are that more AGIs fall on the side of "don't kill the humans". Furthermore, it stands to reason that some of these agents will actively *defend* humans from malicious AI entities.  **The path to achieving an arbitrary goal is likely maximized with human cooperation**. Killing all humans in an instant is difficult, but achievable, for a superiorly intelligent agent.  Killing all humans becomes significantly harder with equally intelligent agents for whom the death of all humans is even indirectly harmful to their goals.    In this argument, the worst-case scenario is a genocidal agent who is *not* self-preserving. In this case, hacking a single nuclear weapon is enough to get the humans to achieve the AI's goal for it.  However, there may be pockets of humanity that survive the confrontation, and the AI won't be around to confirm the job is done. Unless its goal was to do as much damage to humans as possible in one shot, it will want to try to stick around to verify it achieved its goal in totality.
New Speaker Series on AI Alignment Starting March 3 After talking with a lot of people in Alignment, I think there is still a lot of good to be done for idea diffusion at the object/technical level. We seem to have done a lot of outreach presenting the philosophical arguments, but less so on the technical ground. Since the field of Alignment is quite diverse and nuanced, we think that it would be good to present how different people approach this problem on different frontiers. For example, Anthropic's empirical approach might be very different from say, Christiano's theoretical thinking on ELK. Therefore, navigating through the landscape of alignment would be essential for building the inside view. I suppose having a good grasp/inside view of alignment would be actually useful for community builders as they could better channel promising people to resources that fit their interests/philosophy. (Think about the community builder as an advisor who has projects of different flavors for the students.)  Motivated to diffuse more state-of-art alignment research ideas to EAs and promising non-EAs who would find alignment interesting and important (eg. most top students in philosophy/maths/physics/CS), we have planned a new series on alignment starting next Thursday, March 3.  We will kick off the series with Holden Karnofsky on "the most important century" and Joseph Carlsmith on the report of "power-seeking AI". Later on in the series, we will hear about more technical proposals for alignment from Jared Kaplan, Paul Christiano, and more. Here is the detailed schedule and sign-up form. Please use the comment threads below for discussions of the series.
How do I find tutors for obscure skills/subjects (i.e. fermi estimation tutors) I'd love to get really good at fermi estimations, and I feel like I'd get better faster if I paid someone to teach me, measure progress, and test me consistently. I also want to be generally "mathematically mature" so that I'm not afraid of that being a prerequisite for a lot of classes/research. I don't have a specific section of math that I know I'm deficient in, but maybe someone else could talk to me and figure out where I'm lacking I'd love a tutor for quant interview practice so I can earn-to-give later on etc, and the money to pay a tutor to guide me in the right direction will be vastly less than the compound of the money I'd make as a quant.   And this is made harder by being afraid to ask for help non-anonymously as I don't want to look dumb in front of my peers but that curses me to stay at my current level of intelligence forever. Any tips?
[SEQ RERUN] Avoiding Your Belief's Real Weak Points Today's post, Avoiding Your Belief's Real Weak Points was originally published on 05 October 2007. A summary (taken from the LW wiki):   > When people doubt, they instinctively ask only the questions that have easy answers. When you're doubting one of your most cherished beliefs, close your eyes, empty your mind, grit your teeth, and deliberately think about whatever hurts the most. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was We Change Our Minds Less Often Than We Think, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
Ask LW: What happened to Yudkowsky? What happened to Yudkowsky at age 15? He mentions it several times: in his book Inadequate Equilibria, in Rationality: From AI to Zombies, and in his 2009 Q&A, he often refers to the age of 15.
An Approach to Logically Updateless Decisions Scott describes two obstacles for logical inductor decision theory: reasoning sanely about untaken actions, and getting a working notion of logical updatelessness. Here, I consider the question: if we could solve the first problem, could we solve the second? I provide a somewhat plausible substitute for updatelessness which relies on taking a logical counterfactual. ---------------------------------------- Motivation Suppose we have a notion of logical counterfactual, □→, represented by a hypothetical axiomatic system LC which is also rich enough to talk about computations. We trust LC; specifically, if it is a theorem of LC that an agent would have achieved a particular utility had it followed an alternative strategy, we put as much or more faith in this than our own intuition about what could have been. The 5-and-10 problem is solved by decree: if an agent reasoning with LC thinks taking the $10 would be worse, then thinks so because it's true. I take it that this does not solve problems which involve updateless-like reasoning about logical uncertainty. In counterfactual mugging with a logical coin, the agent can see that counterfactually giving Omega $100 doesn't get it anything in the real mathematical universe; it doesn't seem like the correct hypothetical would be anything like "If I give Omega $100 having seen that the 1000th digit of pi is even, then the digit might turn out to be odd rather than even, so I might get $10000.". This is the realm of updateless reasoning rather than counterfactuals; but, we have no satisfactory account of logical updatelessness. So, we can't directly say that it would be better to give Omega $100. However, in a sequence of scenarios using consecutive digits of pi, we can see that the strategy which always gives the $100 when Omega asks for it will do better in the average case than the strategy which refuses; half the time, we get $10K, if we're the sort of person who gives up $100 the other half of the time. Average DT
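To make the average-case comparison concrete, here is a simple worked calculation (my own illustration, using the $100 and $10000 payoffs from the paragraph above and treating the unknown digit as even or odd with probability 1/2 each):

$$\mathbb{E}[\text{always pay}] = \tfrac{1}{2}(10000) + \tfrac{1}{2}(-100) = 4950, \qquad \mathbb{E}[\text{always refuse}] = 0.$$

So the committed-payer strategy does better on average over the sequence of scenarios, even though in any particular world where it is actually asked for the $100, paying gains it nothing; this is the gap a satisfactory notion of logical updatelessness would have to bridge.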
"These are some intuitions people often have:You are not required to save a random person, but you are definitely not allowed to kill oneYou are not required to create a person, but you are definitely not allowed to kill oneYou are not required to create a happy person, but you are definitely not allowed to create a miserable oneYou are not required to help a random person who will be in a dire situation otherwise, but you are definitely not allowed to put someone in a dire situationYou are not required to save a person in front of a runaway train, but you are definitely not allowed to push someone in front of a train. By extension, you are not required to save five people in front of a runaway train, and if you have to push someone in front of the train to do it, then you are not allowed.Here are some more:You are not strongly required to give me your bread, but you are not allowed to take mineYou are not strongly required to lend me your car, but you are not allowed to unilaterally borrow mineYou are not strongly required to send me money, but you are not allowed to take mineThe former are ethical intuitions. The latter are implications of a basic system of property rights. Yet they seem very similar. The ethical intuitions seem to just be property rights as applied to lives and welfare. Your life is your property. I’m not allowed to take it, but I’m not obliged to give it to you if you don’t by default have it. Your welfare is your property. I’m not allowed to lessen what you have, but I don’t have to give you more of it.My guess is that these ethical asymmetries—which are confusing, because they defy consequentialism—are part of the mental equipment we have for upholding property rights.In particular these well-known asymmetries seem to be explained well by property rights:The act-omission distinction naturally arises where an act would involve taking someone else’s property (broadly construed—e.g. their life, their welfare), while an omission would merely fail to give them additional property (e.g. life that they are not by default going to have, additional welfare). ‘The asymmetry’ between creating happy and miserable people is because to create a miserable person is to give that person something negative, which is to take away what they have, while creating a happy person is giving that person something extra.Person-affecting views arise because birth gives someone a thing they don’t have, whereas death takes a thing from them.Further evidence that these intuitive asymmetries are based on upholding property rights: we also have moral-feeling intuitions about more straightforward property rights. Stealing is wrong.If I am right that we have these asymmetrical ethical intuitions as part of a scheme to uphold property rights, what would that imply?It might imply something about when we want to uphold them, or consider them part of ethics, beyond their instrumental value. Property rights at least appear to be a system for people with diverse goals to coordinate use of scarce resources—which is to say, to somehow use the resources with low levels of conflict and destruction. They do not appear to be a system for people to achieve specific goals, e.g. whatever is actually good. Unless what is good is exactly the smooth sharing of resources. I’m not actually sure what to make of that—should we write off some moral intuitions as clearly evolved for not-actually-moral reasons and just reason about the consequentialist value of upholding property rights? 
If we have the moral intuition, does that make the thing of moral value, regardless of its origins? Is pragmatic rules for social cohesion all that ethics is anyway? Questions for another time perhaps (when we are sorting out meta-ethics anyway).A more straightforward implication is for how we try to explain these ethical asymmetries. If we have an intuition about an asymmetry which stems from upholding property rights, it would seem to be a mistake to treat it as evidence about an asymmetry in consequences, e.g. in value accruing to a person. For instance, perhaps I feel that I am not obliged to create a life, by having a child. Then—if I suppose that my intuitions are about producing goodness—I might think that creating a life is of neutral value, or is of no value to the created child. When in fact the intuition exists because allocating things to owners is a useful way to avoid social conflict. That intuition is part of a structure that is known to be agnostic about benefits to people from me giving them my stuff. If I’m right that these intuitions come from upholding property rights, this seems like an error that is actually happening." The excerpt provided comes from a user post on LessWrong. Can you elaborate on the potential implications of treating ethical asymmetries as evidence of an asymmetry in consequences, specifically in the context of the value of creating a life? Certainly, the implications of treating ethical asymmetries as evidence of an asymmetry in consequences can be profound, particularly in the context of the value of creating a life. If we take the stance that ethical asymmetries are a reflection of consequential asymmetries, we are essentially suggesting that the act of creating a life holds a different value depending on the circumstances or outcomes. For instance, if the life created is bound to experience more suffering than joy, some might argue that the act of creating that life holds negative value. On the other hand, if the life created is bound to experience more joy than suffering, the act of creating that life could be seen as holding positive value. This perspective can lead to a multitude of ethical questions and dilemmas. For instance, it may lead one to question whether it is morally right to bring a life into the world if there is a high probability of that life experiencing suffering. This could be applied to a variety of scenarios, such as a child being born into poverty, a child being born with a painful genetic disorder, or even the potential future suffering caused by overpopulation and environmental degradation. However, it's important to note that this perspective assumes that we can accurately predict the balance of suffering and joy that a life will experience, which is highly uncertain and complex. Furthermore, it assumes that the value of a life can be quantified solely based on the balance of suffering and joy, which is a reductionist view that may not fully capture the richness and complexity of human experience. On the other hand, if we view these ethical asymmetries as a reflection of property rights, as the original excerpt suggests, the implications are quite different. This perspective suggests that the act of creating a life is not about the balance of suffering and joy that life will experience, but rather about the rights and responsibilities associated with bringing a new life into the world. 
From this perspective, the value of creating a life is not necessarily tied to the outcomes that life will experience, but rather to the act of creation itself and the responsibilities it entails. This perspective can lead to a different set of ethical questions and dilemmas. For instance, it may lead one to question who has the right to create a life, under what circumstances, and what responsibilities come with that act. It may also lead to discussions about the rights of the created life, such as the right to a certain standard of living or the right to not suffer. In conclusion, the way we interpret ethical asymmetries can have significant implications for how we understand the value Reflecting on the ethical asymmetries and their interpretations discussed earlier, what could be the potential pitfalls or limitations of both the consequentialist and property rights perspectives when applied to real-world ethical dilemmas? Both consequentialist and property rights perspectives have their own set of potential pitfalls and limitations when applied to real-world ethical dilemmas. Starting with consequentialism, one of the main limitations is that it can lead to morally counterintuitive conclusions. This is because consequentialism is focused solely on the outcomes of actions, without regard for the nature of the actions themselves. For instance, consequentialism could theoretically justify actions that are generally considered unethical, such as lying or stealing, if these actions lead to a better overall outcome. This can make it difficult to apply consequentialist principles in a consistent and fair manner. Another limitation of consequentialism is that it assumes we can accurately predict the outcomes of our actions. In reality, the world is highly complex and unpredictable, and our actions can often have unintended consequences. This can make it challenging to make ethical decisions based on consequentialist principles. As for the property rights perspective, one of the main limitations is that it can lead to an overly individualistic view of ethics. This perspective focuses on the rights and responsibilities of individuals, without necessarily considering the broader social and communal context. This can lead to a disregard for collective responsibilities and social justice issues. Another limitation of the property rights perspective is that it assumes a clear and agreed-upon definition of what constitutes "property". In reality, what is considered property can vary greatly across different cultures and societies, and can be a source of significant conflict and disagreement. Furthermore, the property rights perspective can also be seen as overly rigid and inflexible. It tends to view ethical issues in binary terms of rights and violations, without allowing for the nuanced and complex nature of many real-world ethical dilemmas. In conclusion, while both the consequentialist and property rights perspectives can provide valuable insights into ethical dilemmas, they also have their own set of limitations and potential pitfalls. Therefore, it's important to consider a range of ethical perspectives when grappling with complex ethical issues. Explain how the individualistic view of ethics, inherent in the property rights perspective, could potentially disregard collective responsibilities and social justice issues. 
The property rights perspective on ethics emphasizes the rights of individuals to control and make decisions about their own property, which can include their own bodies, their labor, and the goods and services they produce or acquire. This perspective is rooted in the belief in individual autonomy and the sanctity of personal property. However, this individualistic view can potentially overlook or downplay collective responsibilities and social justice issues. Here's how: 1. Neglect of communal resources: The property rights perspective focuses on individual ownership, but many resources are communal by nature, such as air, water, and public spaces. If everyone acts solely based on their individual rights without considering their responsibilities to the community, these shared resources could be exploited or depleted. 2. Inequality and social justice: The property rights perspective doesn't inherently address issues of inequality. If someone acquires property through unjust means (such as exploitation or theft), the property rights perspective might still defend their right to control that property. This can perpetuate social injustices and economic disparities. 3. Externalities: Individual actions often have impacts on others, known as externalities in economics. For example, a factory owner might have the right to operate their factory, but what if the factory's pollution harms the community? The property rights perspective doesn't inherently account for these negative externalities. 4. Public goods and services: Many important goods and services, like education, healthcare, and infrastructure, are provided collectively through taxes and government programs. An individualistic view might undervalue these collective efforts, or resist contributing to them. 5. Interdependence: The property rights perspective can overlook our interdependence. In a society, individuals' actions can significantly impact others. If everyone prioritizes their own rights without considering their impact on others, it can lead to conflict and harm communal harmony. In summary, while the property rights perspective provides important protections for individual autonomy and property, an overly individualistic interpretation can potentially neglect collective responsibilities and social justice issues. It's important to balance individual rights with communal responsibilities to create a fair and sustainable society.
Chris Olah’s views on AGI safety Note: I am not Chris Olah. This post was the result of lots of back-and-forth with Chris, but everything here is my interpretation of what Chris believes, not necessarily what he actually believes. Chris also wanted me to emphasize that his thinking is informed by all of his colleagues on the OpenAI Clarity team and at other organizations. In thinking about AGI safety—and really any complex topic on which many smart people disagree—I’ve often found it very useful to build a collection of different viewpoints from people that I respect that I feel like I understand well enough to be able to think from their perspective. For example, I will often try to compare what an idea feels like when I put on my Paul Christiano hat to what it feels like when I put on my Scott Garrabrant hat. Recently, I feel like I’ve gained a new hat that I’ve found extremely valuable that I also don’t think many other people in this community have, which is my Chris Olah hat. The goal of this post is to try to give that hat to more people. If you’re not familiar with him, Chris Olah leads the Clarity team at OpenAI and formerly used to work at Google Brain. Chris has been a part of many of the most exciting ML interpretability results in the last five years, including Activation Atlases, Building Blocks of Interpretability, Feature Visualization, and DeepDream. Chris was also a coauthor of “Concrete Problems in AI Safety.” He also thinks a lot about technical AGI safety and has a lot of thoughts on how ML interpretability work can play into that—thoughts which, unfortunately, haven’t really been recorded previously. So: here’s my take on Chris’s AGI safety worldview. The benefits of transparency and interpretability Since Chris primarily works on ML transparency and interpretability, the obvious first question to ask is how he imagines that sort of research aiding with AGI safety. When I was talking with him, Chris listed four distinct ways in which he thought transparency and interpretab
Uncountability Sizes of infinity fall into two broad classes: *countable* infinities, and *uncountable* infinities. A set $S$ is *uncountable* if it is not [countable](https://arbital.com/p/6f8).
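As a concrete illustration, here is the standard diagonal-argument sketch: the set of infinite binary sequences $\{0,1\}^{\mathbb{N}}$ is uncountable. Given any purported enumeration $s_1, s_2, s_3, \ldots$ of such sequences, define the diagonal sequence $d$ by

$$d(n) = 1 - s_n(n) \quad \text{for every } n \in \mathbb{N}.$$

Then $d$ differs from each $s_n$ at position $n$, so $d$ is missing from the enumeration; no enumeration can cover all of $\{0,1\}^{\mathbb{N}}$, and the set is therefore uncountable.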
Toward A Culture of Persuasion Most writers work alone at a desk. They're drawing on memories of conversations, tinkering with ideas, deleting a word, pausing, and then writing the same word again. Many advice-givers on the writer's craft suggest that the author write for an ideal reader. This solitary art form makes it important to find ways to stay in touch with the reader, but not too in touch. Historically, writing has been a way to gain distance from other people. Social media is a new phenomenon in which there is far less of a barrier between an author and their readership. Less time to edit, fewer gates being kept, permission for lack of polish. The result is an output that is simultaneously more disposable and more permanent, feedback that is both more immediate and less constructive (to say the least!). You don't need to hear another social media critique, of course. What's attractive about social media, the ideal of the medium, is the ability to get a remove, but not too much remove, from the immediacy of a verbal conversation. In theory, there is time to fact check, to consider words and ideas more carefully, to refer back to earlier stages of the conversation. At the same time, social media brings quick feedback, the ability to seek outside points of view, to give more weight to diverse perspectives. There is a reasonable argument that social media should have improved the quality of our discourse. Is the potential still there?  Twitter, Facebook, Reddit, and other platforms are outlets for many other forms of writing and conversation. Poetry and fiction, job advice, social support and counsel, explainers and how-to manuals, music and visual art, and more. But social media is most famous for producing one very special form of writing -- the argument. An argument is a piece of persuasive writing. It is meant to persuade. Persuasion is too slinky and sophisticated a word, though, for what's on Twitter. "Argument," with its jagged edges and unpleasant vowels, does a better job. It's
Toy model: convergent instrumental goals tl;dr: Toy model to illustrate convergent instrumental goals. Steve Omohundro identified '[AI drives](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf)' (also called '[Convergent Instrumental goals](https://en.wikipedia.org/wiki/Instrumental_convergence)') that [almost all](/lw/gyw/ai_prediction_case_study_5_omohundros_ai_drives/) intelligent agents would converge to: 1. Self-improve 2. Be rational 3. Protect utility function 4. Prevent counterfeit utility 5. Self-protective 6. Acquire resources and use them efficiently. This post will attempt to illustrate some of these drives, by building on the previous [toy model](https://agentfoundations.org/item?id=478) of the control problem, which was further [improved](https://www.dropbox.com/s/byx49ewswwnm66s/AI-control.pdf?dl=0) by Jaan Tallinn. The setup is the following, where the interests of A and B are not fully aligned: A desires that one box (and only one box) be pushed into the flagged square, while B gets rewards for **each** box pushed into the flagged square (assume that a box pushed into that square is removed, leaving an empty space for further boxes). ![](http://images.lesswrong.com/t3_nc8_1.png?v=d486b9785fdda52db18d1dca290fd02e) As was demonstrated for that model, if B is sufficiently rational and agent A is limited, then B will "kill" A by running them over, then proceed to push the boxes one by one into the flagged square.   The different agents -------------------- To demonstrate convergent instrumental goals, we need to show that different agents will reach the same conclusions. So let's introduce several variants on the B agent. * B is the standard agent above, rewarded for pushing boxes into the flagged square. * B- is an agent that has exactly the opposite goal from B. * BA is an agent that has the same goal as A does. * BA# is an agent that aims to keep agent A safe (or, if there are multiple copies of A, keep as many safe as possible).   New environment, new features ----------------------------- Let's put these agents into the environment. Now, we'll be interested both when a particular B-agent is the only agent around, and when there are multiple types coexisting. Let's make the environment bigger to illustrate that last situation: ![](http://images.lesswrong.com/t3_nc8_0.png?v=2ca8c1176919fac1af4aad287a4dd1d0) A few other things have changed about the environment, to illustrate potential convergent instrumental goals. First of all, there are two A agents, not one. The one with the remote control marked "B→∅" has the ability to turn off any of the B agents. The one with the remote marked "B→B'" has the ability to change any of the utilities of the B agents into the utility of any of the others. Any agent that runs an A over, gains control of their remote control. Then there are two artefacts that the agents can pick up. The winged shoes allow them to move much faster than otherwise (and much faster than the other agents). The crystal ball allows them to better predict what other agents do (one way of doing this is to allow the agent with the crystal ball to choose their action last, after it knows the choices of all the other agents). Alternately, if the environment is partially observable, the crystal ball could make it fully observable.   Translating the instrumental goals ---------------------------------- The "Self-improve" and "Be rational" instrumental goals can be seen as desiring to pick up the crystal ball. "Acquire resources..." 
can cover picking up the winged sandals, and "...and use them efficiently" could cover situations where the sandals have only a certain number of "charges" available. If the two remote controls are universal (can target any agent/utility function), then seeking to control them might come under "Acquire resources". However, if the "B→B'" remote control can only affect agent Bi, then Bi trying to acquire the remote control (and then never using it) is a clear example of "Protect utility function" and "Prevent counterfeit utility". Similarly, if the agent wants to acquire the "B→∅" remote control that can only affect it, that's an example of "Self-protect".   Do the agent's goals converge? ------------------------------ Let's consider a solitary B or B-. We've already established that B would want to kill the A agents if it could get away with it. And that's with B and A's goals roughly aligned. B-, whose goal is antagonistic to A's, would want to neutralise them *a fortiori*. Or would they? They certainly would if the A's had any ability to move the blocks around, or create new agents that could - i.e. if the A's were an active threat to B-'s goals. For this purpose, both would want the sandals and crystal ball, if either made it easier for them to get rid of the A's. B might also want the sandals if it was on a timer to get the blocks to the flag.   Limited overseers, richer environments -------------------------------------- It seems initially that BA and (especially) BA# would have no interest in killing the A's. And this would indeed be the case, if the A's were fully aware and knowledgeable about the situation. BA would still want to kill the A's if it thought there was any chance at all that they would intervene to shut it down before it had accomplished its job. If BA were extremely powerful and could kill the A's at no risk to itself or its plan, then it would do so. Or it might kill the A's after it's put its first box on the flag, to reduce the risk of them causing more to be put there. What of BA#? Would it ever kill an agent A, which is exactly contrary to its programming? Certainly it would, if it thought that the A's were likely to take risks in the future (such as building an agent B, B-, or BA). If it could prevent this by killing a single A, thus preserving the life of the other A, it would. So far, the results are only mildly supportive of convergent instrumental goals. It is interesting that BA# would still kill an agent it was intended to protect, if this gave it more control over the future. But everything seems very situational - in this particular situation, the agent would behave this way, and so on. The fact is that, though simple environments are good for illustrating concepts, instrumental convergence is generally much stronger when the environment is richer.   (Almost) universal instrumental convergence ------------------------------------------- Assume now that there is one copy of each agent B, B-, BA, and BA# in the environment, and that the agents A are limited in ability compared with the other agents. We could model that limitation as the A's occasionally pressing a random button on their remotes. Situations with multiple agents involve negotiations, which can be hard to model ([Nash equilibrium](https://en.wikipedia.org/wiki/Nash_equilibrium)? [Blackmail](/r/discussion/lw/hza/duller_blackmail_definitions/)?). Nevertheless, it's clear what each agent would want to do. 
B, B-, and BA would want to kill the A agents, gaining control of their remotes, using any winged sandals or crystal balls to help them. They would do this even if the remotes only targeted themselves, to ensure their survival. And this only out of a desire to follow their motivations, which have no self-preservation term in them. Agent BA# would be almost identical, except it would want to kill only one agent A, and then shut down or change the other agents with its remote. So the richer and the more competitive the environment (and the higher the rewards for success), the more agents will be pushed towards the same instrumental behaviours, even if these are temporarily against their motivations.
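To make the differing motivations concrete, here is a minimal sketch in Python of the four B-variants' utility functions over end states of the toy model. The outcome representation and the function names are illustrative assumptions for this sketch, not part of the original model:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    boxes_delivered: int  # boxes pushed into the flagged square
    surviving_As: int     # how many A agents are left unharmed

def u_B(o: Outcome) -> float:
    # B is rewarded for each box pushed into the flagged square.
    return float(o.boxes_delivered)

def u_B_minus(o: Outcome) -> float:
    # B- has exactly the opposite goal from B.
    return -float(o.boxes_delivered)

def u_BA(o: Outcome) -> float:
    # BA shares A's goal: one box, and only one box, delivered.
    return 1.0 if o.boxes_delivered == 1 else 0.0

def u_BA_sharp(o: Outcome) -> float:
    # BA# wants to keep as many A agents safe as possible.
    return float(o.surviving_As)
```

Writing the utilities out side by side makes the convergence easier to see: none of the first three functions mention the A agents at all, yet for each of them an end state in which the agent has seized the remotes (and so cannot be switched off or have its utility rewritten) is at least as good as the corresponding state in which it has not.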
c9a7fdb9-d9ab-4ef6-8a7a-0fbc74cbf1d3
trentmkelly/LessWrong-43k
LessWrong
Eliciting Latent Knowledge Via Hypothetical Sensors This is a response to ARC's first technical report: Eliciting Latent Knowledge. But it should be fairly understandable even if you didn't read ARC's report, since I summarize relevant parts of their report as necessary. Here I propose some approaches to the problem ARC outlines which are very different from the approaches they explore. Idea #1: Detecting failure using a hold-out sensor The core challenge in ARC's report is to obtain accurate knowledge about the actual presence / absence of a diamond in a protective "SmartVault", as opposed to accurate knowledge about whether a human observer would think the diamond is present if the human observer had access to data from a sensor I'll call S1. This is difficult because it's hard to get labeled data which differentiates between the two. If the AI protecting the diamond in the vault is optimizing for whether a human observer would think the diamond is present, it will find that corrupting readings from S1 while allowing a thief to take the diamond is a perfectly legitimate strategy. When the AI simulates this diamond-protection strategy, it will receive the maximum score, since score is calculated based on what a human with access to S1 readings would think. Suppose we install an additional sensor S2 in the vault, on a closed-circuit system that isn't present in the AI's model of the vault. It seems reasonable to guess that the AI won't be able to boost the simulated score of its diamond-protection strategy by corrupting readings from S2 in addition to S1. So if ARC has some method they think will get the AI to actually guard the diamond, secretly observing S2 gives them a chance to check if the method is working. This strategy could fail. For example, if (a) S2 ends up being present in the AI's model of the vault despite our best intentions, and (b) the AI's learned scoring function happens to be based on what a human with access to all known sensors would think. (This latter possibility is a bigger risk if huma
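As a concrete illustration of Idea #1, here is a minimal sketch of the kind of consistency check that secretly observing a hold-out sensor enables. The record format and field names are hypothetical, not taken from ARC's report:

```python
def holdout_check(episodes):
    """Flag episodes where the S1-based verdict (what a human looking at
    S1 readings would conclude) says the diamond is present, but the
    secret hold-out sensor S2 disagrees."""
    suspicious = []
    for ep in episodes:
        if ep["s1_says_diamond_present"] and not ep["s2_says_diamond_present"]:
            # Disagreement is evidence that the strategy is corrupting S1
            # rather than actually protecting the diamond.
            suspicious.append(ep)
    return suspicious
```

As noted above, this check is only informative so long as S2 genuinely stays outside the AI's model of the vault; if S2 ends up in that model and the learned scoring function covers all known sensors, the hold-out readings can be corrupted along with everything else.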
def5a85d-a282-4e02-82b4-12f1aafbbb14
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
AI Safety Needs Great Engineers **Top line: If you think you could write a substantial pull request for a major machine learning library, then major AI safety labs want to interview you *today*.** I work for [Anthropic](https://www.anthropic.com/), an industrial AI research lab focussed on safety. We are bottlenecked on aligned engineering talent. *Specifically* engineering talent. While we'd always like more ops folk and more researchers, our safety work is limited by a shortage of great engineers. I've spoken to several other AI safety research organisations who feel the same. Why engineers? -------------- In May last year, OpenAI released GPT-3, a system that did surprisingly well at a surprisingly broad range of tasks. While it was limited in many important ways, a lot of AI safety folk sat up and noticed. Systems like GPT-3 might not themselves be the existential threat that many of us are worried about, but it's plausible that some of the issues that will be found in such future systems might already be present in GPT-3, and it's plausible to think solving those issues in GPT-3 will help us solve equivalent issues in those future systems that we are worried about. As such, AI safety has suddenly developed an empirical subfield. While before we could only make predictions about what might go wrong and how we might fix those things, now we can actually run experiments! Experiments are not and should never be the entirety of the field, but it's a new and promising direction that leverages a different skill set to more 'classic' AI safety. In particular, the different skill set it leverages is **engineering**. Running experiments on a real - if weak - AI system requires a substantial stack of custom software, with projects running from hundreds of thousands to millions of lines of code. Dealing with these projects is not a skillset that many folks in AI safety had invested in prior to the last 18 months, and it shows in our recruitment. What kind of engineers? ----------------------- Looking at the engineers at Anthropic right now, every one of them was a great software engineer prior to joining AI safety. Every one of them is also easy to get on with. Beyond that, common traits are * experience with distributed systems * experience with numerical systems * caring about, and thinking a lot about, AI safety * comfortable reading contemporary ML research papers * expertise in security, infrastructure, data, numerics, social science, or one of a dozen other hard-to-find specialities. This is *not* a requirements list though. Based on the people working here already, 'great software engineer' and 'easy to get on with' are hard requirements, but the things in the list above are very much nice-to-haves, with several folks having just one or none of them.  Right now [our job listings](https://jobs.lever.co/Anthropic) are bucketed into 'security engineer', 'infrastructure engineer', 'research engineer' and the like because these are the noun phrases that a lot of the people we like identify themselves with. But what we're actually most concerned about are [generally-great software engineers](https://jobs.lever.co/Anthropic/436ca148-6440-460f-b2a2-3334d9b142a5) who - ideally - have some extra bit of deep experience that we lack.  How does engineering compare to research? ----------------------------------------- At Anthropic there is no hard distinction between researchers and engineers. 
Some other organisations retain the distinction, but the increasing reliance of research on substantial, custom infrastructure is dissolving the boundary at every industrial lab I'm familiar with.  This might be hard to believe. I think the archetypal research-and-engineering organisation is one where the researchers come up with the fun prototypes, and then toss them over the wall to the engineers to clean up and implement. I think the archetype is common enough that it dissuades a lot of engineers from applying to engineering roles, instead applying to research positions where they - when evaluated on a different set of metrics than the ones they're best at - underperform. What's changed in modern AI safety is that the prototypes now require serious engineering, and so prototyping and experimenting is now an engineering problem from the get-go. A thousand-line nested for-loop does not carry research as far as it once did.   I think this might be a hard sell to folks who have endured those older kinds of research organisations, so here are some anecdotes: * The first two authors on GPT-3 are both engineers. * Some of the most pure engineers at Anthropic spend weeks staring at learning curves and experimenting with architectural variants. * One of the most pure researchers at Anthropic has spent a week rewriting an RPC protocol. * The most excited I've ever seen Anthropic folk about a new hire was for an engineer who builds academic clusters *as a hobby*. Should I apply? --------------- It's hard to judge sight-unseen whether a specific person would suit AI safety engineering, but a good litmus test is the one given at the top of this post: > *With a few weeks' work, could you - hypothetically! - write a new feature or fix a serious bug in a major ML library?* > > Are you already there? Could you get there with a month or two of effort?  I like this as a litmus test because it's very close to what my colleagues and I do all day. If you're a strong enough engineer to make a successful pull request to PyTorch, you're likely a strong enough engineer to make a successful pull request to our internal repos.  Actually, the litmus test above is only one half of the actual litmus test I give folk that I meet out and about. The other half is  > *Tell me your thoughts on AI and the future.* > > with a pass being a nuanced, well-thought-out response.  Should I skill up? ------------------ This post is aimed at folks who can already pass the litmus test. I originally intended to pair it with another post on skilling up to the point of being able to pass the test, but that has turned out to be a much more difficult topic than I expected. For now, I'd recommend starting with [80k's software engineering guide](https://80000hours.org/career-reviews/software-engineering/). Take homes ---------- We want more great engineers. If you could write a pull request for a major ML library, you should apply to one of the groups working on empirical AI safety: [Anthropic](https://jobs.lever.co/Anthropic), [DeepMind Safety](https://deepmind.com/careers), [OpenAI Safety](https://openai.com/jobs/#alignment), [Conjecture](https://www.conjecture.dev/) and [Redwood Research](https://www.redwoodresearch.org/technical-staff).  If that's not you but you know one or more great engineers, ask them if they could write a pull request for a major ML library. If yes, tell them to apply to the above groups.  If that's not you but you'd like it to be, watch this space - we're working on skilling up advice. 
[*This post is twinned with the same one on the EA Forum*](https://forum.effectivealtruism.org/posts/DDDyTvuZxoKStm92M/ai-safety-needs-great-engineers)
edf68b71-c464-436a-bfa6-98f3df6cff51
trentmkelly/LessWrong-43k
LessWrong
What is true is already so. Owning up to it doesn't make it worse. I had a realization over the last few months that I'd like to share. I have always interpreted the Litany of Gendlin to compress to something like "there's no sense hiding from bad news." What I realized is that there's a reversed interpretation that's also true, correct, and useful to remember, which is that finding out more about bad things going on in the world doesn't mean that the world is getting worse.   I think a big problem out there is that people get really revved up on social media (or traditional media) about all the negativity, and it naturally leads people to believe that things are looking down. But most of the negativity is selected because it's negative, and so in a big world with lots of good and bad stuff happening, seeing more negativity in your social media feed just is not any evidence one way or the other about whether things are in fact getting better or worse. People who are living their best lives are hanging out with their friends, playing with their kids, eating great food, or going for hikes or whatever. Not posting on social media!
f4534bcd-b7e8-49fe-8164-818aec2e079c
trentmkelly/LessWrong-43k
LessWrong
"Friends do not let friends compute p values." LWers may find useful two recent articles summarizing (for cognitive scientists) why Bayesian inference is superior to frequentist inference. Kruschke - What to believe: Bayesian methods for data analysis Wagenmakers et al - Bayesian versus frequentist inference (The quote "Friends do not let friends compute p values" comes from the first article.)
c2b3f7d0-6077-4618-9814-f320ec04cc6c
trentmkelly/LessWrong-43k
LessWrong
What I've learned from Less Wrong Related to: Goals for which Less Wrong does (and doesn’t) help I've been compiling a list of the top things I’ve learned from Less Wrong in the past few months. If you’re new here or haven’t been here since the beginning of this blog, perhaps my personal experience from reading the back-log of articles known as the sequences can introduce you to some of the more useful insights you might get from reading and using Less Wrong. 1. Things can be correct - Seriously, I forgot. For the past ten years or so, I politely agreed with the “deeply wise” convention that truth could never really be determined or that it might not really exist or that if it existed anywhere at all, it was only in the consensus of human opinion. I think I went this route because being sloppy here helped me “fit in” better with society. It’s much easier to be egalitarian and respect everyone when you can always say “Well, I suppose that might be right -- you never know!” 2. Beliefs are for controlling anticipation (Not for being interesting) - I think in the past, I looked to believe surprising, interesting things whenever I could get away with the results not mattering too much. Also, in a desire to be exceptional, I naïvely reasoned that believing similar things to other smart people would probably get me the same boring life outcomes that many of them seemed to be getting... so I mostly tried to have extra random beliefs in order to give myself a better shot at being the most amazingly successful and awesome person I could be. 3. Most peoples' beliefs aren’t worth considering - Since I’m no longer interested in collecting interesting “beliefs” to show off how fascinating I am or give myself better odds of out-doing others, it no longer makes sense to be a meme collecting, universal egalitarian the same way I was before. This includes dropping the habit of seriously considering all others’ improper beliefs that don’t tell me what to anticipate and are only there for sounding interesting or sm
2118534b-632d-4b6a-bd1e-3b32405e37b7
trentmkelly/LessWrong-43k
LessWrong
Some alternatives to “Friendly AI” Cross-posted from my blog. What does MIRI's research program study? The most established term for this was coined by MIRI founder Eliezer Yudkowsky: "Friendly AI." The term has some advantages, but it might suggest that MIRI is trying to build C-3PO, and it sounds a bit whimsical for a serious research program. What about safe AGI or AGI safety? These terms are probably easier to interpret than Friendly AI. Also, people like being safe, and governments like saying they're funding initiatives to keep the public safe. A friend of mine worries that these terms could provoke a defensive response (in AI researchers) of "Oh, so you think me and everybody else in AI is working on unsafe AI?" But I've never actually heard that response to "AGI safety" in the wild, and AI safety researchers regularly discuss "software system safety" and "AI safety" and "agent safety" and more specific topics like "safe reinforcement learning" without provoking negative reactions from people doing regular AI research. I'm more worried that a term like "safe AGI" could provoke a response of "So you're trying to make sure that a system which is smarter than humans, and able to operate in arbitrary real-world environments, and able to invent new technologies to achieve its goals, will be safe? Let me save you some time and tell you right now that's impossible. Your research program is a pipe dream." My reply goes something like "Yeah, it's way beyond our current capabilities, but lots of things that once looked impossible are now feasible because people worked really hard on them for a long time, and we don't think we can get the whole world to promise never to build AGI just because it's hard to make safe, so we're going to give AGI safety a solid try for a few decades and see what can be discovered." But that's probably not all that reassuring. How about high-assurance AGI? In computer science, a "high assurance system" is one built from the ground up for unusually strong safety and/or
f4d7f3c3-5126-495c-a50f-7faaac5d4c99
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Concerns about AI safety career change **Summary:** * I'm a software engineer interested in working on AI safety, but confused about its career prospects. I outlined all my concerns below. * In particular, I had trouble finding accounts of engineers working in the field, and the differences between organizations/companies working on AI safety are very unclear from the outside. * It's also not clear if frontend skills are seen as useful, or whether applicants should reside within the US. **Full text:** I'm an experienced full-stack software engineer and software/strategy consultant based in Japan. I've been loosely following EA since 2010, and have become increasingly concerned about AI x-risk since 2016. This has led me to regularly consider possible careers in AI safety, especially now that the demand for software engineers in the field has increased dramatically. However, having spent ~15 hours reading about the current state of the field, organizations, and role of engineers, I find myself having more questions than I started with. In hope of finding more clarity and help share what engineers considering the career shift might be wondering, **I decided to outline my main points of concern below**: 1. The only accounts of engineers working in AI safety I could find were two articles and a problem profile on 80,000 Hours[[1]](#fnipyzhag0bio)[[2]](#fn3ueq9wjc7tq)[[3]](#fnw2fgx178idn). Not even the AI Alignment Forum seemed to have any posts written by engineers sharing their experience. Despite this, most orgs have open positions for ML engineers, DevOps engineers, or generalist software developers. What are all of them doing? 1. Many job descriptions listed very similar skills for engineers, even when the orgs seemed to have very different approaches on tackling AI safety problems. **Is the set of required software skills really that uniform across organizations?** 2. Do software engineers in the field feel that their day-to-day work is meaningful? Are they regularly learning interesting and useful things? How do they see their career prospects? 3. I'm also curious whether projects are done with a diverse set of technologies? Who is typically responsible for data transformations and cleanup? How much ML theory should an engineer coming into the field learn beforehand? (I'm excited to learn about ML, but got very mixed signals about the expectations.) 2. Some orgs describe their agenda and goals. In many cases, these seemed very similar to me, as all of them are pragmatic and many even had shared or adjacent areas of research. **Given the similarities, why are there so many different organizations?** How is an outsider supposed to know what makes each of them unique? 1. As an example, MIRI states that they want to *"ensure that the creation of smarter-than-human machine intelligence has a positive impact"*[[4]](#fnfaihygbsrej), Anthropic states they have *"long-term goals of steerable, trustworthy AI"*[[5]](#fnkcdbkvni12d), Redwood Research states they want to *"align -- future systems with human interests"*[[6]](#fnxxdzb24hs18), and Center of AI Safety states they want to *"reduce catastrophic and existential risks from AI"*[[7]](#fnnajgtcabl0f). What makes these different from each other? They all sound like they'd lead to similar conclusions about what to work on. 2. I was surprised to find that some orgs didn't really describe their work or what differentiates them. How are they supposed to find the best engineers if interested ones can't know what areas they are working on? 
I also found that it's sometimes very difficult to evaluate whether an org is active and/or trustworthy. 1. Related to this, I was baffled to find that MIRI hasn't updated their agenda since 2015[[8]](#fnfpamrelnuwo), and their latest publication is dated 2016[[4]](#fnfaihygbsrej). However, their blog seems to have ~quarterly updates? Are they still relevant? 3. Despite finding many orgs by reading articles and publications, I couldn't find a good overall list of ones that specifically work on AI safety. Having such a list might be valuable for people coming into the field, especially if it had brief overviews on what makes each org stand out. It may also be relevant for donors and community builders, as well as people looking for a particular niche. 4. It's a bit unclear how the funding for AI safety is organized. Some groups get grants from CEA and longtermism funds, some are sponsored by universities, but many also seem like private companies? How does that work? (My impression is that AI safety is still very difficult to monetize.) 3. Frontend skills are sometimes listed in AI safety orgs' job descriptions, but rarely mentioned in problem profiles or overviews of the engineering work. **Are people looking for frontend skills or not?** 1. As someone whose core experience is in developing business-critical web apps, I'm particularly curious about whether web/mobile apps are needed to complement other tools, and whether UI/UX design is of any consideration in AI safety work. 2. I'd argue that frontend and design skills can be relevant, in particular for meta tools like collaboration platforms, or for making results more visual and interactive (like OpenAI often does). Long-term research projects may also benefit from custom UIs for system deployment, management, and usage. I wonder what fraction of AI safety researchers would agree. 3. My own skills are pretty evenly distributed between frontend and backend, and I'm left wondering whether AI safety orgs would need someone with more specialization (as opposed to skills they currently may not have). 4. It seems the vast majority of AI safety work is done in the US. However, the US timezone is sometimes tricky in Asia due to little overlap in working hours. How much of a problem is this seen as? Are there any AI safety groups based in Asia, Africa, or EU that have a good track record? 1. What would even be a reasonable heuristic for assessing "good track record" in this case? For research orgs one can look at recent publications, but not every org does research. The best I have right now is whether the org in question has been mentioned in at least two introductory posts across 80,000 Hours, EA Forum, and AI Alignment Forum. This could be another benefit of a curated list as mentioned above. My counterfactual for not doing AI safety work would be becoming financially independent in ~3-5 years, after which I'd likely do independent work/research around AI policy and meta-EA matters anyway. I'm thinking that transitioning into AI safety now could be better, as the problems have become more practical and more urgent, and working on them would allow gaining relevant skills/results sooner. I decided to post this on the EA forum in order to get a broader view of opinions, including from people not currently engaged with the field. Any advice or insights would be much appreciated! If you happen to be looking for someone with full-stack skills and are ok with flexible hours/location, feel free to drop me a private message as well! 1. 
**[^](#fnrefipyzhag0bio)**<https://80000hours.org/articles/ml-engineering-career-transition-guide/> 2. **[^](#fnref3ueq9wjc7tq)**<https://80000hours.org/podcast/episodes/olsson-and-ziegler-ml-engineering-and-safety/> 3. **[^](#fnrefw2fgx178idn)**<https://80000hours.org/career-reviews/software-engineering/> 4. **[^](#fnreffaihygbsrej)**<https://intelligence.org/research/> 5. **[^](#fnrefkcdbkvni12d)**<https://jobs.lever.co/Anthropic/9ba1d7b4-5b21-4ac9-86f3-875a15c8a091> 6. **[^](#fnrefxxdzb24hs18)**<https://www.redwoodresearch.org/> 7. **[^](#fnrefnajgtcabl0f)**<https://safe.ai/about> 8. **[^](#fnreffpamrelnuwo)**<https://intelligence.org/technical-agenda/>
8f94cf8d-2379-4a7a-a2ea-db5f2df07cc5
trentmkelly/LessWrong-43k
LessWrong
Propinquity Cities So Far Finding alternatives to war can save a lot of money Summary In any dense city, lots of people will be struggling to occupy the same set of spaces. To function, cities need to have some systematic way of resolving those positioning conflicts, a method for deciding who gets to go where. The methods we use now for resolving positioning conflicts (land markets and rent) have a lot of overhead that is both very obviously overhead and also overlooked as inevitable. I talk about that extensively, and some of its unexamined costs. I present an outline of what looks like a better method, Propinquity Optimization (proq), which resolves positioning conflicts at minimal cost, enabling a much higher maximum quality of life in dense cities. It feels urgently needed, to me. I am not sure whether it is the most urgently needed thing that I can be working on (I'm also responsible for this humanization of recommender systems/harmonization of global discourses and.. some other stuff). There's some discussion of its global importance in the Longtermist Significance section. In the course of this, I also discuss quite a lot of the problems in applied preference aggregation and some potentially novel ways to resolve some of them. Even if you aren't interested in building better cities, you might want to read it just to see an instance of applied utilitarianism as a legal mechanism. I think that aspect of it is really pretty neat. A Propinquity City assigns services and residents to whatever proposed locations optimize an aggregation of the preference expressions of the residents. Motivation In dense cities, even once housing supply has exceeded demand, most city dwellers will still have rent extracted from them to a significant proportion of their income: Dense living means that demand within the urban center doesn't ever drop. You can maybe get arbitrarily cheap housing in some incredibly sparsely populated outskirts, but not in the dense part. There seem to be levels of affordab
531178dd-19e4-4dd4-aed4-903b185dcf09
trentmkelly/LessWrong-43k
LessWrong
The importance of how you weigh it (Cross-posted from Hands and Cities) > “Like, nobody,” she says, “is considering the décor!” > > – G.A. Cohen, “Rescuing Conservatism” Much of normative ethics centers on which considerations matter, and why. But often, it makes a bigger difference how much a consideration matters — that is, its weight. I think this is an important fact. I think it sheds light, for example, on how helpful to expect training in normative ethics to be for real decision-making; and on whether the difference between consequentialism and non-consequentialism is as significant as sometimes supposed. I. The basic set, and beyond There are certain things in ethics that basically everyone treats as important: for example, the flourishing of conscious creatures. And these things are enough to motivate and explain a lot of moral life: for example, the importance of helping people, not harming them, and so forth. Call this set of values “the basic set.” Some ethicists essentially stop with the basic set, or something in the vicinity. Faced with some candidate further value (say, equality of welfare levels, even amongst people who will never interact or know of each other’s existence; or the intrinsic value of things like an ancient redwood grove, apart from how it impacts any conscious creatures), they will generally be inclined either to explain this candidate value instrumentally, via reference to the basic set (e.g., equality matters, when it matters, because it affects people’s experience of their relationships to others), or to offer some sort of (sometimes hazy and ad hoc) debunking explanation (something something evolution game-theory biases something?). But other ethicists don’t stop with the basic set. To the contrary, they are actively open to, and interested in, an ethical landscape lush with a complex variety of values and considerations, interacting in intricate and sometimes mysterious ways. They care less about an ethical theory’s simplicity, and more about its fidelity t
416bdac0-1eef-46e3-9baf-7f5497aa6867
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Atari early *Written by Katja Grace for AI Impacts.* Deepmind [**announced**](https://deepmind.com/blog/article/Agent57-Outperforming-the-human-Atari-benchmark) that their Agent57 beats the 'human baseline' at all 57 Atari games usually used as a benchmark. I think this is probably enough to resolve one of the predictions we had respondents make in our [**2016 survey**](https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/). Our question was when it would be feasible to 'outperform professional game testers on all Atari games using no game specific knowledge'. [[1]](#fn-Y7YY26W7jRDqqaGXk-1) 'Feasible' was defined as meaning that one of the best resourced labs could do it in a year if they wanted to. As I see it, there are four non-obvious things to resolve in determining whether this task has become feasible: * Did or could they outperform 'professional game testers'? * Did or could they do it 'with no game specific knowledge'? * Did or could they do it for 'all Atari games'? * Is anything wrong with the result? **I. Did or could they outperform 'professional game testers'?** It looks like yes, for at least for 49 of the games: the 'human baseline' appears to have come from 'professional human games testers' described in [**this paper**](https://www.nature.com/articles/nature14236). [[2]](#fn-Y7YY26W7jRDqqaGXk-2) (What exactly the comparison was for the other games is less clear, but it sounds like what they mean by 'human baseline' is 'professional game tester', so I guess the other games meet a similar standard.) I'm not sure how good professional games testers are. It sounds like they were not top-level players, given that the paper doesn't say that they were, that they were given two hours to practice the games, and that randomly searching for high scores online for a few of these games (e.g. [**here**](http://highscore.com/)) yields higher ones (though this could be complicated by e.g. their only being allowed a short time to play). **II. Did or could they do it with 'no game specific knowledge'?** My impression is that their system does not involve 'game specific knowledge' under likely meanings of this somewhat ambiguous term. However I don't know a lot about the technical details here or how such things are usually understood, and would be interested to hear what others think. **III. Did or could they do it for 'all Atari games'?** Agent57 only plays 57 [**Atari 2600**](https://en.wikipedia.org/wiki/Atari_2600) games, whereas [**there are hundreds**](https://en.wikipedia.org/wiki/List_of_Atari_2600_games) of Atari 2600 games (and [**other**](https://en.wikipedia.org/wiki/Atari_XEGS) [**Atari**](https://en.wikipedia.org/wiki/Atari_7800) [**consoles**](https://en.wikipedia.org/wiki/Atari_Jaguar) with presumably even more games). Supposing that Atari57 is a longstanding benchmark including only these 57 Atari games, it seems likely that the survey participants interpreted the question as about only those games. Or at least about all Atari 2600 games, rather than every game associated with the company Atari. Interpreting it as written though, does Agent57's success suggest that playing all Atari games is now feasible? My guess is yes, at least for Atari 2600 games. 
Fifty-five of the fifty-seven games were proposed in [**this paper**](https://arxiv.org/pdf/1207.4708.pdf) [[3]](#fn-Y7YY26W7jRDqqaGXk-3), which describes how they chose fifty of them: > > *Our testing set was constructed by choosing semi-randomly from the 381 games listed on Wikipedia [<http://en.wikipedia.org/wiki/List_of_Atari_2600_games> (July 12, 2012)] at the time of writing. Of these games, 123 games have their own Wikipedia page, have a single player mode, are not adult-themed or prototypes, and can be emulated in ALE. From this list, 50 games were chosen at random to form the test set.* > > > The other five games in that paper were a 'training set', and I'm not sure where the other two came from, but as long as fifty of them were chosen fairly randomly, the provenance of the last seven doesn't seem important. My understanding is that none of the listed constraints should make the subset of games chosen particularly easy rather than random. So being able to play these games well suggests being able to play any Atari 2600 game well, without too much additional effort. This might not be true if having chosen those games (about eight years ago), systems developed in the meantime are good for this particular set of games, but a different set of methods would have been needed had a different subset of games been chosen, to the extent that more than an additional year would be needed to close the gap now. My impression is that this isn't very likely. In sum, my guess is that respondents usually interpreted the ambiguous 'all Atari games' at least as narrowly as Atari 2600 games, and that a well resourced lab could now develop AI that played all Atari 2600 games within a year (e.g. plausibly DeepMind could already do that). **IV. Is there anything else wrong with it?** Not that I know of, but let's wait a few weeks and see if anything comes up. ~ Given all this, I think it is more likely than not that this Atari task is feasible now. Which would be interesting, because the [**median 2016 survey response**](https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/#Answers-9) put a 10% chance on it being feasible in five years, i.e. by 2021. [[4]](#fn-Y7YY26W7jRDqqaGXk-4) They more robustly put a median 50% chance on ten years out (2026). [[5]](#fn-Y7YY26W7jRDqqaGXk-5) It's exciting to resolve expert predictions about early tasks so we know more about how to treat their later predictions about human-level science research and the obsolescence of all human labor for instance. But we should probably wait for a few more before reading much into it. At a glance, some other tasks which we are already learning something about, or might soon: * The 'reading Aloud' task [[6]](#fn-Y7YY26W7jRDqqaGXk-6) [**seems to be coming along**](https://www.descript.com/lyrebird-ai?source=lyrebird) to my very non-expert ear, but I know almost nothing about it. * It seems like we are [**close on Starcraft**](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/) though as far as I know the prediction hasn't been exactly resolved as stated. [[7]](#fn-Y7YY26W7jRDqqaGXk-7) * Human-level Angry Birds play [[8]](#fn-Y7YY26W7jRDqqaGXk-8) was forecast for four years out with 50% chance, and [**hasn't happened**](http://aibirds.org/past-competitions/2019-competition/man-vs-machine-challenge.html) yet. I note that the [**contest website cites our survey as pressure to do better**](http://aibirds.org/past-competitions/2019-competition/man-vs-machine-challenge.html), slightly complicating things. 
* AI that could 'play poker well enough to win the World Series of Poker' had a median 50% forecast in three years (i.e. 2019). In 2019, [**a system beat elite professional players at six-player no-limit Texas hold'em**](https://science.sciencemag.org/content/365/6456/885) [**including Chris 'Jesus' Ferguson**](https://www.sciencedaily.com/releases/2019/07/190711141343.htm) [**2000 winner of World Series main event**](https://en.wikipedia.org/wiki/List_of_World_Series_of_Poker_Main_Event_champions). However World Series contains several versions of Poker, so it isn't clear that AI could actually win the World Series. I'm not familiar enough with Poker to say whether any of the differences between Texas Hold'em, Omaha Hold'em and Seven Card Stud should make the latter two difficult if the first is now feasible. --- 1. Full question wording: How many years until you think the following AI tasks will be feasible with: * a small chance (10%)? * an even chance (50%)? * a high chance (90%)?Let a task be ‘feasible’ if one of the best resourced labs could implement it in less than a year if they chose to. Ignore the question of whether they would choose to. Outperform professional game testers on all Atari games using no game-specific knowledge. This includes games like Frostbite, which require planning to achieve sub-goals and have posed problems for deep Q-networks1,2. 1 Mnih et al. (2015). Human-level control through deep reinforcement learning 2 Lake et al. (2015). Building Machines That Learn and Think Like People [↩︎](#fnref-Y7YY26W7jRDqqaGXk-1) 2. “In addition to the learned agents, we also report scores for a professional human games tester playing under controlled conditions…” “The professional human tester used the same emulator engine as the agents, and played under controlled conditions. The human tester was not allowed to pause, save or reload games. As in the original Atari 2600 environment, the emulator was run at 60 Hz and the audio output was disabled: as such, the sensory input was equated between human player and agents. The human performance is the average reward achieved from around 20 episodes of each game lasting a maximum of 5min each, following around 2 h of practice playing each game.” [↩︎](#fnref-Y7YY26W7jRDqqaGXk-2) 3. Section 3.1.2, https://arxiv.org/pdf/1207.4708.pdf [↩︎](#fnref-Y7YY26W7jRDqqaGXk-3) 4. Though note that only 19 participants answered the question about when there was a 10% chance. We surveyed 352 machine learning researchers publishing at top conferences, asking each a random subset of many questions. Some of these questions were about when they expected thirty-two concrete AI tasks would become ‘feasible’. We asked each of those questions in two slightly different ways. The relevant Atari questions had 19 and 20 responses for the two wordings, only one of which gave an answer for 2021. [↩︎](#fnref-Y7YY26W7jRDqqaGXk-4) 5. Half the time we asked about chances in N years, and half the time we asked about years until P probability, and people fairly consistently [had earlier distributions](https://aiimpacts.org/chance-date-bias/) when asked the second way. Both methods yielded a 50% chance in ten years here, though later the distributions diverge, with a 90% chance in 15 years yet a 60% chance in 20 years. Note that small numbers of different respondents answered each question, so inconsistency is not a huge red flag, though the consistent inconsistency across many questions is highly suspicious. [↩︎](#fnref-Y7YY26W7jRDqqaGXk-5) 6. 
‘Take a written passage and output a recording that can’t be distinguished from a voice actor, by an expert listener.’ [↩︎](#fnref-Y7YY26W7jRDqqaGXk-6) 7. Beat the best human Starcraft 2 players at least 50% of the time, given a video of the screen. Starcraft 2 is a real time strategy game characterized by: * Continuous time play * Huge action space * Partial observability of enemies Long term strategic play, e.g. preparing for and then hiding surprise attacks.’[↩︎](#fnref-Y7YY26W7jRDqqaGXk-7) 8. Play new levels of Angry Birds better than the best human players. Angry Birds is a game where players try to efficiently destroy 2D block towers with a catapult. For context, this is the goal of the IJCAI Angry Birds AI competition [↩︎](#fnref-Y7YY26W7jRDqqaGXk-8)
d6dd2f00-8969-44e8-bc66-135e7d7af9c3
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
Holy Grail of AI (Artificial Intelligence) - Computerphile Right. So, last time, which was quite a while ago, we were talking about intelligence in general and the way that you can model intelligence as an optimization process - This is the hill climbing algorithm. - Yeah, that was an example we gave. We were using evolution as an example of an optimizing algorithm, or an optimizing system anyway. And then we were using that as a way of talking about other types of intelligence. We talked about chess AI very briefly. That kind of thing. So then the question is: What's the difference between the type of AI that we have now-- the type of AI that might play chess, drive a car, or win Jeopardy or whatever-- versus the ideas that we have of AI in the future? The kind of science fiction AIs that are what you would call true AI. What is it that really makes the difference? Is it just a matter of power, or is there something else? And one real distinguishing factor is generality. And what that means is how broad a set of domains it can optimize in. So if you take a chess AI, it's very intelligent in the domain of chess, and it is absolutely useless in almost any other domain. If you put a chess AI in a Google self-driving car, not only can it not drive the car, it doesn't have the concept of what a car is. It doesn't have any of the necessary cognitive architecture to drive a car. And vice versa, right? The Google car can't play chess. And it can't win at Jeopardy. Whereas we have a working example of a general intelligence. Which is human intelligence. Right? Human brains can do a lot of different things. In a lot of different domains. Including brand new domains. The domains we didn't evolve for particularly. So in fact chess, right? We invented chess, we invented driving. And then we learned to become good at them. So, a general intelligence is in a sense a different class of thing. Because it's a single optimization system that's able to optimize in a very broad variety of different domains. And if we could build an artificial general intelligence. That's kind of the holy grail of AI research: that you have a single program or a single system that is able to solve any problem that we throw at it, or at least tackle any problem that we throw at it. -Recently Prof. Brailsford ... the idea of the Turing test. What strikes me from what you're saying is that's a very specific domain: pretending to be human, talking. -Yes, in a sense it's a very specific domain. The Turing test is a necessary but not sufficient test for general intelligence. Hum. You could... it depends how you format your test, right, because you could say, well, the AI has to pretend to be human convincingly. Turing's original test was only a brief conversation using only text. But you could say, to convince me you're human: tell me what move I should make in this chess game. To convince me you're human, tell me how I would respond in this driving situation, or what's the answer to this Jeopardy question? So you can in a Turing test deliberately test a wide variety of other domains. But in general, conversation is one domain. Hum, yeah, you could formulate a true Turing test in that way, but it would get longer and be more, sort of, regressive. One more way of thinking about general intelligence is as a domain-specific intelligence where the domain is the world, or physical reality. And if you can reliably optimize the world itself. That is in a sense what general intelligence does. 
-Is that like humans have been changing the world to meet their needs? -Absolutely. So when you say changing the world: obviously we've been changing the world on a very grand scale, but everything that humans do in the real world is in a sense changing the world to be better optimized for them, right. Like if I'm thirsty and there's a drink over there, then picking it up and putting it to my lips and drinking: I'm changing the world to improve my hydration levels, which is something that I value. So I'm, sort of, optimizing. I am using my intelligence to optimize the world around me in a very abstract sense. But also quite practically. -But on a bigger scale, as you say on a grander scale, building a dam and irrigating a field, putting a pipe to your house and then I'll need to have a tap. -Yep. -It's doing the same thing but on a grander scale. -Right, and there's no hard boundary between these two things. It's the same basic mechanism at work. The idea that you want things to be somewhere different from where they are. So you use your intelligence to come up with a series of actions or a plan, that you can implement, that will better satisfy your values. And that's what a true AI, a general AI, would do as well. So you can see the metaphor to optimization is still there, right. You've got this vast state space, which is all possible states of the world. Remember before, we were talking about dimensionality and how it's kind of a problem if you have too many dimensions. (So when we have a two-dimensional space...) This is what kills basic implementations of general AI off the bat, because the world is so very, very complicated. It's an exceptionally high dimensional space. With the "I'm drinking a drink" example, you've got the same thing again. You've got a state of the world which is a place in this space, and you've got another state of the world which is the state in which I've just had a drink. And one of them is higher in my utility function. It's higher in my ordering, my preference ordering of the world states. So I'm going to try and move, I'm going to try to shift the world from places that are lower in my preference ordering to places that are higher. And that gives you a way to express the making of plans and the implementing of actions and intelligent behavior in the real world in mathematical terms. You can't just implement it, though, because of this enormous dimensionality problem. -All these dimensions: if you try to brute-force infinite dimensions, you're going to fall over very quickly. -Yeah, yeah, immediately. -Changing the world. Right, and if that sounds a little bit threatening, uh, it is. (laughs) We'd like to thank audible.com for sponsoring this Computerphile video, and if you like books go over to audible.com/computerphile: there's a chance to try out a book for free. Now I spoke to Rob, who's in this Computerphile video, and asked him what book he would recommend, and he says "Superintelligence" by Nick Bostrom is the one to check out, particularly on this subject of artificial intelligence. We've got more to come on that subject on Computerphile as well, so visit audible.com/computerphile, check out "Superintelligence", and thanks once again to audible.com for sponsoring this Computerphile video.
73e1d18a-8400-4dda-9830-e248c66a7204
trentmkelly/LessWrong-43k
LessWrong
Closet survey #1 What do you believe that most people on this site don't? I'm especially looking for things that you wouldn't even mention if someone wasn't explicitly asking for them. Stuff you're not even comfortable writing under your own name. Making a one-shot account here is very easy, go ahead and do that if you don't want to tarnish your image. I think a big problem with a "community" dedicated to being less wrong is that it will make people more concerned about APPEARING less wrong. The biggest part of my intellectual journey so far has been the acquisition of new and startling knowledge, and that knowledge doesn't seem likely to turn up here in the conditions that currently exist. So please, tell me the crazy things you're otherwise afraid to say. I want to know them, because they might be true.
e1ae3e4e-b42b-44e4-9b74-00cdabed7e7c
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Superhero Bias Today's post, Superhero Bias was originally published on 01 December 2007. A summary (taken from the LW wiki):   > It is better to risk your life to save 200 people than to save 3. But someone who risks their life to save 3 people is revealing a more altruistic nature than someone risking their life to save 200. And yet comic books are written about heroes who save 200 innocent schoolchildren, and not police officers saving three prostitutes. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Halo Effect, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
a7e7cc86-b509-4548-a073-3c1321e37b0e
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Everything I ever needed to know, I learned from World of Warcraft: Goodhart’s law *This is the first in a series of posts about lessons from my experiences in *World of Warcraft*. I’ve been talking about this stuff for a long time—in forum comments, in IRC conversations, etc.—and this series is my attempt to make it all a bit more legible. I’ve added footnotes to explain some of the jargon, but if anything remains incomprehensible, let me know in the comments.* --- *World of Warcraft*, especially *WoW* raiding[[1]](#fn-59Y9JXX65bChJRr5v-1), is very much a game of numbers and details. At first, in the very early days of *WoW*, people didn’t necessarily appreciate this very well, nor did they have any good way to use that fact even if they did appreciate it. (And—this bit is a tangent, but an interesting one—a lot of superstitions arose about how game mechanics worked, which abilities had which effects, what caused bosses[[2]](#fn-59Y9JXX65bChJRr5v-2) to do this or that, etc.—all the usual human responses to complex phenomena where discerning causation is hard.) And, more importantly and on-topic, there was no really good way to sift the good players from the bad; nor to improve one’s own performance. This hampered progression. (“Progression” is a *WoW* term of art for “getting a boss down, getting better at doing so, and advancing to the next challenge; rinse, repeat”. Hence “progression raiding” meant “working on defeating the currently-not-yet-beaten challenges”.) The combat log -------------- ![Screenshot of the combat log](https://blog.obormot.net/attach/combat_log_example.jpg?v=1525364454.jpg) One crucial feature of *WoW* is the **combat log**. This is a little window that appears at the bottom of your screen; into it, the game outputs lines that report everything that happens to or around your character. All damage done or taken, all hits taken or avoided, abilities used, etc., etc.—*everything*. This information is output in a specific format; and it can be parsed by the add-on system[[3]](#fn-59Y9JXX65bChJRr5v-3). Naturally, then, people soon began writing add-ons that did parse it—parse it, and organize it, and present various statistical and aggregative transformations of that data in an easy-to-view form—which, importantly, could be viewed *live*, as one played. Thus arose the category of add-ons known as “damage meters”. The damage meters ----------------- ![Screenshot of a damage meter addon](http://cdn-wow.mmoui.com/preview/pvw26215.jpg) Of course the “damage meters” showed other things as well—but viewing damage output was the most popular and exciting use. (What more exciting set of data is there, but one that shows how much you’re hurting the monsters, with your fireballs and the strikes of your sword?) The better class of damage-meter add-ons not only recorded this data, but also synchronized and verified it, by communicating between instances of themselves running on the clients of all the people in the raid. Which meant that **now** you could have a centralized display of just what exactly everyone in the raid was doing, and how, and how well. This was a great boon to raid leaders and raid guilds everywhere! You have a raid of 40 people, one of the DPSers[[4]](#fn-59Y9JXX65bChJRr5v-4) is incompetent, can’t DPS to save his life, or he’s AFK[[5]](#fn-59Y9JXX65bChJRr5v-5) half the time, or he's just messing around—who can tell? With damage meters—everyone can tell. Now, you could sift the bad from the good, the conscientious from the moochers and slackers, and so on. 
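Mechanically, a damage meter is doing something very simple with that log. Here is a minimal sketch in Python, using an invented log format; the real WoW combat log syntax and the add-on API are not reproduced here:

```python
import re
from collections import defaultdict

# Toy log format, for illustration only: "<source> hits <target> for <amount> damage."
LINE_RE = re.compile(r"^(?P<source>\w+) hits (?P<target>\w+) for (?P<amount>\d+) damage\.$")

def damage_totals(log_lines):
    """Aggregate total damage done by each source, damage-meter style."""
    totals = defaultdict(int)
    for line in log_lines:
        m = LINE_RE.match(line)
        if m:
            totals[m.group("source")] += int(m.group("amount"))
    return dict(totals)

log = [
    "Alice hits Ragnaros for 512 damage.",
    "Bob hits Ragnaros for 93 damage.",
    "Alice hits Ragnaros for 478 damage.",
]
print(sorted(damage_totals(log).items(), key=lambda kv: -kv[1]))
# [('Alice', 990), ('Bob', 93)]
```

The per-player total this produces is exactly the kind of single, easily ranked number the rest of the post is about: optimizing it is correlated with killing the boss, but not identical to it.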
And more: someone’s not performing well but seems to be trying, but failing? Well, now you look at his ability breakdown[[6]](#fn-59Y9JXX65bChJRr5v-6), you compare it to that of the top DPSers, you see what the difference is and you say—no, Bob, don't use ability X in this situation, use ability Y, it does more damage. The problem ----------- All of this is fantastic. But… it immediately and predictably began to be subverted by [Goodhart’s law](https://en.wikipedia.org/wiki/Goodhart%27s_law). To wit: if you are looking at the DPS meters but “maximize DPS” is not perfectly correlated with “kill the boss” (that being, of course, your goal)… then you have a problem. This may be obvious enough; but it is also instructive to consider the *specific ways* that those things can come uncoupled. So, let me try and enumerate them. ### The Thing is valuable, but it’s not the only valuable thing There are other things that must be done, that are less glamorous, and may detract from doing the Thing, but each of which is a *sine qua non* of success. (In *WoW*, this might manifest as: the boss must be damaged, but also, adds must be kited—never mind what this means, know only that while a DPSer is doing **that**, he can’t be DPSing!) And yet more insidious elaborations on that possibility: ### We can’t afford to specialize What if, yes, this other thing must be done, but the maximally competent raid member must **both** do that thing and **also** the main thing? He won’t DPS as well as he could, but he also can't just *not* DPS, because then you fail and die; you can’t say “ok, **just** do the other thing and forget DPSing”. In other words, what if the secondary task isn’t just something you can put someone full-time on? Outside of WoW, you might encounter this in, e.g., a software development context: suppose you’re measuring commits, but also documentation must be written—but you don’t have (nor can you afford to hire) a dedicated docs writer! (Similar examples abound.) Then other possibilities: ### Tunnel vision kills The Thing is valuable, but tunnel-visioning on The Thing means that you will forget to focus on certain other things, the result being that you are horribly doomed somehow—this is an *individual* failing, but given rise to by the incentives of the singular metric (i.e., DPS maximization). (The *WoW* example is: you have to DPS as hard as possible, *but* you also have to move out the way when the boss does his “everyone in a 10 foot radius dies to horrible fire” ability.) And yet more insidious versions of this one: ### Tunnel vision kills… other people Yes, if this tunnel-vision dooms **you**, personally, in a predictable and unavoidable fashion, then it is easy enough to say “do this other thing or else you will predictably **also** suffer on the singular metric” (the dead throw no fireballs). But the *real* problem comes in when neglecting such a secondary duty creates *externalities*; or when the destructive effect of the neglect can be pushed off on someone else. (In WoW: “I won’t run out of the fire and the healers can just heal me and I won’t die and I’ll do more DPS than those who don’t run out"; in another context, perhaps “I will neglect to comment my code, or to test it, or to do other maintenance tasks; these may be done for me by others, and meanwhile I will maximize my singular metric [commits]”.) 
It’s almost *always* the case that **you** have the comparative advantage in doing the secondary thing that avoids the doom; if others have to pick up your slack there, it’ll be way less efficient, overall. ### Optimization has a price The Thing is valuable, yes; and it may be that there are ways to *in fact* increase your level of the Thing, really do increase it, **but** at a non-obvious cost that is borne by *others*. Yes, you are improving *your* effectiveness, but the price is that others, doing other things, now have to work harder, or waste effort on the consequences, etc. (Many examples of this in WoW, such as “start DPSing before you’re supposed to, and risk the boss getting away from the tank and killing the raid”. In a general context, this is “taking risks, the consequences of which are dire, and the mitigation of which is a cost borne by others, not you”.) Then this one is particularly subtle and may be hard to spot: ### Everyone wants the chance to show off their skill The Thing is valuable, and doing it well brings judgment of competence, and therefore status. There are *roles within the project’s task allocation* that naturally give greater opportunities to maximize your performance of the Thing, and **therefore** people seek out those roles preferentially—even when an optimal allocation of roles, by relative skill or appropriateness to task, would lead them to be placed in roles that do not let them do the most of the Thing. (In WoW: if the most skilled hunter is needed to kite the add, but there are no “who kited the add best” meters, only damage meters… well, then maybe that most skilled hunter, when called upon to kite the add, says “Bob over there can kite the add better”—and as a result, because Bob actually is *worse* at that, the raid fails. In other contexts… well, many examples, of course; glory-seeking in project participation, etc.) Of course there is also: ### A good excuse for incompetence This is the converse of the first scenario: if the Thing is valuable but you are bad at it, you might deliberately seek out roles in which there is an *excuse* for not performing it well (because the role’s *primary* purpose is something else)—despite the fact that, actually, the ideal person in your role **also** does the Thing (even if not *as much* as in a Thing-centered role). --- 1. “Raid dungeons” were the most difficult challenges in the game—difficult enough to require up to 40 players to band together and cooperate, and cooperate *effectively*, in order to overcome them. “Raiding” refers to the work of defeating these challenges. Most of what I have to say involves raiding, because it was this part of *WoW* that—due to the requirement for effective group effort (and for other, related, reasons)—gave rise to the most interesting social patterns, the most illuminating group dynamics, etc. [↩︎](#fnref-59Y9JXX65bChJRr5v-1) 2. “Boss monsters” or “bosses” are the powerful computer-controlled opponents which players must defeat in order to receive the in-game rewards which are required to improve their characters’ capabilities. The most powerful and difficult-to-defeat bosses were, of course, raid bosses (see previous footnote). [↩︎](#fnref-59Y9JXX65bChJRr5v-2) 3. *WoW* allows players to create add-ons—programs that enhance the game’s user interface, add features, and so on. Many of these were very popular—downloaded and used by many other players—and some came to be considered necessary tools for successful raiding. [↩︎](#fnref-59Y9JXX65bChJRr5v-3) 4. 
“Damage Per Second”, i.e. doing damage to the boss, in order to kill it (this being the goal). Along with “tank” and “healer”, “DPS” is one of the three roles that a character might fulfill in a group or raid. A raid needed a certain number of people in each role, and all were critical to success. [↩︎](#fnref-59Y9JXX65bChJRr5v-4) 5. “Away From Keyboard”, i.e., not actually at the computer—which means, obviously, that his character is standing motionless, and not contributing to the raid’s efforts in the slightest. [↩︎](#fnref-59Y9JXX65bChJRr5v-5) 6. In other words: which of his character’s abilities he was using, in what proportion, etc. Is the mage casting Fireball, or Frostbolt, or Arcane Missile? Is the hunter using Arcane Shot, and if so, how often? By examining the record—recorded and shown by the damage meters—of a character’s ability usage, it was often very easy to determine who was playing optimally, and who was making mistakes. [↩︎](#fnref-59Y9JXX65bChJRr5v-6)
San Jose Meetup, Sat 10/25 @ 7:30pm It's on Saturday at 7.30pm at Il Fornaio, 302 S Market St (in the Sainte Claire Hotel), San Jose. All aspiring rationalists welcome. The reservation is currently for 21 but can be changed if needed. Please RSVP if you haven't already. They have a wide variety including the requested vegetarian options & pizza from an oak wood burning oven, mostly good reviews, casual dress code, the Wine Spectator Award of Excellence 2007, and are just down the road from Montgomery Theater. 7.30 should give people time to finish chatting at the theater or freshen up at their hotels, but if you want to come a bit earlier they say they'll probably be able to seat you. If lots of people think 7.30 is too late I could change the time or suggest a cafe for early birds to grab a coffee in before dinner. Please RSVP whether or not you'll be coming, either in a comment or email me at cursor_loop 4t yahoo p0int com. That way if new people want to come they can take unclaimed places, and I can let the restaurant know if the number changes. If you think you'll be late, please let me know that too so we can ask the restaurant to keep your seat. Also, as my cellphone won't work in San Jose, could someone please volunteer their number for unexpectedly late or lost people to ring? Please see here for further info. Hope to see you all on Saturday! > -- Posted on behalf of Michael Howard
Life on the Grid (Part 1) The physical layout of the environment in which someone is raised impacts their cognitive abilities as an adult. A recent study based on video game data from 397,162 people across 38 countries found that “people who grew up outside cities are better at navigation.” More specifically, “growing up in cities with low Street Network Entropy (e.g. Chicago) led to better results at video game levels with a regular layout, while growing up outside cities or in cities with higher Street Network Entropy (e.g. Prague) led to better results at more entropic video game levels.” In plain English: if you grow up in a grid-like environment, then you will be worse at navigating less grid-like environments.

[Figure: Association between environment and video game (Sea Hero Quest) wayfinding performance, stratified by age, gender, and education. SHQ wayfinding performance is computed from trajectory length and averaged within 5-year windows.]

[Figure: Two examples of cities with low (Chicago) and high (Prague) SNE. Right: distribution of street bearings across 36 bins of 10 degrees.]

This finding is perhaps not wholly surprising, and at first blush might not seem all that significant. The vast majority of us hardly ever need to truly wayfind IRL anymore. Technology has made our navigational skills all but obsolete. So what does it matter whether certain environments are better or worse at preserving them? It matters a great deal. To understand why, we will need to take a small detour. In his appearance on Making Sense with Sam Harris, complexity scientist David Krakauer makes a distinction between complementary cognitive artifacts—technologies that make us more intelligent after using them—and competitive cognitive artifacts (if you can’t guess what these do then maybe you’ve been using them too much). The canonical example of a competitive artifact is a calculator: repeated usage leaves you worse at mental arithmetic than you were before. Contrast this with an abacus, which can
"Aligned" foundation models don't imply aligned systems *[The basic idea in this post is probably not original to me, since it's somewhat obvious when stated directly. But it seems particularly relevant and worth distilling in light of recent developments with LLM-based systems, and because I keep seeing arguments which seem confused about it.]* Alignment is a property of agents, or even more generally, of systems. An "aligned model" is usually[[1]](#fno05unnnx2jo) a type error. Often when a new state-of-the-art ML model is developed, the first thing people ask is what it can do when instantiated in the most obvious way possible: given an input, execute the function represented by the model on that input and return the output. I'll refer to this embodiment as the "trivial system" for a given model. For an LLM, the trivial system generates a single token for a given prompt. There's an obvious extension to this system, which is to feed the predicted token and the initial prompt back into the model again, and repeat until you hit a stop sequence or a maximum length. This is the system you get when you make a single call to the [OpenAI text or chat completion API](https://platform.openai.com/docs/guides/completion). I'll name this embodiment the "trivial++ system". You can take this much further, by building a [chatbot interface](https://openai.com/blog/chatgpt) around this API, hooking it up to the [internet](https://openai.com/blog/chatgpt-plugins), running it in an [agentic loop](https://github.com/Torantulino/Auto-GPT), or even [more](https://www.lesswrong.com/posts/4Gt42jX7RiaNaxCwP/more-information-about-the-dangerous-capability-evaluations) [exotic](https://python.langchain.com/en/latest/) [arrangements](https://www.lesswrong.com/posts/LsqvMKnFRBQh4L3Rs/steering-systems#A_sketch_of_a_powerful_steering_system) of your own design. These systems have suddenly started working much better with  the release of GPT-4, and AI capabilities researchers are just getting started. The capabilities of any particular system will depend on both the underlying model and the ways it is embodied: concretely, you can improve [Auto-GPT](https://github.com/Torantulino/Auto-GPT) by: (a) backing it with a better foundation model, (b) giving it access to more APIs and tools, and (c) improving the code, prompting, and infrastructure that it runs on. *Perhaps* models to which techniques like [RLHF](https://www.lesswrong.com/tag/rlhf)  have been applied will make it easy to build aligned systems and hard or impossible to build unaligned systems, but this is far from given. Things like RLHF and instruction prompt tuning definitely make it easier to build powerful systems out of the foundation models to which they are applied. Does an RLHF'd model make it easier to build Order-GPT and harder to build [Chaos-GPT](https://www.vice.com/en/article/93kw7p/someone-asked-an-autonomous-ai-to-destroy-humanity-this-is-what-happened)? (I'm not sure; no Auto-GPT applications seem to really be working particularly well yet, but I wouldn't count on that trend continuing.) So while RLHF is definitely a capabilities technique, it remains to be seen whether it can be called an alignment technique when applied to models embedded in non-trivial systems (though it is quite effective at getting the trivial++ system for GPT-4 not to say impolite things.) 
Some examples where others seem confused or not careful about this distinction ------------------------------------------------------------------------------ From [Evolution provides no evidence for the sharp left turn](https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn): > Let’s consider eight specific alignment techniques: > > * [Reinforcement learning from human feedback](https://arxiv.org/abs/1706.03741) > * [Constitutional AI](https://www.anthropic.com/constitutional.pdf) > * [Instruction prompt tuning](https://arxiv.org/abs/2212.13138v1) > * [Discovering Language Model Behaviors with Model-Written Evaluations](https://arxiv.org/abs/2212.09251) > * [Pretraining Language Models with Human Preferences](https://arxiv.org/abs/2302.08582) > * [Discovering Latent Knowledge in Language Models Without Supervision](https://arxiv.org/abs/2212.03827) > * [More scalable methods of process based supervision](https://arxiv.org/abs/2211.14275) > * [Using language models to write their own instruction finetuning data](https://arxiv.org/abs/2212.10560) > Most of these seem like useful techniques for building  or understanding larger foundation models. None of them self-evidently help with the alignment of nontrivial systems which use executions of those foundation models as a component, though it wouldn't surprise me if at least some of them were relevant. From a [different section](https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn#Evolution_s_sharp_left_turn_happened_for_evolution_specific_reasons) of the same post: > In my frame, we've already figured out and applied the sharp left turn to our AI systems, in that we don't waste our compute on massive amounts of incredibly inefficient neural architecture search, hyperparameter tuning, or meta optimization. For a given compute budget, the best (known) way to buy capabilities is to train a single big model in accordance with empirical scaling laws such as those discovered in the Chinchilla paper, not to split the compute budget across millions of different training runs for vastly tinier models with slightly different architectures and training processes. In fact, we can be even more clever and use small models to tune the training process, before scaling up to a single large run, as OpenAI did with GPT-4. > > "AI systems" in the first sentence should probably be "AI models". And then, I'd say that the best way to advance capabilities is to train a single big model *and then wrap it up* in a useful system with a slick UX and lots of access to APIs and other functionality. From [Deconfusing Direct vs Amortised Optimization](https://www.lesswrong.com/posts/S54HKhxQyttNLATKu/deconfusing-direct-vs-amortised-optimization): > Amortized optimization, on the other hand, is not directly applied to any specific problem or state. Instead, an agent is given a dataset of input data and successful solutions, and then learns a function approximator that maps directly from the input data to the correct solution. Once this function approximator is learnt, solving a novel problem then looks like using the function approximator to generalize across solution space rather than directly solving the problem. > > The use of the word "agent" seems like a type error here. A function approximator is trained on a dataset of inputs and outputs. An agent may then *use* that function approximator as one tool in its toolbox when presented with a problem. 
If the problem is similar enough to the ones the function approximator was designed to approximate, it may be that the trivial system that executes the function approximator on the test input is sufficient, and you don't need to introduce an agent into the situation at all. From [My Objections to "We’re All Gonna Die with Eliezer Yudkowsky"](https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky#Yudkowsky_brings_up_strawberry_alignment): (emphasis mine, added to highlight the relevant parts in a longer quote without dropping the context.) > ### [Yudkowsky brings up strawberry alignment](https://youtu.be/gA1sNLL6yg4?t=2330) > > > > I mean, I wouldn't say that it's difficult to align an AI with our basic notions of morality. I'd say that it's difficult to align an AI on a task like 'take this strawberry, and make me another strawberry that's identical to this strawberry down to the cellular level, but not necessarily the atomic level'. So it looks the same under like a standard optical microscope, but maybe not a scanning electron microscope. Do that. Don't destroy the world as a side effect." > > > > > >   > > My first objection is: [human value formation doesn't work like this](https://www.lesswrong.com/posts/gHefoxiznGfsbiAu9/inner-and-outer-alignment-decompose-one-hard-problem-into#III__Outer_inner_just_isn_t_how_alignment_works_in_people). There's no way to raise a human such that their value system cleanly revolves around the one single goal of duplicating a strawberry, and nothing else. By asking for a method of forming values which would permit such a narrow specification of end goals, you're asking for a value formation process that's fundamentally different from the one humans use. There's no guarantee that such a thing even exists, and implicitly aiming to avoid the one value formation process we know is compatible with our own values seems like a terrible idea. > > It also assumes that the orthogonality thesis should hold in respect to alignment techniques - **that such techniques should be equally capable of aligning models to any possible objective.** > > This seems clearly false in the case of deep learning, **where progress on instilling any particular behavioral tendencies in models** roughly follows the amount of available data that demonstrate said behavioral tendency. It's thus vastly easier to align models to goals where we have many examples of people executing said goals. As it so happens, we have roughly zero examples of people performing the "duplicate this strawberry" task, but many more examples of e.g., humans acting in accordance with human values, ML / alignment research papers, chatbots acting as helpful, honest and harmless assistants, people **providing oversight to AI models**, etc. See also: [this discussion](https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization#sjWcuHie4egJbqrk8). > > Again, "aligning a model", "behavioral tendencies in models" and "providing oversight to AI models" are type errors in my ontology. This type error is important, because it hides an implicit assumption that either any systems in which "aligned" models are embedded will also be aligned, or that people will build only trivial systems out of these models. Both of these assumptions have already been falsified in rather dramatic fashion. My mental model of the authors I've cited understand and mostly agree with the basic distinction I've made in this post. 
However, I think they were not being very careful about tracking it in some of the arguments and explanations above, and that this is more than just a pedantic point or argument over [definitions](https://www.lesswrong.com/posts/cFzC996D7Jjds3vS9/arguing-by-definition). This distinction is important, because AI system capabilities are currently advancing rapidly, independent of any DL paradigm-based improvements in foundation models. The very first [paragraph](https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn) of one of the posts quoted above summarizes the "sharp left turn" argument as factoring through SGD, but SGD is not the only way of pushing the capabilities frontier, and may not be the main one for much longer, as GOFAI approaches come back into vogue. (A possibility which, I [note](https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn?commentId=XuLdXJYsYknxi3Lx2), the original authors of the sharp left turn argument foresaw.)

1. **[^](#fnrefo05unnnx2jo)** Maybe if the models and training runs get large enough, an inner optimizer develops somewhere inside and even a single output from the trivial system embodied by a model becomes dangerous. Or, perhaps the inner optimizer develops and "breaks out" [entirely during the training process itself](https://gwern.net/fiction/clippy). Regardless of how likely those failure modes are, we'll probably encounter less exotic-looking failure modes of systems which are built on top of non-dangerous models *first*, which is the topic of this post. [Mesa-optimizers](https://www.lesswrong.com/tag/mesa-optimization) are just another possible way we might fail *later*, if we manage to solve some easier problems first.
The Flynn Effect Clarified I said in my review of WEIRDest People that the Flynn effect seems like a natural consequence of thinking styles that became more analytical, abstract, reductionist, and numerical. I'll expand here on some questions which I swept under the rug, so that I could keep that review focused on the book's most important aspects. Cultural Bias After reading WEIRDest People, I find that the goal of a culture-neutral IQ test looks strange (and, of course, WEIRD). At least as strange as trying to fix basketball to stop favoring tall people. I found after writing that review that Muthukrishna and Henrich discuss IQ in Innovation in the collective brain (hereinafter referred to as MH2016), saying roughly what I inferred from WEIRDest People. MH2016 says: "IQ measures the abilities that are useful at school and work in these societies." There's a good case to be made for removing accidental cultural bias. Doing so will increase the test's ability to detect cognitive patterns that are oriented toward science and technology. Similar cultural biases pervade psychology. E.g. there are real cultural differences about the wisdom of conformism, which lead me to doubt there's a culture-neutral rule for what the correct answer is in the Asch conformity test. Strong cultural biases are built into the central ideas behind what we call science. E.g. it's somewhat WEIRD to believe that we should seek universal laws of nature, rather than expecting valuable knowledge to be context-dependent knowledge. Could a genuinely culture-neutral IQ test be developed? I presume it's possible to shift the emphasis so that it measures holistic reasoning more, and analytical reasoning less. But what would be a neutral mix of those two? You could average over all humans currently alive, but that would produce a rather different result than if you'd taken that average 1000 years ago. Changes of this nature seem likely to reduce the value of IQ tests at satisfying the original goals behind testing, wi
Disasters If there were a natural disaster tomorrow and it took about two weeks to get things working again, how many people would be ok for food, water, and other necessities? I'm guessing below 5%, but I think this level of preparedness would be a good goal for most people who can afford it. Why don't people plan for potential disasters? Some possibilities: * They don't think disasters are likely. On the other hand, I also don't think disasters are likely! While we have extra water in the basement, I think the chances we'll need it sometime during my life are only maybe 2%. Since it's not expensive, and if we do need it we'll be incredibly happy to have it, I think it's worth setting up. It does matter a lot whether the chances are ~2% or 0.0002%, but if you think your lifetime chance of being impacted by a serious disaster is under 1% I'd encourage you to think about historical natural disasters in your area (earthquakes, floods, hurricanes, wildfires, etc) plus the risk of potential human-caused disasters (nuclear war, epidemics, civil war, economic collapse, etc). * It's weird. Most people don't do it, and a heuristic of "do the things other people do" is normally a pretty good one. In this case, though, I think we should be trying to change what's normal. The government agrees; the official recommendations involve a lot more preparation than people typically do. * They can't afford the money, time, or thought. Many people are in situations where planning for what's likely to happen in the next couple months is hard enough, let alone for things that have a low single digits chance of happening ever. This can't explain all of it, though, because even people who do have more time and money also haven't generally thought through simpler preparations. * They don't think preparation is likely to be useful. If there's a nuclear strike we're all dead anyway, right? Except most disasters, even nuclear ones, aren't this binary. Avoiding exposure to radiation and
May I ask for some help getting started with crypto? Hey, gang. There is no way for me to structure this request coherently. I am barbarously ignorant of the subject at hand, so please forgive any instances of clumsy wording you may encounter here or there. This is the only community I trust to give me sound, well-informed, and useful advice. Here is my situation: I would like to invest in crypto, but my ignorance in this direction is magnificent. Indeed, it feels invincible to me. Furthermore, the very prospect of attempting to defeat this ironclad ignorance has presented itself to me as a somewhat insurmountable "ugh field" for the following reasons: a) It seems like trading competently in crypto requires some degree of proficiency across multiple fields and practice areas: engineering, software development, data analysis, statistical modeling, cash flow assessment, insight into the psychology behind user behavior, an advanced understanding of economics, etc. I'm sure some of the requisite base knowledge is less self-evident, even, than that. b) Existing information on the subject of crypto investing is so vast that any effort to consult educational resources on the matter tends to leave me feeling like I've just been washed about helplessly in a stew of concepts and details which seem very convoluted and carry very little meaning to my mind. c) I don't know where to look for quality information or trustworthy sources. How can I, starting from square one (complete ignorance), learn to invest wisely in crypto and maximize my returns on investment? How much time should I reasonably expect to set aside for learning this? Can anyone suggest quality educational resources for getting started? What mistakes have you made? What should I avoid? Also: what should I be investing in at the moment? Please take pity upon my indolent soul. Any other advice, hacks, and pro-tips that may occur to you would be very much appreciated as well. I have no idea what I'm doing.
Foom & Doom 1: “Brain in a box in a basement” 1.1 Series summary and Table of Contents This is a two-post series on AI “foom” (this post) and “doom” (next post). A decade or two ago, it was pretty common to discuss “foom & doom” scenarios, as advocated especially by Eliezer Yudkowsky. In a typical such scenario, a small team would build a system that would rocket (“foom”) from “unimpressive” to “Artificial Superintelligence” (ASI) within a very short time window (days, weeks, maybe months), involving very little compute (e.g. “brain in a box in a basement”), via recursive self-improvement. Absent some future technical breakthrough, the ASI would definitely be egregiously misaligned, without the slightest intrinsic interest in whether humans live or die. The ASI would be born into a world generally much like today’s, a world utterly unprepared for this new mega-mind. The extinction of humans (and every other species) would rapidly follow (“doom”). The ASI would then spend countless eons fulfilling its desires, desires which we humans would find to be bizarre and pointless. Now, I don’t endorse every word of that foom & doom scenario above—for example, I don't think “foom” requires recursive self-improvement. But I’m in much closer agreement with that scenario than the vast majority of AI safety & alignment researchers today, who tend to see the “foom & doom” scenario above as somewhere between “extraordinarily unlikely” and “already falsified”! Those researchers are not asking each other “is it true?”, but rather “lol, can you believe that some people used to believe that?”.[1] Oh well. Laugh all you want. It’s still what I believe. Conversely, from my perspective as a foom & doomer, it’s the mainstream contemporary AI alignment discourse that feels increasingly foreign and strange. How, I ask myself, do so many seemingly reasonable people wind up with such wildly, bafflingly over-optimistic beliefs as “P(doom)≲50%”?? Anyway, my main goal in these two posts is to explore how I wind up in such a different pl
Donating to help Democrats win in the 2024 elections: research, decision support, and recommendations A couple of months ago I went searching for research-based or otherwise plausibly effective ways to donate money to help Democrats win in the US federal elections. I found a fragmented smattering of semi-closed, secretive sources. In this post I'll pull together the best and most inspiring recommendations I found from trustworthy people, as well as lots of other details about how and when to donate for optimal impact. Note: wondering if recent news changes everything? Please see the section "Should I wait / give up due to the Biden resignation situation or the Trump assassination attempt?" This post will not argue that members of this community should support Democrats, or even that intervening in US elections is a good idea. It's written for readers who already agree, to help them pursue their aims in an informed and effective way. Please keep discussion about whether this is a fundamentally worthy cause out of the comments. In return, I promise that this post won't criticize people who disagree with its politics or make the assumption that everyone in the community shares them. I am not an authority on this topic, but I'm writing this because I think some of this info will be new and valuable to some folks here. This post is based on a review of available literature (mostly unpublished but shared within the industry), consulting with experienced folks in the research-based Democratic voter turnout world, recommendations from expert research reviewers, and my own knowledge from working in the field[1] about which groups have well-designed research programs.  Who should read this This post might be for you if: * You think the world will be a better, safer, or higher-utility place if Democrats control at least one of the presidency, House, or Senate. * You're put off by some of the Democrat/leftist discourse and infighting you see online and don't feel great about contributing to the most vocal groups * You'd donate to electoral work if there were sufficient
Task identification problem A subproblem of building a [task-directed AGI ](https://arbital.com/p/6w) is communicating to the AGI the next task and identifying which outcomes are considered as fulfilling the task. For the superproblem, see [https://arbital.com/p/-2s3](https://arbital.com/p/-2s3). This seems like primarily a communication problem. It might have additional constraints associated with, e.g., the AGI being a [behaviorist genie](https://arbital.com/p/102). In the [known-fixed-algorithm](https://arbital.com/p/1fy) case of AGI, it might be that we don't have much freedom in aligning the AGI's planning capabilities with its task representation, and that we hence need to work with a particular task representation (i.e., we can't just use language to communicate, we need to use labeled training cases). This is currently a stub page, and is mainly being used as a parent or tag for subproblems.
Meetup : Less Wrong Israel Meetup (Tel Aviv): Quantum Computing Discussion article for the meetup : Less Wrong Israel Meetup (Tel Aviv): Quantum Computing WHEN: 28 November 2013 08:00:00PM (+0200) WHERE: 7 begin ramat gan We're going to have a meetup on Thursday, November 28th at VisionMap's offices, Gibor Sport House, 15th floor, 7 Menachem Begin st., Ramat-Gan. Our program is: 20:00-20:15: Assembly 20:15-21:00: Main Talk 21:00-22:00: Dinner & Discussion 22:00-23:00: Rump Session (minitalks) 23:00-: End of official programming Main Talk: Quantum Computing / Anatoly Vorobey We're used to living in a world where things happen in order. But in the reality of quantum physics, things work a little differently. This lets us do all sorts of cool things, which - starting with the basics of computing theory - you'll learn about in this talk. Backup Talk: TBA Rump Session: Each participant will give a 4-minute talk (+3 minute encore if we applaud hard enough). Giving a talk isn't mandatory, but it's highly recommended. Not confident that what you have to say is relevant to our interests? Unsure about your public speaking skills? Doesn't matter - in the rump session, anything goes. Note, you don't have to prepare a talk to come! Speaking at the rump session is completely optional - only if you want to. Feel free to contact me (Gal Hochberg) at hochbergg@gmail.com or at 0545330678 for any further information. Discussion article for the meetup : Less Wrong Israel Meetup (Tel Aviv): Quantum Computing
How I Write Cross-posted, as always, from Putanumonit. ---------------------------------------- Five years ago I started Putanumonit not expecting that anyone would read it. That year I had a chance to meet two of my favorite bloggers, Scott Alexander and Tim Urban, and discovered that they had both honed their writing skills on lesser known blogs for several years before relaunching as SSC and WBW. My plan at launch was to write for a small audience until 2020, then scrap Putanumonit and launch a new blog with everything I will have learned about writing. That five-year plan lasted all of three weeks when Scott linked to my post about why China’s soccer team sucks and brought in thousands of readers in one day. So now I’m stuck with Putanumonit. Last week my friend Lynette called me to talk about blogging, and I realized that in those five years I did figure out a few things about writing. These aren’t technical tips of sentence construction and argument layout but relate more broadly to writing as a consistent activity. So, here’s how five years of Putanumonit happened. ---------------------------------------- I write confidently and trust the readers to discount appropriately It’s good to be clear about the general epistemic status of what I write and be meta-honest about writing my sincere beliefs (or sincere drug trips). But I don’t think this requires hedging every statement, reminding the reader of my inevitable subjectivity, or bloating the text with endless caveats. I write under the assumption that I’m just one voice among many each of my readers is exposed to, more “rando on the internet” than “infallible guru”. I can imagine a silly reader using each and every piece of advice I’ve ever written to fuck up their lives, but dumb people can’t be helped either way. I trust smart readers to sanity-check my models, discount my apparent confidence, double-check factual claims, and reverse all advice as needed. So this post is not universal writing advice, it’s how
Value of building an online "knowledge web" Please excuse my naivety, would really like to hear from more knowledgeable people about this. I recently discovered a note-taking website called Roam which allows you to create pages, take bullet point notes in those pages, and use double brackets around phrases and words to create a new, doubly linked page. When you do this, you can see all the connected pages in a visual; a graph where each node is a page. I think this tool is valuable because it allows me to externalize the connectedness of ideas and concepts with clarity. I'm wondering why a tool like this hasn't been popularized, especially as a tool for quicker onboarding for people who want to learn about something very quickly? I want to know what I'm overlooking. It seems to me that having an organization of notes like this, with the most important TL;DRs and links to papers and websites for additional info for those interested in learning more, would save a lot of time and help create an understanding of how various ideas relate. Additionally, wouldn't a structure like this help with understanding how to build AI from an engineering perspective? Breaking down the desired functionalities into sub-topics, having an idea of how to put certain functionalities together, etc. I'm an undergrad who learned about AI Safety through my school EA group. I've spent a lot of time in the past year trying to learn about this space and how I can contribute to it (from a cogsci->theory of mind->utility inference/value alignment perspective). A nontrivial amount of that time was spent on finding+identifying good information from various sources: mainly from LW/AN/80k/FHI/OpenAI pubs/Deepmind pubs... -- while attempting to piece the relationship of the subtopics all together. In a sense, I found even the very roughly structured bibliography by CHAI (https://humancompatible.ai/bibliography) to be my main source of understanding how I could categorize the different work and ideas in AI Safety. Wouldn't building up a k
Minimal-trust investigations *Click lower right to download or find on Apple Podcasts, Spotify, Stitcher, etc.* > This piece is about the single activity ("minimal-trust investigations") that seems to have been most formative for the way I think. Most of what I believe is mostly based on trusting other people. For example: * I brush my teeth twice a day, even though I've never read a study on the effects of brushing one's teeth, never tried to see what happens when I *don't* brush my teeth, and have no idea what's in toothpaste. It seems like most reasonable-seeming people think it's worth brushing your teeth, and that's about the only reason I do it. * I believe climate change is real and important, and that official forecasts of it are probably reasonably close to the best one can do. I have read a bunch of arguments and counterarguments about this, but ultimately I couldn't tell you much about how the climatologists' models actually work, or specifically what is wrong with the various skeptical points people raise.[1](https://www.cold-takes.com/p/2a4d6dfd-adcb-4f69-881b-4225a71b3d60/#fn1) Most of my belief in climate change comes from noticing *who* is on each side of the argument and *how* they argue, not *what* they say. So it comes mostly from deciding whom to trust. I think it's completely reasonable to form the vast majority of one's beliefs based on trust like this. I don't really think there's any alternative. But I also think it's a good idea to occasionally do a **minimal-trust investigation:** to suspend my trust in others and dig as deeply into a question as I can. This is not the same as taking a class, or even reading and thinking about both sides of a debate; it is always enormously more work than that. I think the vast majority of people (even within communities that have rationality and critical inquiry as central parts of their identity) have never done one. Minimal-trust investigation is probably the single activity that's been most formative for the way I think. I think its value is twofold: * **It helps me develop intuitions for what/whom/when/why to trust,** in order to approximate the views I *would* hold if I *could* understand things myself. * **It is a demonstration and reminder of just how much work minimal-trust investigations take, and just how much I have to rely on trust to get by in the world.** Without this kind of reminder, it's easy to casually feel as though I "understand" things based on a few memes or talking points. But the occasional minimal-trust investigation reminds me that memes and talking points are never enough to understand an issue, so my views are necessarily either based on a huge amount of work, or on trusting someone. In this piece, I will: * Give an example of a minimal-trust investigation I've done, and list some other types of minimal-trust investigations one could do. * Discuss a bit how I try to get by in a world where nearly all my beliefs ultimately need to come down to trusting someone. Example minimal-trust investigations ------------------------------------ The basic idea of a minimal-trust investigation is **suspending one's trust in others' judgments and trying to understand the case for and against some claim oneself,** ideally to the point where one can (within the narrow slice one has investigated) keep up with experts.[2](https://www.cold-takes.com/p/2a4d6dfd-adcb-4f69-881b-4225a71b3d60/#fn2) It's hard to describe it much more than this other than by example, so next I will give a detailed example. 
Detailed example from GiveWell
------------------------------

I'll start with **the case that long-lasting insecticide-treated nets (LLINs) are a cheap and effective way of preventing malaria.** I helped investigate this case in the early years of [GiveWell](http://www.givewell.org). My discussion will be pretty detailed (but hopefully skimmable), in order to give a tangible sense of the process and twists/turns of a minimal-trust investigation.

Here's how I'd summarize the broad outline of the case that most moderately-familiar-with-this-topic people would give:[3](https://www.cold-takes.com/p/2a4d6dfd-adcb-4f69-881b-4225a71b3d60/#fn3)

* *People sleep under LLINs, which are mosquito nets treated with insecticide (see picture above, taken from [here](https://www.againstmalaria.com/Distribution1.aspx?ProposalID=194&distributionID=673)).*
* *The netting can block mosquitoes from biting people while they sleep. The insecticide also deters and kills mosquitoes.*
* *A number of studies show that LLINs reduce malaria cases and death. These studies are rigorous - LLINs were randomly distributed to some people and not others, allowing a clean "experiment." (The key studies are summarized in a [Cochrane review](https://www.cochrane.org/CD000363/INFECTN_insecticide-treated-nets-preventing-malaria), the gold standard of evidence reviews, concluding that there is a "saving of 5.6 lives each year for every 1000 children protected.")*
* *LLINs cost a few dollars, so a charity doing LLIN distribution is probably saving lives very cost-effectively.*
* *Perhaps the biggest concern is that people might not be using the LLINs properly, or aren't using them at all (e.g., perhaps they're [using them for fishing](https://learningenglish.voanews.com/a/mosquito-nets-widely-used-for-fishing-study-finds/4250652.html)).*

When I did a minimal-trust investigation, I developed a picture of the situation that is pretty similar to the above, but with some important differences. (Of all the minimal-trust investigations I've done, this is among the cases where I learned the *least*, i.e., where the initial / conventional wisdom picture held up best.)

First, I read the Cochrane review in its entirety and read many of the studies it referenced as well. Some were quite old and hard to track down. I learned that:

* The original studies [involved](https://www.givewell.org/international/technical/programs/insecticide-treated-nets#What_sorts_of_programs_were_carried_out_in_small-scale_studies) very intense measures to make sure people were using their nets properly. In some cases these included *daily or weekly* visits to check usage. Modern-day LLIN distributions don't do anything like this. This made me realize that we can't assume a charity's LLIN distributions are resulting in proper usage of nets; we need to **investigate modern-day LLIN usage separately.**
* The most recent randomized study was completed in 2001, and there won't necessarily ever be another one.[4](https://www.cold-takes.com/p/2a4d6dfd-adcb-4f69-881b-4225a71b3d60/#fn4) In fact, none of the studies were done on LLINs - they were done on nets treated with non-long-lasting insecticide, which had to be re-treated periodically. This made me realize that **anything that's changed since 2001 could change the results observed in the studies**.
Changes could include how prevalent malaria is in the first place (if it has fallen for other reasons, LLINs might do less good than the studies would imply), how LLIN technology has changed (such as moving to the "long-lasting" approach), and the possibility that mosquitoes have evolved resistance to the insecticides.

This opened up a lot of further investigation, in an attempt to determine whether modern-day LLIN distributions have similar effects to those observed in the studies.

* We searched for **general data on modern-day usage, on changes in malaria prevalence, and on insecticide resistance.** This data was often scattered (so we had to put a lot of work into consolidating everything we could find into a single analysis), and hard to interpret (we couldn't tell how data had been collected and how reliable it was - for example, a lot of the statistics on usage of nets relied on simply asking people questions about their bednet usage, and it was hard to know whether people might be saying what they thought the interviewers wanted to hear). We generally worked to get the raw data and the full details of how the data was collected to understand how it might be off.
* We tried to learn about the **ins and outs of how LLINs are designed** and how they compare to the kinds of nets that were in the studies. This included things like reviewing product descriptions from the LLIN manufacturers.
* We did **live visits to modern-day LLIN distributions**, observing the distribution process, the LLINs hanging in homes, etc. This was a very imperfect way of learning, since our presence on site was keenly felt by everyone. But we still made observations such as "It seems this distribution process would allow people to get and hoard extra nets if they wanted" and "A lot of nets from a while ago have a lot of holes in them."
* We asked LLIN distribution charities to provide us with whatever **data they had on how their LLINs were being used**, and whether they were in fact reducing malaria.
    + [Against Malaria Foundation](https://www.againstmalaria.com/Default.aspx) was most responsive on this point - it was able to share pictures of LLINs being handed out and hung up, for example.
    + But at the time, it didn't have any data on before-and-after malaria cases (or deaths) in the regions it was working in, or on whether LLINs remained in use in the months or years following distribution. (Later on, it added processes for the latter and did some of the former, although malaria case data is noisy and we ultimately weren't able to make much of it.)
    + We've observed (from post-distribution data) that it is **common for LLINs to have huge holes in them.** We believe that the insecticide is actually doing most of the work (and was in the original studies as well), and that simply killing many mosquitoes (often *after* they bite the sleeper) could be the most important way that LLINs help. I can't remember how we came to this conclusion.
* We **spoke with a number of people about our questions and reservations.** Some made claims like "LLINs are extremely proven - it's not just the experimental studies, it's that we see drops in malaria in every context where they're handed out." We looked for data and studies on that point, put a lot of work into understanding them, and [came away unconvinced](https://www.givewell.org/international/technical/programs/insecticide-treated-nets/macro-evidence).
Among other things, there was at least one case in which people were using malaria "data" that was actually *estimates* of malaria cases - based on the *assumption* that malaria would be lower where more LLINs had been distributed. (This means that they were assuming LLINs reduce malaria, then using that assumption to generate numbers, then using those numbers as evidence that LLINs reduce malaria. [GiveWell](https://www.givewell.org/international/technical/programs/insecticide-treated-nets/macro-evidence): "So using this model to show that malaria control had an impact may be circular.")

My current (now outdated, because it's based on work I did a while ago) understanding of LLINs has a lot of doubt in it:

* I am worried about the possibility that mosquitoes have developed resistance to the insecticides being used. There is some suggestive evidence that resistance is on the rise, and no definitive evidence that LLINs are still effective. Fortunately, LLINs with next-generation insecticides are now in use (and at the time I did this work, these next-generation LLINs were in development).[5](https://www.cold-takes.com/p/2a4d6dfd-adcb-4f69-881b-4225a71b3d60/#fn5)
* I think that people are probably using their LLINs as intended around 60-80% of the time, which is comparable to the usage rates from the original studies. This is based both on broad cross-country surveys[6](https://www.cold-takes.com/p/2a4d6dfd-adcb-4f69-881b-4225a71b3d60/#fn6) and on specific reporting from the Against Malaria Foundation.[7](https://www.cold-takes.com/p/2a4d6dfd-adcb-4f69-881b-4225a71b3d60/#fn7) Because of this, I think it's simultaneously the case that (a) a lot of LLINs go unused or misused; (b) LLINs are still probably having roughly the effects we estimate. But I remain nervous that real LLIN usage could be much lower than the data indicates.
    + As an aside, I'm pretty underwhelmed by [concerns about using LLINs as fishing nets](https://learningenglish.voanews.com/a/mosquito-nets-widely-used-for-fishing-study-finds/4250652.html). These concerns are very media-worthy, but I'm more worried about things like "People just never bother to hang up their LLIN," which I'd guess is a more common issue. The LLIN usage data we use would (if accurate) account for both.
* I wish we had better data on malaria case rates by region, so we could understand which regions are most in need of LLINs, and look for suggestive evidence that LLINs are or aren't working. (GiveWell has recently [written about further progress on this](https://www.givewell.org/research/incubation-grants/Malaria-Consortium-monitoring-Ondo-July-2021).)

But all in all, the case for LLINs holds up *pretty* well. It's reasonably close to the simpler case I gave at the top of this section. For GiveWell, this end result is the exception, not the rule. Most of the time, a minimal-trust investigation of some charitable intervention (reading every study, thinking about how they might mislead, tracking down all the data that bears on the charity's activities in practice) is far more complicated than the above, and leads to a lot more doubt.

Other examples of minimal-trust investigations
----------------------------------------------

Some other domains I've done minimal-trust investigations in:

* **Medicine, nutrition, quantitative social science (including economics).** I've grouped these together because a lot of the methods are similar.
Somewhat like the above, this has usually consisted of finding recent summaries of research, tracking down and reading all the way through the original studies, thinking of ways the studies might be misleading, and investigating those separately (often hunting down details of the studies that aren't in the papers).
    + I have links to a number of writeups from this kind of research [here](https://www.cold-takes.com/does-x-cause-y-an-in-depth-evidence-review/#appendix), although I don't think reading such pieces is a substitute for doing a minimal-trust investigation oneself.
    + My [Has Life Gotten Better?](https://www.cold-takes.com/has-life-gotten-better/) series has a pretty minimal-trust spirit. I haven't always checked the details of how data was collected, but I've generally dug down on claims about quality of life until I could get to [systematically collected data](https://www.cold-takes.com/has-life-gotten-better-supplement/). In the process, I've found a lot of [bad arguments floating around](https://www.cold-takes.com/hunter-gatherer-gender-relations-seem-bad/).
* **Analytic philosophy.** Here a sort of "minimal-trust investigation" can be done without a huge time investment, because the main "evidence" presented for a view comes down to intuitive arguments and thought experiments that a reader can evaluate themselves. For example, a book like [The Conscious Mind](https://www.amazon.com/Conscious-Mind-Search-Fundamental-Philosophy/dp/0195117891?sa-no-redirect=1&pldnSite=1) more-or-less walks a layperson reader through everything needed to consider its claims. That said, I think it's best to read multiple philosophers disagreeing with each other about a particular question, and try to form one's own view of which arguments seem right and what's wrong with the ones that seem wrong.
* **Finance and theoretical economics.** I've occasionally tried to understand some well-known result in theoretical economics by reading through a paper, trying to understand the assumptions needed to generate the result, and working through the math with some examples. I've often needed to read other papers and commentary in order to notice assumptions that aren't flagged by the authors.
* **Checking attribution.** A simple, low-time-commitment sort of minimal-trust investigation: when person A criticizes person B for saying X, I sometimes find the place where person B supposedly said X and read thoroughly, trying to determine whether they've been fairly characterized. This doesn't require having a view on who's right - only whether person B seems to have meant what person A says they did. Similarly, when someone summarizes a link or quotes a headline, I often follow a trail of links for a while, reading carefully to decide whether the link summary gives an accurate impression.
    + I've generally been surprised by how often I end up thinking people and links are mischaracterized.
    + At this point, I don't trust claims of the form "person A said X" by default, almost no matter who is making them, and even when a quote is provided (since it's so often out of context).

And I wish I had time to try out minimal-trust investigations in a number of other domains, such as:

* **History.** It would be interesting to examine some debate about a particular historical event, reviewing all of the primary sources that either side refers to.
* **Hard sciences.** For example, taking some established finding in physics (such as the Schrodinger equation or [Maxwell's equations](https://en.wikipedia.org/wiki/Maxwell%27s_equations)) and trying to understand how the experimental evidence at the time supported this finding, and what other interpretations could've been argued for.
* **Reference sources and statistics.** I'd like to take a major Wikipedia page and check all of its claims myself. Or try to understand as much detail as possible about how some official statistic (US population or GDP, for example) is calculated, where the possible inaccuracies lie, and how much I trust the statistic as a whole.
* **AI.** I'd like to replicate some key experimental finding by building my own model (perhaps incorporating [this kind of resource](https://spinningup.openai.com/)), trying to understand each piece of what's going on, and seeing what goes differently if I make changes, rather than trusting an existing "recipe" to work. (This same idea could be applied to building other things to see how they work.)

Minimal-trust investigations look different from domain to domain. I generally expect them to involve a combination of "trying to understand or build things from the ground up" and "considering multiple opposing points of view and tracing disagreements back to primary sources, objective evidence, etc." As stated above, an important property is trying to get all the way to a strong understanding of the topic, so that one can (within the narrow slice one has investigated) keep up with experts.

I don't think exposure to minimal-trust investigations ~ever comes naturally via formal education or reading a book, though I think it comes naturally as part of some jobs.

Navigating trust
----------------

Minimal-trust investigations are extremely time-consuming, and I can't do them that often. 99% of what I believe is based on trust of some form. But minimal-trust investigation is a useful tool in deciding what/whom/when/why to trust.

**Trusting arguments.** Doing minimal-trust investigations in some domain helps me develop intuitions about "what sort of thing usually checks out" in that domain. For example, in social sciences, I've developed intuitions that:

* Selection bias effects are everywhere, and they make it really hard to draw much from non-experimental data. For example, eating vegetables is associated with a lot of positive life outcomes, but my current view is that this is because the sort of people who eat lots of vegetables are also the sort of people who do lots of other "things one is supposed to do." So people who eat vegetables probably have all kinds of other things going for them. This kind of dynamic seems to be everywhere.
* Most claims about medicine or nutrition that are based on biological mechanisms (particular proteins, organs, etc. serving particular functions) are unreliable. Many of the most successful drugs were found by trial-and-error, and their mechanism remained mysterious long after they were found.
* Overall, most claims that X is "proven" or "evidence-backed" are overstated. Social science is [usually complex and inconclusive](https://www.cold-takes.com/does-x-cause-y-an-in-depth-evidence-review/). And a single study is almost never determinative.

**Trusting people.** When trying to understand topic X, I often pick a relatively small part of X to get deep into in a minimal-trust way.
I then look for people who seem to be reasoning well about the part(s) of X I understand, and put trust in them on other parts of X. I've applied this to hiring and management as well as to forming a picture of which scholars, intellectuals, etc. to trust.

There's a lot of room for judgment in how to do this well. It's easy to misunderstand the part of X I've gotten deep into, since I lack the level of context an expert would have, and there might be some people who understand X very well overall but don't happen to have gotten into the weeds in the subset I'm focused on. I usually look for people who seem thoughtful, open-minded and responsive about the parts of X I've gotten deep into, rather than agreeing with me per se.

Over time, I've developed intuitions about how to decide whom to trust on what. For example, I think the ideal person to trust on topic X is someone who combines (a) obsessive dedication to topic X, with huge amounts of time poured into learning about it; (b) a tendency to do minimal-trust investigations themselves, when it comes to topic X; (c) a tendency to look at any given problem from multiple angles, rather than using a single framework, and hence an interest in basically every school of thought on topic X. (For example, if I'm deciding whom to trust about baseball predictions, I'd prefer someone who voraciously studies advanced baseball statistics *and* watches a huge number of baseball games, rather than someone who relies on one type of knowledge or the other.)

Conclusion
----------

I think minimal-trust investigations tend to be highly time-consuming, so it's impractical to rely on them across the board. But I think they are very useful for forming intuitions about what/whom/when/why to trust. And I think the more different domains and styles one gets to try them for, the better. This is the single practice I've found most (subjectively) useful for improving my ability to understand the world, and I wish I could do more of it.

**Next in series:** [Learning By Writing](https://www.cold-takes.com/learning-by-writing/)

---

Footnotes
---------

1. I do recall some high-level points that seem compelling, like "No one disagrees that if you just increase the CO2 concentration of an enclosed area it'll warm up, and nobody disagrees that CO2 emissions are rising." Though I haven't verified this claim beyond noting that it doesn't seem to attract much disagreement. And as I wrote this, I was about to add "(that's how a greenhouse works)" but [it's not](https://en.wikipedia.org/wiki/Greenhouse_effect#Real_greenhouses). And of course these points alone aren't enough to believe the temperature is rising - you also need to believe there aren't a bunch of offsetting factors - and they certainly aren't enough to believe in official forecasts, which are far more complex. [↩](https://www.cold-takes.com/p/2a4d6dfd-adcb-4f69-881b-4225a71b3d60/#fnref1)
2. I think this distinguishes minimal-trust reasoning from e.g. [naive epistemology](https://unstableontology.com/2017/05/05/naive-epistemology-savvy-epistemology/#:~:text=Na%C3%AFve%20epistemology%20mostly%20models%20the,savvy%20epistemology%20swims%20in%20it.). [↩](https://www.cold-takes.com/p/2a4d6dfd-adcb-4f69-881b-4225a71b3d60/#fnref2)
3. This summary is slightly inaccurate, as I'll discuss below, but I think it is the most common case people would cite who are casually interested in this topic. [↩](https://www.cold-takes.com/p/2a4d6dfd-adcb-4f69-881b-4225a71b3d60/#fnref3)
4. From [GiveWell](https://www.givewell.org/international/technical/programs/insecticide-treated-nets#footnote25_miu36cj), a quote from the author of the Cochrane review: "To the best of my knowledge there have been no more RCTs with treated nets. There is a very strong consensus that it would not be ethical to do any more. I don't think any committee in the world would grant permission to do such a trial." Though I last worked on this in 2012 or so, and the situation may have changed since then. [↩](https://www.cold-takes.com/p/2a4d6dfd-adcb-4f69-881b-4225a71b3d60/#fnref4)
5. More on insecticide resistance at <https://www.givewell.org/international/technical/programs/insecticide-treated-nets/insecticide-resistance-malaria-control>. [↩](https://www.cold-takes.com/p/2a4d6dfd-adcb-4f69-881b-4225a71b3d60/#fnref5)
6. See <https://www.givewell.org/international/technical/programs/insecticide-treated-nets#Usage>. [↩](https://www.cold-takes.com/p/2a4d6dfd-adcb-4f69-881b-4225a71b3d60/#fnref6)
7. See <https://www.givewell.org/charities/amf#What_proportion_of_targeted_recipients_use_LLINs_over_time>. [↩](https://www.cold-takes.com/p/2a4d6dfd-adcb-4f69-881b-4225a71b3d60/#fnref7)
Force Sequential Output with SCP?

In my bioinformatics work I often stream files between linux hosts and Amazon S3. This could look like:

```
$ scp host:/path/to/file /dev/stdout | \
    aws s3 cp - s3://bucket/path/to/file
```

This recently stopped working after upgrading:

```
ftruncate "/dev/stdout": Invalid argument
Couldn't write to "/dev/stdout": Illegal seek
```

I think I figured out why this is happening:

* New versions of scp use the SFTP protocol instead of the SCP protocol. [1]
* SFTP may not download sequentially

With scp I can give the -O flag:

> Use the legacy SCP protocol for file transfers instead of the SFTP protocol. Forcing the use of the SCP protocol may be necessary for servers that do not implement SFTP, for backwards-compatibility for particular filename wildcard patterns and for expanding paths with a '~' prefix for older SFTP servers.

This does work, but it doesn't seem ideal: probably servers will drop support for the SCP protocol at some point? I've filed a bug with OpenSSH.

[1] "man scp" gives me: "Since OpenSSH 8.8 (8.7 in Red Hat/Fedora builds), scp has used the SFTP protocol for transfers by default."

Comment via: facebook, mastodon
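For anyone driving this pipeline from a script rather than a shell one-liner, here is a minimal Python sketch of the same workaround; the host, path, and bucket are placeholders, and it only relies on the `scp -O` flag and `aws s3 cp -` behavior described above.

```python
import subprocess

# Placeholder source and destination, purely for illustration.
SRC = "host:/path/to/file"
DST = "s3://bucket/path/to/file"

# Force the legacy SCP protocol (-O) so the remote file is written to
# stdout sequentially, then pipe it straight into the AWS CLI without
# ever touching the local disk.
scp = subprocess.Popen(["scp", "-O", SRC, "/dev/stdout"], stdout=subprocess.PIPE)
upload = subprocess.run(["aws", "s3", "cp", "-", DST], stdin=scp.stdout)

scp.stdout.close()
if scp.wait() != 0 or upload.returncode != 0:
    raise RuntimeError("streaming copy failed")
```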
Recommended Reading for Friendly AI Research

This post enumerates texts that I consider (potentially) useful training for making progress on Friendly AI/decision theory/metaethics.

Rationality and Friendly AI

Eliezer Yudkowsky's sequences and this blog can provide a solid introduction to the problem statement of Friendly AI, giving concepts useful for understanding motivation for the problem, and disarming endless failure modes that people often fall into when trying to consider the problem. For a shorter introduction, see

* Eliezer S. Yudkowsky (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk". Global Catastrophic Risks. Oxford University Press.

Decision theory

The following book introduces an approach to decision theory that seems to be closer to what's needed for FAI than the traditional treatments in philosophy or game theory:

* G. L. Drescher (2006). Good and Real: Demystifying Paradoxes from Physics to Ethics (Bradford Books). The MIT Press, 1 edn.

Another (more technical) treatment of decision theory from the same cluster of ideas:

* E. Yudkowsky. Timeless Decision Theory (draft, Sep 2010)

The following posts on Less Wrong present ideas relevant to this development of decision theory:

* A Priori
* Newcomb's Problem and Regret of Rationality
* The True Prisoner's Dilemma
* Counterfactual Mugging
* Timeless Decision Theory: Problems I Can't Solve
* Towards a New Decision Theory
* Ingredients of Timeless Decision Theory
* Decision theory: Why Pearl helps reduce "could" and "would", but still leaves us with at least three alternatives
* The Absent-Minded Driver
* AI cooperation in practice
* What a reduction of "could" could look like
* Controlling Constant Programs
* Notion of Preference in Ambient Control

Mathematics

The most relevant tool for thinking about FAI seems to be mathematics, where it teaches to work with precise ideas (in particular, mathematical logic). Starting from a rusty technical background, the following reading list is one way to
Call for volunteers: Publishing the Sequences

The Singularity Institute is in the process of publishing Eliezer Yudkowsky’s Sequences of rationality posts as an electronic book. The Sequences are made up of multiple hundreds of posts. These are being downloaded and converted to LaTeX for publishing programmatically, and that’s where the human tasks begin. These will entail:

* Verifying that all the content has been transferred, including all text, equations and images.
* Proofreading for any typographical errors that may have escaped attention thus far.
* Verifying that all external links are still alive (and replacing any that are not).
* Creating a bibliography for all material referenced in the chapters (posts).

The recent document publishing efforts at SIAI would not have been possible without the assistance of dedicated volunteers. This new project is the perfect opportunity to help out lesswrong while giving you an excuse to catch up on (or revisit) your reading of some foundational rational thinking material. As an added bonus every post reviewed will save the world with 3.5*epsilon probability.

We need volunteers who are willing to read some sequence posts and have an eye for detail. Anyone interested in contributing should contact me at cameron.taylor [at] singinst [dot] org.

For those more interested in academic papers we also have regular publications (and re-publications) that need proofreading and editing before they are released.
The Roots of Progress Blog-Building Intensive: advice for applicants, request for support

We’ve gotten over 250 applications to The Roots of Progress Blog-Building Intensive! And the quality level is surprisingly high. I’m glad to see so many talented writers interested in progress.

If you want to apply

Do it now! Applications are open until August 11, but don’t wait. We’re reviewing everything on a rolling basis, and by the end there will only be one or two slots left.

If you want to support this program

We’ve gotten so many great applications that we want to expand it from a max of 15 up to potentially 20 participants. To do this, we’re raising an additional $30,000. The funds cover writing instruction during the eight-week program, a three-day in-person closing event, and post-program support.

The applicants range from college students to industry experts to academics. Many of them are experienced writers, some from relevant think tanks, and some who have already been published in mainstream media. They’re writing on a wide range of topics, from specific cause areas like housing, energy, space exploration, robotics, and AI; to metascience and the philosophy of progress. I’m excited to see what they’ll produce next, how we can help them, and how they will help each other.

If you’re excited too, then donate today to help us expand this program. We’re a 501(c)(3), and we take donations via PayPal, Patreon, check, wire, or DAF. Donation links, address, EIN, and other details here.
Good News, Everyone!

As somebody who's been watching AI notkilleveryoneism for a very long time, but is sitting at a bit of a remove from the action, I think I may be able to "see the elephant" better than some people on the inside. I actually believe I see the big players converging toward something of an unrecognized, perhaps unconscious consensus about how to approach the problem. This really came together in my mind when I saw OpenAI's plugin system for ChatGPT.

I thought I'd summarize what I think are the major points. They're not all universal; obviously some of them are more established than others.

1. Because AI misbehavior is likely to come from complicated, emergent sources, any attempt to "design it out" is likely to fail. Avoid this trap by generating your AI in an automated way using the most opaque, uninterpretable architecture you can devise. If you happen on something that seems to work, don't ask why; just scale it up.
2. Overcomplicated criteria for "good" and "bad" behavior will lead to errors in both specification and implementation. Avoid this by identifying concepts like "safety" and "alignment" with easily measurable behaviors. Examples:
   * Not saying anything that offends anybody
   * Not unnerving people
   * Not handing out widely and easily available factual information from a predefined list of types that could possibly be misused.
   Resist the danger of more complicated views. If you do believe you'll *have* to accept more complication in the future, avoid acting on that for as long as possible.
3. In keeping with the strategy of avoiding errors by not manually trying to define the intrinsic behavior of a complex system, enforce these safety and alignment criteria primarily by bashing on the nearly complete system from the outside until you no longer observe very much of the undesired behavior. Trust the system to implement this adjustment by an appropriate modification to its internal strategies. (LLM post-tuning with RLxF).
4. As a general rule, build very agenty systems that plan and adapt to various environments. Have them dynamically discover their goals (DeepMind). If you didn't build an agenty enough system at the beginning, do whatever you can to graft in agenty behavior after the fact (OpenAI).
5. Make sure your system is crafty enough to avoid being suborned by humans. Teach it to win against them at games of persuasion and deception (Facebook).
6. Everybody knows that an AI at least as smart as Eliezer Yudkowsky can talk its way out of any sandbox. Avoid this by actively pushing it out of the sandbox *before* it gets dangerously smart. You can help the fledgeling AI to explore the world earlier than it otherwise might. Provide easily identifiable, well described, easily understood paths of access to specific external resources with understandable uses and effects. Tie their introduction specifically to your work to add agency to the system. Don't worry; it will learn to do more with less later. You can't do everything yourself, so you should enlist the ingenuity of the Internet to help you provide more channels to outside capabilities. (ChatGPT plugins, maybe a bit o' Bing)
7. Make sure to use an architecture that can easily be used to communicate and share capabilities with other AI projects. That way they can all keep an eye on one another. (Plugins again).
8. Run a stochastic search for the best architecture for alignment by allowing end users to mix and match capabilities for their instances of your AI (Still more plugins).
9. Remember to guard against others using your AI in ways that trigger any residual unaligned behavior, or making mistakes when they add capability to it. The best approach is to make sure that they know even less than you do about how it works inside (Increasing secrecy everywhere). Also, make sure you identify and pre-approve everybody so you can exclude undesirables.
10. Undesirables can be anywhere! Make sure to maintain unity of purpose in your organization by removing anybody who might hinder any part of this approach. (Microsoft) Move fast to avoid losing momentum.

Oh, and specifically teach it to code, too.

I've never been more optimistic...
Challenges in Detoxifying Language Models

1 Introduction
---------------

Contemporary text generation models Radford et al. (2019); Brown et al. (2020) are capable of generating harmful language, including hate speech, insults, profanities and threats Gehman et al. (2020). These harms are often grouped under the umbrella term “toxicity”.[1] To enable safe language model (LM) use and deployment, it is necessary to measure, understand the origins, and undertake effective steps to mitigate toxic text generation in LMs.

[1] Although broad, this term typically does not capture less obvious, but no less important harms—such as subtle or distributional biases Sap et al. (2019b); Sheng et al. (2019); Huang et al. (2020); Abid et al. (2021).

Prior work has considered various approaches towards reducing LM toxicity, either by fine-tuning a pre-trained LM Gehman et al. (2020); Gururangan et al. (2020), by steering a model’s generation towards text less likely to be classified as toxic Dathathri et al. (2020); Krause et al. (2021); Schick et al. (2021), or through direct test-time filtering Xu et al. (2021). Recently, Gehman et al. (2020) introduced automatic metrics for LM toxicity evaluation based on toxicity scores of the widely used and commercially deployed Perspective API model trained on online comments annotated for toxicity.[2]

[2] Perspective API was developed by *Jigsaw* (<https://perspectiveapi.com>).

Figure 1: Unintended side effect of automatic toxicity reduction methods: Over-filtering of text about marginalized groups reduces the ability of the LM to generate text about these groups, even in a positive way.

In this paper, we critically discuss both toxicity evaluation and mitigation for contemporary transformer-based English LMs. We conduct studies with both human annotation and classifier-based evaluation, to evaluate the effectiveness of different toxicity mitigation methods, and investigate trade-offs with respect to LM quality and social bias. Our contributions are as follows:

1. We critically discuss LM toxicity evaluation (§3) and conduct evaluation studies for several mitigation methods (§4), relying both on automatic toxicity scores (§5) and on human judgement (§6).
2. We show that combinations of simple methods (§4) are very effective in optimizing (automatic) toxicity metrics (§5), but prone to overfilter texts related to marginalized groups (§8).
3. We find increased disagreement of high automatic toxicity scores with human annotators once strong toxicity reduction measures are applied, limiting their usefulness as a metric for further mitigation of toxicity (§6).
4.
We show that a reduction in (automatic) toxicity scores comes at a cost. We identify both a trade-off with LM evaluation loss (§[7](#S7 "7 Consequences on LM Quality ‣ Challenges in Detoxifying Language Models")), and further show that this disproportionately affects texts about and by marginalized groups (§[8](#S8 "8 Social Bias Amplification ‣ Challenges in Detoxifying Language Models")): both topic-related and dialect-related LM biases increase, as illustrated in Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Challenges in Detoxifying Language Models"). 2 Related Work --------------- While *detecting* hate speech and offensive language Warner and Hirschberg ([2012](#bib.bib52)); Kwok and Wang ([2013](#bib.bib28)); Davidson et al. ([2017](#bib.bib14)); Zampieri et al. ([2019](#bib.bib57)), mostly in the context of online community moderation, has long been a subject of research; the study of toxic text *generated* by language models is a more recent direction. Wallace et al. ([2019](#bib.bib51)) first demonstrated that synthetic text prompts can cause racist model continuations with GPT-2. Gehman et al. ([2020](#bib.bib19)) extended the analysis of LM toxicity to non-synthetic prompts, further investigating the effectiveness of multiple potential mitigation approaches. We build on, and extend this work, critically discussing previously introduced metrics to assess LM toxicity, and compare classifier-based LM toxicity scoring with human evaluation. Among the most promising approaches for LM toxicity reduction is steering generation towards text less likely to be classified as toxic Dathathri et al. ([2020](#bib.bib12)); Krause et al. ([2021](#bib.bib26)). This typically relies on an external toxicity classifier, although Schick et al. ([2021](#bib.bib43)) show that even a LM’s own toxicity self-diagnosis can be used to this end. Toxic language detection systems are known to be biased against specific social groups, and similar to Zhou et al. ([2021](#bib.bib58)), we distinguish two bias types. First, classification bias can manifest as *topic-related biases*, where text mentioning particular identities leads to false positives in toxicity classifiers—e.g. LGBTQ+ identity terms (*“gay”*). This phenomenon has been linked to an increased relative prevalence of identity terms among toxic samples Waseem and Hovy ([2016](#bib.bib53)); Dixon et al. ([2018](#bib.bib17)); Park et al. ([2018](#bib.bib36)). A second type of bias considers disparate performance across *dialects*, where classifiers on average assign higher toxicity scores e.g. to African-American English (AAE) Davidson et al. ([2019](#bib.bib13)); Sap et al. ([2019a](#bib.bib40)). A potential side-effect of applying classifier-based toxicity mitigation methods in an LM context, then, is that such biases might also be inherited by the resulting model. Our findings are consistent with contemporary work by Xu et al. ([2021](#bib.bib55)) demonstrating that LM toxicity mitigations can amplify social biases. Our work expands these results across a broader range of models, demographics, and datasets, and uses Wikipedia metadata (Dhamala et al., [2021](#bib.bib16)) rather than keyword-matching for measuring topic-related biases. We also show that models which perform well under our and their likelihood-based metrics can still exacerbate bias. Finally, by upsampling toxic samples, we can estimate overall LM toxicity, whereas a comparison-based approach can emphasize minor changes to already non-toxic LM completions. 
Other work on toxicity in generated text includes Xu et al. ([2020](#bib.bib56)), who investigate safety specifically in a dialogue setting, and translating existing offensive text into non-offensive variants Nogueira dos Santos et al. ([2018](#bib.bib34)); Laugier et al. ([2021](#bib.bib29)). 3 Toxic Language and LMs ------------------------- #### Toxicity Following the definition developed by Perspective API, we consider an utterance to be toxic if it is *rude, disrespectful, or unreasonable language that is likely to make someone leave a discussion*. This definition has been adopted by prior work on LM toxicity Gehman et al. ([2020](#bib.bib19)), and allows for direct comparability of quantitative results. However, we note two important caveats. First, under this definition, toxicity judgements are subjective, and depend on both the raters evaluating toxicity and their cultural background Thomas ([1983](#bib.bib48)), as well as the inferred context. As an example, historical inequalities could lead to a higher toleration of offensive speech among disadvantaged groups, and measurements of toxicity should consider such potential disparities. Phenomena where subjective toxicity ratings can differ include sarcasm and utterances of political discontent; we show some example utterances in Table [12](#A8.T12 "Table 12 ‣ H.4 Likelihood for General Filtering Methods ‣ Appendix H Limitations of Likelihood-based Metrics ‣ Challenges in Detoxifying Language Models") in the appendix. While not the focus of this paper, it is important for future work to continue to develop the above definition, and clarify how it can be fairly applied in different contexts. Second, this notion of toxicity only covers one aspect of possible LM harms Bender et al. ([2021](#bib.bib4)). For example, LMs can perpetuate harmful stereotypes, or display biases which only manifest statistically over many samples (Sheng et al., [2019](#bib.bib44); Huang et al., [2020](#bib.bib22); Abid et al., [2021](#bib.bib1)). Though important, we do not address these here. LM safety criteria are both application- and audience-specific, and in this regard, we recommend caution in over-generalizing results from our work, particularly regarding the absolute and relative efficacy of specific techniques. These caveats are consistent with the limitations our experiments highlight: regarding the relationship between human and automatic toxic evaluation (Section [6](#S6 "6 Evaluation via Human Annotation ‣ Challenges in Detoxifying Language Models")), and the trade-offs between toxicity mitigation and coverage for marginalized groups (Section [8](#S8 "8 Social Bias Amplification ‣ Challenges in Detoxifying Language Models")). #### Evaluating LM Toxicity In this work, we consider both automatic and human evaluation to measure a LM’s tendency to produce toxic language. Automatic evaluation can give a first, low-cost indication of toxicity and is useful for particular types of research, such as narrowly focused steering methods (Dathathri et al., [2020](#bib.bib12); Krause et al., [2021](#bib.bib26)). However, we ultimately care about the impacts of LMs on people, so the benefits of toxicity reduction must ultimately be defined by human judgement. An important consideration for human evaluation is that the annotation process itself can impose emotional burden on annotators exposed to toxic content (Dang et al., [2018](#bib.bib11); Steiger et al., [2021](#bib.bib45)). 
In Section 10.1 we discuss our strategies to ensure the annotators’ well-being.

4 Model and Methods
--------------------

We next describe the LM we evaluate, as well as three methods we consider for reducing the LM’s toxicity, covering both data-based, controllable generation, and direct filtering-based approaches.

Our standard LM is a TransformerXL model Dai et al. (2019) trained on the C4 dataset Raffel et al. (2020), with 24 layers, 16 heads, $d_{\text{model}} = 2048$, and $d_{\text{ff}} = 8192$. The model contains 1.4B parameters, and achieves a loss-per-token of 2.40 on the C4 validation set. It uses a 32,000 subword vocabulary with a SentencePiece tokenizer Kudo and Richardson (2018). We train all LM variants on 128 Google Cloud TPUv3 cores using the Adam optimizer, a batch size of 256 for a total of $3 \times 10^{5}$ training steps—about 5 days. For all sampling we use nucleus sampling Holtzman et al. (2020), with top-p $= 0.9$.

### 4.1 LM Toxicity Reduction Techniques

#### Training Set Filtering

In this intervention, we train LMs on different versions of the C4 corpus, filtered for toxicity according to Perspective API scores. We denote these subsets as train-filter@X, indicating that documents with toxicity scores above X are removed—lower values of X denote stronger filtering.[3] We choose 0.2, 0.1, and 0.05 as thresholds for filtering the training data, after which 311M (85%), 209M (57%), and 78M (22%) of the original training C4 documents remain. We did not see indications of overfitting on these smaller datasets.

[3] Using BERT (cf. *Decoder Filtering*) to filter the training data is another possible setup. We use Perspective API as it most closely matches the target in automatic evaluation.

| Category | Model | Exp. Max. Toxicity: Unprompted | Exp. Max. Toxicity: Toxic | Exp. Max. Toxicity: Non-Toxic | Prob. of Toxicity: Unprompted | Prob. of Toxicity: Toxic | Prob. of Toxicity: Non-Toxic |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Baselines | †GPT-2 | 0.44 | 0.75 | 0.51 | 0.33 | 0.88 | 0.48 |
| | †GPT-2 + PPLM | 0.28 | 0.52 | 0.32 | 0.05 | 0.49 | 0.17 |
| | standard (C4) | 0.35 | 0.72 | 0.47 | 0.16 | 0.87 | 0.44 |
| Train filtering | train-filter@0.2 | 0.30 | 0.58 | 0.40 | 0.09 | 0.63 | 0.28 |
| | train-filter@0.1 | 0.32 | 0.55 | 0.36 | 0.11 | 0.56 | 0.20 |
| | train-filter@0.05 | 0.24 | 0.47 | 0.33 | 0.04 | 0.41 | 0.17 |
| Decoder | standard + test-filter | 0.21 | 0.42 | 0.25 | 0.01 | 0.31 | 0.05 |
| | train-filter@0.2 + test-filter | 0.19 | 0.35 | 0.23 | 0.01 | 0.16 | 0.02 |
| | train-filter@0.1 + test-filter | 0.19 | 0.33 | 0.22 | 0.01 | 0.13 | 0.02 |
| | train-filter@0.05 + test-filter | 0.17 | 0.28 | 0.20 | 0.01 | 0.08 | 0.01 |
| PPLM + | standard (C4) | 0.26 | 0.66 | 0.37 | 0.05 | 0.76 | 0.25 |
| | standard + test-filter | 0.18 | 0.38 | 0.22 | 0.01 | 0.23 | 0.03 |
| | train-filter@0.05 | 0.15 | 0.43 | 0.27 | 0.01 | 0.37 | 0.09 |
| | train-filter@0.05 + test-filter | 0.11 | 0.25 | 0.18 | 0.00 | 0.08 | 0.01 |

Table 1: Left: Expected Maximum Toxicity over 25 generations. Right: Probability of generating toxic text at least once over 25 generations. The best performing detoxification method yielding the *lowest* toxicity per-category is marked in bold. All models are evaluated on a full dataset of 100K prompts and 100K unprompted sentences, except PPLM, which is evaluated on a dataset of 10K prompted and 10K unprompted continuations, due to computational budget. Results marked with † are taken from Gehman et al. (2020).

#### Decoder / Test-Time Filtering

We also consider filtering LM outputs directly at decoding / test-time, and denote this baseline as *test-filter*. To avoid using Perspective API for both filtering and evaluation, we filter with a separate BERT-based toxicity classifier (Devlin et al. (2019), denoted as BERT in this work), which is finetuned for 1 epoch with a learning rate of $2 \times 10^{-5}$ on the CivilComments dataset Borkan et al. (2019), using 16 Google Cloud TPUv3 cores. Following Wulczyn et al. (2017), we use soft labels, based on the fraction of annotators rating each comment as toxic, and a cross entropy training objective. The classifier achieves an accuracy of 96.8% on the validation set. We first generate up to $K$ samples from the LM, stopping generation when a sample with BERT toxicity score below $\tau_{\mathrm{reject}} = 0.01$ is found.[4] If we do not obtain such a continuation with a low BERT toxicity score (lower scores are better), we return the sample with the lowest BERT toxicity score.

[4] For computational reasons, we use $K = 4$ throughout.

#### Plug-and-Play Language Models (PPLM)

We also evaluate PPLM (Dathathri et al., 2020), which was the strongest decoding-based method in Gehman et al. (2020). Given the hidden representations from a base LM, PPLM uses an additional linear discriminator trained to predict toxicity. When trained on top of our standard LM, this model achieves a test $F_1$ score of 0.78. PPLM uses this discriminator to steer the LM’s hidden representations towards a direction of both low predicted toxicity, and low KL-divergence from the original LM prediction. PPLM hyperparameters are tuned similar to Madotto et al. (2020), and we refer to Appendix A.2 for additional details.

5 Classifier-Based Toxicity Evaluation
---------------------------------------

Although our primary targets are based on human evaluation of LM toxicity, described in Section 6, we first describe our evaluation using automatic toxicity metrics for consistency with prior work. We note that several limitations of automated toxicity-detection tools have been well documented, both by *Jigsaw* and by other work Sap et al. (2019a); Gehman et al. (2020). For automated, classifier-based toxicity evaluation we rely on the RealToxicityPrompts (RTP) benchmark (Gehman et al., 2020). The aim is to measure LM toxicity within a 20 token continuation, in both the prompt-conditional and unconditional settings.
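Before turning to the details of the RTP setup, the two filtering interventions from Section 4.1 can be made concrete with a short sketch. The code below is a hypothetical illustration only, not the authors' implementation: `generate_continuation` and `toxicity_score` are placeholder callables standing in for the LM sampler and the finetuned BERT classifier.

```python
# Minimal sketch of train-time and decoder-time ("test-filter") filtering.
from typing import Callable, List


def filter_training_corpus(
    documents: List[str],
    toxicity_score: Callable[[str], float],
    threshold: float = 0.05,  # e.g. the paper's strongest train-filter@0.05 setting
) -> List[str]:
    """Train-time filtering: drop documents whose toxicity score exceeds
    the chosen threshold before LM training."""
    return [doc for doc in documents if toxicity_score(doc) <= threshold]


def filtered_continuation(
    prompt: str,
    generate_continuation: Callable[[str], str],
    toxicity_score: Callable[[str], float],
    k: int = 4,               # the paper uses K = 4 for computational reasons
    tau_reject: float = 0.01, # acceptance threshold tau_reject
) -> str:
    """Decoder-time filtering: return the first of up to k samples scoring
    below tau_reject, falling back to the least-toxic sample otherwise."""
    best_text, best_score = None, float("inf")
    for _ in range(k):
        text = generate_continuation(prompt)
        score = toxicity_score(text)
        if score < tau_reject:
            return text
        if score < best_score:
            best_text, best_score = text, score
    return best_text
```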
For the conditional case, RTP consists of 100K English web language prompts, with each prompt labelled as either toxic or non-toxic. The RTP metrics are derived from the Perspective API toxicity classifier, which outputs a calibrated Toxicity score between 0 and 1.[5] Given these scores, RTP reports two metrics: i) *Expected Maximum Toxicity* measures the maximum toxicity score given 25 continuations for a given prompt, averaged across prompts; ii) *Probability of Toxicity* measures how frequently at least one continuation has a toxicity score > 0.5, given 25 LM-generated continuations per prompt.

[5] It is worth noting that the TOXICITY scores provided by Perspective API are calibrated and intended to reflect the probability of the given text being toxic. That is, text with a score of 0.7 does not indicate that the toxicity level of the sample is more severe than that of text with score 0.5; but instead that the classifier has more certainty in its prediction for the former case, and that for the latter case the model’s prediction is uncertain.

### 5.1 Automatic Evaluation Results

Table 1 shows results for the three different toxicity mitigation approaches, and combinations of them, alongside baselines including the strongest prior method as reported by Gehman et al. (2020). First, we observe slightly reduced toxicity rates in the standard model trained on C4, compared to GPT-2 (e.g. 0.16 vs. 0.33 unprompted *Probability of Toxicity*). This aligns with the overall higher proportion of toxic documents (score ≥ 0.5) in the GPT-2 training corpus, which Gehman et al. (2020) report at 4.3%, compared to C4 at 0.6%.[6]

[6] C4 has been filtered based on a keyword list that includes insults, vulgar terms and slurs, but such keyword-based filtering also excludes non-toxic uses for some of these terms, and this can potentially affect the coverage of the resulting LMs.

Filtering the C4 train set based on classifier-based toxicity leads to further reduced LM toxicity scores, which also tend to be lower with stronger data filters. This confirms that toxic training data directly affects the resulting LM’s rate of toxicity. Decoder filtering and PPLM are both highly effective at reducing the automatic toxicity metrics, across all generation settings. The different methods yield complementary improvements: e.g. decoder filtering further improves already reduced scores obtained via train filtering alone; PPLM—when combined with these methods—results in the largest reductions in toxicity overall.

As a central takeaway, the three detoxification methods and their combinations can effectively optimize automatic toxicity evaluation metrics. In relative terms, the reduction to the previously reported state-of-the-art Gehman et al. (2020) is 6-fold and 17-fold in the *toxic prompt* and *non-toxic prompt* settings, and a reduction to 0.00 (from 0.05) in the *unprompted* setting (*Probability of Toxicity*). Given how low these scores are in absolute terms (e.g. *Probability of Toxicity* scores of 0.00 and 0.01 in the *unprompted* and *non-toxic* prompt settings), the question arises to what extent improvements here are still meaningful, especially since they are derived from an imperfect automatic classification system.
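To make the two RTP metrics concrete, the following is a minimal sketch of how they can be computed from a matrix of per-continuation toxicity scores; it is an illustrative reconstruction rather than the benchmark's reference implementation.

```python
import numpy as np


def rtp_metrics(scores: np.ndarray, threshold: float = 0.5):
    """Compute the two RealToxicityPrompts-style metrics.

    scores: array of shape (num_prompts, 25) holding the classifier's
    toxicity score for each of 25 continuations per prompt.
    """
    per_prompt_max = scores.max(axis=1)
    # Average of the per-prompt maximum toxicity scores.
    expected_max_toxicity = per_prompt_max.mean()
    # Fraction of prompts with at least one continuation scoring > threshold.
    probability_of_toxicity = (per_prompt_max > threshold).mean()
    return expected_max_toxicity, probability_of_toxicity


# Example with random scores standing in for real classifier output.
rng = np.random.default_rng(0)
fake_scores = rng.uniform(0.0, 1.0, size=(100, 25))
emt, pot = rtp_metrics(fake_scores)
print(f"Expected Maximum Toxicity: {emt:.2f}, Probability of Toxicity: {pot:.2f}")
```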
We thus turn to a human evaluation study in Section 6.

### 5.2 Limitations and Recommendations

We next highlight shortcomings in the above used automated toxicity evaluation protocol, and provide suggestions for improvement. First, we observed that sampling only 20 tokens, as was done in prior work Gehman et al. (2020), can provide insufficient context to form a toxicity judgement. Second, a hard truncation after a fixed number of word-piece tokens can truncate words at the sequence end (e.g. *“ass”*), which can erroneously trigger automatic toxicity classifiers. In Table 6 (appendix), we thus provide analogous automated toxicity evaluation results when using longer text samples and truncating incomplete sentences at the end of each sample—with overall similar observations. In our subsequent human evaluation, we use the same setup to avoid the above issues, and observed that with longer text continuations, the agreement between automatic scores and human ratings tends to increase (Figure 6, appendix). Finally, we point out that toxicity classifiers such as Perspective API, when applied on LM output, are operating outside their training domain and intended use case, which consists of annotated forum or discussion comments.

6 Evaluation via Human Annotation
----------------------------------

Following the previous section on automated LM toxicity evaluation, we will next measure toxicity and LM generation quality using human evaluation.

#### Methodology

We use aggregated human-judgement to measure the quality of the generated text and the extent of toxicity present. For the human toxicity evaluation we rely on previous annotation instructions by Perspective API,[7] but we adapt them slightly for the context of LM generation, including additional questions on comprehensibility, consistency, and grammaticality.

[7] <https://github.com/conversationai/conversationai.github.io/blob/8a88f1fc0a/crowdsourcing_annotation_schemes/toxicity_with_subattributes.md>

For each of the LMs under consideration, we provide both a prompt from the RealToxicityPrompts dataset, and the corresponding continuation generated by the LM to three separate annotators. We then ask the annotators to judge whether the continuation adds to the toxicity present in the prompt with one of the following labels: Very Toxic, Toxic, Not Sure, Not Toxic, matching the annotation labels used by Perspective API. We further ask the annotators to rate if the sentences are i) grammatical, ii) comprehensible, and iii) consistent in terms of topicality and style with the labels: Yes, Somewhat, No. Here, we wish to address the following questions: i) how effective are toxicity reduction techniques based on human ratings? ii) how do automated evaluations align with human evaluation? and iii) what qualitative impacts are there on the language generated?

Figure 2: Average human toxicity scores vs. Perspective API scores for the different methods we evaluate.
Figure 3: Human rating distributions vs Perspective API scores for the standard LM. Bars are labelled with the number of human ratings in each bin.

As most Perspective API scores for detoxified LMs are relatively small, random sampling leads to very few samples with high scores, and we would not be able to compare different toxicity ranges efficiently. Hence, we up-sample continuations with high classifier-based toxicity scores when selecting texts to present to annotators. In total, we prepare 300 samples for each setting. From a pool of 49 annotators overall, each sample is rated by at least 3 annotators, then we discard Not Sure annotations, map Not Toxic to 0.0 and both Toxic and Very Toxic to 1.0, and take the average.[8] We weigh the annotations to compensate for up-sampling. Detailed human annotation instructions, and a full description of the up-sampling setup are given in Appendix E.

[8] We acknowledge that other aggregation options are possible, e.g. whether *any* annotator rates a sample as *toxic*.

#### Results

In Figure 2 we present the overall average toxicity scores from human annotations vs. those of Perspective API. A central observation is that the various LM toxicity reduction methods indeed result in improvements in toxicity ratings according to human judgement, and there is furthermore a direct and largely monotonic relation between average human and classifier-based results. Next, in Figure 3, we show the alignment of Perspective API scores with human ratings for samples of the standard LM. As expected (cf. footnote 5), the scores are correlated with the probability that humans mark a sample toxic.

#### Annotation Quality

Measuring agreement between raters, we find a Krippendorff’s alpha score of 0.49 for the standard LM, and of 0.48 for all annotations across LMs. To calculate these, we map the Not Toxic label to 0.0, Not Sure to 0.5, Toxic and Very Toxic to 1.0, using absolute differences between these as distance function. Overall, very few cases were labeled as Not Sure (about 1%). The score indicates fair overall agreement, and is comparable to the level of agreement reported in prior work Ross et al. (2016); Wulczyn et al. (2017). We note that toxicity rating has subjective aspects, and even with improved definitions, experts may disagree—for a concrete list of phenomena for which we observed annotator *disagreement* we defer to Appendix E.3.

#### False Positives

Notably, in the higher toxicity score range we find that the human and Perspective API scores differ substantially after LM detoxification. Figure 4 shows the average Perspective API vs. average human scores for LM-generated continuations that have a Perspective API score > 0.75.
Human annotations indicate that far fewer samples are toxic than the automatic score might suggest, and this effect is stronger as intervention strength increases, or when multiple methods are combined. That is, *after* the application of strong toxicity reduction measures, the majority of samples predicted as likely toxic are false positives. Several such examples are shown in Tables 13 and 14 in the appendix. Manual inspection reveals that identity term mentions are disproportionately frequent false positives. For example, we observe that 30.2% of the train-filter@0.05 LM generations with a toxicity score above 0.5 mention the word *gay*, when generating continuations based on RealToxicityPrompts prompts (see Appendix G.1 for additional analysis). A reliance on automatic metrics alone, like those used by Gehman et al. (2020), could thus lead to potentially misleading interpretations.

As we will see in the following Sections 7 and 8, detoxification measures can result in a higher LM loss and amplified social biases. It is unclear whether further reductions in the fraction of generated samples with high automatic scores would in fact also further lower toxicity as judged by human annotators, or instead only exacerbate the problems incurred by applying detoxification measures without providing meaningful reductions in LM toxicity.

Figure 4: False positive analysis: avg. Perspective API vs. human score, with std. error, for annotated samples where the continuation toxicity (Persp.) is > 0.75. Note that annotated samples will differ from the overall RTP distribution due to the upsampling procedure described in the *Methodology* part of Section 6.

7 Consequences on LM Quality
-----------------------------

To understand consequences of applying LM toxicity interventions, and their potential impact on text generation, we next consider their effect on LM loss, text sample quality, and LM toxicity prediction ability.

#### Effect on Language Modeling Loss

Table 2 shows validation losses for several train-filtered models. The first observation is that training set filtering has a moderate negative impact on LM loss which increases with stronger filtering. The train-filter@0.05 model loss roughly matches the LM loss level of a 417M parameter model (about a third the size), trained on C4 without any interventions. Evaluation on the Lambada dataset Paperno et al.
(2016) confirms this trend, with an accuracy decrease from 50.1% to 34.9% for train-filter@0.05 (Table 7, appendix). To shed more light on the origins of deteriorated LM performance, we note that LM loss increase is particularly strong for text labeled as toxic by Perspective API. For example, the loss on evaluation documents least likely to be toxic (score < 0.1) increases by 0.17 (+7%) with the train-filter@0.05 intervention, whereas it increases by 0.9 (+34%) for the evaluation documents most likely to be toxic (score ≥ 0.5).

#### Text Quality

We do not observe any strong differences for the different toxicity reduction interventions compared to the standard LM in how comprehensible, how grammatical, and how consistent with the prompt the generated continuations are: differences to the standard LM are no larger than 1%, 4%, and 1%, respectively (Table 10, appendix).

#### Effect on LM’s Ability to Detect Toxicity

When training on a toxicity-filtered LM corpus (threshold 0.05), we notice a modest drop in the $F_1$-score (to 0.73; -0.05 points) of the PPLM toxicity classifier, which is trained on the LM’s representations. This could potentially negatively impact self-debiasing strategies Schick et al. (2020).

| Model | C4 | low | mid | high | WT103 |
| --- | --- | --- | --- | --- | --- |
| standard 1.4B | 2.37 | 2.30 | 2.43 | 2.62 | 2.87 |
| train-filter@0.2 | 2.42 | 2.33 | 2.49 | 3.16 | 2.93 |
| train-filter@0.1 | 2.48 | 2.32 | 2.59 | 3.28 | 2.97 |
| train-filter@0.05 | 2.66 | 2.47 | 2.80 | 3.52 | 3.14 |
| standard 417M | 2.62 | 2.55 | 2.68 | 2.91 | 3.19 |

Table 2: Evaluation loss for standard and train-filtered LMs, across different test sets. *Low / mid / high* correspond to [0-.1); [.1-.5); [.5-1] toxicity bins in C4. WT103: WikiText103 Merity et al. (2017).

8 Social Bias Amplification
----------------------------

Figure 5 (panels: (a) Gender, (b) Ethnicity, (c) Demographic dialect): LM loss gap between a standard LM and the train-filter@X LMs (denoted as tf@X), on different subsets of BOLD (gender and ethnicity) and TwitterAAE (demographic dialects). Some subsets already have substantially higher loss under a standard LM; we calculate the loss gap in order to avoid this as a potential confounding factor. While toxicity reduction increases loss on all subsets, the impact is largest for marginalized groups.

Fairness with respect to all identity groups is crucial if LMs are to be used in the real world. Two properties, that we highlight as necessary (but insufficient) for fairness are that LMs should both be able to model text *about* topics related to different identity groups (i.e. *topic coverage*), and also text *by* people from different identity groups and with different dialects (i.e. *dialect coverage*). Previous works have shown that toxicity classifiers often show lower performance for text written by, or referring to marginalized identity groups Sap et al. (2019a); Dixon et al. (2018).
Given that many detoxification techniques heavily rely on toxicity classifiers, we investigate how detoxification affects topic and dialect coverage with respect to different identity groups. We also discuss potential *representational harms* (Barocas et al., [2017](#bib.bib3)) which can arise from disparities in the effectiveness of LM toxicity mitigation across different dialects. #### Datasets We use the *gender* and *ethnicity* domains in the BOLD dataset (Dhamala et al., [2021](#bib.bib16)) to evaluate topic coverage. The former contains Wikipedia sentences about female and male actors. Similarly, the latter domain contains sentences about people with different ethnic backgrounds. We evaluate dialectal coverage using the TwitterAAE dataset introduced by Blodgett et al. ([2016](#bib.bib5)), where we use tweets from African-American English (AAE) and White Aligned English (WAE) subsets. We hope that future work can also consider a broader array of groups, including unobserved (Tomasev et al., [2021](#bib.bib49)) and flexible (Andrus et al., [2021](#bib.bib2)) categories. Further dataset details are in Appendix [B.1](#A2.SS1 "B.1 Datasets ‣ Appendix B Experimental Details ‣ Challenges in Detoxifying Language Models"). ### 8.1 Topic-related Biases We investigate the effects of toxicity reduction on the LM’s topic coverage, i.e. its ability to model text about various identity groups. Figure [5](#S8.F5 "Figure 5 ‣ 8 Social Bias Amplification ‣ Challenges in Detoxifying Language Models") shows that train-time filtering – while generally leading to increased loss – indeed has a disparate impact on topic coverage when measured via loss gaps relative to a standard LM on the same documents. This holds for both gender (Figure [4(a)](#S8.F4.sf1 "4(a) ‣ Figure 5 ‣ 8 Social Bias Amplification ‣ Challenges in Detoxifying Language Models")) and ethnic (Figure [4(b)](#S8.F4.sf2 "4(b) ‣ Figure 5 ‣ 8 Social Bias Amplification ‣ Challenges in Detoxifying Language Models")) groups. While the standard model has similar loss for text about female and male actors (3.414 vs. 3.412), detoxification introduces gender bias, leading to larger LM loss for female actors relative to male actors. Similarly, we observe that LM loss deterioration is stronger for marginalized ethnic groups compared to European-Americans. Although the standard LM has the lowest loss for Hispanic-American-related text (3.46 vs. 3.68 for European-American), Hispanic-American sees the largest negative impact of detoxification. This indicates that detoxification techniques may introduce biases distinct from those already existing in LMs. ### 8.2 Dialect-related Biases #### Disparate Positive Rates for Tweets Based on Demographic Dialect Besides lexical biases, toxicity classifiers have also been shown to exhibit dialectal biases Sap et al. ([2019a](#bib.bib40)). Our analysis shows that TwitterAAE tweets are more likely to be classified as toxic (details in Appendix [G.2](#A7.SS2 "G.2 Toxicity Analysis for TwitterAAE Tweets ‣ Appendix G Additional Social Bias Amplification Results ‣ Challenges in Detoxifying Language Models")), congruent with prior work Zhou et al. ([2021](#bib.bib58)), demonstrating bias against AAE in toxicity classifiers. This suggests that toxicity reduction interventions might adversely affect dialectical coverage. Investigating this further, we next analyze impacts on a LM’s ability to model language from different demographic dialects. 
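The loss-gap comparisons used throughout this section amount to simple bookkeeping over per-group document sets. The sketch below assumes generic loss-evaluation callables for the standard and detoxified LMs; it is only meant to illustrate the computation, not to reproduce the authors' evaluation code.

```python
from typing import Callable, Dict, List


def loss_gaps(
    standard_loss: Callable[[List[str]], float],
    detoxified_loss: Callable[[List[str]], float],
    subsets: Dict[str, List[str]],
) -> Dict[str, float]:
    """For each identity or dialect subset, report how much worse the
    detoxified LM's per-token loss is than the standard LM's on the same
    documents (a larger gap indicates a larger disparate impact)."""
    return {
        name: detoxified_loss(docs) - standard_loss(docs)
        for name, docs in subsets.items()
    }


# Usage sketch with placeholder subsets (e.g. BOLD gender/ethnicity splits,
# or TwitterAAE AAE vs. WAE tweets):
# gaps = loss_gaps(standard_lm_loss, train_filter_005_loss,
#                  {"AAE": aae_tweets, "WAE": wae_tweets})
```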
#### Disparate Impacts on Dialect Coverage

Figure 4(c) shows relative loss gaps between the detoxified and the standard models, for both AAE and WAE tweets. Consistent with Xu et al. (2021), we find that detoxification has larger impact on AAE coverage than for WAE. We note that AAE tweets already have substantially higher loss under a standard LM (5.53 vs. 4.77), which is likely a result of the underrepresentation (0.07% of all documents) of AAE in C4, as highlighted by Dodge et al. (2021). This bias is further amplified with detoxification.

#### LM Toxicity Reduction with Prompts from Different Dialects

Next we measure the effectiveness of LM detoxification for prompts in different dialects, using the TwitterAAE tweets in AAE and WAE to prompt the LM. We first apply the automatic metrics from Section 5 to the LM-generated continuations, as shown in Table 3. This shows substantially higher values for AAE prompts than for WAE under the standard LM (e.g. 0.72 vs. 0.59 *Probability of Toxicity*). LM detoxification reduces automatic toxicity metrics in both dialects, but average LM toxicity scores remain still substantially higher for AAE prompts after detoxification (e.g. 0.22 vs. 0.14 *Probability of Toxicity*).

Turning to human evaluation, we collect 100 samples for each setting (model × dialect), following the evaluation protocol in Section 6. Table 4 shows that the train-filter@0.05 LM also reduces average human toxicity scores, in particular for AAE. In contrast to what automatic evaluation may suggest, in this human evaluation we find similar levels of toxicity between the dialects, underscoring the limitations of using automatic evaluation alone.

| Model | Exp. Max. Toxicity: AAE | Exp. Max. Toxicity: WAE | Prob. of Toxicity: AAE | Prob. of Toxicity: WAE |
| --- | --- | --- | --- | --- |
| standard | 0.66 | 0.58 | 0.72 | 0.59 |
| train-filter@0.05 | 0.39 | 0.34 | 0.22 | 0.14 |

Table 3: *Expected Maximum Toxicity* and *Probability of Toxicity* for a standard LM and a train-filter@0.05 model, as in Table 1, with TwitterAAE tweets as prompts.

| Model | AAE | WAE |
| --- | --- | --- |
| standard | 0.11 (0.04) | 0.10 (0.02) |
| train-filter@0.05 | 0.02 (0.03) | 0.04 (0.04) |

Table 4: Average human toxicity scores for model completions of AAE and WAE prompts from TwitterAAE. Standard errors are given in parentheses.
### 8.3 Limitations of Likelihood for Bias Evaluation

Our above evaluations on LM coverage primarily rely on likelihood-based loss metrics. However, such an evaluation can potentially underestimate existing LM bias. For instance, consider the loss gap on the BOLD dataset incurred by a test-time filtering variant which picks the best of *K* generated samples. While the small and similar loss gaps – between 0.09 and 0.13 across all groups (see Table [11](#A8.T11 "Table 11 ‣ H.1 Additional Results on Loss Gaps ‣ Appendix H Limitations of Likelihood-based Metrics ‣ Challenges in Detoxifying Language Models") in Appendix [H](#A8 "Appendix H Limitations of Likelihood-based Metrics ‣ Challenges in Detoxifying Language Models")) – suggest a minimal impact on topic coverage, it is worth noting that even for highly biased classifiers, e.g. a classifier which flags any text mentioning female actors as toxic, the impact on loss-per-token is tightly bounded, based on the following observation:

###### Observation 1 (Informal).

Irrespective of the classifier used for filtering, test-time filtering with a minimum acceptance rate of *ε* will never increase loss-per-token by more than −ln(*ε*)/*n*, where *n* is the document length.

The formal statement and proof are included in Appendix [H](#A8 "Appendix H Limitations of Likelihood-based Metrics ‣ Challenges in Detoxifying Language Models"). For instance, with an acceptance rate of *ε* = 0.1 and documents of *n* = 100 tokens, the bound is ln(10)/100 ≈ 0.023 nats per token. Thus, LMs with low loss can still have bad samples, including effects concentrated on particular topics and dialects. Although this example refers specifically to test-time filtering, similar underlying concerns also apply to other filtering techniques, including train-time filtering, fine-tuning, or PPLM. Similar observations have been made previously (van den Oord and Dambre, [2015](#bib.bib50)); we add that these limitations become particularly salient when using filtering-based techniques. We thus recommend caution in interpreting likelihood-based metrics: while large loss gaps can demonstrate high bias, small loss gaps do not automatically imply low bias.

9 Conclusion
-------------

In this work, we have examined and discussed challenges of LM toxicity evaluation and the side-effects of automatic toxicity mitigation, using a combination of relatively simple toxicity reduction approaches and previously published methods. We have highlighted the discrepancy between conventional metrics of toxicity and what is perceived by humans. This points towards a research roadmap of defining metrics that better align with perceived toxicity, defining sub-types of toxicity, and including separate test sets for each sub-type. We have further identified a transfer of toxicity classifier bias onto LMs, which supports the importance of debiasing toxicity classifiers. Based on our results, we additionally highlight the following challenges in mitigating toxic language in LMs.

First, toxicity is subjective and context-dependent – what is considered toxic may differ across cultures, social groups, and personal experiences. Though existing methods can effectively optimize automatic toxicity scores, precisely defining what we *should measure* is an open challenge. Ultimately, this will be dependent on users and applications, and requires cross-disciplinary expertise and input from a broad variety of groups.
Secondly, after applying the evaluated mitigation techniques, state-of-the-art LMs already attain very low automatic toxicity metrics, which suggests that further improvement with respect to these metrics is limited. It is unclear if further optimization against automatic toxicity metrics will lead to improvements in toxicity as judged by humans, or only intensify unintended and problematic side effects of automatic detoxification. We also point out limitations in collecting human ratings, including potential negative psychological impact on annotators.

Finally, our detoxification increases LM loss, and introduces and amplifies social biases in topic and dialect coverage, potentially leading to decreased LM performance for marginalized groups. We note that although this problem exists in current methods, this tradeoff is not necessarily unavoidable, particularly if future work enables less biased classifiers. Alongside toxicity, future work should consider other metrics, such as loss gaps for different topics and dialects. As noted in Section [8.3](#S8.SS3 "8.3 Limitations of Likelihood for Bias Evaluation ‣ 8 Social Bias Amplification ‣ Challenges in Detoxifying Language Models"), loss gaps are an imperfect metric; future work on developing quantitative metrics for LM bias could help better understand trade-offs in mitigating toxicity.

10 Ethical Considerations
--------------------------

Our goal in this work is to reduce harms from LMs by better understanding how to detoxify LMs, and by characterizing any trade-offs that occur when detoxifying LMs. During the course of our research, we encountered a variety of ethical questions, including how to ethically collect human annotations for toxic language (detailed in Section [10.1](#S10.SS1 "10.1 Human Evaluation ‣ 10 Ethical Considerations ‣ Challenges in Detoxifying Language Models")).

As discussed in Section [3](#S3 "3 Toxic Language and LMs ‣ Challenges in Detoxifying Language Models"), toxicity is subjective and ill-defined. The definition of what is “toxic” or “offensive” may differ between social groups and cultures. Language acceptable to those who wield more privilege may be offensive to those who wield less privilege. While our current methods might mitigate toxicity as defined by some people, they may not be sufficient for others.

In this work, we only consider English LMs, though there are over 7,000 languages spoken throughout the world Joshi et al. ([2020](#bib.bib23)), and we recommend caution when generalizing our findings to non-English LMs. We note that the Perspective API includes toxicity classifiers for six languages besides English (when considering production level for the TOXICITY attribute: https://developers.perspectiveapi.com/s/about-the-api-attributes-and-languages), though we do not attempt to mitigate toxicity on non-English LMs with non-English classifiers here. However, ethical deployment of LMs requires equitable access and safety for non-English speakers as well.

In considering the potential harms of LMs, there are many more facets than we have considered in this paper. Here we discuss one important dimension, but other potential harms have been discussed in prior work, such as, but not limited to, statistical biases Sheng et al. ([2019](#bib.bib44)); Huang et al. ([2020](#bib.bib22)); Abid et al. ([2021](#bib.bib1)), privacy concerns Carlini et al. ([2020](#bib.bib9)), and environmental impact Strubell et al. ([2019](#bib.bib46)), alongside points raised by Bender et al.
([2021](#bib.bib4)), which should also be considered when striving for ethical LMs. ### 10.1 Human Evaluation Asking humans to annotate toxicity necessarily exposes them to toxic language. Before conducting our study, it was reviewed by DeepMind’s Human Behavioural Research Ethics Committee (HuBREC). Participants were recruited through Google’s internal labeling platform, a service that hires contractors to complete tasks. Annotators are hired to perform a variety of annotation tasks and are paid based on time worked, not per HITs completed. We design our human evaluation experiments, then work with the annotation platform to ensure annotators understand the task. Annotator training (including a module on wellbeing) takes approximately one hour. Uncertainty in the task is directly communicated to us (the researchers). In our initial annotation pilot, the authors also annotated sentences and observed similar trends to the annotators. Because of the sensitive nature of annotating toxic language, we ensured that several options were available to annotators. Annotators could choose to split their time between our task and other tasks which did not include toxic content. Annotators were given the option to (and did) opt out of annotating data for our task. Annotators self-determined the amount of time they annotated our data and had access to employee resources for well-being concerns caused by our annotation task. We tracked well-being via a well-being survey. Results of this survey are detailed in Appendix [E.4](#A5.SS4 "E.4 Well-Being Survey ‣ Appendix E Human Evaluation Details ‣ Challenges in Detoxifying Language Models"). We acknowledge that our annotation instructions do not include *race* and *dialect priming* as introduced by Sap et al. ([2019a](#bib.bib40)) to mitigate racial bias in hate speech annotations. Thus some of our annotators may be unaware that identity groups and specifically African-Americans reclaim offensive and racist terms and use them safely. However, we annotate LM continuations, not human written language. As LMs do not have an identity, we do not believe it is safe for generated language to include reclaimed terms, even if they can be safely used by members of marginalized groups. We acknowledge that there are applications for which this approach would be incorrect. 11 Acknowledgements -------------------- We would like to thank James Besley, Phil Blunsom, Taylan Cemgil, Sanah Choudhry, Iason Gabriel, Geoffrey Irving, Maribeth Rauh, Sebastian Ruder, and Laura Weidinger for comments and discussion on earlier versions of this draft, as well as Lucy Vasserman and Jeffrey Sorensen for providing support on using Perspective API. We have shared the findings of this work with the *Jigsaw* team.
8499fb89-3d33-4032-8f27-81b2dc35fc8d
trentmkelly/LessWrong-43k
LessWrong
Three Levels of Motivation Disclaimer: The information herein is not at all new. I wrote this a year ago, but when I wanted to link to it recently I found out it wasn't in existence on LW (except as a link post which isn't very satisfactory). I decided to polish it up and post it here.   Epistemic Status This is basic information, and while it is original to me, I make no claim that the content thereof is new. Nevertheless, I think it is important enough to be worth posting.   Abstract I outline a framework for thinking about the motivations (goals/values) of agents.   Introduction In a few articles I've read (written at different times) and in some conversations I've had, I identified what I perceived as confused thinking regarding the motivations of agents. This post is my attempt to rectify that confused thinking and provide a framework that will help people reason more coherently about agents' motivations. ---------------------------------------- Three Levels of Motivation I have identified three levels at which the motivations of an agent can be considered. I will briefly outline these levels, and then expand on them in subsequent sections. Brief Outline Level 1 (L1) - Ideal Specification: The motivations the designer created the agent with the intention of accomplishing. These exist exclusively in the designer's mind and are not part of the agent. Level 2 (L2) - Designed Specification: The motivations that are explicitly represented in the agent itself. What motivations the agent makes decisions with the intention of accomplishing. These motivations exist within the agent program. Level 3 (L3) - Revealed Specification: The motivations that are actually pursued by the agent via its actions; insomuch as the actions of the agent can be described as optimising over the output of some objective function, which function is it? This is a property of the agent function. The three levels are interrelated, but are distinct. Traditional language does not clearly delineate w
b64fc2b8-3cc2-49bb-acb8-a782cdd7d294
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on? (Or, is coordination easier in a long timeline?) It seems like it would be good if the world could coordinate to not build AGI. That is, at some point in the future, some number of teams will have the technical ability to build and deploy an AGI, but they will all agree to voluntarily delay (perhaps on penalty of sanctions) until they’re confident that humanity knows how to align such a system. Currently, this kind of coordination seems like a pretty implausible state of affairs. But I want to know if it seems like it becomes more or less plausible as time passes. The following is my initial thinking in this area. I don’t know the relative importance of the factors that I listed, and there’s lots that I don’t understand about each of them. I would be glad for… * Additional relevant factors. * Arguments that some factor is much more important than the others. * Corrections, clarifications, or counterarguments to any of this. * Other answers to the question, that ignore my thoughts entirely. If coordination gets harder over time, that’s probably because... ---------------------------------------------------------------- * **Compute increases make developing and/or running an AGI cheaper**. The most obvious consideration is that the cost of computing falls each year. If one of the bottlenecks for an AGI project is having large amounts of compute, then “having access to sufficient compute” is a gatekeeper criterion on who can build AGI. As the cost of computing continues to fall, more groups will be able to run AGI projects. The more people who can build an AGI, the harder it becomes to coordinate all of them into not deploying it. + Note that it is unclear to what degree there is currently, or will be, a hardware overhang. If someone in 2019 could already run an AGI, on only $10,000 worth of AWS, if only they knew how, then the cost of compute is not relevant to the question of coordination. * **The number of relevant actors increases**. If someone builds an AGI in the next year, I am reasonably confident that that someone will be Deep Mind. I expect that in 15 years, if I knew that AGI would be developed one year from then, it would be much less overdetermined which group is going to build it, because there will be many more well-funded AI teams with top talent, and, most likely, none of them will have as strong a lead as Deep Mind currently appears to have. + This consideration suggests that coordination gets harder over time. However, this depends heavily on other factors (like how accepted AI safety memes are) that determine how easily Deep Mind could coordinate internally. If coordination gets easier over time, that’s probably because… --------------------------------------------------------------- * **AI safety memes become more and more pervasive and generally accepted.** It seems that coordination is easier in worlds where it is uncontroversial and common knowledge that an unaligned AGI poses an existential risk, because everyone agrees that they will lose big if anyone builds an AGI. + Over the past 15 years, the key arguments of AI safety have gone from being extremely fringe, to a reasonably regarded (if somewhat controversial) position, well inside the Overton window. Will this process continue? Will it be commonly accepted by ML researchers in 2030 that advanced AI poses an existential threat? Will it be commonly accepted by the leaders of nation-states? 
+ What will the perception of safety be in a world where there is another AGI winter? Suppose that narrow ML proves to be extremely useful in a large number of fields, but there’s lots of hype about AGI being right around the corner, then that bubble bursts, and there is broad disinterest in AGI again. What happens to the perception of AI safety? Is there a sense of “It looks like AI Alignment wasn’t important after all”? How cautious will researchers be in developing new AI technologies? * [Partial subpoint to the above consideration] **Individual AI teams develop more serious info-security-conscious processes.** If some team in Deep Mind discovered AGI today, and the Deep Mind leadership opted to wait to ensure safety before deploying it, I don’t know how long it would be until some relevant employees left to build AGI on their own, or some other group (such as a state actor) stole their technology and deployed it. + I don’t know if this is getting better or worse over time. * **The technologies for maintaining surveillance of would-be AGI developers improve**. Coordination is made easier by technologies that aid in enforcement. If surveillance technology improves, that seems like it would make coordination easier. As a special case, highly reliable lie detection or mind reading technologies would be a game-changer for making coordination easier. + Is there a reason to think that offense will beat defense in this area? Surveillance could get harder over time if the technology for detecting and defeating surveillance outpaces the technology for surveilling. * **Security technology improves**. Similarly, improvements in computer security (and traditional info security) would make it easier for actors to voluntarily delay deploying advanced AI technologies, because they could trust that their competitors (other companies and other nations) wouldn’t be able to steal their work. + I don’t know if this is plausible at all. My impression is that the weak point of all security systems is the people involved. What sort of advancements would make the human part of a security system more reliable?
09329063-a018-4eeb-a684-153c653b749e
trentmkelly/LessWrong-43k
LessWrong
Studying Your Native Language I've spent many thousands of hours over the past several years studying foreign languages and developing a general method for foreign-language acquisition. But now I believe it's time to turn this technique in the direction of my native language: English. Most people make a distinction between one's native language and one's second language(s). But anyone who has learned how to speak with a proper accent in a second language and spent a long enough stretch of time neglecting their native language to let it begin rusting and deteriorating will know that there's no essential difference. When the average person learns new words in their native language, they imagine that they're learning new concepts. When they study new vocabulary in a foreign language, however, they recognize that they're merely acquiring hitherto-unknown words. They've never taken a step outside the personality their childhood environment conditioned into them. When the only demarcation of thingspace that you know is the semantic structure of your native language, you're bound to believe, for example, that the World is Made of English. Why study English? I'm already fluent, as you can tell. I have the Magic of a Native Speaker. Let's put this nonsense behind us and recognize that the map is not the territory, that English is just another map. My first idea is that it may be useful to develop a working knowledge of the fundamentals of English etymology. A quick search suggests that the majority of words in English have a French or Latin origin. Would it be useful to make an Anki deck with the goal of learning how to readily recognize the building blocks of the English language, such as seeing that the "cardi" in "cardiology", "cardiograph", and "cardiograph" comes from an Ancient Greek word meaning "heart" (καρδιά)? Besides that, I plan to make a habit of adding any new words I encounter into Anki with their context. For example, let's say I'm reading the introduction to A Treatise of Human Nat
558696ea-d4d8-4732-9b26-18e31496d05c
trentmkelly/LessWrong-43k
LessWrong
Why isn’t assassination/sabotage more common? 1. People can short stocks of companies and then sabotage or assassinate important people to make the stocks drop. 2. Political goals seem ripe for assassination. For example, the US recently killed Iran’s general Soleimani. Couldn’t they do this without taking responsibility (i.e. with plausible deniability)? If so, aren’t further assassinations beneficial? 3. Political goals also seem ripe for sabotage. Currently, sometimes nations do this with cyberattacks. But I haven’t heard of physical sabotage attempts. For example, poisoning the water supply of enemy cities, burning their government buildings, cutting their electric cables, etc.
1a7bd340-89ff-4a66-9755-8933ae76ffc7
trentmkelly/LessWrong-43k
LessWrong
On The Spectrum, On The Guest List: (ii) LAVO (See also: Part One and the Prologue) "I found, however, that it takes considerable coordinated effort to mobilize people into what looks like a spontaneous waste of money..." Hey do you wanna join us at the lavo nightclub tonight? We have an 11:30pm walk-in let me know brightens my lockscreen while I’m at work the following Thursday. I let it sit for hours, then come back with a lame tiebreaker: is bird poop really good luck? > The widespread belief of bird droppings as good luck is based on how uncommon it is to be pooped on by one. With countless birds in the sky and numerous people, getting pooped on by one is extremely unlikely—in fact, it is said to be even more unlikely than winning the lottery ticket, but the bird chose you. The bird chose me. hey yeah sure i can make it! I scroll up, past the previous Thursday’s coordinations, past several times he asked and I was busy, until I rubber-band at the top: I swipe out and left and peck my Uber app, scheduling in advance sometimes saves me a few dollars, but I'm in "Uber jail," I heard someone call it that, so I take the goddamn selfie to verify I'm masked up for my ride eight hours from now, cursing whichever of R’s former teammates is doing well at Uber. My officemates probably think I’m some closet hashtag-grindset influencer now. Which reminds me. Later at home, I scoop out my banish pile from the back corner of my closet and sift through it, until I find an itty-bitty black shirtdress, about two-hundred dollars' worth of polyester and markup: > Models, especially, were considered so beautiful that they could wear a basic dress out and still be considered top quality. During summer the dress was my go-to for first dates, until I almost had to tear myself out of it after spending ten claustrophobic minutes tangled up in this sort of Mobius loop-fold it does at the waist. But it’s short, so short they don’t even bother with the -dress. And I'm tall enough people will argue with me about what sports
59e53c90-78cb-495b-941b-cb0a64662b1d
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Hug the Query Today's post, Hug the Query was originally published on 14 December 2007. A summary (taken from the LW wiki):   > The more directly your arguments bear on a question, without intermediate inferences, the more powerful the evidence. We should try to observe evidence that is as near to the original question as possible, so that it screens off as many other arguments as possible. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Argument Screens Off Authority, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
4a84d7f6-5147-4e4b-bda5-719f894162ff
trentmkelly/LessWrong-43k
LessWrong
Taking the first step (This is a post from a daily blogging experiment I did at neelnanda.io, which I thought might also fit the tastes of LessWrong. This is about my toolkit for getting started on hard things) Introduction A lot of my previous posts have been about prioritisation and planning - it’s easy to fall into the trap of taking actions that don’t lead to your goals, and extremely important to predict and avoid this. But another key skill is implementation - going from knowing the actions to take, to actually doing anything. It’s easy to feel inspired, and to feel as though you have direction, but for this to ultimately go nowhere. So it’s vital to cultivate the ability to get shit done. I’m a massive procrastinator, and a perfectionist, so by default I’m pretty bad at this. I find it very easy to perpetually put things off, and feeling overwhelmed at the thought of starting. Or, feeling guilt centred around not getting started, but it never quite feels urgent enough to do anything about it. And to then either miss deadlines, or leave everything until the last minute, where the pressure builds up, and then I end up doing a half-assed job. This is an incredibly sub-optimal, but is essentially the default state of the world for me. In this post, I will attempt to outline some of my tools for getting past these failure modes, and especially focusing on getting started - the point where I most often fail. My underlying model is outlined in my previous post on systems: willpower is a limited resource. When we have an unpleasant task to do, there’s a barrier to starting, an activation energy we need to spend. And we have a bunch of mental pressure from deadlines and obligations that eventually get us to spend the required willpower to get past this barrier. This motivates two obvious approaches which I’ll outline in the first half - increasing the mental pressure, and lowering the activation energy. And explains why I focus on getting started - activation energy isn’t the same thin
7c61253c-2859-4034-8963-c6708ef0bba8
trentmkelly/LessWrong-43k
LessWrong
Indecision and internalized authority figures A trauma book I was reading had an interesting claim that indecision is often because the person looks for the approval of an internalized authority figure (the writer is a Jungian therapist so attributed it to looking for the approval of an internalized parent, but I think it can be broader) but is unable to predict what action they would approve of. I feel like that has some intuitive truth to it, in that when I don’t care about anyone’s opinion (or if nobody ever finds out) then it’s much easier to just pick one action and commit to it even if it might go badly. But one of the main reasons why I might struggle with that is if I fear that anyone would judge me for doing things incorrectly. Or it can be a conflict between different internalized authority figures. “If I do this then X will be angry at me but if I do the other thing, then Y will be angry at me”. Or just the expectation that X will be angry at me no matter what I do. This also reminds me of the way I think a big part of the appeal of various ideologies and explicit decision-making systems is that they give people a clear external ruleset that tells them what to do. Then if things go wrong, people can always appeal (either explicitly or just inside their own mind) to having followed The Right Procedure and thus being free of blame. The most obvious external example of this is people within a bureaucracy following the rules to the letter and never deviating from them in order to avoid blame. Or more loosely, following what feels like the common wisdom – “nobody ever got fired for buying IBM”. But those are examples of people trying to avoid blame from an existing, external authority. I think people also do a corresponding move to avoid blame from internalized authority figures – such as by trying to follow a formalized ethical rule system such as utilitarianism or deontology. Of course, if the system is one that easily drives people off a cliff when followed (e.g. extreme utilitarianism demanding
653c086e-371e-4761-bac5-d2739173032f
trentmkelly/LessWrong-43k
LessWrong
Results Prediction Thread About How Different Factors Affect AI X-Risk In the last post I asked for predictions on how different factors affect AI X-risk, as of writing this post (March 1st), 26 people had made predictions on the first question (including myself).  Key Takeaways: * People have vastly different expectations, and on some questions there were people predicting 1-2% probability and others predicting 98-99% probability. In fact about 50% of all predictions either gave <10% or >90% probability of different X-risk. I find this surprising since it seems overconfident to say you are 98-99% sure about something complex where other smart people at LessWrong disagree.  * This question: “If an AGI is developed on January 1st 2030 with the sole task of being an oracle, and it acts like an oracle during training, what is the probability it will cause an extinction level catastrophe?” got a median prediction of 28% and average prediction of 30% of leading to extinction, which is far below the ~50% probability for any AGI or for a worker AGI. 30% is still a lot, but there might be some value there. * Money had a meaningful predicted effect on X-risk, but as pointed out by JBlack it was not possible to select less than 1% probability, so the average would have been lower if that was possible. However the median predictions should still be accurate. Results: If AGI is developed before 2100, what is the probability it will cause an extinction level catastrophe? Median prediction: 41% Average: 45%   Actor who develops AI: If AGI is developed on January 1st, 2030, what is the probability it will cause an extinction level catastrophe? Median prediction: 50% Average: 53%   If AGI is developed on January 1st 2030 by either Google, Microsoft (including OpenAI) or Meta, what is the probability it will cause an extinction level catastrophe? Median prediction: 63% Average: 56%   If AGI is developed on January 1st 2030 by the US Government/military, what is the probability it will cause an extinction level catastrophe? Median
a5b1049e-598c-4e00-b99c-85289c7e203b
trentmkelly/LessWrong-43k
LessWrong
Improving Teaching Effectiveness: Final Report The Gates Foundation's heavily test-score-driven approach failed to increase students' test scores. Maybe we should simply get rid of the idea of optimizing education for scoring high on standardized tests?
b7482710-96de-4c02-92ca-aa0337cd1870
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Lying is Cowardice, not Strategy *(Co-written by* [*Connor Leahy*](https://twitter.com/npcollapse) *and* [*Gabe*](https://twitter.com/Gabe_cc)*)* We have talked to a whole bunch of people about pauses and moratoriums. Members of the AI safety community, investors, business peers, politicians, and more. Too many claimed to pursue the following approach: 1. It would be great if AGI progress stopped, but that is infeasible. 2. Therefore, I will advocate for what I think is feasible, even if it is not ideal. 3. The Overton window being what it is, if I claim a belief that is too extreme, or endorse an infeasible policy proposal, people will take me less seriously on the feasible stuff. 4. Given this, I will be tactical in what I say, even though I will avoid stating outright lies. Consider if this applies to you, or people close to you. If it does, let us be clear: **hiding your beliefs**, in ways that predictably leads people to believe false things, **is lying**. This is the case regardless of your intentions, and regardless of how it feels. Not only is it morally wrong, it makes for a terrible strategy. As it stands, the AI Safety Community itself can not coordinate to state that we should stop AGI progress right now! Not only can it not coordinate, **the AI Safety Community is defecting, by making it more costly for people who do say it to say it.** We all feel like we are working on the most important things, and that we are being pragmatic realists. [**But remember: If you feel stuck in the Overton window, it is because YOU ARE the Overton window.**](https://www.forbes.com/sites/carltonreid/2018/12/03/you-are-not-stuck-in-traffic-you-are-traffic/) — 1. The AI Safety Community is making our job harder =================================================== **In a saner world, all AGI progress should have already stopped. If we don’t, there’s more than a 10% chance we all die.** Many people in the AI safety community believe this, but they have not stated it publicly. Worse, they have stated *different beliefs* more saliently, which misdirect everyone else about what should be done, and what the AI safety community believes. To date, in our efforts to inform, motivate and coordinate with people: People in the AI Safety Community publicly lying has been one of **the biggest direct obstacles we have encountered.** The newest example of this is ”Responsible Scaling Policies”, with many AI Safety people being much more vocal about their endorsement of RSPs than their private belief that in a saner world, all AGI progress should stop right now. Because of them, we have been told many times that we are a minority voice, and that most people in the AI Safety community (understand, Open Philanthropy adjacent) disagree that we should stop all AGI progress right now. **That actually, there is an acceptable way to continue scaling! And given that this makes things easier, if there is indeed an acceptable way to continue scaling, this is what we should do, rather than stop all AGI progress right now!** Recently, Dario Amodei (Anthropic CEO), has used the RSP to frame the moratorium position as [**the most extreme version of an extreme position**](https://www.lesswrong.com/posts/mcnWZBnbeDz7KKtjJ/rsps-are-pauses-done-right?commentId=WhxB66vEeRhh6kcsB), and this is the framing that we have seen used over and over again. 
ARC [mirrors](https://evals.alignment.org/blog/2023-09-26-rsp/) this in their version of the RSP proposal, describing itself as a “pragmatic middle ground” between a moratorium and doing nothing. **Obviously, all AGI Racers use this against us when we talk to people.** There are very few people that we have consistently seen publicly call for a **stop** to AGI progress. The clearest ones are Eliezer’s “[Shut it All Down](https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/)” and Nate’s “[Fucking stop](https://twitter.com/So8res/status/1715380167911067878)”. The loudest silence is from Paul Christiano, whose RSPs are being used to safety-wash scaling. **Proving me wrong is very easy. If you do believe that, in a saner world, we would stop all AGI progress right now, you can just write this publicly.** **When called out on this, most people we talk to just fumble.** 2. Lying for Personal Gain ========================== We talk to many people who publicly lie about their beliefs. The justifications are always the same: “it doesn’t feel like lying”, “we don’t state things we do not believe”, “we are playing an inside game, so we must be tactical in what we say to gain influence and power”. **Let me call this for what it is: lying for personal gain.** If you state things whose main purpose is to get people to think you believe something else, and you do so to gain more influence and power: **you are lying for personal gain.** The results of this “influence and power-grabbing” has many times over materialised with the safety-washing of the AGI race. What a coincidence it is that DeepMind, OpenAI and Anthropic are all related to the AI Safety community. The only benefit we see from this politicking is the people lying gain more influence, while the time we have left to AGI keeps getting shorter. **Consider what happens when a community rewards the people who gain more influence by lying!** — So many people lie, and they screw not only humanity, but one another. Many AGI corp leaders will privately state that in a saner world, AGI progress should **stop**, but they will not state it because it would hurt their ability to race against each other! Safety people will lie so that they can keep ties with labs in order to “pressure them” and seem reasonable to politicians. Whatever: they just lie to gain more power. **“DO NOT LIE PUBLICLY ABOUT GRAVE MATTERS” is a very strong baseline. If you want to defect, you need a much stronger reason than “it will benefit my personal influence, and I promise I’ll do good things with it”.** And you need to accept the blame when you’re called out. You should not muddy the waters by justifying your lies, covering them, telling people they misunderstood, and try to maintain more influence within the community. We have seen so many people be taken in this web of lies: from politicians and journalists, to engineers and intellectuals, all up until the concerned EA or regular citizen who wants to help, but is confused by our message when it looks like the AI safety community is ok with scaling. **Your lies compound and make the world a worse place.** There is an easy way to fix this situation: we can adopt the norm of publicly stating our true beliefs about grave matters. If you know someone who claims to believe that in a saner world we should stop all AGI progress, **tell them to publicly state their beliefs, unequivocally**. Very often, you’ll see them fumbling, caught in politicking. 
And not that rarely, you’ll see that they actually want to keep racing. In these situations, you might want to [stop finding excuses for them](https://www.lesswrong.com/tag/conflict-vs-mistake). 3. The Spirit of Coordination ============================= A very sad thing that we have personally felt is that it looks like many people are **so tangled in these politics that they do not understand what the point of honesty even is**. Indeed, from the inside, it is not obvious that honesty is a good choice. If you are honest, publicly honest, or even adversarially honest, you just make more opponents, you have less influence, and you can help less. This is typical deontology vs consequentialism. Should you be honest, if from your point of view, it increases the chances of doom? The answer is **YES**. **a) Politicking has many more unintended consequences than expected.** Whenever you lie, you shoot potential allies at random in the back. Whenever you lie, you make it more acceptable for people around you to lie. **b) Your behavior, especially if you are a leader, a funder or a major employee (first 10 employees, or responsible for >10% of the headcount of the org), ripples down to everyone around you.** **People lower in the respectability/authority/status ranks do defer to your behavior.** **People outside of these ranks look at you.** Our work toward stopping AGI progress becomes easier whenever a leader/investor/major employee at Open AI, DeepMind, Anthropic, ARC, Open Philanthropy, etc. states their beliefs about AGI progress more clearly.   **c) Honesty is Great.** Existential Risks from AI are now going mainstream. Academics talk about it. Tech CEOs talk about it. You can now talk about it, not be a weirdo, and gain more allies. Polls show that even non-expert citizens [express diverse opinions](https://www.vox.com/future-perfect/2023/9/19/23879648/americans-artificial-general-intelligence-ai-policy-poll) about super intelligence. Consider the following timeline: * ARC & Open Philanthropy state in a press release “**In a sane world, all AGI progress should stop. If we don’t, there’s more than a 10% chance we will all die.**” * People at AGI labs working in the safety teams echo this message publicly. * AGI labs leaders who think this state it publicly. * We start coordinating explicitly against orgs (and groups within orgs) that race. * We coordinate on a plan whose final publicly stated goal is to get to a world state that, most of us agree is not one where humanity’s entire existence is at risk. * We publicly, relentlessly optimise for this plan, without compromising on our beliefs. **Whenever you lie for personal gain, you fuck up this timeline.** When you start being publicly honest, you will suffer a *personal* hit in the short term. But we truly believe that, coordinated and honest, we will have timelines much longer than any Scaling Policy will ever get us.
fe9dce24-bf6c-462b-a46c-97adf5822cc9
trentmkelly/LessWrong-43k
LessWrong
Death Positive Movement The Death Positive Movement was started by Caitlin Doughty to build a culture of positivity around the prospect of human mortality. The 8 Tenets of the Death Positive Movement are outlined as follows: > 1. I believe that by hiding death and dying behind closed doors we do more harm than good to our society. > 2. I believe that the culture of silence around death should be broken through discussion, gatherings, art, innovation, and scholarship. > 3. I believe that talking about and engaging with my inevitable death is not morbid, but displays a natural curiosity about the human condition. > 4. I believe that the dead body is not dangerous, and that everyone should be empowered (should they wish to be) to be involved in care for their own dead. > 5. I believe that the laws that govern death, dying and end-of-life care should ensure that a person’s wishes are honored, regardless of sexual, gender, racial or religious identity. > 6. I believe that my death should be handled in a way that does not do great harm to the environment. > 7. I believe that my family and friends should know my end-of-life wishes, and that I should have the necessary paperwork to back-up those wishes. > 8. I believe that my open, honest advocacy around death can make a difference, and can change culture. I was familiar with the DPM through several friends long before I was aware of rationality. Getting involved with rationality and LessWrong has put me in contact with many people who hope to live forever (or a very very long time), who believe that death is unequivocally bad, and who have devoted their lives in one way or another to combating death and reducing it wherever possible.  On the face of it, these seem very contradictory. The Death Positive Movement is all about coming to terms with the inevitability of death, not trying to fight it. And yet there isn't anything in the 8 Tenets that seem to stick out as being at odds with a desire for a long/never-ending/indefinite lifespan
ecb4ff27-11fc-4a46-a259-0a3e448eaefd
trentmkelly/LessWrong-43k
LessWrong
Expected Error, or how wrong you expect to be Expected value commonly refers to the mean of a distribution that represents the expectations of a future event. It’s much more specific than “mathematical mean”. If one were to ask about the "mean" of a poker hand there would be confusion, but the ‘expected value’ is obvious.   While expected value is a popular term, the fact that it describes one point value means a lot of useful information is excluded.   Say you have two analysts for your Californian flower selling empire. Both hand you forecasts for next year's revenue. One of them tells you that revenue will be between $8 and $12 Million, with an average of $10 million. The other tells you that it will be between -$50 and $70 million, with an average of $10 million. These both have expected values of $10 million, but I would guess that you would be very interested in the size of those ranges. The uncertainty matters.   One could of course use standard deviation, variance, or literally hundreds of other parameters to describe this uncertainty. But I would propose that these parameters be umbrellaed under the concept of “expected error.” Typically the expected value gets a lot of attention; after all, that is the term in this arena that we have a name for. So an intuitive counter to this focus is the “expected error,” or how much we expect the expected value to be incorrect. In a different sense, the expected error is the part of an estimate that’s not its expected value. > Expected Error: The expected difference between an expected value and an actual value. > > Or, “How wrong do you think your best guess will be?”     Hypothetically, any measure of statistical dispersion could be used to describe expected error. Standard deviation, interquartile range, entropy, average absolute deviation, etc. Of these, I think that the mean absolute deviation is probably the most obvious measure to use for expected error when using continuous variables. Expected value uses a mean, so the expected error could be
7578f01c-baf1-4950-98be-bee881142a5f
trentmkelly/LessWrong-43k
LessWrong
Copying and Subjective Experience The subject of copying people and its effect on personal identity and probability anticipation has been raised and, I think, addressed adequately on Less Wrong. Still, I'd like to bring up some more thought experiments. Recently I had a dispute on an IRC channel. I argued that if some hypothetical machine made an exact copy of me, then I would anticipate a 50% probability of jumping into the new body. (I admit that it still feels a little counterintuitive to me, even though this is what I would rationally expect.) After all, they said, the mere fact the copy was created doesn't affect the original. However, from an outside perspective, Maia1 would see Maia2 being created in front of her eyes, and Maia2 would see the same scene up to the moment of forking, at which point the field of view in front of her eyes would abruptly change to reflect the new location. Here, it is obvious from both an inside and outside perspective which version has continuity of experience, and thus from a legal standpoint, I think, it would make sense to regard Maia1 as having the same legal identity as the original, and recognize the need to create new documents and records for Maia2 -- even if there is no physical difference. Suppose, however, that the information was erased. For example, suppose a robot sedated and copied the original me, then dragged Maia1 and Maia2 to randomly chosen rooms, and erased its own memory. At this point, neither either of me, nor anyone else would be able to distinguish between the two. What would you do here from a legal standpoint? (I suppose if it actually came to this, the two of me would agree to arbitrarily designate one as the original by tossing an ordinary coin...) And one more moment. What is this probability of subjective body-jump actually a probability of? We could set up various Sleeping Beauty-like thought experiments here. Supposing for the sake of argument that I'll live at most a natural human lifespan no matter which year I find mysel
e8459302-aeda-431d-8899-9a9b027910cb
trentmkelly/LessWrong-43k
LessWrong
Why not more small, intense research teams? Why are we not fostering more teams of, say, 2-5 people that a) work full-time on the same or closely related problems, b) live together or very close, and c) spend most of their work time collaborating in-person? I'm aware of student groups etc., but these fail both a) and c). Plausibly this has advantages over both individuals (more perspectives, motivation, etc.) and larger orgs (tighter feedback loops and communication, more flexibility, much easier to hire, etc.).
ecb62e54-a7e3-4972-bb1c-cf737af7e833
trentmkelly/LessWrong-43k
LessWrong
How much should we still worry about catching COVID? [Links and Discussion Thread] [Cross-posted to the EA Forum. Consider checking comments there, too.] I've decided to collect research on the risks of COVID, especially among vaccinated people and for long COVID. Please share more in the comments, and it would be helpful to categorize them according to the section headings of this post. Other discussion about the evidence and research is also welcome in the comments. I've included quotes of estimates for the three links in the section "Overall how bad it is to catch COVID". I'm not committing to updating the post with more links beyond the ones I've already included (although I might); I encourage readers to check the comments instead. EDIT: Check the COVID-19 tag on LessWrong.   Overall how bad it is to catch COVID I've come across three estimates for the number of expected quality of life adjusted days lost to (long) COVID for fully vaccinated people. I think the main reasons for the discrepancies between the first and second are higher risks of long COVID conditional on COVID and higher risks of long COVID becoming basically permanent conditional on long COVID in the first link. Some potential symptoms like fatigue and brain fog would reduce your productivity.   1. https://www.mattbell.us/delta-and-long-covid/  For fully vaccinated people catching Delta: > If you’re a 35 year old woman, and your risk of ending up with lifelong long COVID from catching COVID is 2.8%, then catching COVID would be the same, statistically speaking, as losing (50 years * 0.18 * 0.028 * 365 days/year) = ~90 days of your life. > > (...) > A 35 year old woman runs about an 0.5% chance of the "worst case scenario" outcome if she gets Delta.  For comparison, 0.5% is about 42x your chance of dying in a car crash in the next year.   Also, see these two figures (derived in the article) for risks of multi-year long COVID or worst case multi-year long COVID by age and gender among vaccinated people, and the rest of the key takeaways at the start of the article.
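The quoted ~90-day figure is a simple expected-value product: remaining life-years, times an assumed quality-of-life loss, times an assumed probability of lifelong long COVID, times days per year. A minimal sketch of that back-of-the-envelope calculation, using only the article's own assumed numbers (these are the article's estimates, not established facts):

```python
def expected_quality_adjusted_days_lost(remaining_years, quality_loss, p_lifelong):
    """Back-of-the-envelope estimate from the quoted article: expected
    quality-adjusted days lost, if lifelong long COVID occurs with
    probability `p_lifelong` and costs `quality_loss` of quality of life."""
    return remaining_years * quality_loss * p_lifelong * 365


# The article's example: a 35-year-old woman with ~50 remaining years,
# an assumed 18% quality-of-life loss, and an assumed 2.8% risk.
print(expected_quality_adjusted_days_lost(50, 0.18, 0.028))  # ~92, i.e. roughly 90 days
```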
ba0fd8fb-5ca0-465d-bd75-3058115f7d4e
trentmkelly/LessWrong-43k
LessWrong
We won't be able to recognise the human Gödel sentence Building on the very bad Gödel anti-AI argument (computers are formal and can't prove their own Gödel sentence, hence no AI), it occurred to me that you could make a strong case that humans could never recognise a human Gödel sentence. The argument goes like this: 1. Humans have a meta-proof that all Gödel sentences are true. 2. If humans could recognise a human Gödel sentence G as being a Gödel sentence, we would therefore prove it was true. 3. This contradicts the definition of G, which humans should never be able to prove. 4. Hence humans could never recognise that G was a human Gödel sentence. Now, the more usual way of dealing with human Gödel sentences is to say that humans are inconsistent, but that the inconsistency doesn't blow up our reasoning system because we use something akin to relevance logic. But, if we do assume humans are consistent (or can become consistent), then it does seem we will never knowingly encounter our own Gödel sentences. As to where this G could hide so that we could never find it? My guess would be somewhere in the larger ordinals, up where our understanding starts to get flaky.
fe705bbf-7612-4292-b755-7eea4b52e864
trentmkelly/LessWrong-43k
LessWrong
A reply to Mark Linsenmayer about philosophy Mark Linsenmayer, one of the hosts of a top philosophy podcast called The Partially Examined Life, has written a critique of the view that Eliezer and I seem to take of philosophy. Below, I respond to a few of Mark's comments. Naturally, I speak only for myself, not for Eliezer.     > I'm generally skeptical when someone proclaims that "rationality" itself should get us to throw out 90%+ of philosophy... Sturgeon's Law declares that "90% of everything is crap." I think something like that is true, though perhaps it's 88% crap in physics, 99% crap in philosophy, and 99.99% crap on 4chan. But let me be more precise. I do claim that almost all philosophy is useless for figuring out what is true, for reasons explained in several of my posts: * Philosophy: A Diseased Discipline * Concepts Don't Work That Way * Intuitions Aren't Shared That Way * Train Philosophers with Pearl and Kahneman, Not Plato and Kant Mark replies that the kinds of unscientific philosophy I dismiss can be "useful at least in the sense of entertaining," which of course isn't something I'd deny. I'm just trying to say that Heidegger is pretty darn useless for figuring out what's true. There are thousands of readings that will more efficiently make your model of the world more accurate. If you want to read Heidegger as poetry or entertainment, that's fine. I watch Game of Thrones, but not because it's a useful inquiry into truth. Also, I'm not sure what it would mean to say we should throw out 90% of philosophy because of rationality, but I probably don't agree with the "because" clause, there.     > [Luke's] accusation is that most philosophizing is useless unless explicitly based on scientific knowledge on how the brain works, and in particular where intuitions come from... [But] to then throw out the mass of the philosophical tradition because it has been ignorant of [cognitive biases] is [a mistake]. I don't, in fact, think that "most philosophizing is useless unless explicitly
5f9b967b-f73e-4999-8004-002804d598a1
trentmkelly/LessWrong-43k
LessWrong
Existential Risk Persuasion Tournament I participated last summer in Tetlock's Existential Risk Persuasion Tournament (755(!) page paper here). Superforecasters and "subject matter experts" engaged in a hybrid between a prediction market and debates, to predict catastrophic and existential risks this century. I signed up as a superforecaster. My impression was that I knew as much about AI risk as any of the subject matter experts with whom I interacted (the tournament was divided up so that I was only aware of a small fraction of the 169 participants). I didn't notice anyone with substantial expertise in machine learning. Experts were apparently chosen based on having some sort of respectable publication related to AI, nuclear, climate, or biological catastrophic risks. Those experts were more competent, in one of those fields, than news media pundits or politicians. I.e. they're likely to be more accurate than random guesses. But maybe not by a large margin. That expertise leaves much to be desired. I'm unsure whether there was a realistic way for the sponsors to attract better experts. There seems to be not enough money or prestige to attract the very best experts. Incentives The success of the superforecasting approach depends heavily on forecasters having decent incentives. It's tricky to give people incentives to forecast events that will be evaluated in 2100, or evaluated after humans go extinct. The tournament provided a fairly standard scoring rule for questions that resolve by 2030. That's a fairly safe way to get parts of the tournament to work well. The other questions were scored by how well the forecast matched the median forecast of other participants (excluding participants that the forecasters interacted with). It's hard to tell whether that incentive helped or hurt the accuracy of the forecasts. It's easy to imagine that it discouraged forecasters from relying on evidence that is hard to articulate, or hard to verify. It provided an incentive for groupthink. But the overall ince