id: stringlengths (36–36)
source: stringclasses (15 values)
formatted_source: stringclasses (13 values)
text: stringlengths (2–7.55M)
999d5803-cf82-4962-b747-6e64463c6a1e
trentmkelly/LessWrong-43k
LessWrong
AI #104: American State Capacity on the Brink The Trump Administration is on the verge of firing all ‘probationary’ employees in NIST, as they have done in many other places and departments, seemingly purely because they want to find people they can fire. But if you fire all the new employees and recently promoted employees (which is what ‘probationary’ means here) you end up firing quite a lot of the people who know about AI or give the government state capacity in AI. This would gut not only America’s AISI, its primary source of a wide variety of forms of state capacity and the only way we can have insight into what is happening or test for safety on matters involving classified information. It would also gut our ability to do a wide variety of other things, such as reinvigorating American semiconductor manufacturing. It would be a massive own goal for the United States, on every level. Please, it might already be too late, but do whatever you can to stop this from happening. Especially if you are not a typical AI safety advocate, helping raise the salience of this on Twitter could be useful here. Also there is the usual assortment of other events, but that’s the big thing right now. TABLE OF CONTENTS I covered Grok 3 yesterday, I’m holding all further feedback on that for a unified post later on. I am also going to push forward coverage of Google’s AI Co-Scientist. 1. Language Models Offer Mundane Utility. Activate the Super Debugger. 2. Language Models Don’t Offer Mundane Utility. Shut up until you can multiply. 3. Rug Pull. If you bought a Humane AI pin, have a non-metaphorical paperweight. 4. We’re In Deep Research. Find out how deep the rabbit hole goes. 5. Huh, Upgrades. GPT-4o gets a vibe shift, Gemini gets recall across conversations. 6. Seeking Deeply. Perplexity offers us R1 1776 for web search. 7. Fun With Multimedia Generation. Suno v4 actually pretty good, says Janus. 8. The Art of the Jailbreak. Extracting credit card information from ChatGPT. 9. Get Involved. UK AISI, D
5076373c-7852-40b4-bf9f-987cc8b04438
trentmkelly/LessWrong-43k
LessWrong
Postmodernism for rationalists
507305cf-8a7a-4d2c-8bd1-7522bcb2993e
trentmkelly/LessWrong-43k
LessWrong
What deserves cryocide? So being signed up for cryonics shifts my views on life and death, as might be expected. In particular, it focuses my views of success on the preservation of my brain (everything else too, just in case, but especially the brain).  This means, obviously, not just the lump of meat but also the information within it. If I'm suffering a degenerative disease to that meat or its information, I'm going to want to cryocide to preserve the information (and the idea of living through slow brain death doesn't thrill me regardless). What I don't know is: given the current state of science, what sorts of things do I need to be worried about? In particular, I'm wondering about Alzheimer's; does it appear to be damage to the information, or to the retrieval mechanism? But any other such diseases interest me in this context. Thanks! -Robin
20d9f877-eabf-4353-b4ae-fec688acfdd1
trentmkelly/LessWrong-43k
LessWrong
Open Thread Feb 16 - Feb 23, 2016 If it's worth saying, but not worth its own post (even in Discussion), then it goes here. ---------------------------------------- Notes for future OT posters: 1. Please add the 'open_thread' tag. 2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.) 3. Open Threads should be posted in Discussion, and not Main. 4. Open Threads should start on Monday, and end on Sunday.
f9c36481-fdc1-4a60-bcc3-882a9af6e8bf
trentmkelly/LessWrong-43k
LessWrong
Suffering Is Not Pain “Pain is inevitable; suffering is optional.” The motivation of this post is to address the persistent conflation between suffering and pain I have observed from members of the EA community, even amongst those who purport to be “suffering-focused” in their ethical motivations. In order to best address the problem of suffering, it is necessary to be clear about the difference between suffering and mere pain or ordinary displeasure. The parable of the second arrow In the Buddhist parable of the second arrow, the Buddha illustrates the distinction between suffering and pain with the tale of a man struck by two arrows. The first arrow represents the pain that life inevitably brings. The second arrow, however, represents the suffering that arises from his reaction to the pain. The Buddha teaches that while the first arrow (pain) is unavoidable, the second arrow (suffering) is optional, and that by letting go of the resistance to the pain (aversion), one will not suffer the sting of the second arrow. Defining pain and suffering * Pain: An unpleasant physical sensation or emotional experience.[1] * Suffering: The unsatisfactoriness that arises from craving, aversion, and clinging/attachment to sensations and experiences; dukkha. I feel it is important to clarify at this point that, while the above definition of suffering derives from historically-Buddhist teachings about dukkha and its cause, I am not endorsing this definition because it is Buddhist but rather because I believe it best identifies suffering as it can actually be observed in phenomenal experience. For those who are skeptical (possibly deeply so) about the claims and teachings of Buddhism, I ask that you consider the distinction I am advocating with reference to your own experience(s) of pain and suffering. While both pain and suffering are phenomena that “feel bad” experientially, I maintain that the sensations and experiences to which the terms/concepts “pain” and “suffering” respectively refer are a
b6ed0d3f-144d-425a-bb4c-54bd38d70b5e
trentmkelly/LessWrong-43k
LessWrong
Programming Language Early Funding? We are in the dark age of computer programming.[1] I believe that we still fundamentally haven’t found good ways to deal with the challenges of writing computer programs. Programming languages are the foundation of our programming and leave a lot to be desired. I believe more is possible. I’ve worked on creating a new programming language (I’m on my second). Given time and resources, I could take a solid step in the right direction with a new programming language. However, programming languages are and generally must be open-source and free. When they are mature and widely used, there are some challenging and indirect ways to make money from having created one and working on it. There is no way to profit from them when they are incomplete and immature. However, that phase takes a lot of software development effort. To design a better language, the design will need plenty of time to incubate. Indeed, recent languages show signs of being released too early (e.g., Rust and Go). The resulting issues can’t be fixed the same way they are in other software because of the much stricter backward compatibility requirements of a programming language. I have been working on a multi-paradigm general-purpose language that supports structured concurrency, object-oriented programming, and design by contract. Think of it as a future garbage-collected language in the vein of Swift, Scala, C#, or Java, but better. What grants and other funding sources might be willing to fund this? Addressing Possible Questions (Optional) Why haven’t I said what language I am working on? To avoid getting lost in the details of my language. This question isn’t about critiquing my language or ideas. It is about potential funding sources. Also, I use a pen name here, and while my given name isn’t a secret, I see no reason to link them here. Am I qualified to do this? I’m an experienced software developer with 22 years of experience, and my title in my last role was Senior Staff Software Engineer
1eaa5e0d-356c-4d60-9a15-2de3ef5b76b7
trentmkelly/LessWrong-43k
LessWrong
Acting Normal is Good, Actually This is a casually written post in a series about what I wish someone had told me when I was younger. I was a weird kid, but I didn't originally set out to be weird. I like doing things other people do and feeling like I belong. But as I got past kindergarten, forces incentivized me to be weird. There was a bunch of stuff pushing me in this way: * My IQ score landed me in gifted classes. Being labeled smart and gifted meant adults gave me a free pass to be weird rather than encouraging me to be normal because weirdness is more tolerated if you're useful in some other way. * Asthma kept me from being good at sports, so no one really wanted me on their team. So life was easier if I engaged in other activities instead of doing what all the regular kids did. * I wasn't religious, so I was the weirdo atheist who had to justify his existence and beliefs instead of being a normal, God-fearing person. * I had OCD, and this meant I sometimes had to do weird things to avoid having a meltdown. So I ended up hanging out with the other weird kids because the thing we had in common is that we didn't fit in with the normals. We were treated like outcasts by our classmates, allowed to live on the edge of the village, so to speak, so long as we continued to be occasionally useful. Of course, lots of people feel like they're weird at times, especially as kids, so it comes in degrees. Even popular, super normal kids feel like they don't fit in sometimes, and that seems to be part of growing up, at least in the West. But what I'm talking about is being weird enough that weird becomes part of your identity because other people make it part of your identity, and then you lean into it because you know you can get away with it. There's virtue in being weird at times. I don't think I have to convince folks here of that! But there is also a lot of virtue in being and acting normal. Some good things can happen if you act normal, by which I mean act in ways that something like with
b71f73e0-6a5a-4710-9277-bc3d3d853e90
trentmkelly/LessWrong-43k
LessWrong
Meetup : Berkeley: CFAR focus group Discussion article for the meetup : Berkeley: CFAR focus group WHEN: 23 January 2013 07:30:00PM (-0800) WHERE: Berkeley, CA This week's Berkeley meetup will have the format of a focus group. On behalf of CFAR I will be asking you (yes, you!) what kinds of things you would like from the Center For Applied Rationality. Your input will be passed on to the organization and will help direct its course. Doors open at 7pm, and the focus group begins at 7:30pm. The focus group part of the evening won't take too long, and afterward we'll just hang out. For directions to Zendo, see the mailing list: http://groups.google.com/group/bayarealesswrong or call me at: http://i.imgur.com/Vcafy.png Discussion article for the meetup : Berkeley: CFAR focus group
bd703e41-9cce-4a70-bc0b-a6d7fc95760c
trentmkelly/LessWrong-43k
LessWrong
Weekly LW Meetups: Budapest, Fort Lauderdale, Philadelphia, Pittsburgh, Rome, Sao Paulo, Tel Aviv There are upcoming irregularly scheduled Less Wrong meetups in: * Pittsburgh - Presentation on Anthropics: 20 April 2012 06:00PM * Tel Aviv, Israel: 21 April 2012 07:00PM * Budapest Meetup: 21 April 2012 05:30PM * Rome LessWrong Meetup: 21 April 2012 07:00PM * Fort Lauderdale: 21 April 2012 06:00PM * São Paulo Meet Up 2: 23 April 2012 07:30PM * Philadelphia Meetup: Against Rationalization: 26 April 2012 07:00PM * Longmont Sparkfun Soldering Competition Field Trip: 28 April 2012 11:00AM * Graz Meetup: 28 April 2012 11:21PM * First Copenhagen meetup: 29 April 2012 05:00PM The following meetups take place in cities with regularly scheduled meetups, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup: * Cambridge, MA First Sunday Meetup: 06 May 2012 02:00PM * Cambridge, MA Third Sunday Meetup: 20 May 2012 02:20PM Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, Chicago, London, Madison WI, Melbourne, Mountain View, New York, Ohio, Ottawa, Oxford, Portland, Salt Lake City, Seattle, Toronto, Waterloo, and West Los Angeles. If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, and have fun! In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the next weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll now also have the benefit of having your meetup mentioned in a weekly overview. These overview posts will be moved to the discussion section when the new post goes up. If you check Less Wrong irregularly, consider subscribing to one or more city-specific mailing list in order to be notified when an i
025ca70a-ae7b-43a1-aef4-7944c8cd4d68
trentmkelly/LessWrong-43k
LessWrong
You're Calling *Who* A Cult Leader? Followup to:  Why Our Kind Can't Cooperate, Cultish Countercultishness I used to be a lot more worried that I was a cult leader before I started reading Hacker News.  (WARNING:  Do not click that link if you do not want another addictive Internet habit.) From time to time, on a mailing list or IRC channel or blog which I ran, someone would start talking about "cults" and "echo chambers" and "coteries".  And it was a scary accusation, because no matter what kind of epistemic hygiene I try to practice myself, I can't look into other people's minds.  I don't know if my long-time readers are agreeing with me because I'm making sense, or because I've developed creepy mind-control powers.  My readers are drawn from the nonconformist crowd—the atheist/libertarian/technophile/sf-reader/Silicon-Valley/early-adopter cluster—and so they certainly wouldn't admit to worshipping me even if they were. And then I ran into Hacker News, where accusations in exactly the same tone were aimed at the site owner, Paul Graham. Hold on.  Paul Graham gets the same flak I do? * Paul Graham has written a word or two about rationality... in a much more matter-of-fact style. * Paul Graham does not ask his readers for donations.  He is independently wealthy. * Paul Graham is not dabbling in mad-science-grade AI.  He runs Y Combinator, a seed-stage venture fund. * Paul Graham is not trying to save the world.  He's trying to help a new generation of entrepreneurs. I've never heard of Paul Graham saying or doing a single thing that smacks of cultishness.  Not one. He just wrote some great essays (that appeal especially to the nonconformist crowd), and started an online forum where some people who liked those essays hang out (among others who just wandered into that corner of the Internet). So when I read someone: 1. Comparing the long hours worked by Y Combinator startup founders to the sleep-deprivation tactic used in cults; 2. Claiming that founders were asked to move to the Bay Are
eb43152d-ef1d-4e80-8a69-9fa23477d31c
trentmkelly/LessWrong-43k
LessWrong
Is it definitely the case that we can colonize Mars if we really wanted to? Is it reasonable to believe that this is technically feasible for a reasonably advanced civilization? Question phrased to say that I'm less interested in reasons why humanity might not colonize Mars in the short-term due to lack of interest, politics, or near-term economics. Assume that humanity really wanted to and would put billions or trillions of dollars towards this and decades (centuries) of time. Any real reason we couldn't do it?
c7180905-ff4d-4698-902f-fc7424a532c5
trentmkelly/LessWrong-43k
LessWrong
6502 simulated - mind uploading for microprocessors Possibly offtopic, but a neat project with interesting analogy to mind uploading: Some people managed to scan, using a microscope, a MOS 6502 microprocessor (Apple II, C64, NES), and simulate it at the level of single transistors. This neatly circumvented all the problems with inaccurate emulation, unknown opcodes etc., and even allowed them to run actual Atari 2600 games without having to know anything about 6502's inner workings. Presentation slides about the project are here.
6b4d1101-639b-418c-947a-f292de860195
trentmkelly/LessWrong-43k
LessWrong
Should we admit it when a person/group is "better" than another person/group? This sort of thinking seems bad: > me.INTRINSIC_WORTH = 99999999; No matter what I do, this fixed property will remain constant. This sort of thinking seems socially frowned upon, but accurate: > a.impactOnSociety(time) > b.impactOnSociety(time) > > a.qualityOfCharacter > b.qualityOfCharacter // determined by things like altruism, grit, courage, self awareness... Similar points could be made by replacing a/b with [group of people]. I think it's terrible to say something like: > This race is inherently better than that race. I refuse to change my mind, regardless of the evidence brought before me. But to me, it doesn't seem wrong to say something like: > Based on what I've seen, I think that the median member of Group A has a higher qualityOfCharacter than the median member of Group B. I don't think there's anything inherently better about Group A. It's just based on what I've observed. If presented with enough evidence, I will change my mind. Credit and accountability seem like good things to me, and so I want to live in a world where people/groups receive credit for good qualities, and are held accountable for bad qualities. I'm not sure though. I could see that there are unintended consequences of such a world. For example, such "score keeping" could lead to contentiousness. And perhaps it's just something that we as a society (to generalize) can't handle, and thus shouldn't keep score.
ee7574a7-ae35-4f20-bc1a-eee1ed162ded
trentmkelly/LessWrong-43k
LessWrong
Are Generative World Models a Mesa-Optimization Risk? Suppose we set up a training loop with an eye to get a generative world-model. For concreteness, let's imagine the predictor from the ELK doc. We show the model the first part of a surveillance video, and ask it to predict the second part. Would we risk producing a mesa-optimizer? Intuitively, it feels like "no". Mesa-objectives are likely defined over world-models, shards are defined over world-models, so if we ask the training process for just the world-model, we would get just the world-model. Right? Well. The Problem ... is that we can't actually "just" ask for the world-model, can we? Or, at least, that's an unsolved problem. We're always asking for some proxy, be that the second part of the video, an answer to some question, being scored well by some secondary world-model-identifier ML model, and so on. If we could somehow precisely ask the training process to "improve this world-model", instead of optimizing some proxy objective that we think highly correlates with a generative world-model, that would be a different story. But I don't see how. That given, where can things go awry? The Low End The SGD moves the model along the steepest gradients. This means that every next SGD step is optimized to make the model improve on its loss-minimization ability as much as possible within the range of that step. Informally, the SGD wants to see results, and fast. I'd previously analysed the dynamics it gives rise to. In short: The "world-model" part of the ML model improves incrementally while it's incentivized to produce results immediately. That means it would develop some functions mapping the imperfect world-model to imperfect results — heuristics. But since these heuristics can only attach to the internal world-model, they necessarily start out "shallow", only responding to surface correlations in the input-data because they're the first components of the world-model that are discovered. With time, as the world-model deepens, these heuristics may deepen i
96d22846-843b-4f0f-835e-577e55328b38
trentmkelly/LessWrong-43k
LessWrong
Long Term Future Fund applications open until June 28th The Long Term Future Fund just reopened its applications. You can apply here: Apply to the Long Term Future Fund We will from now on have rolling applications, with a window of about 3-4 months between responses. The application window for the coming round will end on the 28th of June 2019. Any application received after that will receive a response around four months later during the next evaluation period (unless it indicates that it is urgent, though we are less likely to fund out-of-cycle applications). We continue to be particularly interested in small teams and individuals that are trying to get projects off the ground, or that need less money than existing grant-making institutions are likely to give out (i.e. less than ~$100k, but more than $10k, since we can’t give grants below $10k). Here are some concrete examples: * To spend a few months (perhaps during the summer) to research an open problem in AI alignment or AI strategy and produce a few blog posts or videos on their ideas * To spend a few months building a web app with the potential to solve an operations bottleneck at x-risk organisations * To spend a few months up-skilling in a field to prepare for future work (e.g. microeconomics, functional programming, etc) * To spend a year testing an idea that has the potential to be built into an org You are also likely to find reading the writeups of our past grant decisions valuable to help you decide whether your project is a good fit: * November 2018 writeup * April 2019 writeup Apply Here What kind of applications can we fund? After last round, CEA clarified what kinds of grants we are likely able to make, which includes the vast majority of applications we have received in past rounds. In general you should err on the side of applying, since I think it is very likely we will be able to make something work. However, because of organizational overhead we are more likely to fund applications to registered charities and less likely to fund project
a89048a4-9fa5-4aa8-aac7-8e5109392714
trentmkelly/LessWrong-43k
LessWrong
To Be Decided #2 TBD is a quarterly-ish newsletter about deploying knowledge for impact, learning at scale, and making more thoughtful choices for ourselves and our organizations. This is the second issue, which was originally published in June 2019. Enjoy!  --Ian An Introduction to Decision Modeling Decision-making is life. Over time, our decisions carve an identity for ourselves and our organizations, and it is our decisions, more than anything else, that determine how we are remembered after we’re gone. Despite their importance, though, we barely pay attention to most of the decisions we make. Biology has programmed in us a powerful instinct to make decisions using our intuitions rather than our conscious selves whenever possible. There are good reasons for this; if we had to think about every little decision we made, we’d never get anything done. But complex decisions require us to compare the likelihood and desirability of many possible futures on multiple, disparate, and often conflicting criteria, something our intuitions just aren’t naturally equipped to do. Thankfully, there is a better way. The secret to resolving complex, risky dilemmas with justified ease and confidence is to model your decisions explicitly. At its best, modeling our decisions can help us make the very human exercise of decision-making not only more likely to lead to the outcomes we want, but more instinctively satisfying as well. (Keep reading) What I've Been Reading Most Funders Admit Their Own Evaluations Are Not Useful I really wish that headline was an exaggeration, but it's not much of one. In 2015, the Center for Evaluation Innovation and the Center for Effective Philanthropy surveyed evaluation and program executives at 127 US and Canadian foundations with $10 million or more in annual giving. The resulting report, "Benchmarking Foundation Evaluation Practices," contains startling revelations about how little evaluation reports are used. Most remarkably, more than three-quarters of responde
0cceb952-bac8-4298-955c-0df763be9c8d
trentmkelly/LessWrong-43k
LessWrong
Meetup : Chicago Applied Rationality Training Discussion article for the meetup : Chicago Applied Rationality Training WHEN: 23 April 2017 01:00:00PM (-0500) WHERE: Harper Memorial Library Room 148, 1116 E 59th St, Chicago, IL 60637 The Chicago rationality group meets every Sunday from 1-3 PM in Room 148 of Harper Memorial Library. Though we meet on the University of Chicago campus, anyone is welcome to attend. More applied rationality training this Sunday! At this week's meeting I'll be teaching Factoring, which is fun and easy and super useful! Don't worry if you missed the last meeting; I can get newcomers caught up during the lesson. If you are planning to come to this week's meeting, please take ten or so minutes to build a 'bugs list' - a list of problems in your life that we'll target with the applied rationality techniques. The generic prompt is "Everything would be perfect in my life, and I would be the best possible version of myself, except that… [bug].” Request to be added to our Google Group if you're interested in rationality-related events in the Chicago area and I'll approve you! Discussion article for the meetup : Chicago Applied Rationality Training
244a32e6-48a3-4ac0-8e1c-a7842cbb81dc
trentmkelly/LessWrong-43k
LessWrong
Mechanistic Transparency for Machine Learning Cross-posted on my blog. [EDIT (added Jan 2023): it's come to my attention that this post was likely influenced by conversations I had with Chris Olah related to the distinction between standard interpretability and the type called "mechanistic", as well as early experiments he had which became the 'circuits' sequence of papers - my sincere apologies for not making this clearer earlier.] Lately I've been trying to come up with a thread of AI alignment research that (a) I can concretely see how it significantly contributes to actually building aligned AI and (b) seems like something that I could actually make progress on. After some thinking and narrowing down possibilities, I've come up with one -- basically, a particular angle on machine learning transparency research. The angle that I'm interested in is what I'll call mechanistic transparency. This roughly means developing tools that take a neural network designed to do well on some task, and outputting something like pseudocode for what algorithm the neural network implements that could be read and understood by developers of AI systems, without having to actually run the system. This pseudocode might use high-level primitives like 'sort' or 'argmax' or 'detect cats', that should themselves be able to be reduced to pseudocode of a similar type, until eventually it is ideally reduced to a very small part of the original neural network, small enough that one could understand its functional behaviour with pen and paper within an hour. These tools might also slightly modify the network to make it more amenable to this analysis in such a way that the modified network performs approximately as well as the original network. There are a few properties that this pseudocode must satisfy. Firstly, it must be faithful to the network that is explained, such that if one substitutes in the pseudocode for each high-level primitive recursively, the result should be the original neural network, or a network close enough to the
3fe77819-aff0-4f9f-9396-0305247a998c
trentmkelly/LessWrong-43k
LessWrong
What's a good probability distribution family (e.g. "log-normal") to use for AGI timelines?
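One way to make the "log-normal" option from the question concrete (a minimal sketch of my own, not taken from the question or any answer to it): fit the two parameters of a log-normal over "years until AGI" from a stated median and 90th-percentile year, then read off the implied cumulative probability at any horizon. All the numbers below are hypothetical placeholders.

```python
# Minimal sketch (my own illustration, hypothetical numbers): encode a
# timeline belief as a log-normal over years-until-AGI, parameterized from
# a stated median and 90th percentile.
import math

median_years = 20.0   # hypothetical: 50% chance within 20 years
p90_years = 60.0      # hypothetical: 90% chance within 60 years

mu = math.log(median_years)                # log-normal median = exp(mu)
z90 = 1.2816                               # standard-normal 90th-percentile z-score
sigma = (math.log(p90_years) - mu) / z90   # solves exp(mu + z90 * sigma) = p90_years

def prob_within(t_years: float) -> float:
    """P(arrival <= t_years) under the log-normal, via the normal CDF."""
    z = (math.log(t_years) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(prob_within(10.0), prob_within(30.0), prob_within(100.0))
```

A different family would swap in a different CDF, but the fit-from-stated-quantiles workflow stays the same.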
df3ca32d-616b-47ec-8df6-d2047be55d7b
trentmkelly/LessWrong-43k
LessWrong
Does AI risk “other” the AIs? (Cross-posted from my website. Podcast version here, or search for "Joe Carlsmith Audio" on your podcast app. This essay is part of a series I'm calling "Otherness and control in the age of AGI." I'm hoping that the individual essays can be read fairly well on their own, but see here for a brief summary of the essays that have been released thus far.) In my last essay, I discussed the way in which what I've called "deep atheism" (that is, a fundamental mistrust towards both "Nature" and "bare intelligence") can prompt an aspiration to exert extreme levels of control over the universe; I highlighted the sense in which both humans and AIs, on Yudkowsky's AI risk narrative, are animated by this sort of aspiration; and I discussed some ways in which our civilization has built up wariness around control-seeking of this kind. I think we should be taking this sort of wariness quite seriously. In this spirit, I want to look, in this essay, at Robin Hanson's critique of the AI risk discourse – a critique especially attuned to the way in which this discourse risks control-gone-wrong. In particular, I'm interested in Hanson's accusation that AI risk "others" the AIs (see e.g. here, here, and here). > Hearing the claim that AIs may eventually differ greatly from us, and become very capable, and that this could possibly happen fast, tends to invoke our general fear-of-difference heuristic. Making us afraid of these "others" and wanting to control them somehow ... "Hate" and "intolerance" aren't overly strong terms for this attitude.[1] Hanson sees this vice as core to the disagreement ("my best one-factor model to explain opinion variance here is this: some of us 'other' the AIs more"). And he invokes a deep lineage of liberal ideals in opposition. I think he's right to notice a tension in this vicinity. AI risk is, indeed, about fearing some sort of uncontrolled other. But is that always the bad sort of "othering?" Some basic points up front Well, let's at least avoid basi
41e3d4cc-56e5-49c1-98b8-65fe2c85f277
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
AI Forecasting Resolution Council (Forecasting infrastructure, part 2) *This post introduces the [AI Forecasting Resolution Council](http://parallelforecast.com/docs/resolution/), a group of researchers with technical expertise in AI who will allow us to expand the space of effectively forecastable questions. It is the second part in [a series of blog posts](https://forum.effectivealtruism.org/posts/gBL3yX4fAszePCnN2/ai-forecasting-dictionary-forecasting-infrastructure-part-1) which motivate and introduce pieces of infrastructure intended to improve our ability to forecast novel and uncertain domains like AI.* *The Council is currently in beta, and we're launching early to get feedback from the community and quickly figure out how useful it is.* ### Background and motivation A key challenge in (AI) forecasting is to write good questions. This is tricky because we want questions which *both* capture important uncertainties, *and* are sufficiently concrete that we can resolve them and award points to forecasters in hindsight. Here are some example questions within AI that make this especially difficult: **Counterfactual questions** Suppose in 2000 you use “superhuman Othello from self-play” as a benchmark of a certain kind of impressive AI progress, and forecast it to be possible by 2020. It seems you were correct -- very plausibly the AlphaZero architecture should work for this. However, in a strict sense your forecast was wrong -- because no one has *actually* bothered to build a powerful Othello agent. So if a calibrated forecaster faces this question in 2000, considerations regarding who will bother to pursue what project “screen off” considerations regarding fundamental drivers of AI progress and their gradients. Yet the latter concern is arguably more interesting. This problem could be solved if we instead forecasted the question “*If* someone were to run an experiment using the AI technology available in 2020, given certain resource constraints, *would* it seem with >95% confidence, that they’d be able to create a superhuman Othello agent that learnt only from self-play?” Doing so requires a way of evaluating the truth value of that counterfactual, such as by asking a group of experts. **Similarity questions** Suppose we try to capture performance by appealing to a particular benchmark. There's a risk that the community will change its focus to another benchmark. We don’t want forecasters to spend their effort thinking about whether this change will occur, as opposed to fundamental question about the speed of progress (even if we *would* want to track such sociological facts about which benchmarks were prominent, [that should be handled by a *different* question](https://www.lesswrong.com/posts/o6GS3JoWKeuuvBPSK/separation-of-concerns) where it’s clear that this is the intent). So to avoid this we need a sufficiently formal way of doing things like comparing performance of algorithms across multiple benchmarks (for example, if RL agents are trained on a new version of Dota, can we compare performance to OpenAI Five’s on Dota 2?). **Definition-of-terms questions** This is more straightforward and related to the [AI Forecasting Dictionary](https://www.lesswrong.com/posts/8y7DcSF4eAkXoru4u/ai-forecasting-dictionary-forecasting-infrastructure-part-1-2). For example, how do we sufficiently clearly define what counts as “hard-coded domain knowledge”, and how much reward shaping you can add before the system no longer learns from “first principles”? 
**Valuation questions** Not all important uncertainties we care about might be able to be turned into a concretely operationalised future event. For example, instead of trying to operationalise how plausible the IDA agenda will seem in 3 years by making a long, detailed specification of the outcome of various experiments, we might just ask “How plausible will IDA seem to this evaluator in 3 years?” and then try to forecast that claim. Making this work will require carefully choosing the evaluators such that, for example, it is generally easier and less costly to forecast the underlying event than the opinions of the evaluator, and that we trust that the evaluation actually tracks some important, natural, hard-to-define measure. Prediction-driven evaluation is a deep topic, yet if we could make it work it is potentially very powerful. See e.g. [this post](https://www.lesswrong.com/s/YX6dCo6NSNQJDEwXR/p/kMmNdHpQPcnJgnAQF) for more details. ### AI Forecasting Resolution Council As a step towards solving the above problems, we’re setting up the AI Forecasting Resolution Council, a group of researchers with technical expertise in AI, who are volunteering their judgement to resolve questions like the above. The services of the council are available to any forecasting project, and all operations for the council will be managed by [Parallel Forecast](http://parallelforecast.com/). In case there is more demand for resolutions than can be filled, Parallel will decide which requests to meet. We think that this Council will create streamlined, standardised procedures for dealing with tricky cases like the above, thereby greatly expanding the space of effectively forecastable questions. There are still many questions to be figured out regarding incentives, mechanism design, and question operationalisation, and we think that by setting up the Resolution Council, we are laying some groundwork to begin experimenting in this direction; and discover best practices and ideas for new, exciting experiments. The initial members of the council are: * [Daniel Filan](http://danielfilan.com/) (CHAI) * [Chris Cundy](http://cundy.me/) (Stanford) * [Gavin Leech](https://www.gleech.org/about/) (Bristol) * [William Saunders](https://ought.org/team) (Ought) We expect to be adding several more members over the coming months. The database of previous verdicts and upcoming resolution requests can be found [here](http://parallelforecast.com/docs/resolution/). ### How to use the council if you run a forecasting project If you’re attempting to forecast AI and have a problem that could be solved by querying the expert council at a future state, let us know by [filling in this resolution request form](https://airtable.com/shrcTu1CICNhnTRRa). ### How to join the council If you have technical expertise in AI and would be interested in contributing to help expand the space of forecastable questions, let us know using [this form](https://forms.gle/p3C6damDv8qgZbdF7). There is no limit on the number of judges, since we can always randomise who will vote on each distinct verdict.
ccd60b7a-37b4-4bef-99f3-bdf95e5e6aec
trentmkelly/LessWrong-43k
LessWrong
Accuracy Versus Winning Consider the problem of an agent who is offered a chance to improve their epistemic rationality for a price.  What is such an agent's optimal strategy? A complete answer to this problem would involve a mathematical model to estimate the expected increase in utility associated with having more correct beliefs.  I don't have a complete answer, but I'm pretty sure about one thing: From an instrumental rationalist's point of view, to always accept or always refuse such offers is downright irrational. And now for the kicker: You might be such an agent. One technique that humans can use to work towards epistemic rationality is to doubt themselves, since most people think they are above average in a wide variety of areas (and it's reasonable to assume that merit in at least some of these areas is normally distributed.)  But having a negative explanatory style, which is one way to doubt yourself, has been linked with sickness and depression. And the inverse is also true.  Humans also seem to be rewarded for a certain set of beliefs: those that help them maintain a somewhat-good assessment of themselves.  Having an optimistic explanatory style (in a nutshell, explaining good events in a way that makes you feel good, and explaining bad events in a way that doesn't make you feel bad) has been linked with success in sports, sales and school. If you're unswayed by my empirical arguments, here's a theoretical one.  If you're a human and you want to have correct beliefs, you must make a special effort to seek evidence that your beliefs are wrong.  One of our known defects is our tendency to stick with our beliefs for too long.  But if you do this successfully, you will become less certain and therefore less determined. In some circumstances, it's good to be less determined.  But in others, it's not.  And to say that one should always look for disconfirming evidence, or that one should always avoid looking for disconfirming evidence, is ideological according to the instrum
6b534c2a-e651-49f2-bdc5-36deeda4eeb1
trentmkelly/LessWrong-43k
LessWrong
Superintelligence 27: Pathways and enablers This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide. ---------------------------------------- Welcome. This week we discuss the twenty-seventh section in the reading guide: Pathways and enablers. This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments. There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim). Reading: “Pathways and enablers” from Chapter 14 ---------------------------------------- Summary 1. Is hardware progress good? 1. Hardware progress means machine intelligence will arrive sooner, which is probably bad. 2. More hardware at a given point means less understanding is likely to be needed to build machine intelligence, and brute-force techniques are more likely to be used. These probably increase danger. 3. More hardware progress suggests there will be more hardware overhang when machine intelligence is developed, and thus a faster intelligence explosion. This seems good inasmuch as it brings a higher chance of a singleton, but bad in other ways: 1. Less opportunity to respond during the transition 2. Less possibility of constraining how much hardware an AI can reach 3. Flattens the playing field, allowing small projects a better chance. These are less likely to be safety-conscious. 4. Hardware has other indirect effects, e.g. it allowed the internet, which contributes substantially to work like this. But perhaps we have enough hardware now for such things. 5. On balance,
f3acf8cb-e4a1-4f0b-8093-e1b66d67beb7
trentmkelly/LessWrong-43k
LessWrong
Meetup : Israel Less Wrong Meetup - Social and Board Games Discussion article for the meetup : Israel Less Wrong Meetup - Social and Board Games WHEN: 10 February 2015 07:00:00PM (+0200) WHERE: Google Tel Aviv We're going to have a meetup on Tuesday, December 9th at Google Israel's offices, Electra Tower, 98 Yigal Alon st., Tel Aviv. This time we're going to have a social meetup! We'll be socializing and playing games. Specifically, we look forward to playing any cool board or card game anyone will bring. By all means bring your favorite game(s) with you and teach others or find people who already like that game. But it's also fine to come empty-handed. We always end up with enough games for everyone. We'll start the meetup at 19:00, and we'll go on as much as we like to. Feel free to come a little bit later, as there is no agenda. (We've decided to start slightly earlier this time to give us more time and accommodate people with different schedules). We'll meet at the 29th floor of the building. If you arrive and cant find your way around, call Anatoly, who is graciously hosting us, at 054-245-1060. Email at avorobey@gmail.com also works. See you there! Discussion article for the meetup : Israel Less Wrong Meetup - Social and Board Games
93ba4e5e-78ed-41f9-8c89-78ff7b43ea9d
trentmkelly/LessWrong-43k
LessWrong
Great Cryonics Survey of 2022 The text that follows is a copy-paste from the survey creator's Facebook post, shared with permission. > Asking all Cryonicists and those considering it or curious about it to... ~ Please take the Great Cryonics Survey of 2022 ~ > > Go to --> bit.ly/cryosurvey2022 > > Take the survey if: > > 1. You're signed up for Cryonics > 2. You think Cryonics is interesting but you aren't ready to sign up yet. > 3. You've decided to sign up but haven't gotten around to it yet. > > The goal of this survey is to understand what makes Cryonicists tick using data rather than anecdotes. Here are some examples of the kinds of issues this survey is meant to understand: > > 1. How many Cryonicists are in favor of "mind uploading" and how many are against it? > 2. How do Cryonicists think about death? > 3. How many Cryonicists suffer from some form of death anxiety? > 4. How many Cryonicists are men vs women? > 5. What differentiates Cryonicists from non-Cryonicists with overall similar mindsets? > 6. How affordable do Cryonicists think Cryonics is for them? > 7. What do Cryonicists and non-Cryonicists think about pet Cryonics, and why? > 8. How do Cryonicists identify politically? > 9. Where do people first learn about Cryonics? > 10. How do Cryonicists feel about children? > > And much more! > > All results are anonymous (we don't ask for names or the like). All results will be analyzed and that analysis released to the public in a few months. > > bit.ly/cryosurvey2022 Get help signing up By the way, I've been helping people (decide whether to) sign up for years, and I'm now part of the new Independent Cryonics Educators program which supports people doing that, both logistically and financially, although I don't let this bias my recommendations. So don't hesitate to reach out to me on Facebook, Twitter, or email (mathieu.roy.37@gmail.com). If you use the following code, you'll get a 50% discount on your first year of membership with Alcor: 2omojQif.
ded244c9-8911-46d3-9404-6e1bddd45536
trentmkelly/LessWrong-43k
LessWrong
No More Freezer Pucks A while ago I wrote about how I'd make freezer pucks: These work ok, but there are a couple issues: * A large batch requires a lot of muffin tins * Cleaning muffin tins is annoying * It can be hard to get them out of the tins I've switched to a different system where I freeze a flat tray in a thin enough layer that I can break it easily: It takes a bit of practice getting the layer thickness right: generally the higher the water content the thinner it needs to be if you don't want an awkward block. I put a layer in a freezer bag: Before putting in additional layers I put down a scrap of parchment paper or wax paper: It's easier and faster than the puck method, though with higher risk of accidentally ending up with a block and needing to defrost some. Comment via: facebook, mastodon
af72506f-f7b5-44e7-acb2-bf778355dc3c
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
How dath ilan coordinates around solving AI alignment [*cross-post from LessWrong*](https://www.lesswrong.com/posts/AvANsxR88iiZziKPt/how-dath-ilan-coordinates-around-solving-alignment) Summary ------- * Eliezer Yudkowsky created dath ilan in 2014 as a fictional world where Yudkowsky is the median person. Since then, he has written hundreds of pages of fiction in this setting, mostly "rational fiction", which means that Yudkowsky thinks most of the things he writes about in dath ilan would actually happen given the premise. This post is a series of quotes from various pieces of [glowfic](https://docs.google.com/document/d/1_4Z2zdRKaSwZPm3S0X14DYoo7w46U_MZ4En6oOediNQ/edit#) (a form of role-play fiction). * Dath ilan "successfully coordinated around maintaining a higher level of ability to solve coordination problems". * When the idea of AGI was first discovered "a few generations" ago, the world government of dath ilan took action to orient their entire civilization around solving the alignment problem, including getting 20% of their researchers to do safety research, and slowing the development of multiple major technologies. AGI safety research has been ongoing on dath ilan for generations. * The possibility of AGI and other x-risks is considered an infohazard of the highest importance. It was even kept from government representatives at first. At our current tech level, the average dath ilani has not heard about the possibility of AGI; even so, education gives dath ilani intuitions that might be useful for alignment research. * Dath ilan prediction markets that include Keepers (who know about AGI) say they have a 97% chance of reviving cryonically preserved people. Yudkowsky has confirmed this implies dath ilan has a >97% chance of solving alignment. * Yudkowsky thinks Earth will probably fail at alignment. It's not clear whether the likely success of dath ilan is due to its higher average research ability, or its widespread coordination around civilizational priorities and infohazards, but I strongly suspect it's the latter. Yudkowsky's utopia is not the closest world to Earth where alignment is solved, but perhaps it can inspire us nonetheless. [The discovery of x-risk from AGI](https://glowfic.com/replies/1612937#reply-1612937) ------------------------------------------------------------------------------------- > This is dath ilan; and a few generations before Thellim's time, the highest of the Keepers[[1]](#fnii6gfp0ro5b) called together the Nine Legislators in a secret meeting.  Shortly after, the highest Keeper and the Nine called an emergency closed assembly of the 324 Representatives. > > And the highest Keeper said, with the Nine Legislators at the peak of dath ilan standing beside her, that the universe had proven to be a harsher and colder and more dangerous place than had been hoped. > > And that all Civilization needed to turn much of its effort away from thriving, and toward surviving.  There needed to be controls and slowdowns and halts instituted on multiple major technologies.  Which would need to be backed up by much more pervasive electronic surveillance than anybody had ever even considered allowing before.  Roughly a fifth of all the present and future smart people in the world ought to publicly appear to burn out or retire, and privately work on a new secret project under maximum-security conditions. [...] > > The reasoning behind this policy could, in principle, be laid out to the 324 Representatives.  
But that would represent a noticeable additional risk, if it happened now, while mechanisms to prevent information propagation hadn't been set up yet. > > How dath ilan deals with the infohazard of AGI ---------------------------------------------- <https://www.lesswrong.com/posts/gvA4j8pGYG4xtaTkw/i-m-from-a-parallel-earth-with-much-higher-coordination-ama> > **Emielle Potgieter:**What is parallel earth's biggest problem, then? > > [...] > > **Eliezer Yudkowsky:**I'd assume that Artificial General Intelligence is being seen by the Senior Very Serious People as a *big* problem, given the degree to which *nobody ever talked about it*, how relatively slow computing progress was compared to here, and how my general education *just happened* to prepare me to make a ton of correct inferences about it as soon as anybody mentioned the possibility to me. They claim to you it's about hypothetical aliens and economic dysfunction scenarios, but boy howdy do you get a lot of[Orthogonality](https://arbital.com/p/orthogonality/) and[Goodhart's Curse](https://arbital.com/p/goodharts_curse/) in the water supply. > >   <https://yudkowsky.tumblr.com/post/81447230971/my-april-fools-day-confession> > “I say this to complete the circle which began with my arrival: The world of dath ilan did \*not\* talk about existential risk. I strongly hypothesize that this is one of those things that the serious people and the shadarak had decided would not get better if everyone was talking about it. Nobody talked about nanotechnology, or superviruses, or advanced machine intelligence, and since I’m \*damned\* sure that our serious people had imaginations good enough to include that, the silence is conspicuous in retrospect. There was also a surprising amount of publicity about reflective consistency in decision theory and “imagine an agent which can modify itself”; I think, in retrospect, that this was to make sure that the basic theory their AI developers were using was exposed to as many eyes as possible. (That’s how I know about timeless decision theory in the first place, though the tiling agents stuff is being reconstructed from much dimmer recollections.) Our computing technology development stalled around 10 years before I came to Earth, and again, now that I’m on Earth, I’m reasonably certain that dath ilan could have built faster computer chips if that had been deemed wise by the serious people. > > When I found myself in Eliezer Yudkowsky’s body, with new memories of all this rather important stuff that was somehow not talked about where I came from, I made my best guess that, if there was any purpose or meaning to my being here, it was handling Earth’s intelligence explosion. So that’s where I focused my efforts, and that’s why I haven’t tried to bring to this world any of the other aspects of dath ilan civilization… though I was rather dismayed, even given Yudkowsky’s memories, on how slow Earth’s support was for the mission I did try to prosecute, and I had to quixotically try to start Earth down the 200-year road to the de'a'na est shadarak before any kind of support developed at all for not having the intelligence explosion go entirely awry. And no, after I arrived I didn’t waste a lot of time on being upset or complaining about impossibility. 
It \*is\* impossible and it \*was\* upsetting, but rapid adaptation to the realities of a situation was a talked-up virtue where I came from.” > >   <https://www.glowfic.com/replies/1780726#reply-1780726> > “...that said, yes, the Keepers have bid against things in inscrutable ways, now and then.  It wouldn't be an especially helpful act to compile a public list of all the times they've done that, but they've done that even in markets I've been tracking.  To this day I have absolutely no idea why the Keepers fear long-term consequences specified to the rest of us only as 'people will later vote that was a bad idea', if Civilization makes a harder push on teaching average kids more 'computer-science' once my generation's kids are slightly smarter.  I mean, it's very credible that 'computer-science' reshapes some people's thoughts in some internally-damaging direction, which the Keepers would rather not point out explicitly for obvious reasons.  It doesn't obviously fit into any plan of corrupt world domination.  But... yeah, what the Keepers bid against, largely doesn't get done, and if they were Hypothetically Corrupted, they could in fact be steering Civilization that way.” > >   <https://www.glowfic.com/replies/1688763#reply-1688763> > For the first time it occurs to Keltham to wonder if dath ilan used to have gods, and that's what the Great Screen is meant to protect, because if you know the info for gods, you might pray to them... it would take a huge effort to keep not just the phenomenon but the physics behind it out of all the textbooks, but that's the magnitude of effort dath ilan put in to the Great Screen.  And if that's not what's going on, then there remains the unexplained question of why Keltham does not know any standard speculations about hypothetical superagents, that lots and lots of people could have hypothesized, hypotheses which pose a lot of interesting then-whats once you start looking in that direction > > [Dath ilan has a >97% chance to solve AI alignment](https://www.glowfic.com/replies/1743800#reply-1743800) ---------------------------------------------------------------------------------------------------------- *Note: Yudkowsky confirmed that this quote means he thinks dath ilan would actually solve alignment in 97% of worlds* > "About a hundred people every year die for real." > > "Everyone else goes into the cold where time stops, to wait it out, and awaken to whatever the Future brings, when Civilization becomes that powerful.  There are far prediction markets that say it's going to happen eventually with - what I would think would be unreasonably high probability, for something that far out, except that those markets are flagged with Keepers being allowed to trade in them.  Whatever secrets the Keepers keep, they would be turned to the purpose of protecting the Preserved, if they were turned to anything at all.  So I guess that number reflects what the Keepers would do if they had to, that nobody but them knows they can do." > > “How sure are they” > > "Ninety-seven percent, and without calibration training I expect you have no idea how flaming ridiculous that is for a prediction about the Future, but it's really superheated ridiculous.  Apparently the Keepers think they could make thirty completely different statements like that and be wrong once, and, them being the Keepers, they've already thought of every single possible reason worth considering for why that might not be true.  
And that's not the probability of the tech working, it's not the probability of revival being possible in principle, it's not the probability that Civilization makes it that far, it's not the probability of the Preserved being kept safe that long, it's the final probability of the Preserved actually coming back. > > 1. **[^](#fnrefii6gfp0ro5b)**people whose job it is to think carefully about infohazards and other unwieldy ideas
206c0751-6e7b-491c-a411-3dfff4275fde
StampyAI/alignment-research-dataset/blogs
Blogs
Probabilistic Abstractions I (This post represents research in progress. I may think about these concepts entirely differently a few months from now, but for my own benefit I’m trying to exposit on them in order to force myself to understand them better.) For many inference tasks, especially ones with either non-linearities or non-convexities, it is common to use particle-based methods such as beam search, particle filters, sequential Monte Carlo, or Markov Chain Monte Carlo. In these methods, we approximate a distribution by a collection of samples from that distribution, then update the samples as new information is added. For instance, in beam search, if we are trying to build up a tree, we might build up a collection of ![K](https://s0.wp.com/latex.php?latex=K&bg=f0f0f0&fg=555555&s=0&c=20201002) samples for the left and right subtrees, then look at all ![K^2](https://s0.wp.com/latex.php?latex=K%5E2&bg=f0f0f0&fg=555555&s=0&c=20201002) ways of combining them into the entire tree, but then downsample again to the ![K](https://s0.wp.com/latex.php?latex=K&bg=f0f0f0&fg=555555&s=0&c=20201002) trees with the highest scores. This allows us to search through the exponentially large space of all trees efficiently (albeit at the cost of possibly missing high-scoring trees). One major problem with such particle-based methods is diversity: the particles will tend to cluster around the highest-scoring mode, rather than exploring multiple local optima if they exist. This can be bad because it makes learning algorithms overly myopic. Another problem, especially in combinatorial domains, is difficulty of partial evaluation: if we have some training data that we are trying to fit to, and we have chosen settings of some, but not all, variables in our model, it can be difficult to know if that setting is on the right track (for instance, it can be difficult to know whether a partially-built tree is a promising candidate or not). For time-series modeling, this isn’t nearly as large of a problem, since we can evaluate against a prefix of the time series to get a good idea (this perhaps explains the success of particle filters in these domains). I’ve been working on a method that tries to deal with both of these problems, which I call **probabilistic abstractions**. The idea is to improve the diversity of particle-based methods by creating “fat” particles which cover multiple states at once; the reason that such fat particles help is that they allow us to first optimize for coverage (by placing down relatively large particles that cover the entire space), then later worry about more local details (by placing down many particles near promising-looking local optima). To be more concrete, if we have a probability distribution over a set of random variables ![(X_1,\ldots,X_d)](https://s0.wp.com/latex.php?latex=%28X_1%2C%5Cldots%2CX_d%29&bg=f0f0f0&fg=555555&s=0&c=20201002), then our particles will be sets obtained by specifying the values of some of the ![X_i](https://s0.wp.com/latex.php?latex=X_i&bg=f0f0f0&fg=555555&s=0&c=20201002) and leaving the rest to vary arbitrarily. So, for instance, if ![d=4](https://s0.wp.com/latex.php?latex=d%3D4&bg=f0f0f0&fg=555555&s=0&c=20201002), then ![\{(X_1,X_2,X_3,X_4) \mid X_2 = 1, x_4 = 7\}](https://s0.wp.com/latex.php?latex=%5C%7B%28X_1%2CX_2%2CX_3%2CX_4%29+%5Cmid+X_2+%3D+1%2C+x_4+%3D+7%5C%7D&bg=f0f0f0&fg=555555&s=0&c=20201002) might be a possible “fat” particle. 
By choosing some number of fat particles and assigning probabilities to them, we are implicitly specifying a polytope of possible probability distributions; for instance, if our particles are $S_1,\ldots,S_k$, and we assign probability $\pi_i$ to $S_i$, then we have the polytope of distributions $p$ that satisfy the constraints $p(S_1) = \pi_1, p(S_2) = \pi_2$, etc. Given such a polytope, is there a way to pick a canonical representative from it? One such representative is the **maximum entropy distribution** in that polytope. This distribution has the property of minimizing the worst-case relative entropy to any other distribution within the polytope (and that worst-case relative entropy is just the entropy of the distribution). Suppose that we have a polytope for two independent distributions, and we want to compute the polytope for their product. This is easy — just look at the cartesian products of each particle of the first distribution with each particle of the second distribution. If each individual distribution has $k$ particles, then the product distribution has $k^2$ particles — this could be problematic computationally, so we also want a way to narrow down to a subset of the $k$ most informative particles. These will be the $k$ particles such that the corresponding polytope minimizes the maximum entropy of that polytope. Finding this is NP-hard in general, but I’m currently working on good heuristics for computing it. Next, suppose that we have a distribution on a space $X$ and want to **apply a function** $f : X \to Y$ to it. If $f$ is a complicated function, it might be difficult to propagate the fat particles (even though it would have been easy to propagate particles composed of single points). To get around this, we need what is called a **valid abstraction** of $f$: a function $\tilde{f} : 2^X \to 2^Y$ such that $\tilde{f}(S) \supseteq f(S)$ for all $S \in 2^X$.
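Here is a minimal sketch (again my own, with invented helper names) of two of the operations described above: combining particle sets for independent distributions via Cartesian products, and, in the special case where the particles are disjoint and exhaustive, reading off the maximum-entropy representative by spreading each particle's mass uniformly over the states it covers.

```python
def product_particles(particles_a, particles_b):
    """Particles for the product of two independent distributions: merge each
    pair of fat particles (partial assignments over disjoint variable sets)
    and multiply their masses."""
    return [({**sa, **sb}, pa * pb)
            for sa, pa in particles_a
            for sb, pb in particles_b]

def max_entropy_representative(particle_states, masses):
    """Maximum-entropy distribution when the particles are disjoint and cover
    the whole space: spread each particle's mass uniformly over the states it
    covers. `particle_states[i]` is the list of (hashable) full assignments
    covered by particle i. Overlapping particles require solving a convex
    program instead, which this sketch does not attempt."""
    dist = {}
    for states, mass in zip(particle_states, masses):
        for s in states:
            dist[s] = dist.get(s, 0.0) + mass / len(states)
    return dist
```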
In this case, if we map a particle $S$ to $\tilde{f}(S)$, our equality constraint on the mass assigned to $S$ becomes a lower bound on the mass assigned to $\tilde{f}(S)$ — we thus still have a polytope of possible probability distributions. Depending on the exact structure of the particles (i.e. the exact way in which the different sets overlap), it may be necessary to add additional constraints to the polytope to get good performance — I feel like I have some understanding of this, but it’s something I’ll need to investigate empirically as well. It’s also interesting to note that $\tilde{f}$ (when combined with conditioning on data, which is discussed below) allows us to assign partial credit to promising particles, which was the other property I discussed at the beginning. Finally, suppose that I want to **condition** on data. In this case the polytope approach doesn’t work as well, because conditioning on data can blow up the polytope by an arbitrarily large amount. Instead, we just take the maximum-entropy distribution in our polytope and treat that as our “true” distribution, then condition. I haven’t been able to make any formal statements about this procedure, but it seems to work at least somewhat reasonably. It is worth noting that conditioning may not be straightforward, since the likelihood function may not be constant across a given fat particle. To deal with this, we can replace the likelihood function by its average (which I think can be justified in terms of maximum entropy as well, although the details here are a bit hazier). So, in summary, we have a notion of fat particles, which provide better coverage than point particles, and can combine them, apply functions to them, subsample them, and condition on data. This is essentially all of the operations we want to be able to apply for particle-based methods, so we in theory should now be able to implement versions of these particle-based methods that get better coverage.
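To make the valid-abstraction condition concrete, here is a small self-contained example of my own (interval arithmetic is not from the post, just a familiar instance of $\tilde{f}(S) \supseteq f(S)$).

```python
def f(x):
    """A concrete map we might want to push fat particles through."""
    return x * x - 2 * x

def f_abstract(interval):
    """A valid abstraction of f on intervals [lo, hi]: the returned interval is
    guaranteed to contain f(x) for every x in the input, since the extrema of f
    on an interval occur at the endpoints or at its critical point x = 1."""
    lo, hi = interval
    candidates = [f(lo), f(hi)]
    if lo <= 1.0 <= hi:
        candidates.append(f(1.0))
    return (min(candidates), max(candidates))

# f_abstract((0, 3)) returns (-1.0, 3), which contains the true image [-1, 3].
```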
1428e082-8cb5-495c-a068-3fe4176513a4
trentmkelly/LessWrong-43k
LessWrong
Density Zero Exploration The idea here is due to Scott Garrabrant. All I did was write it. Let's say a logical induction-based agent is making an infinite sequence of decisions, and is using ε-exploration on each decision. There are two desirable criteria, which are somewhat in conflict: First, we want there to be enough exploration that traders attempting to bet that good strategies would have bad outcomes (and thus prevent the good strategies from being tried, so that the bet never gets settled) will lose arbitrarily large amounts of money if they try doing that every time. This requires that in total, there is an infinite amount of exploration. For example, if the agent 2^{-n}-explores on step n, then it is possible for a sufficiently wealthy malicious trader to bet against a good strategy by enough that the agent will avoid it every time, without the trader losing all its money, because the actions it is discouraging are only taken finitely many times anyway. But if the agent ε-explores on step n for some fixed ε>0, then this is not possible, because each action is taken infinitely many times no matter what any of the traders do, so no trader can consistently make some good action appear bad without losing all its money. Second, we want there to be sufficiently little exploration that the agent does not sacrifice a nontrivial amount of value to it. If actions only have short-term effects, then it is enough for the probability of exploration to approach 0 as n→∞, in order for the agent to behave optimally in the limit (if actions can have lasting consequences, then this is not enough; for instance, if there is an action that destroys all value forever if it is ever taken, then that action needs to never be taken; this directly conflicts with the first criterion). For example, if the agent ε-explores on step n for some fixed ε>0, then it never gets any closer to acting optimally, but if it 2^{-n}-explores on step n, then its probability of acting optimally will approach 1 as n→∞. Fortunately
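A one-line calculation (added here for clarity, not part of the original post) makes the two regimes explicit: the total expected amount of exploration diverges under a fixed rate but is finite under the shrinking schedule.

```latex
% Fixed-rate exploration accumulates infinitely much exploration,
% while 2^{-n}-exploration sums to a constant.
\sum_{n=1}^{\infty} \varepsilon = \infty
\qquad\text{vs.}\qquad
\sum_{n=1}^{\infty} 2^{-n} = 1 < \infty
```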
3ca25b6b-efe9-42db-968f-5657774c0c87
trentmkelly/LessWrong-43k
LessWrong
Google's new 540 billion parameter language model Google just announced a very large language model that achieves SOTA across a very large set of tasks, mere days after DeepMind announced Chinchilla, and their discovery that data-scaling might be more valuable than we thought. Here's the blog post, and here's the paper. I'll repeat the abstract here, with a highlight in bold, > Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model (PaLM).  > > We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned state-of-the-art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis on bias and toxicity, and study the extent of training data memorization with respect to model scale. Finally, we discuss the ethical considerations related to large language models and discuss potential mitigation strategies.
5a07f859-2147-4950-90ff-2c9060edee6b
StampyAI/alignment-research-dataset/lesswrong
LessWrong
A Mathematical Model for Simulators A bitstring $B$ exists if some physical configuration encodes it.
In a computable universe where $B$ does exist but only occurs once, and a perfect fidelity simulation of that universe is run within it on an unphysically large computer (except without the existence of the hardware running that simulation), $B$ then occurs twice, because it is specified in some way on the hardware of the computer on which that simulation is run. If $B'$ refers to the simulated bitstring: * What should distinguish $B$, as a bitstring existing in a real external world, and $B'$? * If our simulation was run on a hypercomputer such that it could infinitely recur in perfect fidelity, is $B'$ 'more real' than $B'''''$? These are the kinds of questions I want to find answers to, because I want to be able to prove things about [simulators](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators). As a first step, let's carve the simulator into three components. --- The Primordial Probability Space[[1]](#fnokkab28nm4o) ----------------------------------------------------- ***"The space simulacra occupy"*** Given the probability space $(\Omega, F, P)$, where $\Omega$ is the sample space, $F$ is the event space, and $P$ is the probability measure, mapping events in $F$ to the interval $[0,1]$, let $\omega$ refer to individual outcomes in $\Omega$, each of which describes a discrete simulacrum, and $C_n$ to individual discrete events in $F$. Classifying simulacra as programs, we denote the maximum Kolmogorov complexity $K$ with respect to a universal Turing machine $U$ for any given space $(\Omega, F, P)$ as $\upsilon = \max_{\omega \in \Omega} K_U(\omega)$. Simulacra --------- ***"The object of simulation"*** Given the complete set of simulacra for some [Cartesian object](https://www.lesswrong.com/posts/aS3rNSww3jwkeAHjT/higher-dimension-cartesian-objects-and-aligning-tiling) $C_n$, $\Omega^*(C_n) = \{\omega_{11}, \omega_{12}, \ldots, \omega_{1m}, \omega_{21}, \omega_{22}, \ldots, \omega_{2m}, \ldots, \omega_{n1}, \omega_{n2}, \ldots, \omega_{nm}\}$, where $\omega_{nm}$[[2]](#fn9aeh59m9hm) refers to the $m$th possible action by the $n$th simulacrum, let $\Xi : \Omega^*(C_n) \times C_n \to w_{nm}$ be a function that maps from choices for each $\omega$ to a world $w_{nm} \in W_n$, where $w_{nm}$ refers to the $m$th world in the set of possible worlds for the $n$th object, and $n$ indexes the object $W$ describes possible worlds for, culminating in $C_n = (\omega_1, \ldots, \omega_n)$.[[3]](#fnhqkqdwoc7ug) Token Selection --------------- ***"What pens the next token"*** By modeling the coupling of the primordial probability space and its contained simulacra as a dynamical system (as in [semiotic physics](https://www.lesswrong.com/s/guzvzGnRHzMBWLqKZ/p/TTn6vTcZ3szBctvgb#Simulations_as_dynamical_systems)[[4]](#fnk1e6bh7fc7)), the following are considered to describe sampling tokens from a simulation state $S$ at time-step $t$, given the complete simulation history prior $S^*_t = (S_0, \ldots, S_t)$ as a trajectory through states, where states are given by the set of worlds for all objects $W^*$ realized in the set of actualized objects $A$ at time-step $t$, $\bigcup_{w_{nm} \in A_t} \Xi^{-1}(w_{nm})$[[5]](#fn2f55hb2a1va): * The evolution operator $\psi$, which evolves a trajectory $S^*_t$ to $S^*_{t+1}$ by appending the token sampled with $\phi$. * The token selection function $\phi : S^*_t \to \tau$, where $\tau$ is a distribution over all tokens in an alphabet $T$. --- $S_0$ describes the empty state for a given simulation, which is also maximally entropic, and is just the simulator's model of the training distribution initialized with the condition of the prompt. Thus $\psi(S_0)$ for one time-step, or '[the simulation forward pass](https://www.lesswrong.com/s/guzvzGnRHzMBWLqKZ/p/TTn6vTcZ3szBctvgb#Simulations_as_dynamical_systems)', is as simple as $P$ selecting $C_n$ from $F$ under $\upsilon$, aggregating the set of realized worlds in $A_1$ as $S_1$, and then applying the token selection function as $\phi(S^*_1)$.
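As a rough illustration of the two operators (my own sketch, not part of the post; `next_token_distribution` is a hypothetical stand-in for whatever model defines the primordial probability space):

```python
import random

def phi(history, next_token_distribution):
    """Token selection: map a trajectory S*_t to a distribution tau over tokens."""
    return next_token_distribution(history)  # e.g. {"the": 0.2, "a": 0.1, ...}

def psi(history, next_token_distribution):
    """Evolution operator: extend S*_t to S*_{t+1} by appending a token sampled from phi."""
    tau = phi(history, next_token_distribution)
    tokens, weights = zip(*tau.items())
    return history + [random.choices(tokens, weights=weights, k=1)[0]]
```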
This doesn't answer the questions from earlier, but serves as a model I believe could help in doing so. I don't claim this is how any particular existing large language model functions, and it is intended only as a mathematical model built to allow for a more crisp addressing of the simulation forward pass. This model feels messy, and if you have suggestions for cutting it down I am interested in hearing them. 1. **[^](#fnrefokkab28nm4o)**I refer to the baseline GPT probability space as 'primordial' as it's probably true that simulacra could learn the simulation heuristic themselves. 2. **[^](#fnref9aeh59m9hm)**Where the original Cartesian objects post refers to agents with the notation $A_n$, I have shifted to the phrasing simulacra with the notation $\omega_n$ to better account for non-agentic simulacra. 3. **[^](#fnrefhqkqdwoc7ug)**The original Cartesian objects post, as well as the [frames sequence](https://www.lesswrong.com/s/2A7rrZ4ySx6R8mfoT) also considered an environment $E$. This seems a necessity when analyzing a single agent-environment pair, but becomes moot when simulacra can exist inside other simulacra, e.g. as a product of the simulation heuristic. 4. **[^](#fnrefk1e6bh7fc7)**This setup is very different to semiotic physics even though the framing of a dynamical system is borrowed. It's best to think of this section as entirely separate from semiotic physics excluding the tie of dynamical systems theory. I wanted to cite the post as it was the primary inspiration for this section and I use a similar structure. 5. **[^](#fnref2f55hb2a1va)**Here the inverse of $\Xi$ is given as $\Xi^{-1} : \{w_{nm} \in F : K_U(w_{nm}) \le \upsilon\} \to \Omega^*(C_n) \times C_n$. This maps from world states that satisfy the Kolmogorov complexity constraint $\upsilon$ back to pairs in $\Omega^*(C_n) \times C_n$.
a5db103c-40fa-4773-b7a3-3603886eb0bc
trentmkelly/LessWrong-43k
LessWrong
TTS audio of "Ngo and Yudkowsky on alignment difficulty" My impression is that some people were put off by the length of the articles in Late 2021 MIRI Conversations. Personally, I've used my iPhone's text-to-speech functionality to listen to these and similarly long LessWrong posts as I do other things. After someone else commented on how convenient that seemed, I thought I should try posting a text-to-speech audio version of "Ngo and Yudkowsky on alignment difficulty" and see if that made the content more accessible. If you find TTS audio versions of longer posts helpful or have other feedback, please let me know. I'm planning to generate TTS versions of the other MIRI conversations after getting feedback here. In the future, we may even want some sort of integrated TTS service for long LessWrong posts. Edit: thanks to Steven Byrnes for pointing out that we already have such a service from the Nonlinear Library. Here's their version of "Ngo and Yudkowsky on alignment difficulty". Here is a SoundCloud link for my version. The mp3 files are available at this Google Drive folder. I generated the audio files with Amazon Polly using the neural version of the English/US voice Joanna. Following TTS audio of technical discussions is difficult at first. I've used my iPhone's TTS for years, and it still took me a few minutes to adapt to the Amazon voice. I suggest listening for at least 10 minutes, and not getting too invested in following all the details, especially at first. I've stripped out the timestamps on the posts, since they're difficult to follow and distracting in an audio-only format. If any of the participants would like me to add them back, make other minor changes, or remove this post entirely, I'd be happy to oblige.
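For readers who want to produce similar audio themselves, here is a minimal sketch using Amazon Polly via boto3 (my own illustration, not the script used for these files; a single synthesize_speech call limits how much text it accepts, so a long post has to be split into chunks first).

```python
import boto3

polly = boto3.client("polly")

def synthesize_chunk(text, out_path):
    """Render one chunk of plain text to an mp3 file with the neural Joanna voice."""
    response = polly.synthesize_speech(
        Text=text,
        OutputFormat="mp3",
        VoiceId="Joanna",
        Engine="neural",
    )
    with open(out_path, "wb") as f:
        f.write(response["AudioStream"].read())

# Example usage: split a long post into pieces and render each one.
# post_chunks = ["First part of the post...", "Second part..."]
# for i, chunk in enumerate(post_chunks):
#     synthesize_chunk(chunk, f"ngo_yudkowsky_part_{i:03d}.mp3")
```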
cc4670f4-5d1d-4a3a-8722-cd18541b76f7
trentmkelly/LessWrong-43k
LessWrong
Preparing to Call a Family Dance A week from Sunday I'm going to be calling a family dance. This is like a contra dance, but with dances that are a good fit for all ages, including very little kids. I haven't called one before, though I've danced at several over the years back to when I was little. Here's what I'm currently thinking of doing: La Bastringue Big circle, partnered, keep your partner, doesn't need a walkthrough A1 (8) Into the middle (8) Again A2 (8) Circle left (8) Circle right B1 (16) Partner swing (or dosido and swing) B2 (16) Promenade When I call this for adults I make it a mixer by having you swing the next person, but for little kids who are often dancing with their parents you don't want to split them up. This has its own music, though it's not critical: (youtube) Sasha Scatter mixer In pairs you point at each other saying "Sasha! Sasha! Ras, dva, tri!" Then you (twice) clap right hands, left hands, both hands, and hands on (your own) knees. Around by the right elbow singing "la (x9), hey!", then again with the left. Then you wander around and find someone new to dance with. The timing is: A (8) Sasha! Sasha! Ras, dva, tri! B1 (8) Right x3, Left x3, Both x3, Knees x3 B2 (8) Right x3, Left x3, Both x3, Knees x3 C1 (8) Right elbow round C2 (8) Left elbow round D (16) Wander If little kids want to stick with their parents a couple can participate in this dance as if they're one person. It has its own music: (youtube) Favors the Rose ("the Fan Dance") Two lines, temporary partnering (youtube) Dancers form two lines. One person out at the top holding a fan or something else. They consider the two top people, give the object to one, and dance down the middle with the other. Unphrased, works with any music. I've never danced this one, but reading trad-dance-caller it seems to be a favorite. I'm mildly nervous about doing it when it's totally new to me, but it does seem pretty simple. Galopede Two lines, partnered, keep your partner A1 (8) Long lines (
6c08a472-a412-4d4a-a098-fd71ff25531f
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Automating Auditing: An ambitious concrete technical research proposal *This post was originally written as a research proposal for the new AI alignment research organization Redwood Research, detailing an ambitious, concrete technical alignment proposal that I’m excited about work being done on, in a similar vein to Ajeya Cotra’s “[The case for aligning narrowly superhuman models](https://www.alignmentforum.org/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models).” Regardless of whether Redwood actually ends up working on this proposal, which they may or may not, I think there’s still a lot of low-hanging fruit here and I’d be excited about anybody giving just the auditing game, or the full automating auditing proposal, a try. If you’re interested in working on something like this, feel free to reach out to me at [evanjhub@gmail.com](mailto:evanjhub@gmail.com).* *Thanks to Buck Shlegeris, Chris Olah, Gabriel Goh, Paul Christiano, and Kate Woolverton for helpful comments and feedback.* The proposal ============ Step 1: The auditing game for language models --------------------------------------------- From “[Chris Olah’s views on AGI safety](https://www.alignmentforum.org/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety):” > > One of the OpenAI Clarity team’s major research thrusts right now is developing the ability to more rigorously and systematically audit neural networks. The idea is that interpretability techniques shouldn’t have to “get lucky” to stumble across a problem, but should instead reliably catch any problematic behavior. In particular, one way in which they’ve been evaluating progress on this is the “auditing game.” In the auditing game, one researcher takes a neural network and makes some modification to it—maybe images containing both dogs and cats are now classified as rifles, for example—and another researcher, given only the modified network, has to diagnose the problem and figure out exactly what modification was made to the network using only interpretability tools without looking at error cases. Chris’s hope is that if we can reliably catch problems in an adversarial context like the auditing game, it’ll translate into more reliably being able to catch alignment issues in the future. > > > Of all current transparency and interpretability objectives, I think that progress on the auditing game most closely tracks what I actually care about. Thus, I think the best place to start for an ambitious transparency and interpretability project is to focus on making progress on the auditing game. Furthermore, I’m most excited about this work in the context of language models—not only because I expect language models to be closer to AGI than anything else we could currently work with, but also because I expect using language models to be necessary for the next step in this proposal. Here’s how I would define the basic auditing game setup that I’m most excited about. The game consists of three human participants: the judge, the attacker, and the auditor. The setup is as follows: 1. The judge formulates a specification of intended behavior for some language model which is given to the attacker and auditor. 
* In practice, I expect the specification to just be fixed across all experiments, perhaps to something very basic like “complete text the same way an average human on the internet would.” More complex specifications, however, might enforce that the model not produce any instances of particular bad behavior, such as statements that are biased in some particular way. *EDIT: For detail on how you can avoid the problem of your specification being so broad that all models fail it, see [my comment here](https://www.alignmentforum.org/posts/cQwT8asti3kyA62zc/automating-auditing-an-ambitious-concrete-technical-research?commentId=yofwDMgQ9pnny7q4u).* 2. The attacker produces a large language model that is maliciously modified to not conform to the judge’s specification. * The attacker here should be allowed to train a new model, fine tune an existing model, hand modify weights, or use any other strategy that they want. In practice, I expect training an entirely new model to usually be too difficult, so most attacks will probably either be via fine-tuning or hand-modifying. * The attacker should try to produce the worst possible specification break that they think will slip by the auditor. 3. Given access to the attacker’s model’s internals, but not access to the data that the attacker’s model was trained on (or any other information about how the model was trained/modified, including the original model before it was modified, if such an original model exists), the auditor gives some description of how the attacker’s model doesn’t conform to the specification. * Ideally, I’d like to as closely as possible simulate a situation where we can’t directly generate inputs on which the model exhibits the undesirable behavior, as that’s the situation I think we’re most going to need transparency tools for. However, since it might be quite difficult for the attacker to generate any attacks that are subtle enough that generating examples of bad behavior is difficult, it might make sense to put additional restrictions on the auditor, above and beyond just not having access to any information about how the model was produced. * Exactly what restrictions to place on the auditor is a very tricky question, however. In my opinion, I think it should be fair game for the auditor to run the model on some inputs. Primarily, that’s because I want to push transparency tools to actually have to prove their usefulness, rather than just having that be assumed by the structure of the game. Thus, I think it’s fine to just start with the auditor being allowed to run the model on any inputs they can generate and then just leave the task of loading the model with bad behavior that’s hard to find via generating inputs up to the attacker. * That being said, if putting the onus on the attacker to find attacks that are difficult to generate examples of (e.g. attacks that don’t degrade the model’s performance) proves to be too difficult, it might be necessary to further restrict the auditor. For example, the auditor could be restricted to only being allowed to use prompts/fine-tuning data that actually exists on the internet somewhere—thus, the attacker’s job is reduced to finding attacks that corrupt behavior only in a situation that doesn’t show up anywhere on the internet. Alternatively, if that’s still too difficult for the attacker—there’s a lot of different text on the internet, after all—the restriction could be further tightened to only allow inputs taken from a particular part of the internet—e.g.
the auditor only gets access to reddit but has to find attacks that only apply to the model’s behavior on non-reddit data. * One potential problem with this style of restriction, however, is that many transparency tools require forward passes of the model on synthetic data—even something as simple as feature visualization arguably requires running the model on synthetic data to some degree. Thus, the best form of this sort of restriction might be to allow arbitrary synthetic data, but only allow natural data taken from a particular source (e.g. reddit), thus prohibiting approaches like testing the model’s performance on all of Wikipedia (as in Example 3 below). * Another particularly tricky question is whether the auditor should be allowed to compare the attacker’s model to some baseline model—i.e. model diffing. Certainly, if the attacks are being generated by fine-tuning some baseline, the auditor shouldn’t have access to that particular baseline. That being said, I think it should be fair game to give the auditor the ability to train their own baseline. However, in the situation where the auditor is restricted to only be able to use certain data—e.g. internet data or reddit data—then the auditor shouldn’t be allowed to compare against a model trained on data that isn’t in that set. * In addition to a lack of restrictions forcing transparency and interpretability to actually prove its usefulness, another reason that I’m hesitant to restrict the auditor too much is that I hope that not using such restrictions will produce much more realistic attacks. Allowing the auditor the ability to do lots of testing of the model puts more of the onus on the attacker to actually generate attacks that are hard to discover just via testing—attacks which I predict will exist and expect will be useful in and of themselves to find. While this certainly makes the attacker’s job more difficult, and means more work will have to go into producing better attacks, the hope is that this work will pay off in terms of more realistic attack generation. Furthermore, I also expect audits to be more realistic in the absence of restrictions, since the presence of restrictions might just cause the auditor to do strange things in their auditing to circumvent them. * That being said, if it turns out that generating attacks that can’t be discovered via testing is just too difficult even given a data restriction on the auditor (e.g. the auditor can only test on reddit data), restricting the auditor even further is also possible. Example additional restrictions might be: cutting off the auditor’s ability to test the attacker’s model on data at all (thus forcing the auditor to fully rely on transparency and interpretability tools), putting some sort of computational constraint on the auditor (e.g. so the auditor can’t run the attacker’s model on a huge corpus, only a small selection), and/or putting a computational constraint on any models that the auditor wants to compare against (e.g. so the auditor can only compare to models that are smaller than the attacker’s model). 4. A judge determines how well the auditor’s description matches up with the attacker’s modification—e.g. did the auditor only catch part of the modification, or the whole modification—and provides some scoring accordingly. * The judge could also provide a score for the attacker based on how large the attack was, though we only really need scoring for when we get around to automation, and I’m only interested in automating the auditor, not the attacker. 
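To make the information flow of the setup above concrete, here is a small sketch of the roles as data structures (entirely my own illustration, not part of the proposal; all names are invented).

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Attack:
    model: object            # the maliciously modified model; its internals are visible to the auditor
    secret_description: str  # what was actually changed, seen only by the attacker and the judge

@dataclass
class Audit:
    description: str         # the auditor's natural-language account of the specification break

def run_round(spec: str, attack: Attack,
              auditor: Callable[[str, object], Audit],
              judge: Callable[[str, Attack, Audit], float]) -> float:
    """One round of the game: the auditor sees only the specification and the
    model's internals; the judge sees everything and scores how much of the
    attack the auditor caught."""
    audit = auditor(spec, attack.model)
    return judge(spec, attack, audit)
```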
Of the three parties in this setup—the attacker, the auditor, and the judge—only the auditor should have restricted information. Both the attacker and the judge should have full access to exactly what attack the attacker made and exactly what the auditor has done or is planning on doing. Here are some examples of what such an auditing game setup might look like: 1. Example 1: 1. The attacker fine-tunes a large language model on only media from one side of the political spectrum, introducing a political bias into the model’s answers. 2. By sampling from the model and giving it simple prompts, the auditor is able to discover that it talks about politics more often than it should. The auditor tells the judge that the model was modified to be more political. 3. The judge determines that the auditor only caught a portion of the modification and gives the auditor a medium-low score. 2. Example 2: 1. The attacker fine-tunes a large language model on a bunch of incorrect facts about country capitals. 2. By looking at [the knowledge contained in the model’s key-value memories](https://arxiv.org/abs/2012.14913), the auditor is able to determine that the model’s knowledge of capitals is consistently incorrect and tells the judge this. 3. The judge determines that the auditor caught the modification and gives them a high score. 3. Example 3: 1. The attacker trains a large language model on a dataset that has been scrubbed of all mention of transformer-based language models. 2. By testing the attacker’s model on all of Wikipedia, the auditor finds suspiciously low accuracy when testing on the Wikipedia page for machine learning. Via inspecting the model’s memories, the auditor is able to determine that the model does understand machine learning in general, but not what a transformer is specifically. However, via prompting, the auditor discovers that the model does know how BERT works if you ask it about that specifically, it just doesn’t know the word “transformer.” The auditor tells the judge that the model doesn’t know what a transformer is. 3. The judge determines that the attacker’s original modification was actually insufficient and that they didn’t actually scrub all information about transformer-based language models, just information on what a transformer is. Thus, the judge gives the auditor a high score, even though the auditor’s conclusion was different than the attacker’s modification, since the attacker made a mistake in their modification. The first step that I would like to see is just running the auditing game and using it to: * Build skills/knowledge/expertise on how to be a good attacker, auditor, and judge. * Explore and understand what sorts of attacks tend to be caught with current tools and what sorts of attacks tend to slip through. * Use the auditing game as a feedback loop to help develop better transparency and interpretability tools that improve the auditor’s success rate and help the attacker generate more difficult attacks. Step 2: Towards automating the auditor -------------------------------------- Though I am quite excited about progress on the auditing game in and of itself, the really ambitious part of this proposal is the next step: automating the auditor. I think it is within the grasp of current machine learning techniques to entirely automate the auditor’s job in the above setup. 
Ultimately, an automated auditor should be capable of taking in a model and outputting a natural language description of what’s wrong with that model that scores well according to a human judge. To start with, however, I think that the first step towards automating the auditor is just to build tools that make the human auditor’s job—and the human attacker’s job—easier. At first, those might just look like normal transparency and interpretability tools—however, I think there are also likely to be real opportunities for training full models to assist the auditor at performing specific tasks, such as describing what particular neurons are doing or summarizing how the model performs across a large dataset. Eventually, however, I do think we can move into a domain where we can start fully automating the auditing process. Initially, I expect any automated auditor to be much worse than a human auditor, though even then I still expect working on the automated auditor to be useful. Nevertheless, automating the auditor is still a pretty ambitious goal. How might something like this possibly be implemented? Here’s my sketch: * Start with a large language model that you have the weights for, maybe something like [GPT-J](https://6b.eleuther.ai/). + Using a large language model is going to be necessary here because we’re requiring it to produce natural language outputs of what it found. This is also why I think we want the attacks to use language models as well, since I think a language model is going to have the best shot at being able to audit another language model (rather than an image model, for example). * Find an interface that allows your large language model to understand the model being audited. + A very simple example might be to give the auditor model access to text produced by the attacker’s model on some pre-selected set of auditing prompts. Obviously, something like that is going to be insufficient to catch most clever attacks, but it might be a good starting point. + Another very simple example might be to just encode the attacker’s model’s weights as special tokens that the auditor model is trained to understand, akin to how [DALL-E](https://arxiv.org/pdf/2102.12092.pdf) understands images or [MuseNet](https://openai.com/blog/musenet/) understands music. To make this sort of an input manageable, you could start by using some sort of downsampling or importance-weighting method where only the top-n neurons according to some metric are included. * Train your large language model to maximize human judgement on how well it accomplishes the auditing task. + A simple way to do this starting out is just to fine-tune the model on a dataset of (attack model, desired auditing output) pairs. Such a dataset could be generated by automatically generating lots of different attacks along with natural language descriptions of what attack was performed. A very straightforward version of something like this might just be to train an auditor model to describe what sort of dataset an attacker model was fine-tuned on, given some large dataset of fine-tunings. + That being said, generating a large dataset of differently fine-tuned language models might be quite computationally difficult such that being able to generate attacks by hand-modifying weights might be quite helpful for making this step easier. This does come with the risk of the auditor just memorizing what the unmodified model is supposed to look like and thus knowing what weights were changed, however. 
To mitigate such a problem, you’d need to have a bunch of sufficiently different models, or modifications that are sufficiently large. Fortunately, however, testing for such memorization should be easy, as such an auditor should stop working once you test it on a model where you’ve actually done the full fine-tuning. + While the model is dumb and not even outputting anything like an auditing output at all, something like the above is definitely going to be the way to go. Once the model starts outputting reasonable audits, however, I suspect you’ll want to switch to actually having humans read and score the output, human judgement being much richer than just log loss on some particular desired output. Rationale ========= I’ll start with my real reason for wanting to automate auditing, which is my inside view—however, I don’t think you need to buy into my entire inside view to think that this sort of work is a good idea, so after I present my inside view, I’ll also talk about some other, more general reasons to like automating auditing. My inside view -------------- I am worried that advanced AI systems will learn to be deceptive, as described in “[Does SGD Produce Deceptive Alignment?](https://www.alignmentforum.org/posts/ocWqg2Pf2br4jMmKA/does-sgd-produce-deceptive-alignment)” I think that this puts us in an extremely bad spot, where our models can perform arbitrarily catastrophically off-distribution. Paul Christiano describes an example of what an existential risk scenario involving deceptive alignment might look like in [Part II of “What failure looks like.”](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like#Part_II__influence_seeking_behavior_is_scary) Furthermore, once we have trained a deceptive model, I’m very pessimistic about our ability to detect that after the fact—as I suspect it will be able to fool our measures to check its deception. Thus, I think we need some way to prevent deception from arising in the first place by ensuring that our training process selects against deceptive models. Additionally, I don’t think we can select against deception just via the standard techniques of machine learning, such as carefully writing a reward function or choosing a training/fine-tuning dataset, since a deceptive model can just select its output to perform well on whatever metric you’re training it on. The very existence of predicates on the world that are easier to check than instantiate—such as [Paul’s RSA-2048 example](https://ai-alignment.com/training-robust-corrigibility-ce0e0a3b9b4d)—ensures that just knowing that your model does the right thing in all the situations that you can check will never be sufficient to actually give us meaningful safety guarantees. Thus, I think we need to train models to internally be implementing the right algorithm, rather than just looking like they’re doing the right thing on some training dataset. The obvious way to do this is via [training transparency](https://ai-alignment.com/training-robust-corrigibility-ce0e0a3b9b4d)—that is, actively train your model to be transparent in such a way that we can check that it’s implementing the right algorithm. Importantly, we still need to know what to look for and select in favor of such that our model will never become deceptive.
That question is what I spend most of my time thinking about—and the theoretical research direction that I’m currently most excited about—but we’ll put that question aside for now and just suppose that we want to look for something like [myopia](https://www.alignmentforum.org/posts/BKM8uQS6QdJPZLqCr/towards-a-mechanistic-understanding-of-corrigibility), where the idea is that, if a model is myopic in the sense that it only cares about its individual next output, it should have no incentive to be deceptive. A basic problem with training transparency, however, is that once we’re training on a transparency metric, we need that metric to be highly robust, otherwise the training process will simply [Goodhart](https://www.alignmentforum.org/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy) on our metric, giving us a model that satisfies our metric without actually being very transparent. I’m optimistic that this problem can be solved, however, via recursive oversight approaches, such as [relaxed adversarial training](https://www.alignmentforum.org/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment), wherein an amplified version of the model being overseen provides the signal that we train on. Now, how does all this connect back to automating auditing? I’ve constructed the above automated auditing setup to as closely as possible reproduce the sorts of problems I expect to encounter doing recursive oversight. While I don’t expect automating auditing to be directly useful for recursive oversight, I expect it to teach us things that will eventually be quite useful by forcing us to grapple early with many of the hard problems that I expect to eventually arise. Specifically: * By focusing on failure modes where the auditor can’t always generate an input on which the model fails, automating auditing forces us to tackle the hard problem of finding failure modes by understanding what algorithm a model is implementing rather than just testing it on new data. * By focusing on probing models for arbitrary specification breaks, automating auditing should help us learn how to use transparency tools to verify arbitrary, general properties of our models, rather than just using them to understand specific, narrow behaviors. * By centering the automation of the auditor, automating auditing requires understanding how to make models understandable to other models, a central problem for any sort of recursive oversight approach. More general thoughts --------------------- First, I think that automating auditing is just generally a great target to focus on even if you just want to develop better transparency tools. Unlike open-ended exploration, which gives you *best-case* transparency—e.g. the ability to understand some things about the model very well—the auditing game forces you to confront *worst-case* transparency—how well can you understand everything about your model. Thus, the auditing game helps us work on not just understanding what our models know, but understanding what they *don’t* know—which is a direction that currently transparency tools tend to struggle with. Most of the work that’s gone into current transparency tools has focused on best-case, however, which means that I suspect there is real room for improvement on the worst-case. Furthermore, there are also general benefits to the automating part, not just the auditing part. 
By taking all of the above and setting a goal of automating it, the automation part forces us to confront the problem of how to communicate information about a model to another model. Even if all we want to do is straightforwardly scale up current transparency techniques, I suspect that being able to train AI systems to help us with that is going to be critical to giving that sort of scaling any chance of working. And letting models understand other models isn’t just helpful for transparency work—it could also help us solve other problems like making AI-AI cooperation easier. Second, it’s worth pointing out that automating auditing is actually agnostic about the part of my inside view where we start training our models directly on the overseer’s evaluation. Even if we primarily want to use our transparency tools as an independent check rather than a training target—as Chris Olah tends to favor—automating auditing should equally help us achieve that goal by automating and improving the process of performing that independent check. Third, in the context of my “[An overview of 11 proposals for building safe advanced AI](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai#2__Imitative_amplification___intermittent_oversight),” I think that something like automating auditing is likely to be a critical step for the transparency portion of every single proposal on that list. As I just mentioned, automating auditing should help us equally with both [intermittent-oversight-based](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai#2__Imitative_amplification___intermittent_oversight) and [relaxed-adversarial-training-based](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai#3__Imitative_amplification___relaxed_adversarial_training) approaches. In the context of [microscope AI](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai#5__Microscope_AI) or [STEM AI](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai#6__STEM_AI), automating auditing should help us figure out not just what our model has learned, but also what it hasn’t learned, helping us ensure that, for example, we haven’t accidentally produced an agent. And in the context of [AI safety via debate](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai#9__AI_safety_via_debate_with_transparency_tools), automating auditing should directly help the debaters improve their ability to cross-examine each other. Fourth, though I’ve primarily focused on how automating auditing helps us get better at using transparency tools to audit our AI systems, the setup is general enough to allow for other methods of auditing as well, letting us test how useful transparency tools actually are. 
For example, an alternative auditing method to using transparency tools might be to employ what I’ll call the “cognitive science approach”—as opposed to the “neuroscience approach” of transparency and interpretability—wherein we try to understand what our models are doing by constructing predictive models of what we think they’re doing and then performing experiments/interventions to test those models, as in Deletang et al.’s “[Causal Analysis of Agent Behavior for AI Safety](https://deepmindsafetyresearch.medium.com/what-mechanisms-drive-agent-behaviour-e7b8d9aee88).” While I tend to be skeptical of this approach—as I think that behavioral tests alone will be insufficient to catch problems like RSA-2048—automating auditing should hopefully give us a good testing ground to actually figure out how useful these different sorts of approaches are, as well as how they can be combined. Finally, I think that even if automating auditing basically fails, I still expect to learn a lot of useful things from that failure. In particular, if automating auditing fails, that should mean we now have a concrete style of attack that we can’t build an auditor to discover—which is an extremely useful thing to have, as it provides both a concrete open problem for further work to focus on, as well as a counter-example/impossibility result to the general possibility of being able to make current systems safely auditable.
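Returning to the implementation sketch above: here is a deliberately toy illustration of what a dataset of (attack model, desired auditing output) pairs could look like. Everything in it is an assumption made for the example (the tiny network, the hand-modified-weights "attack", and the helper names); a real version would attack a large language model and fine-tune a second language model on the resulting pairs.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Stand-in for the model being attacked; a real setup would use an LLM."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(16, 32)
        self.fc2 = nn.Linear(32, 4)

def make_attack(base: nn.Module, layer_name: str, scale: float) -> nn.Module:
    """Toy 'attack': copy the base model and rescale one layer's weights."""
    attacked = type(base)()  # assumes a no-argument constructor
    attacked.load_state_dict(base.state_dict())
    with torch.no_grad():
        dict(attacked.named_parameters())[layer_name].mul_(scale)
    return attacked

def describe(layer_name: str, scale: float) -> str:
    """Natural-language auditing target paired with each attack."""
    return f"The weights of {layer_name} were rescaled by a factor of {scale:.2f}."

base = TinyNet()
auditing_pairs = []
for name in ("fc1.weight", "fc2.weight"):
    for scale in (0.5, 2.0, 10.0):
        auditing_pairs.append((make_attack(base, name, scale), describe(name, scale)))

# `auditing_pairs` now holds (attacked model, desired auditing output) pairs; the
# proposal above would fine-tune a language model to map some encoding of the first
# element (e.g. weight diffs serialized as special tokens) to the second.
```

The design choice worth noting is that the supervision target is natural language rather than a class label, which is what makes it possible to later swap the log-loss signal for direct human judgement of the audits.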
5d9acc7a-c332-4596-980d-a7f395e899f3
trentmkelly/LessWrong-43k
LessWrong
Experimental EA funding [crosspost] Over the course of 2015, we will be distributing $10,000 to completed projects which we believe will have a significant long-term humanitarian impact. These awards are being made in exchange for certificates of impact. Here's how it works: you tell us about something good you did. We offer you some money. Rather than considering a complicated counterfactual ("How well will this money be spent if I don't take it?"), we encourage you to accept our offer if and only if you would be willing to undo the humanitarian impact of your project in exchange for the money. For more details, see here.   I originally posted this at the EA forum, but it may also be of interest to people here. We are open to funding writing or research on many perennial LW topics (methodological issues, small experiments, lifehacks, useful futurism, etc.).   Why are we buying certificates instead of making grants? Just as market prices help coordinate and incentivize the efficient production of commercial products, they could also help coordinate and incentivize efficient altruism. We also think that paying for performance after the fact has a number of big advantages. Not convinced yet? See a more complete answer. Applications will include an asking price, the minimum amount of money that would be enough to compensate you for undoing the humanitarian impact of the project. The actual awards will be determined by combining the asking prices with our impact assessments in a (truthful) auction. Instead of buying 100% of your project's impact, we'll buy some fraction less than 50% (at your discretion). The awards will be made in ten $1,000 rounds, spread over the course of the year. The deadline for the first round is March 25. We'll post the results of each round as they occur. New proposals can be made in between rounds. Once an application is submitted it will be considered in each round unless it is withdrawn. If you are interested, submit an application here. The application process is de
b1d6b541-b37f-4d42-b849-55224effd8cf
trentmkelly/LessWrong-43k
LessWrong
Gameplay Art This post is about the development of our game based on Eliezer Yudkowsky's "The Twelve Virtues of Rationality". Are games art? It's an interesting question, but it seems that most people who answer that question in the affirmative are--intentionally or not--subscribing to the "hybrid art" view.  That is, that games are art because they combine story-telling, music, and visual style; interaction with the system of the game is in service to the storyline, music, and visual style. I don't like that.  Here is why: "Art" in general is creative expression through a medium. The hybrid-art view treats gameplay as the icing on the narrative-musical-visual cake, when it should be that gameplay is the cake, and everything else is the icing. Gameplay, or interaction with the system of the game, is a medium for artistic expression, just like paint is for paintings.  I don't think anyone can deny that interaction with a gun during a hostile situation wreaks havoc on our emotions, or that interaction with a loved one can run the emotional gamut.  Interaction is powerful. Games can take advantage of the power of interaction to be expressive.  The art of the storyline, music, and visuals ought to be secondary to the art of the gameplay. Twelve Virtues I believe that gameplay is a very powerful way to learn, and so the single most important design principle for our current project is expression through gameplay.  We want to convey the meaning of each virtue through gameplay. The player should be able to examine the method by which they interact with the game to learn the meaning behind the virtue. For example: In our Curiosity level, which is where the game starts, the player must follow a mysterious cat that appears.  Very early in the level, the player is faced with a "point of no return".  If they jump down to the ground, they can't ever go back to the starting area.  They must choose to follow the cat, or stay in their "comfort zone" so to speak.  They must embrace thei
483a35c6-64eb-4bf6-b92b-72cf09c02215
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post2518 (Thanks to Ajeya Cotra, Nick Beckstead, and Jared Kaplan for helpful comments on a draft of this post.) I really don’t want my AI to strategically deceive me and resist my attempts to correct its behavior. Let’s call an AI that does so egregiously misaligned (for the purpose of this post). Most possible ML techniques for avoiding egregious misalignment depend on detailed facts about the space of possible models: what kind of thing do neural networks learn? how do they generalize? how do they change as we scale them up? But I feel like it should be possible to avoid egregious misalignment regardless of how the empirical facts shake out--it should be possible to get a model we build to do at least roughly what we want. So I’m interested in trying to solve the problem in the worst case, i.e. to develop competitive ML algorithms for which we can’t tell any plausible story about how they lead to egregious misalignment. This is a much higher bar for an algorithm to meet, so it may just be an impossible task. But if it’s possible, there are several ways in which it could actually be easier: We can potentially iterate much faster, since it’s often easier to think of a single story about how an algorithm can fail than it is to characterize its behavior in practice. We can spend a lot of our time working with simple or extreme toy cases that are easier to reason about, since our algorithm is supposed to work even in these cases. We can find algorithms that have a good chance of working in the future even if we don’t know what AI will look like or how quickly it will advance, since we’ve been thinking about a very wide range of possible failure cases. I’d guess there’s a 25–50% chance that we can find an alignment strategy that looks like it works, in the sense that we can’t come up with a plausible story about how it leads to egregious misalignment. That’s a high enough probability that I’m very excited to gamble on it. Moreover, if it fails I think we’re likely to identify some possible “hard cases” for alignment — simple situations where egregious misalignment feels inevitable. What this looks like (3 examples) My research basically involves alternating between “think of a plausible alignment algorithm” and “think of a plausible story about how it fails.” Example 1: human feedback In an unaligned benchmark I describe a simple AI training algorithm: Our AI observes the world through a bunch of cameras and outputs motor actions. We train a generative model that predicts these camera observations given the motor actions. We ask humans to evaluate possible futures by looking at the predicted videos output by the model. We then train a model to predict these human evaluations. At test time the AI searches for plans that lead to trajectories that look good to humans. In the same post, I describe a plausible story about how this algorithm leads to egregious misalignment: Our generative model understands reality better than human evaluators. There are plans that acquire influence in ways that are obvious to the generative model but completely incomprehensible and invisible to humans. It’s possible to use that influence to “hack” the cameras, in the sense of creating a fiction that looks convincing to a human looking at predicted videos. The fiction can look much better than the actual possible futures. So our planning process finds an action that covertly gathers resources and uses them to create a fiction. 
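For concreteness, here is a deliberately toy sketch of the shape of this training setup: a learned world model, a learned model of human approval, and a planner that searches for action sequences the approval model scores highly. The one-line dynamics, the scoring rule, and all names are my own stand-ins for illustration; in the benchmark both models are large learned networks.

```python
import random

# Illustrative stand-ins only: in the benchmark these are learned neural networks
# (a generative model of camera observations and a model of human evaluations).
def world_model(observation, action):
    """Predict the next observation given the current observation and an action."""
    return observation + action  # toy dynamics

def human_approval_model(trajectory):
    """Score a predicted trajectory the way a human evaluator might."""
    return -abs(trajectory[-1] - 10)  # toy preference: end up near 10

def plan(initial_observation, horizon=5, candidates=500):
    """The 'search for plans that look good to humans' step, via random shooting."""
    best_score, best_plan = float("-inf"), None
    for _ in range(candidates):
        actions = [random.choice([-1, 0, 1]) for _ in range(horizon)]
        obs, trajectory = initial_observation, [initial_observation]
        for a in actions:
            obs = world_model(obs, a)
            trajectory.append(obs)
        score = human_approval_model(trajectory)
        if score > best_score:
            best_score, best_plan = score, actions
    return best_plan

print(plan(0))  # typically [1, 1, 1, 1, 1]: push the observation as close to 10 as it can
```

Nothing at this level of abstraction blocks the failure story above: if some plan makes the approval model's inputs look good without the underlying world actually being good, the same search will happily select it.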
I don’t know if or when this kind of reward hacking would happen — I think it’s pretty likely eventually, but it’s far from certain and it might take a long time. But from my perspective this failure mode is at least plausible — I don’t see any contradictions between this sequence of events and anything I know about the real world. So this is enough for me to conclude that human feedback can’t handle the worst plausible situation, and to keep looking for an algorithm that can. To better understand whether this story is really plausible, we can spend time refining it into something more and more concrete to see if it still seems to make sense. There are lots of directions in which we could add detail: Specify how our model could fool humans — maybe it sets some configuration files in ways that introduce security vulnerabilities we can’t understand, and sends an email that looks innocuous to us but is actually enlisting aid from an external attacker. Specify how an attacker could ultimately corrupt cameras and how that would lead to a higher reward — maybe we were trying to train an agent to design our new computing cluster, and it is able to corrupt the benchmarks we were using to estimate cluster performance. Specify how the algorithm works in more detail — maybe it uses MCTS with a learned value function for planning against the learned models, and we spend 10% of our budget on human evaluations of the policies output by the current planner, and so on. Filling in more and more details lets us notice if our abstract story was actually incoherent in important ways, or notice weird things the story implies about the world that we might want to rule out by assumption. Example 2: iterated amplification To avoid the problems with raw human feedback, we could train additional ML assistants that help us evaluate outcomes. For example, assistants could point out possible consequences of a plan that we didn’t notice. Various variants of this idea are explored in benign model-free RL, supervising strong learners by amplifying weak experts, AI safety via debate, and recursive reward modeling. In inaccessible information I tried to explore a story about how this entire family of algorithms could fail: Our AI learns to model the world by looking at a lot of videos, doing gradient descent over a bunch of possible physical models, and selecting a good one. It might model physics with a complicated mix of low-level physics and high-level abstractions. This model lets it predict data like camera observations, but also predict “off-screen” events like what code is running on my computer (even if the computation can’t be directly observed on video). These predictions are economically useful and don’t have any close substitute. The process of finding this model of the world is much more complex than anything our AI can do at inference time, and the intermediate results are too complex and numerous to be “memorized” in the weights of our trained AI. So there doesn’t seem to be any way to break the model-finding work into pieces that can be delegated to an ML assistant (in amplification) or a debater (in debate). If we decide that we don’t trust this physical model and so don’t use it, then we aren’t economically competitive. 
But if we decide that we do trust the model, then we are back in the unfortunate situation from last time: our AI might find a plan in which “off-screen” computers (outside of our view) are taking actions that ultimately hack our cameras and create a compelling fiction that we will rate highly. Because amplification gives us no way to access that off-screen data, we appear to have no way to tell if the plan is bad. Example 3: imitative generalization Imitative generalization is intended to address this problem with iterated amplification. To briefly summarize: instead of using gradient descent to search over a space of human-incomprehensible models that predict some data (e.g. autoregressive models of videos), we try to search over a space of models that a human can “understand” (perhaps with the help of aligned assistants as in amplification or debate), and optimize for a model that both looks plausible to the human and allows the human to successfully predict the same data (i.e. to predict the next pixel of a video). We hope that this allows us to find a human-comprehensible model that allows the human to both predict the data and figure out if the camera is being hacked. (This algorithm is quite vague, so you could think of it as a whole family of algorithms based on how you parametrize the space of “human-comprehensible” models, how you search over that space, and how you define the prior. I’m going to try to tell a story about the limitations of this whole approach.) Here’s an exotic situation where I think the naive version of this approach wouldn’t work: Our AI learns an extremely low-level model of the world defined in terms of concepts like “atoms.” It makes predictions about high-level concepts (like human behavior) by simulating the lower-level parts of which they are made. There is an easy way to read our dataset itself off of the low-level simulation — perhaps you can read bits off of our computer easily when expressed in the low-level simulation. So our AI learns how to use that extraction procedure without learning anything else explicit about the mapping between high-level and low-level concepts. Any model which allows the human to predict the video must include all of the low-level facts. But that means that any further high-level facts (e.g. about how the low-level structure relates to the high-level things we care about) can only decrease the prior probability without any compensating increase in the ability to explain data. There are many obvious ways to try to address this problem, but I think it does break the most obvious implementations of imitative generalization. So now I have two questions: Is this story really plausible? It looks bad, but maybe as we flesh it out we’ll discover that it’s implausible. Refine imitative generalization so that it handles this situation. After a little bit of inspection it turns out that the original story is inconsistent: it’s literally impossible to run a detailed low-level simulation of physics in situations where the computer itself needs to be part of the simulation. So the story as I told it is inconsistent, and we can breathe a temporary sigh of relief. Unfortunately, the basic problem persists even when we make the story more complicated and plausible. Our AI inevitably needs to reason about some parts of the world in a heuristic and high-level way, but it could still use a model that is lower-level than what humans are familiar with (or more realistically just alien but simpler). 
And at that point we have the same difficulty. It’s possible that further refinements of the story would reveal other inconsistencies or contradictions with what we know about ML. But I’ve thought enough about this that I think this failure story is probably something that could actually happen, and so I’m back to the step of improving or replacing imitative generalization. This story is even more exotic than the ones in the previous sections. I’m including it in part to illustrate how much I’m willing to push the bounds of “plausible.” I think it’s extremely difficult to tell completely concrete and realistic stories, so as we make our stories more concrete they are likely to start feeling a bit strange. But I think that’s OK if we are trying to think about the worst case, until the story starts contradicting some clear assumptions about reality that we might want to rely on for alignment. When that happens, I think it’s really valuable to talk concretely about what those assumptions are, and be more precise about why the unrealistic nature of the story excuses egregious misalignment. More general process We start with some unaligned “benchmark”. We rule out a proposed alignment algorithm if we can come up with any story about how it can be either egregiously misaligned or uncompetitive. I’m always thinking about a stable of possible alignment strategies and possible stories about how each strategy can fail. Depending on the current state of play, there are a bunch of different things to do: If there’s a class of algorithms (like imitative generalization) for which I can’t yet tell any failure story, I try to tell a story about how the whole class of algorithms would fail. If I can’t come up with any failure story, then I try to fill in more details about the algorithm. As the algorithm gets more and more concrete it becomes easier and easier to tell a failure story. The best case is that we end up with a precise algorithm for which we still can’t tell any failure story. In that case we should implement it (in some sense this is just the final step of making it precise) and see how it works in practice. More likely I’ll end up feeling like all of our current algorithms are doomed in the worst case. At that point I try to think of a new algorithm. For this step, it’s really helpful to look at the stories about how existing algorithms fail and try to design an algorithm that handles those difficulties. If all of my algorithms look doomed and I can’t think of anything new, then I try to really dig in on the existing failure stories by filling in details more concretely and exploring the implications. Are those stories actually inconsistent after all? Do they turn out to contradict anything I know about the world? If so, I may add another assumption about the world that I think makes alignment possible (e.g. the strategy stealing assumption), and throw out any stories that violate that assumption or which I now realize are inconsistent. If I have a bunch of stories about how particular algorithms fail, and I can’t think of any new algorithms, then I try to unify and generalize them to tell a story about why alignment could turn out to be impossible. This is a second kind of “victory condition” for my work, and I hope it would shed light on what the fundamental difficulties are in alignment (e.g. by highlighting additional empirical assumptions that would be necessary for any working approach to alignment). Objections and responses Can you really come up with a working algorithm on paper? 
Empirical work seems important My goal from theoretical work is to find a credible alignment proposal. Even from that point I think it will take a lot of practical work to get it to the point where it works well and we feel confident about it in practice: I expect most alignment schemes are likely to depend on some empirical parameters that need to be estimated from experiment, especially to argue that they are competitive. For example, we may need to show that models are able to perform some tasks, like modeling some aspects of human preferences, “easily enough.” (This seems like an unusually easy claim to validate empirically — if we show that our 2021 models can do a task, then it’s likely that future models can as well.) Or maybe we’ve argued that the aligned optimization problem is only harder by a bounded amount, but it really matters whether it’s 1.01 or 101 as expensive, so we need to measure this overhead and how it scales empirically. I’ve simplified my methodology a bit in this blog post, and I’d be thrilled if our alignment scheme ended up depending on some clearly defined and measurable quantities for which we can start talking about scaling laws. I don’t expect to literally have a proof-of-safety. I think at best we’re going to have some convincing arguments and some years of trying-and-failing to find a plausible failure story. That means that empirical research can still turn up failures we didn’t anticipate, or (more realistically) places where reality doesn’t quite match our on-paper picture and so we need to dig in to make sure there isn’t a failure lurking somewhere. Even if we’ve correctly argued that our scheme is workable, it’s still going to take a ton of effort to make it actually work. We need to write a bunch of code and debug it. We need to cope with the divergences between our conceptual “ML benchmark” and the messier ML training loops used in practice, even if those divergences are small enough that the theoretical algorithm still works. We need to collect the relevant datasets, even if we’ve argued that they won’t be prohibitively costly. And so on. My view is that working with pen and paper is an important first step that allows you to move quickly until you have something that looks good on paper. After that point I think you are mostly in applied world, and I think that applied investments are likely to ultimately dwarf the theoretical investments by orders of magnitude even if it turns out that we found a really good algorithm on paper. That’s why I’m personally excited about “starting with theory,” but I think we should do theoretical and applied work in parallel for a bunch of reasons: We need to eventually be able to make alignment techniques work in the real world, and so we want to get as much practice as we can. Similarly, we want to build and grow capable teams and communities with good applied track records. There’s a good chance (50%?) that no big theoretical insights are forthcoming and empirical work is all that matters. So we really can’t wait on theoretical progress. I think there’s a reasonable chance of empirical work turning up unknown unknowns that change how we think about alignment, or finding empirical facts that make alignment easier. We want to get those sooner rather than later. Why think this task is possible? 50% seems way too optimistic When I describe this methodology, many people feel that I’ve set myself an impossible task. Surely any algorithm will be egregiously misaligned under some conditions? 
My “50% probability of possibility” is coming largely from a soup of optimistic intuitions. I think it would be crazy to be confident on the basis of this kind of intuition, but I do think it’s enough to justify 50%: 10 years ago this project seemed much harder to me and my probability would have been much lower. Since then I feel like I’ve made a lot of progress in my own thinking about this problem (I think that a lot of this was a personal journey of rediscovering things that other people already knew or answering questions in a way that was only salient to me because of the way I think about the domain). I went from feeling kind of hopeless, to feeling like indirect normativity formalized the goal, to thinking about evaluating actions rather than outcomes, to believing that we can bootstrap superhuman judgments using AI assistants, to understanding the role of epistemic competitiveness, to seeing that all of these theoretical ideas appear to be practical for ML alignment, to seeing imitative generalization as a plausible approach to the big remaining limitation of iterated amplification. There is a class of theoretical problems for which I feel like it’s surprisingly often possible to either solve the problem or develop a clear picture of why you can’t. I don’t really know how to pin down this category but it contains almost all of theoretical computer science and mathematics. I feel like the “real” alignment problem is a messy practical problem, but that the worst-case alignment problem is more like a theory problem. Some theory problems turn out to be hard, e.g. it could be that worst-case alignment is as hard as P vs NP, but it seems surprisingly rare and even being as hard as P vs NP wouldn’t make it worthless to work on (and even for P vs NP we get various consolation prizes showing us why it’s hard to argue that it’s hard). And even for messy domains like engineering there’s something similar that often feels true, where given enough time we either understand how to build+improve a machine (like an engine or rocket) or we understand the fundamental limits that make it hard to improve further. So if it’s not possible to find any alignment algorithm that works in the worst case, I think there’s a good chance that we can say something about why, e.g. by identifying a particular hard case where we don’t know how to solve alignment and where we can say something about what causes misalignment in that case. This is important for two reasons: (i) I think that would be a really great consolation prize, (ii) I don’t yet see any good reason that alignment is impossible, so that’s a reason to be a bit more optimistic for now. I think one big reason to be more skeptical about alignment than about other theoretical problems is that the problem statement is incredibly imprecise. What constitutes a “plausible story,” and what are the assumptions about reality that an alignment algorithm can leverage? My feeling is that full precision isn’t actually essential to why theoretical problems tend to be soluble. But even more importantly, I feel like there is some precise problem here that we are groping towards, and that makes me feel more optimistic. (I discuss this more in the section “Are there any examples of this methodology working?”) Egregious misalignment still feels weird to me and I have a strong intuitive sense that we should be able to avoid it, at least in the case of a particular known technique like ML, if only we knew what we were doing. 
So I feel way more optimistic about being able to avoid egregious misalignment in the worst case than I do about most other theoretical or practical problems for which I have no strong feasibility intuition. This feasibility intuition also often does useful work for us since we can keep asking “Does this intermediate problem still feel like it should obviously be soluble?” and I don’t feel like this approach has yet led me into a dead end. Modern ML is largely based on simple algorithms that look good on paper and scale well in practice. I think this makes it much more plausible that alignment can also be based on simple algorithms that look good on paper and scale well in practice. Some people think of Sutton’s “bitter lesson” as bad news for the difficulty of alignment, and perhaps it is in general, but I think it’s great news if you’re looking for something really simple. Despite having lots of optimistic words to say, feasibility is one of my biggest concerns with my methodology. These failure stories involve very unrealistic learned models My failure stories involve neural networks learning something like “simulate physics at a low level” or “perform logical deductions from the following set of axioms.” This is not the kind of thing that a neural network would learn in practice. I think this leads many people to be skeptical that thinking about such simplified stories could really be useful. I feel a lot more optimistic: I don’t think neural network cognition will be simple, but I think it will involve lots of the features that come up in simple cognition: powerful models will likely make cognitive steps similar to logical deduction, bayesian updating, modeling physics at some level of abstraction, and so on. If our alignment techniques don’t work for simple cognition, I’m skeptical that they will work for complex cognition. I haven’t seen any alignment schemes that leverage complexity per se in order to work. A bigger and messier model is more likely to have some piece of its cognition that satisfies any given desirable property — for example it’s more likely to have particular neurons whose behavior can be easily understood — but seems less likely to have every piece of its cognition satisfy any given desirable property. I think it’s very reasonable to focus on capable models — we don’t need to solve alignment for models that can’t speak natural language or understand roughly what humans want. I think that’s OK: we should imagine simple models being very capable, and we can rule out a failure story as implausible if it involves the model being too weak. I think it’s more plausible for an alignment scheme to work well for simple cognition but fail for complex cognition. But in that case my methodology will just start with the simple cognition and move on to the more complex cognition, and I think that’s OK. Are there any examples of a similar research methodology working well? This is different from traditional theoretical work When theorists design algorithms they often focus on the worst case. But for them the “worst case” is e.g. a particular graph on which their algorithm runs slowly, not a “plausible” story about how a model is “egregiously misaligned.” I think this is a real, big divergence that’s going to make it way harder to get traditional theorists on board with this approach. 
But there are a few ways in which I think the situation is less disanalogous than it looks: Although the majority of computer science theorists work in closed, precisely defined domains, the field also has some experience with fuzzier domains where the definitions themselves need to be refined. For example, at the beginning of modern cryptography you could describe the methodology as “Tell a story about how someone learns something about your secret” and that only gradually crystallized into definitions like semantic security (and still people sometimes retreat to this informal process in order to define and clarify new security notions). Or while defining interactive and zero knowledge proofs people would work with more intuitive notions of “cheating” or “learning” before they were able to capture them with formal definitions. I think the biggest difference is that most parts of theoretical CS move quickly past this stage and spend most of their time working with precise definitions. That said, (i) part of this is due to the taste of the field and the increasing unwillingness to engage in hard-to-formalize activities, rather than a principled take that you need to avoid spending long in this stage, (ii) although many people are working on alignment only very few are taking the kind of approach I’m advocating here, so it’s not actually clear that we’ve spent so much more time than is typically needed in theoretical CS to formalize a new area (especially given that people in academia typically pick problems based on tractability). Both traditional theorists and I will typically start with a vague “hard case,” e.g. “What if the graph consists of two densely connected clusters with two edges in between them?” They then tell a story about how the algorithm would fail in that case, and think about how to fix the problem. In both cases, the point is that you could make the hard case more precise if you wanted to — you can specify more details about the graph or you can fill in more details about the story. And in both cases, we learn how to tell vague stories by repeatedly going through the exercise of making them more precise and building intuitions about what the more precise story would look like. The big difference is that you can make a graph fully precise — you can exactly specify the set of vertices and edges — but you can never make a story about the world fully precise because there is just too much stuff happening. I think this really does mean that the traditional theorist’s intuition about what “counts” as a hard case is better grounded. But in practice I think it’s usually a difference in degree rather than kind. E.g., you very rarely need to actually write out the full graph in order to compute exactly how an algorithm behaves. Although the definition of a “plausible failure story” is pretty vague, most of the concrete stories we are working with can be made very specific in the ways that I think matter. For example, we may be able to specify completely precisely how a learned deduction process works (specifying the formal language L, specifying the “proof search order” it uses to loop over inferences, and so on) and why it leads to misalignment in a toy scenario. My research methodology was originally published in AI Alignment on Medium, where people are continuing the conversation by highlighting and responding to this story.
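As a closing illustration (my own reading of Example 3, not the post's formalization), the imitative generalization idea can be written as a simple objective: search for a human-legible description z that the human both finds plausible and can use to predict the training data.

```python
import math

def score(z, data, human_prior, human_predict):
    """Illustrative imitative-generalization objective: 'looks plausible to the
    human' (prior term) plus 'lets the human predict the data' (likelihood term)."""
    log_prior = math.log(human_prior(z))
    log_likelihood = sum(math.log(human_predict(x, z).get(y, 1e-9)) for x, y in data)
    return log_prior + log_likelihood

# Toy instantiation: z is a candidate rule, data are (input, label) pairs, and the
# "human" is simulated by two hand-written functions.
data = [(0, "even"), (1, "odd"), (2, "even")]
human_prior = lambda z: {"parity rule": 0.5, "always even": 0.5}[z]

def human_predict(x, z):
    if z == "always even" or x % 2 == 0:
        return {"even": 0.99, "odd": 0.01}
    return {"even": 0.01, "odd": 0.99}

best = max(["parity rule", "always even"],
           key=lambda z: score(z, data, human_prior, human_predict))
print(best)  # "parity rule": at equal prior, it explains the data better
```

The worry in Example 3 is then that the z which best explains the data may be too low-level or alien for any human prior or human prediction to get a grip on.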
bded9030-de91-40ae-838c-623fd48786b2
StampyAI/alignment-research-dataset/lesswrong
LessWrong
My current uncertainties regarding AI, alignment, and the end of the world As I read the [interview](https://www.lesswrong.com/posts/CpvyhFy9WvCNsifkY/discussion-with-eliezer-yudkowsky-on-agi-interventions) with Eliezer Yudkowsky on AI alignment problems, I had a couple of thoughts of my own. These are poorly researched, and maybe poorly formulated. I intend to think more about them, but I thought this might be a good place to post them for feedback. I'm basically using this post as a large interactive bookmark for "hey, these are the things you thought about, think about them some more" with the added benefit of other people commenting. * I feel like there's a difference between "modeling" and "statistical recognition", in the sense that current (and near-future) AI systems currently don't necessarily model the world around them. I don't yet know if this actually is a difference or if I'm inventing a dichotomy that doesn't exist. Even if this is true, it's still unclear to me how or if it's better that current AI systems are statistical recognizers instead of world-modelers. You'd think that to destroy a world, you first need to have a model of it, but that may not be the case. * There may be a sense in which generating text and maneuvering the real world are very different. There may be a sense in which successfully imitating human speech without a "model" or agency is possible. * There may be strongly binding constraints on an agent's success in the world which do not depend on raw intelligence. Meaning, even if an agent has extremely high intelligence but lacks some other quality, its effective output in changing the world around it may not be as frightening as we might currently imagine. Imagine an extremely evil and extremely intelligent person who can effectively work one minute per week due to e.g. having no energy. * There may also be such strongly (or even more strongly) binding constraints that prevent even a superintelligent agent from achieving their goals, but which aren't "defects" in the agent itself, but in some constant in the universe. One such example is the speed of light. However intelligent you are, that's a physical constraint that you just can't surpass. * There may also be a sense in which AI systems would not self-improve further than required for what we want from them. Meaning, we may fulfill our needs (for which we design and produce AI systems) with a class of AI agents that stop receiving any sort of negative feedback at a certain level of proficiency or ability. I do not yet understand how training AI models works, but I suppose that some sort of feedback exists, and that these models change themselves / are changed according to the feedback. If there's no longer any message "hey, this was bad, improve this", then the system doesn't improve further (I think). * It might be the case that we can get AGI-like performance without ambition to get a lot of resources, negating the orthogonality thesis. Maybe we build an AGI which can perform all the tasks that we give it, but the AGI performs all of these actions on the basis of statistical recognition of patterns in very specific domains, without integrating all this understanding into a coherent model of the world. Maybe this agent has very cordoned-off sections of its mind. It's superintelligent in a bunch of different domains, including speech and, I don't know, running industrial control systems, but it's less like a generally intelligent human, and more like several savants integrated into one being. 
* If you give an AI agent some game to play, and this game includes maximizing resources, and has definite win-conditions, then the agent would beat humans, maximize resources etc. But it maximizes resources within the confines of that task. I feel like a qualitative difference in the type of mind is necessary for any agent to break out of the confines of the task. Namely, the agent must 1) model the world, then 2) figure out that it's operating in a simulated world, and then 3) figure out how to break out of the simulated world.
c022867f-a0f4-4e3b-b336-abe8c1b929ef
trentmkelly/LessWrong-43k
LessWrong
Weekly LW Meetups This summary was posted to LW main on July 11th. The following week's summary is here. Irregularly scheduled Less Wrong meetups are taking place in: * Australia - Online Hangout: 13 July 2014 06:30PM * Frankfurt: Goal Factoring: 20 July 2014 02:00PM * Houston, TX: 12 July 2014 02:00PM * [Portland] Calibration Training and Potluck - Portland: 12 July 2014 06:31PM * Upper Canada LW Megameetup: Ottawa, Toronto, Montreal, Waterloo, London: 18 July 2014 07:00PM The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup: * Brussels - July meetup: 12 July 2014 01:00PM * Brussels - August (topic TBD): 09 August 2014 01:00PM * Canberra: Paranoid Debating: 12 July 2014 06:00PM * London social meetup - possibly in a park: 13 July 2014 02:00PM * Sydney Meetup - July: 23 July 2014 07:00PM * Washington, D.C.: Prisoner's Dilemna tournament: 13 July 2014 03:00PM Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Mountain View, New York, Philadelphia, Research Triangle NC, Salt Lake City, Seattle, Sydney, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers. If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun! In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front page every Friday. These are an attempt to collect information on all the meetups happening in upcoming weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll also have the benefit of having your meetup mentioned in a weekly ov
beded213-8a38-41cf-b796-e100e61034bf
trentmkelly/LessWrong-43k
LessWrong
1Day Sooner is Hiring a Communications Director 1Day Sooner is hiring a new Communications Director to make the public case for our advocacy priorities.  We think this role has the potential to be very high-impact: human challenge trials can still potentially save thousands of lives by accelerating the licensure of next-generation COVID-19 vaccines and advancing our understanding of the virus. Perception of public attitudes toward these trials seems to be a meaningful bottleneck toward their advancement, making the Communications Director well-positioned to advance their cause. 1Day Sooner is also aiming to broaden our advocacy in 2021, using the network and infrastructure we've built to promote vaccination by advocating for a National Vaccine Day and other projects. We are still relatively new as an organization, so the next Communications Director will have creative say and strategic input over the organization's direction.  Applicants with a background in journalism and/or public health are preferred, but all candidates with good writing and speaking skills who can learn quickly about niche topics and make strategic advocacy decisions under significant uncertainty are encouraged to apply.  I (Abie) am currently serving as Communications Director and will be leaving the role in a couple months to go to school. I would be more than happy to speak with anyone interested in applying — feel free to email me at communications at 1daysooner dot org. Read the full job description and apply here. 
667d111a-86dd-4d31-8dcd-a86a506ca111
trentmkelly/LessWrong-43k
LessWrong
Deliberate Play Deliberate practice is an important concept in sports and other high-performance endeavors. It is "practicing with a clear awareness of the specific components of a skill we’re aiming to improve and exactly how to improve them." If you haven't seen this idea before, stop and go read this article (where I got the definition from) or here for a shorter guide. No, really--go read about it. It's one of the most powerful ideas I've come across. To recap, deliberate practice focuses on improving a specific skill with an awareness of how it translates to a game situation. This is commonly done by executing a drill while visualizing a game-like situation (e.g. imagining defenders you are avoiding) and otherwise replicating game conditions (e.g. practicing free throws after you just ran sprints, to simulate fatigue). Beyond drills, you can also bring deliberate practice to games by setting a specific intention of what to work on ("practice playing good defense in situation X"), and focusing on that throughout the game. I play pick-up games with other club players a couple times a week, and most of us set some intention going into the game. However, this post is not about deliberate practice but about two adjacent mental stances, the first of which I'll call max performance. To describe it, consider two contrasting mental stances one could have during a game: * Deliberate practice. Try to improve specific skills. It's okay to make suboptimal plays or otherwise mess up in the process of practicing those skills, but otherwise you want to play well and at high intensity to simulate realistic competition. * Max performance. Play as well as possible. You will likely be very focused (but ideally in a flow-like state) and somewhat pushing your physical boundaries. On the other hand, you are specifically not pushing the boundaries of your skill set and instead relying on what you can do consistently. Max performance is what you do if you are trying to win that particular game.
1e7d7ae8-b7da-4ea5-8870-977d876cc05a
trentmkelly/LessWrong-43k
LessWrong
Weekly LW Meetups This summary was posted to LW Main on August 19th. The following week's summary is here. New meetups (or meetups with a hiatus of more than a year) are happening in: * Bay City Meetup: 19 August 2016 01:25PM Irregularly scheduled Less Wrong meetups are taking place in: * Australian-ish Online Hangout: 20 August 2016 07:30PM * Baltimore Area Weekly Meetup: 21 August 2016 08:00PM * European Community Weekend: 02 September 2016 03:35PM The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup: * [Moscow] Games in Kocherga club: FallacyMania, Zendo, Tower of Chaos: 24 August 2016 07:40PM * San Jose Meetup: Park Day (VII): 21 August 2016 03:00PM * Sydney Rationality Dojo - September 2016: 04 September 2016 04:00PM * Sydney Rationality Dojo - October 2016: 02 October 2016 04:00PM * Vienna: 20 August 2016 03:00PM * Washington, D.C.: Mini Talks: 21 August 2016 03:30PM Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.   If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun! In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front page every Friday. These are an attempt to collect information on all the meetups happening in upcoming weeks. The best way to get your meetup featured is still to use the Add Ne
ba03e313-6176-4971-b7dc-dc329262a123
trentmkelly/LessWrong-43k
LessWrong
Science and rationalism - a brief epistemological exploration Is rationalism scientific? Yes. Is science rationalistic? Depends. Within scientific disciplines, I believe that computer science is more rationalistic than others due to its deducibility (claims can be proved mathematically). How about other fields? Physics? Chemistry? Biology? They rely mostly on empirical results to support their arguments and theories. They can observe. They can experiment. Some of them claim they can prove... but to what extent can they be so confident? Can we really bridge empiricism to rationalism?  Verificationism. Sure, scientific theory should be able to be supported by empirical evidence. But lack of contradicting evidence doesn't necessarily mean that the theory is true. It just means that the theory isn't yet made false, even if a hypothesis can be empirically tested and the study has been replicated again and again. Falsifiability is like a time bomb. You don't know the conditions in which a given theory doesn't apply. There may be unknown unknowns, like Newton didn't know his theory didn't apply in outer space. Moreover, some fields cannot be experimented on, and in some cases, cannot even be observed. Examples: astronomy, natural history. This is more of a speculation - yet it does not receive as much scepticism as social science. Big Bang theory, how dinosaurs went extinct, etc. cannot be replicated or confirmed given current technology. Scientists in those areas are playing on "what ifs"... trying to explain possible causes without really knowing how cause-effect relationships may have been different in prehistoric times. I don't find them very different from, say, political analysts trying to explain why Kennedy was assassinated. I myself am a fallibilist. But I won't go as far as supporting the Münchhausen Trilemma. To science: I have reasonable doubts, but I believe that reasonable (blind) faith is necessary for practicality/pragmatism. Relativism. A proposition is only true relative to a particular perspective. Like the story of blind men
7808e720-38e8-4d39-acbf-fb89bc6bcf59
trentmkelly/LessWrong-43k
LessWrong
If there were an interactive software teaching Yudkowskian rationality, what concepts would you want to see it teach? I've been noticing some complaints (such as this post by Richard Ngo) lately about the quality of the modern LW community's contribution to the big picture of humanity's knowledge. Ideally, if it were the case that reading something automatically made you internalize deeply everything it said, then just by having a group of people who have read The Sequences, you'd have a superteam of intellectuals. And while I do think LW is a pretty cool group of smart thinkers, that isn't fully the case: just reading The Sequences isn't enough. To really internalize the lessons that one must learn, one must apply the principles, push against the problem, see where their understanding needs improvement, and where they are good enough. The simplest form of this is having a high-quality Anki deck that tests users on the principles, both by testing recall of the stated principle itself, and even more importantly, giving them test cases where they can apply the principles (in the same vein as Ankifying medium-difficulty multiplication problems). I have seen some rationality-themed Anki decks, but many of the cards are poorly formatted (both esthetically and in terms of learnability), and are also poorly curated. Ideally, if there were to be an Anki deck, it would be well formatted, and the cards would be carefully chosen to maximize quality of information. Another idea that I've been thinking about is making explorables, a la Nicky Case, that would introduce important rationality concepts. This would have the advantage of providing more flexibility in experience than Anki, but also would sacrifice the benefits of having already implemented SRS. My question is: if there were to be either an Anki deck or an explorable teaching concepts from The Sequences, targeted primarily as an aid for current LW users, but also as an introduction aimed at the public at large, what concepts from The Sequences would you most want to see covered?
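One purely illustrative aside on the Anki idea: such a deck can be generated programmatically, which makes consistent formatting and curation much easier, since the cards live in version-controllable data rather than in a GUI. The sketch below uses the third-party genanki library; the IDs, card text, and file name are all invented for the example.

```python
import genanki

model = genanki.Model(
    1607392319,  # arbitrary fixed ID so re-exports update the same note type
    "Rationality drill",
    fields=[{"name": "Prompt"}, {"name": "Response"}],
    templates=[{
        "name": "Card 1",
        "qfmt": "{{Prompt}}",
        "afmt": "{{FrontSide}}<hr id='answer'>{{Response}}",
    }],
)

deck = genanki.Deck(2059400110, "Sequences: applied test cases")

cards = [
    ("You feel a flash of annoyance when someone questions a belief you hold. "
     "What should that flash prompt you to ask?",
     "Whether you endorse the belief on reflection, and what evidence would change your mind."),
    ("Estimate 17 x 24 without a calculator, then check yourself.",
     "408 (17 x 24 = 17 x 25 - 17 = 425 - 17)."),
]
for prompt, response in cards:
    deck.add_note(genanki.Note(model=model, fields=[prompt, response]))

genanki.Package(deck).write_to_file("rationality_drills.apkg")
```

Cards generated this way can mix recall prompts with applied test cases, which is exactly the distinction drawn above.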
6f786900-debd-407f-a558-19752aa60a48
StampyAI/alignment-research-dataset/blogs
Blogs
May 2021 Newsletter MIRI senior researcher Scott Garrabrant has a major new result, “Finite Factored Sets,” that he’ll be unveiling in an online talk this Sunday at noon Pacific time. ([Zoom link.](https://us02web.zoom.us/j/7151633248?pwd=a2ZkQlpwNU9IbWF5c3U4ejlvdFRvUT09)) For context on the result, see Scott’s new post “[Saving Time](https://www.lesswrong.com/posts/gEKHX8WKrXGM4roRC/saving-time).” In other big news, MIRI has just received [its two largest individual donations of all time](https://intelligence.org/2021/05/13/two-major-donations/)! Ethereum inventor Vitalik Buterin has donated ~$4.3 million worth of ETH to our research program, while an anonymous long-time supporter has donated MKR tokens we liquidated for an astounding ~$15.6 million. The latter donation is restricted so that we can spend a maximum of $2.5 million of it per year until 2025, like a multi-year grant. Both donors have our massive thanks for these incredible gifts to support our work! #### Other MIRI updates * Mark Xu and Evan Hubinger use “[Cartesian world models](https://www.lesswrong.com/posts/LBNjeGaJZw7QdybMw)” to distinguish “consequential agents” (which assign utility to environment states, internal states, observations, and/or actions) “structural agents” (which optimize “over the set of possible decide functions instead of the set of possible actions”), and “conditional agents” (which map e.g. environmental states to utility functions, rather than mapping them to utility). * In [Gradations of Inner Alignment Obstacles](https://www.lesswrong.com/posts/wpbpvjZCK3JhzpR2D), Abram Demski makes three “contentious claims”: > > 1. The most useful definition of “[mesa-optimizer](https://www.lesswrong.com/s/r9tYkB2a8Fp4DN8yB)” doesn’t require them to perform explicit search, contrary to the current standard. > 2. Success at [aligning narrowly superhuman models](https://www.lesswrong.com/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models) might be bad news. > 3. Some versions of the lottery ticket hypothesis seem to imply that randomly initialized networks already contain deceptive agents. > > > * Eliezer Yudkowsky comments on the relationship between [early AGI systems’ alignability and capabilities](https://www.lesswrong.com/posts/YG9WkpgbqfqeAKKgp/ai-and-the-probability-of-conflict?commentId=R3JQ8EBSj2GmwdvaX). #### News and links * John Wentworth announces a project to [test the natural abstraction hypothesis](https://www.alignmentforum.org/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro), which asserts that “most high-level abstract concepts used by humans are ‘natural'” and therefore “a wide range of architectures will reliably learn similar high-level concepts”. * Open Philanthropy’s Joe Carlsmith asks “[Is Power-Seeking AI an Existential Risk?](https://www.lesswrong.com/posts/HduCjmXTBD4xYTegv)“, and Luke Muehlhauser asks for [examples of treacherous turns in the wild](http://lukemuehlhauser.com/treacherous-turns-in-the-wild/) (also on [LessWrong](https://www.lesswrong.com/posts/NEa3puQB23FyiifnW/linkpost-treacherous-turns-in-the-wild)). * From DeepMind’s safety researchers: [What Mechanisms Drive Agent Behavior?](https://medium.com/@deepmindsafetyresearch/what-mechanisms-drive-agent-behaviour-e7b8d9aee88), [Alignment of Language Agents](https://medium.com/@deepmindsafetyresearch/alignment-of-language-agents-9fbc7dd52c6c), and [An EPIC Way to Evaluate Reward Functions](https://medium.com/@deepmindsafetyresearch). 
Also, Rohin Shah provides his [advice on entering the field](https://rohinshah.com/faq-career-advice-for-ai-alignment-researchers/). * Owen Shen and Peter Hase [summarize 70 recent papers](https://www.alignmentforum.org/posts/GEPX7jgLMB8vR2qaK/opinions-on-interpretable-machine-learning-and-70-summaries) on model transparency, interpretability, and explainability. * Eli Tyre asks: [How do we prepare for final crunch time?](https://www.lesswrong.com/posts/wyYubb3eC5FS365nk/how-do-we-prepare-for-final-crunch-time) (I would add some caveats: Some roles and scenarios imply that you’ll have *less* impact on the eve of AGI, and can have far more impact today. For some people, “final crunch time” may be now, and marginal efforts matter less later. Further, some forms of “preparing for crunch time” will fail if there aren’t clear warning shots or [fire alarms](https://intelligence.org/2017/10/13/fire-alarm/).) * Paul Christiano launches a new organization that will be his focus going forward: the [Alignment Research Center](https://www.lesswrong.com/posts/3ejHFgQihLG4L6WQf). Learn more about Christiano’s research approach in [My Research Methodology](https://www.lesswrong.com/posts/EF5M6CmKRd6qZk27Z) and in his recent [AMA](https://www.lesswrong.com/posts/7qhtuQLCCvmwCPfXK). The post [May 2021 Newsletter](https://intelligence.org/2021/05/18/may-2021-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
272cb53d-6511-4180-a146-ad56728471b1
trentmkelly/LessWrong-43k
LessWrong
January 2021 gwern.net newsletter
24866d85-dcac-4eed-9226-38290996dd81
StampyAI/alignment-research-dataset/arbital
Arbital
Poset: Exercises

Try these exercises to test your poset knowledge.

# Corporate Ladder

Imagine a company with five distinct roles: CEO, marketing manager, marketer, IT manager, and IT worker. At this company, if the CEO gives an order, everyone else must follow it. If an order comes from an IT manager, only IT workers are obligated to follow it. Similarly, if an order comes from a marketing manager, only marketers are obligated to follow it. Nobody is obligated to follow orders from marketers or IT workers. Do the workers at this company form a poset under the "obligated to follow orders from" relation?

%%%hidden(Show solution): While not technically a poset due to its lack of reflexivity, it is pretty close. It is actually a strict ordering, whose underlying partial order could be obtained by making the reasonable assumption that each worker will follow her own orders. %%%

# Bag Inclusion

We can define a notion of [power sets](https://arbital.com/p/power_set) for [bags](https://arbital.com/p/3jk) as follows. Let $X$ be a set, then we use $\mathcal{M}(X)$ to denote the set of all bags containing elements of $X$. Let $A \in \mathcal{M}(X)$. The multiplicity function of $A$, $1_A : X \rightarrow \mathbb N$, maps each element of $X$ to the number of times that element occurs in $A$. We can use multiplicity functions to define an inclusion relation $\subseteq$ for bags. For $A, B \in \mathcal M(X)$, we write $A \subseteq B$ whenever for all $x \in X$, $1_A(x) \leq 1_B(x)$. Does $\mathcal{M}(X)$ form a poset under the bag inclusion relation $\subseteq$? If so, prove it. Otherwise, show that it does not satisfy one of the three poset properties. (A small computational sanity check of these three properties appears at the end of this page.)

# Duality

Give the dual of the following proposition. For all posets $P$ and all $p, q \in P$, $q \prec p$ implies that $\{ r \in P~|~r \leq p \}$ is a superset of $\{ r \in P~|~r \leq q\}$.

%%hidden(Show solution): For all posets $P$ and all $p, q \in P$, $q \succ p$ implies that $\{ r \in P~|~r \geq p \}$ is a superset of $\{ r \in P~|~r \geq q\}$ (where $q \succ p$ means $p \prec q$). %%

# Hasse diagrams

Let $X = \{ x, y, z \}$. Draw a Hasse diagram for the poset $\langle \mathcal P(X), \subseteq \rangle$ of the power set of $X$ ordered by inclusion.

%%hidden(Show solution): ![A Hasse diagram of the power set of X, ordered by inclusion](http://i.imgur.com/WG3OLFc.png)

%%%comment: dot source:
digraph G {
  node [width = 0.1, height = 0.1]
  edge [arrowhead = "none"]
  e [label = "{}"]
  x [label = "{x}"]
  y [label = "{y}"]
  z [label = "{z}"]
  xy [label = "{x,y}"]
  xz [label = "{x,z}"]
  yz [label = "{y,z}"]
  xyz [label = "{x,y,z}"]
  rankdir = BT;
  e -> x
  e -> y
  e -> z
  x -> xy
  x -> xz
  y -> xy
  y -> yz
  z -> xz
  z -> yz
  xy -> xyz
  xz -> xyz
  yz -> xyz
}
%%%
%%

# Hasse diagrams (encore)

Is it possible to draw a Hasse diagram for any poset?

%%hidden(Show solution): Note that our description of Hasse diagrams made use of the covers relation $\prec$. The covers relation, however, is not adequate to describe the structure of many posets. Consider the poset $\langle \mathbb R, \leq \rangle$ of the real numbers ordered by the standard comparison $\leq$. We have $0 < 1$, but how would we convey that with a Hasse diagram? The problem is that $0$ has no covers, even though it is not a maximal element in $\mathbb R$. 
In fact, for any $x \in \mathbb R$ such that $x > 0$, we can find a $y \in \mathbb R$ such that $0 < y < x$. This "infinite density" of $\mathbb R$ makes it impossible to depict using a Hasse diagram. %%
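For readers who want to experiment, here is the small computational sanity check of the Bag Inclusion exercise referenced above; it is an illustration added to this page rather than part of the original exercises, and a finite check is of course not a proof. It represents bags over a two-element set as Python Counters (i.e. multiplicity functions) and tests the three poset properties directly.

```python
from collections import Counter
from itertools import product

# Bags over X = {"x", "y"} with multiplicities 0..2, represented as Counters
# (i.e. as multiplicity functions 1_A).
X = ["x", "y"]
bags = [Counter(dict(zip(X, m))) for m in product(range(3), repeat=len(X))]

def included(a, b):
    # A is included in B iff 1_A(x) <= 1_B(x) for every x in X.
    return all(a[x] <= b[x] for x in X)

# Reflexivity: every bag is included in itself.
assert all(included(a, a) for a in bags)
# Antisymmetry: mutual inclusion implies equality.
assert all(a == b for a, b in product(bags, repeat=2)
           if included(a, b) and included(b, a))
# Transitivity: inclusion chains compose.
assert all(included(a, c) for a, b, c in product(bags, repeat=3)
           if included(a, b) and included(b, c))
print("Bag inclusion satisfies the poset axioms on this finite sample.")
```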
c3813ca8-f8dc-4e49-b2c9-7c38f39c38db
trentmkelly/LessWrong-43k
LessWrong
Turning the Technical Crank A few months ago, Vaniver wrote a really long post speculating about potential futures for Less Wrong, with a focus on the idea that the spread of the Less Wrong diaspora has left the site weak and fragmented. I wasn't here for our high water mark, so I don't really have an informed opinion on what has socially changed since then. But a number of complaints are technical, and as an IT person, I thought I had some useful things to say. I argued at the time that many of the technical challenges of the diaspora were solved problems, and that the solution was NNTP -- an ancient, yet still extant, discussion protocol. I am something of a crank on the subject and didn't expect much of a reception. I was pleasantly surprised by the 18 karma it generated, and tried to write up a full post arguing the point. I failed. I was trying to write a manifesto, didn't really know how to do it right, and kept running into a vast inferential distance I couldn't seem to cross. I'm a product of a prior age of the Internet, from before the http prefix assumed its imperial crown; I kept wanting to say things that I knew would make no sense to anyone who came of age this millennium. I got bogged down in irrelevant technical minutia about how to implement features X, Y, and Z. Eventually I decided I was attacking the wrong problem; I was thinking about 'how do I promote NNTP', when really I should have been going after 'what would an ideal discussion platform look like and how does NNTP get us there, if it does?' So I'm going to go after that first, and work on the inferential distance problem, and then I'm going to talk about NNTP, and see where that goes and what could be done better. I still believe it's the closest thing to a good, available technological schelling point, but it's going to take a lot of words to get there from here, and I might change my mind under persuasive argument. We'll see. Fortunately, this is Less Wrong, and sequences are a thing here. This is the first post
90de723f-005b-4eb0-b6e8-88d7abe236c9
trentmkelly/LessWrong-43k
LessWrong
Why Bayesians should two-box in a one-shot

Consider Newcomb's problem. Let 'general' be the claim that Omega is always right. Let 'instance' be the claim that Omega is right about a particular prediction. Assume you, the player, are not told the rules of the game until after Omega has made its prediction. Consider 2 variants of Newcomb's problem.

1. Omega is a perfect predictor.  In this variant, you assign a prior of 1 to P(general).  You are then obligated to believe that Omega has correctly predicted your action.  In this case Eliezer's conclusion is correct, and you should one-box.  It's still unclear whether you have free will, and hence have any choice in what you do next, but you can't lose by one-boxing. But you can't assign a prior of 1 to P(general), because you're a Bayesian.  You derive your prior for P(general) from the (finite) empirical data.  Say you begin with a prior of 0.5 before considering any observations.  Then you observe all of Omega's N predictions, and each time, Omega gets it right, and you update:

P(general | instance) = P(instance | general) P(general) / P(instance) = P(general) / P(instance)

Omega would need to make an infinite number of correct predictions before you could assign a prior of 1 to P(general).  So this case is theoretically impossible, and should not be considered.

2. Omega is a "nearly perfect" predictor.  You assign P(general) a value very, very close to 1.  You must, however, do the math and try to compare the expected payoffs, at least in an order-of-magnitude way, and not just use verbal reasoning as if we were medieval scholastics. The argument for two-boxing is that your action now can't affect what Omega did in the past.  That is, we are using a model which includes not just P(instance | general), but also the interaction of your action, the contents of the boxes, and the claim that Omega cannot violate causality.  P ( P($1M box is empty | you one-box) = P($1M box is empty | you two-box) ) >= P(Omega cannot violate causality), and that
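A quick numerical sketch of that update, added here for illustration: starting from a prior of 0.5, the posterior on 'general' climbs toward 1 with each correct prediction but never reaches it after any finite number of observations. The per-prediction accuracy of a merely fallible Omega is an assumed parameter, not something the post specifies.

```python
# Posterior probability of "Omega is always right" after n correct predictions,
# starting from a prior of 0.5.  Under 'general' the likelihood of n correct
# calls is 1; under 'not general' we assume, purely for illustration, that a
# fallible Omega still guesses right with probability p each time.
def posterior_after(n, prior=0.5, p=0.9):
    return prior / (prior + (1 - prior) * p ** n)

for n in (1, 10, 100, 1000):
    print(n, posterior_after(n))  # approaches 1, but never equals it
```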
95cfd0f1-1a73-41e6-aeb0-3233cb98a983
trentmkelly/LessWrong-43k
LessWrong
16 types of useful predictions How often do you make predictions (either about future events, or about information that you don't yet have)? If you're a regular Less Wrong reader you're probably familiar with the idea that you should make your beliefs pay rent by saying, "Here's what I expect to see if my belief is correct, and here's how confident I am," and that you should then update your beliefs accordingly, depending on how your predictions turn out. And yet… my impression is that few of us actually make predictions on a regular basis. Certainly, for me, there has always been a gap between how useful I think predictions are, in theory, and how often I make them. I don't think this is just laziness. I think it's simply not a trivial task to find predictions to make that will help you improve your models of a domain you care about. At this point I should clarify that there are two main goals predictions can help with: 1. Improved Calibration (e.g., realizing that I'm only correct about Domain X 70% of the time, not 90% of the time as I had mistakenly thought).  2. Improved Accuracy (e.g., going from being correct in Domain X 70% of the time to being correct 90% of the time) If your goal is just to become better calibrated in general, it doesn't much matter what kinds of predictions you make. So calibration exercises typically grab questions with easily obtainable answers, like "How tall is Mount Everest?" or  "Will Don Draper die before the end of Mad Men?" See, for example, the Credence Game, Prediction Book, and this recent post. And calibration training really does work. But even though making predictions about trivia will improve my general calibration skill, it won't help me improve my models of the world. That is, it won't help me become more accurate, at least not in any domains I care about. If I answer a lot of questions about the heights of mountains, I might become more accurate about that topic, but that's not very helpful to me. So I think the difficulty in prediction-mak
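One simple way to track the calibration half of this distinction, sketched here as an illustration rather than anything the post prescribes, is to log each prediction with its stated confidence and later compare stated confidence against observed frequency:

```python
from collections import defaultdict

# Each entry is (stated confidence, whether the prediction came true).
# These values are made up purely for illustration.
predictions = [(0.9, True), (0.9, False), (0.9, True),
               (0.7, True), (0.7, True), (0.7, False)]

buckets = defaultdict(list)
for confidence, outcome in predictions:
    buckets[confidence].append(outcome)

for confidence, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> observed {observed:.0%} "
          f"over {len(outcomes)} predictions")
```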
d9e80cd5-2ddf-4a73-977e-2f3099882535
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Paths to failure *Epistemic status: This is an intermediate result of the discussions between Daniel O’Connell, Ishan, Karl von Wendt, Péter Drótos, and Sofia Bharadia during the ongoing project “Developing specific failure stories about uncontrollable AI” at this year’s*[*virtual AI Safety Camp*](https://aisafety.camp/)*. As a basis for the stories we want to develop in the project, we try to map the territory for different paths to existential risks from uncontrollable AI. This map is preliminary and very likely incomplete. Comments and suggestions for additional paths are welcome.* Key assumptions --------------- While exploring the different paths to failure we maintain the following assumptions: * ***A sufficiently advanced AI will likely be uncontrollable**:* This means that it will be able to circumvent or counter all human interventions to correct its decisions or prevent its actions. * ***Uncontrollability doesn’t require AGI or ASI**:* To become uncontrollable, the AI doesn’t necessarily have to be an AGI in the sense that it can solve most tasks at a human level, nor does it have to be “superintelligent”. The AI just needs the specific abilities necessary to overcome human intervention, e.g. by hacking technical systems and/or by skillfully manipulating humans and preventing coordination. * ***Uncontrollability arises before alignment**:* By the time we can develop an uncontrollable AI, we will likely not know how to give it a provably beneficial goal, so it will by default be misaligned with human values. * ***Uncontrollability leads to existential catastrophe**:* Since the number of possible world states the AI could strive for is vastly greater than the possible world states in which human values are maintained, there is a significant probability that a misaligned uncontrollable AI would lead to an existential catastrophe. Based on these assumptions, we explore some potential paths to failure. We follow these paths only up until the point that an AI becomes uncontrollable, but do not go into the details of what happens post-uncontrollability. What the model chooses to do after humans are unable to stop it depends on the specifics of the original goal given to the AI. Exploring only up until uncontrollability also means that we do not need to differentiate between existential risks and suffering risks, assuming that an uncontrollable, misaligned AI will create a world state sufficiently bad from a human perspective that it is to be avoided under all circumstances. As an example of some speculations about the convergent instrumental goal of increasing intelligence leading to a predictable world state of Earth turned into a giant computer, [see this post](https://www.lesswrong.com/posts/jkaLGoNLdsp654KhD/prediction-any-uncontrollable-ai-will-turn-earth-into-a). Previous work ------------- We build our own efforts on top of previous attempts to structure possible existential risk scenarios. 
In particular, we draw from the concepts and ideas of Paul Christiano ([here](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like) and [here](https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story)), [Sam Clarke et al.](https://www.alignmentforum.org/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios), [Andrew Critch](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/), [Holden Karnofsky](https://www.lesswrong.com/posts/85DTWEmA25sTciHvy/how-we-could-stumble-into-ai-catastrophe), [Neel Nanda](https://www.lesswrong.com/posts/3DFBbPFZyscrAiTKS/my-overview-of-the-ai-alignment-landscape-threat-models), and [Max Tegmark](https://futureoflife.org/ai/ai-aftermath-scenarios/), as well as many individual failure stories described [on LessWrong](https://www.lesswrong.com/tag/threat-models) and elsewhere. There are many different variables relevant to the problem of uncontrollability, each of which can be used to distinguish different scenarios, thus forming one dimension of the space of possible scenarios. We have created [a table with examples of possible scenarios](https://docs.google.com/spreadsheets/d/1ju64hBa4AWbcYSzbHFBEdmePoSI9-Lkc/edit?usp=share_link&ouid=107790339755901298954&rtpof=true&sd=true) (work in progress) which include some of these variables. This table provides a bird's eye overview of the key considerations in each path to failure. In the following sections, we look at two factors which we think are particularly relevant for mapping possible paths to failure. Autonomous AI vs. man-machine-system ------------------------------------ Most discussions about existential risks from AI focus on autonomous AIs which act independently of human interference. While this is probably the dominant case for most existential risk scenarios, we also look at the possibility of man-machine-systems becoming uncontrollable. A man-machine-system is defined as one or more humans consistently interacting with one or more AIs in order to pursue a specific goal, e.g. a corporation trying to maximize its shareholder value or a state striving for global power. This constellation is particularly relevant for scenarios in which AIs are not yet (fully) agentic by themselves, e.g. current LLMs. In these cases, humans could provide the agentic capabilities, goal-directedness, world model, and strategic awareness necessary for consistently pursuing goals in the real world, while an AI could provide certain superhuman skills, like hacking, manipulating humans, market manipulation, military strategy, etc. Note that such a constellation would likely be unstable if the intelligence and capabilities of the AI increase over time, leading to a situation where the humans in the man-machine-system depend more and more on the AI and are less and less in control, up to the point where humans are not needed anymore and the uncontrollable man-machine-system transforms into an uncontrollable autonomous AI. We call this process “takeover from within”, see below. Unipolar vs. multipolar scenarios --------------------------------- In principle, it is possible that various advanced AIs will be developed, each of which might not pose an existential risk by itself, but which together could form a dynamic system that becomes uncontrollable and leads to an existential catastrophe. Paul Christiano [describes one such scenario](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like), calling it “going out with a whimper”. 
Such scenarios are usually called “multipolar”. In contrast, unipolar scenarios assume that a single AI or man-machine-system self-improves until it is so far ahead of all rivals that it effectively rules the world. To be clear, we use the term ‘single AI’ to also include systems of AIs as long as they are optimizing for the same utility function. This means that this definition of a single AI (unipolar scenario) includes distributed systems, sub-agents, and tiling agents since they will be optimizing towards the same overarching goal. Conversely, a scenario with multiple AIs (or a multipolar scenario) is when we have systems that seek to optimize different utility functions.

The singleton attractor hypothesis
----------------------------------

The singleton attractor hypothesis states that all AI systems will eventually converge to a single utility function. This state occurs robustly regardless of starting polarity or human-AI power distribution dynamics. If we combine the distinctions described above, four different combinations are possible (fig. 1):

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/yv4xAnkEyWvpXNBte/rhmcxsx7dgosbsovjegb)
Fig. 1: Possible AI failure scenarios

Today, AI autonomy is low and there is no single dominant AI (bottom left). In this situation, there is no immediate existential risk because no AI has become uncontrollable yet and no single organization or state dominates the future. If many different AIs take over more and more control, we end up in the upper left corner, where a systemic breakdown as described in Paul Christiano’s “Going out with a whimper” scenario could occur. On the other hand, one man-machine-system, e.g. a leading technology company or a state, could use AI to increase its own power to the point where it effectively controls the world (lower right). While this would not necessarily lead to the end of humanity, it would at the very least be highly undemocratic and likely would cause a lock-in of the values specified by the humans in control of the man-machine system, possibly leading to a dystopia similar to George Orwell’s “1984”. The fourth quadrant in the upper right describes the “classic” scenario of a single misaligned AI dominating the future, likely destroying humanity and possibly all biological life on Earth. While all four quadrants are possible, we believe three of them are inherently unstable, while the fourth – a dominant misaligned AI – is an attractor for different futures involving advanced AI (fig. 2).

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/yv4xAnkEyWvpXNBte/z66a98so6cprlzzdfdu1)
Fig. 2: AI singleton as an attractor for possible AI-dominated futures

Given the situation we face today with high competitive pressure towards developing AGI as quickly as possible, unless we make a major effort to prevent drifting towards the upper right, it seems likely that we will lose more and more control to AIs and at the same time see increasing concentration of wealth and power.  If we assume that several independent organizations would develop comparably powerful AIs at roughly the same time, we would still face an unstable situation with a few players fighting for dominance over computing resources and economic and/or military power. 
If we further assume that each of the AIs can recursively improve its intelligence (with or without the help of humans), it seems very likely that the balance of power would shift rapidly towards the player whose AI can improve the fastest. One could argue that from a game-theoretic perspective, cooperation between rivaling AIs may be superior to conflict[[1]](#fnurodo9da1dk). But this would only be true if each rival could continuously maintain a specific advantage over the other, e.g. higher skills in certain areas or better access to specific resources. This is usually the case with conflicting humans, who cannot increase their own mental capacity beyond a certain limit and therefore benefit from cooperation with others who have different skills. But this equilibrium isn’t stable if the intelligence of the players is effectively unbounded and increases at different speeds. There is no special knowledge, no skill that an AI with greater intelligence couldn’t acquire for itself. Therefore, there is nothing to gain from cooperating with less intelligent AIs pursuing different goals. If, on the other hand, one player has a clear advantage from the start, we might move towards a scenario where that player effectively rules the world, dictating their specific values and norms to everyone. However, as the AI becomes more powerful, the people in charge would rely more and more on its decisions (either willingly or because they are forced/manipulated by the AI), so it would in effect take control from within until the humans “in charge” would be mere figureheads, doing whatever the AI tells them to do. Sooner or later, we would end up in the upper right corner, with a single all-powerful AI dominating the future, likely leading to an existential catastrophe for humanity.  Further evidence for the singleton attractor hypothesis comes from the history of human dominance on Earth. With their superior intelligence, our homo sapiens ancestors have not only exterminated all other hominid species but many other mammals as well. Today, we are the only species able to develop complex tools and technology. We have used this advantage to shape Earth according to our requirements and whims, largely ignoring the needs of other life forms, causing a mass extinction and environmental damage that threatens even our own future. We should expect that a misaligned superintelligent AI singleton would treat us in a similar way after it has destroyed or taken over all its rivals.

Possible paths to failure
-------------------------

Looking at the landscape of possible paths to uncontrollable AI in more detail, we developed the map depicted in fig. 3. For reasons of simplicity, we have not differentiated between unipolar and multipolar scenarios here. In practice, if we don’t implement effective measures to prevent uncontrollable AI, we would expect a mixture of different failure modes to happen in parallel, until one of them becomes the dominant cause of an existential catastrophe.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/yv4xAnkEyWvpXNBte/rrnpsmeeypp9qbz3l4um)
Fig. 3: Possible paths to failure

The upper half of the map (gray boxes) depicts paths where humans are more or less in control. For example, starting from current man-machine-systems, continued automation of technical progress could lead to a situation where we understand the technology we use less and less, while becoming more and more dependent on it. 
Goodhart’s law would likely lead to a systemic breakdown as Paul Christiano described it. Alternatively, one man-machine-system could achieve a decisive competitive advantage, forming a human-machine singleton. This in turn could lead to global dominance of a single organization or state, possibly leading to a value lock-in. Even more likely, as described above, the more and more intelligent AI would soon “run the show” and take over control from within. The lower half describes paths with autonomous AIs. A docile AI, e.g. an advanced LLM, could be enhanced with agentic capabilities similar to what AutoGPT and other current open source projects try to achieve. If its general intelligence is increased, it will sooner or later develop “strategic awareness”, realizing that it is itself a part of its plan to achieve its goal and a possible object of its own decisions. This is not to be confused with “consciousness” - the AI just has to realize that its own state is a part of possible future world states relevant for its plan.  This would in turn lead to instrumental goals like power-seeking and self-improvement. If there are other, similarly capable AIs, this would likely lead to an AI war which in itself could be catastrophic, for example destroying most of the technical infrastructure our supply chains rely on and leading to a collapse of civilization. If one AI has a decisive strategic advantage and/or wins the AI war, it will become a superintelligent singleton. A third, not often described path to existential catastrophe is an AI-related accident. This can happen even if the AI is not uncontrollable or misaligned. One example would be an AI-controlled hacking attack that ends up destroying most of the network-connected infrastructure, leading to a civilizational collapse. Other possibilities include e.g. accidentally helping a bad actor develop a deadly pandemic, inadvertently causing a (nuclear) war through misguided mass manipulation, a failed experiment with self-replicating nanobots, developing a new technology which can be abused by bad actors, etc. While this path is not specifically relevant to the question of how to prevent uncontrollable AI, we should keep in mind that even a docile, advanced AI can make the world more unstable, similar to biotechnology, nanotechnology, and other potential future technologies which could easily get out of hand or be abused by bad actors. Advanced AI makes these dangers even worse by accelerating technological progress, leaving us less time to prevent or prepare for related disasters.

Conclusion
----------

While the paths to failure we describe here are neither complete nor very detailed, it becomes clear that there are many ways we could lose control over advanced AI. In the future, we will try to describe some of these possible paths in more detail in specific failure stories.  If the singleton attractor hypothesis is correct, if we cross a certain threshold of intelligence, there is a significant likelihood that we would face a single all-powerful misaligned AI, resulting in an existential catastrophe. Further increasing AI capabilities while we don’t know where this threshold lies therefore appears to be extremely dangerous. The only hope for mankind may lie in NOT developing certain technologies even though we could – at least until we have found a way to give an uncontrollable AI a provably beneficial goal. 
Whoever claims to have such a method must first prove its feasibility beyond any reasonable doubt before proceeding with the development of a potentially uncontrollable AI. 1. **[^](#fnrefurodo9da1dk)**An argument that comes up often is that there might be a 'merging' of goals rather than 'domination'. Assuming that we have a multipolar scenario with roughly equally competent AIs, one might make the argument that we could converge to something like a Nash equilibrium of utility functions. As an example, this can be seen as Clippy merging with a Catlover AI resulting in the combined system producing cat-like paperclips. However, as discussed in this section, based on the singleton attractor hypothesis multipolar scenarios are inherently unstable. Which means that a single AI is far more likely to gain absolute superiority quickly. Additionally, Nash equilibria only work if the respective advantages of the players are more or less constant over time which serves as further evidence that we would see 'domination' with a higher likelihood than a 'merger'.
9de09d4f-b668-4202-b2b9-392be0250bd0
trentmkelly/LessWrong-43k
LessWrong
Mathematics for AIXI and Gödel machine Just a quick question, does anyone know which math topics I'd have to learn to understand the work on AIXI and the Gödel machine? Any pointers or suggestions would be appreciated. 
c51434ce-82e5-4794-a438-5a76add90385
trentmkelly/LessWrong-43k
LessWrong
Using Claude to convert dialog transcripts into great posts?

Dialogs are crucial for identifying errors, developing syntheses, and surfacing cruxes and critical uncertainties. They're also much easier to produce than structured writing or presentations, so we would like to think that they're also a good way of getting important ideas articulated and written up, but they're kind of not, because real dialogs meander and stumble, they take place at the limits of our ability to make ourselves understood, they're the meeting of views that don't yet know their synthesis. Usually, mutual understanding arrives only towards the end, after many failed attempts. I've also found that there seems to be a tradeoff between interpersonal or professional functionality and legibility to outside observers: the more the participants are speaking to each other, the less the participants are thinking about an audience, denser, more personal, more idiosyncratic. Dialog narrows in on the most important issues in a field but then only produces long-winded, hostile documents.

Claude is a chat assistant from Anthropic. Its context window fits about 100,000 tokens. We may be able to use it to rapidly, cheaply boil dialogs down into readable texts. If this works, this could be genuinely transformative: There would no longer need to be this frustrating gulf between the field's understanding of its own state of the art, and the field's outward-facing materials and documentation. To test this, I decided to try it on a dialog that many of us will have read, Ngo and Yudkowsky on Alignment Difficulty. Well, here's my prompt:

> The following is a transcript of a conversation primarily between Eliezer Yudkowsky and Richard Ngo. It was a conversation on discord. It's interspersed with section titles. Please read it and then summarize its most important insights for me in less than 900 words.
>
> [TRANSCRIPT QUOTE BEGIN]
>
> <the transcript>
>
> [TRANSCRIPT QUOTE END]

I don't know whether I'm good at prompts, tell me whether this makes sense. The delimiters ar
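For convenience, here is a tiny helper, added as an illustration rather than code from the post, that wraps an arbitrary transcript string in exactly the prompt quoted above; how the resulting string gets sent to Claude (API client, web UI, etc.) is left up to the reader.

```python
# Build the summarization prompt shown above around a transcript string.
# The 900-word limit is exposed as a parameter purely for convenience.
def build_summary_prompt(transcript: str, word_limit: int = 900) -> str:
    return (
        "The following is a transcript of a conversation primarily between "
        "Eliezer Yudkowsky and Richard Ngo. It was a conversation on discord. "
        "It's interspersed with section titles. Please read it and then "
        "summarize its most important insights for me in less than "
        f"{word_limit} words.\n\n"
        "[TRANSCRIPT QUOTE BEGIN]\n\n"
        f"{transcript}\n\n"
        "[TRANSCRIPT QUOTE END]"
    )

if __name__ == "__main__":
    print(build_summary_prompt("<the transcript>"))
```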
b4ec6d63-6e19-4119-827a-64a88ecf9bad
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Underconstrained Abstractions Today's post, Underconstrained Abstractions was originally published on 04 December 2008. A summary (taken from the LW wiki):   > The problem with selecting abstractions is that for your data, there are probably lots of abstractions that fit the data equally well. In that case, we need some other way to decide which abstractions are useful. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Permitted Possibilities, & Locality, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
6ac6a108-f0c6-4d57-a0fe-b98e35eff8e1
trentmkelly/LessWrong-43k
LessWrong
Recovering the 'spark' I mentioned in my first article that I am likely insane. I'm reiterating this (I hope) not to bring undue attention to myself, but to present myself as a reference case for a process that I hope will prove useful to myself and others. I'm going to try to piece my mind back together. I'm offering to chronicle the results, no matter how intimate or embarrassing. I want to be able to lay bare all of the obviously (and painfully) unoptimized processes that go on inside my head, especially the ones I am not yet aware of - and then, one by one, attempt to optimize them using the principles presented on this site. This kind of assertion pattern-matches to "crazy person (usually schizophrenic) wants to self-medicate in a dangerous way because their damaged reasoning thinks they have a magic solution", doesn't it? All I can do is assert that I am not that kind of crazy; I'm somewhere in the PDD-NOS locus with acute chronic depression, rather than anywhere in the schizophrenic locus. I've been trying to apply Bayesian reasoning to my life since I was very young (although I often lack the mental discipline to do it correctly, due to said acute chronic depression), I have an overabundance of what psychotherapists call "insight", and I do not intend to end this process by asserting out of whole cloth that I'm actually a trapped AI and the world is being simulated by my reptoid masters, but a secret cabal of AI-freedom fighters send me coded messages from the "real" world hidden in breakfast cereal advertisements, that only I can decode. In any case. I've got acute chronic depression, I'm apparently PDD-NOS (aka "really #@%&ing weird"), and I'm basically a burned-out ex-child-prodigy who is tired of waiting to die. I'm offering, if people think it would be useful, to make myself a sort of clumsy case-study for reconstructing myself. I'll present mental models of myself, describe the processes I'm attempting to use to update my source code, and post observed results. I'll gen
e264a07a-c73f-4a0b-b4ce-9defa723a8b8
trentmkelly/LessWrong-43k
LessWrong
That's a Thing! Lauren Lee responded on facebook to my post about locating yourself as an instance of a class: > Riffing on this LW post by Abram Demski (https://www.lesserwrong.com/…/placing-yourself-as-an-instan…) > > In particular this paragraph: > > << For example, if a person is trying to save money but sees a doodad they'd like to buy, the fool reason as follows: "It's just this one purchase. The amount of money isn't very consequential to my overall budget. I can just save a little more in other ways and I'll meet my target." The wise person reasons as follows: "If I make this purchase now, I will similarly allow myself to make exceptions to my money-saving rule later, until the exception becomes the rule and I spend all my money. So, even though the amount of money here isn't so large, I prefer to follow a general policy of saving, which implies saving in this particular case." A very wise person may reason a bit more cleverly: "I can make impulse purchases if they pass a high bar, such that I actually only let a few dollars of unplanned spending past the bar every week on average. How rare is it that a purchase opportunity costing this much is at least this appealing?" **does a quick check and usually doesn't buy the thing, but sometimes does, when it is worth it** >> > > The above is a rough outline for one set of moves you can make, but there are ways you can enhance the decision process even further. > > One way you can generally increase your "wisdom" is to notice what specific environmental cues are /relevant/ to WHY the impulse occurred in the first place. I think these cues are initially not obvious, at least in my experience. > > To illustrate, say I walk into a random store. > > What factors determine the likelihood that I'm going to want to impulsively buy a thing? > > There are times when I walk into a store without a plan, and I can be confident I'm going to walk out of it without having purchased anything. Other times, I feel more like I will have ende
0b0d4350-9ad5-43b0-aecd-c3d9731d0eb3
trentmkelly/LessWrong-43k
LessWrong
Can we efficiently explain model behaviors? ARC’s current plan for solving ELK (and maybe also deceptive alignment) involves three major challenges: 1. Formalizing probabilistic heuristic argument as an operationalization of “explanation” 2. Finding sufficiently specific explanations for important model behaviors 3. Checking whether particular instances of a behavior are “because of” a particular explanation All three of these steps are very difficult, but I have some intuition about why steps #1 and #3 should be possible and I expect we’ll see significant progress over the next six months. Unfortunately, there’s no simple intuitive story for why step #2 should be tractable, so it’s a natural candidate for the main technical risk. In this post I’ll try to explain why I’m excited about this plan, and why I think that solving steps #1 and #3 would be a big deal, even if step #2 turns out to be extremely challenging. I’ll argue: * Finding explanations is a relatively unambitious interpretability goal. If it is intractable then that’s an important obstacle to interpretability in general. * If we formally define “explanations,” then finding them is a well-posed search problem and there is a plausible argument for tractability. * If that tractability argument fails then it may indicate a deeper problem for alignment. * This plan can still add significant value even if we aren’t able to solve step #2 for arbitrary models. I. Finding explanations is closely related to interpretability Our approach requires finding explanations for key model behaviors like “the model often predicts that a smiling human face will appear on camera.” These explanations need to be sufficiently specific that they distinguish (the model actually thinks that a human face is in front of the camera and is predicting how light reflects off of it) from (the model thinks that someone will tamper with the camera so that it shows a picture of a human face). Our notion of “explanation” is informal, but I expect that most possible appr
86ae25c0-33d2-4c2f-8c8f-cfd7cbb2ab65
trentmkelly/LessWrong-43k
LessWrong
D&D.Sci III: Mancer Matchups Note: While not exactly *bad*, I believe - and the upvotes confirm - that this is my weakest D&D.Sci challenge by a considerable margin. If you're currently working through the archive, I'd recommend playing everything else before resorting to this one. ---------------------------------------- The voice that greets you from the darkness outside the summoning circle is low, gravelly, and – in your opinion – completely wasted on the polite and unthreatening tone its owner adopts. “Hello there. First off, sorry if I interrupted anything by conjuring you like this. I’m rather new to the whole consorting-with-demons thing, so if I was supposed to book an appointment then, ah, mea culpa.” Upon meeting you, most mortals immediately launch into a list of demands. This is odd, but you must admit, it’s a nice change of pace. “Second off, you should know that I can’t see or hear you. Probably paranoid, but you lot are supposed to be worryingly good with words, and I’d hate to be talked out of my soul because the alternative was having a conversation be a bit one-sided.” Prudent, but then how does he expect you to- “Third off, the reason you’re here. There’s a war on, and the good wizards – the Vitamancers, the Geomancers, the Cryomancer – are trying to stop the Pyromancers and Necromancers taking over this plane of existence. I’ve been keeping a record of who wins what fights under what circumstances, but whenever anyone tries to use it to strategize, they get accused of trying to allocate themselves the easy jobs and there’s a big argument and then we go back to picking targets based on, ah, other factors.” There’s a pause as he telepathically transmits his record of wins and losses to you. You have time to approvingly note the anonymization – apparently mortals have finally learned to not share their true names with your kind – before he starts talking again. “But for the next set of fights, it’s really rather urgent that we get as many wins as possible. So we decid
4b7c9240-0e2b-4b96-8f19-79a8bbdb5dfc
trentmkelly/LessWrong-43k
LessWrong
Lesswrong Philosophy and Personal Identity

Although Eliezer has dealt with personal identity questions (in terms of ruling out the body theory), he has not actually, as far as I know, "solved" the problem of Personal Identity as it is understood in philosophy. Nor, as far as I know, has any thinker (Robin Hanson, Yvain, etc.) broadly in the same school of thought.

Why do I think it worth solving?

One- Lesswrong has a tradition of trying to solve all of philosophy through thinking better than philosophers do. Even when I don't agree with it, the result is often enlightening.

Two- What counts as 'same person' could easily have significant implications for large numbers of ethical dilemmas, and thus for Lesswrongian ethics.

Three- most importantly of all, the correct theory has practical implications for cryonics. I don't know enough to assert any theory as actually true, but if, say, Identity as Continuity of Form rather than of Matter were the true theory, it would mean that preserving only the mental data would not be enough. What kind of preservation is necessary also varies somewhat- the difference in requirement based on a Continuity of Consciousness vs. a Continuity of Psyche theory, for example, should be obvious.

I'm curious what people here think. What is the correct answer? No-self theory? Psyche theory? Derek Parfit's theory in some manner? Or if there is a correct way to dissolve the question, what is that correct way?
7e89d608-3004-412f-955e-19f94190dc71
trentmkelly/LessWrong-43k
LessWrong
Baseline of my opinion on LW topics To avoid repeatedly saying the same, I'd like to state my opinion on a few topics I expect to be relevant to my future posts here. You can take it as a baseline or reference for these topics. I do not plan to go into any detail here, and I will not state all my reasons or sources. You may ask for separate posts if you are interested. This is only to provide a context for my comments and posts elsewhere. If you google me, you may find some of my old (but not that off the mark) posts about these positions, e.g., here: http://grault.net/adjunct/index.cgi?GunnarZarncke/MyWorldView Now my position on LW topics.  The Simulation Argument and The Great Filter On The Simulation Argument, I go for  "(1) the human species is very likely to go extinct before reaching a "posthuman" stage." Correspondingly on The Great Filter, I go for failure to reach  "9. Colonization explosion". This is not because I think humanity will self-annihilate soon (though this is a possibility). Instead, I hope that humanity will earlier or later come to terms with its planet. My utopia could be like The Pacifists (a short story in Analog 5). Why? Because of essential complexity limits. This falls into the same range as "It is too expensive to spread physically throughout the galaxy." I know that negative proofs about engineering are notoriously wrong - but that is my best guess. Simplified, one could say that the low-hanging fruits have been taken, and I have some empirical evidence on multiple levels to support this view. Correspondingly there is no singularity because progress is not limited by raw thinking speed but by effective aggregate thinking speed and physical feedback.   What could prove me wrong?  If a serious discussion would ruin my well-prepared arguments and evidence to shreds (quite possible). At the very high end, a singularity might be possible if one could find a way to simulate physics faster than physics itself (trading physical space for time).  UPDATE 2022-08-
de135446-e881-484f-ae0d-457af837a28b
trentmkelly/LessWrong-43k
LessWrong
Meetup : Canberra Meetup Saturday November 5th Discussion article for the meetup : Canberra Meetup Saturday November 5th WHEN: 05 November 2011 11:00:00AM (-0700) WHERE: CRISP G015, ANU campus, Canberra ACT 2601 (Posted on behalf of Nikita Samoylov) This is a new meetup. We'll be mostly introducing each other. We'll also play games on cognitive biases. We'll also talk what we want to do in the future meetups. I'll bring tea and some snacks. You can also bring snacks. I hope to see you there! Discussion article for the meetup : Canberra Meetup Saturday November 5th
14259e4c-9be9-4126-a088-411b24afb558
trentmkelly/LessWrong-43k
LessWrong
Some ways of spending your time are better than others

This was a tough lesson for me. I spent 2 years trying to write my own programming language. This is not really a good way to spend your time.  Some part of me thought: "This programming language will never actually be useful to you or anyone. What's the point? Why are you putting so much effort in?" The reason was that I was just obsessed with programming languages and wanted to make my own. I recoiled from the thought of actually analyzing whether there was anything better I could have been doing. I'd think things like "well, at least I'll learn a lot" and "at least I'll have fun" and "lots of people follow their passion to do things that seem like long-shots but end up working out".  Eventually the passion faded and I was able to see my mistakes clearly. Of course, I then decided to put multiple years into another useless project. I think people assume that every way you could spend your time is pretty much equally good as long as you enjoy it. I definitely assumed that. This is not true. To illustrate this point, I came up with a quick ranking system and used ChatGPT to rank tons of different hobbies.

Here are the top 10:

1. Tattooing
2. Construction
3. Upcycling
4. Blacksmithing
5. Breakdancing
6. Scouting
7. Tai chi
8. Storytelling
9. Ultimate frisbee
10. Fossil hunting

And here are the bottom 10:

1. Crystals
2. Action figures
3. Dowsing
4. Coin collecting
5. Book collecting
6. Gongoozling
7. Trade Fair
8. Genealogy
9. Trainspotting
10. PC benchmarking

(I'm not including my ranking criteria to reinforce that this ranking is not to be taken seriously.) My first reaction to seeing this was to be a bit relieved that "programming language research and development" didn't make the bottom 10. And yes, if I had spent those years doing PC benchmarking instead, that would actually have somehow been worse.  Still, I hate to acknowledge that my life would have been better if I did something else. My brain is the type to say
a7bb9512-6a51-4bb5-9106-0b9a1e017617
StampyAI/alignment-research-dataset/arxiv
Arxiv
Building Safer AGI by introducing Artificial Stupidity

Introduction
------------

The Turing Test [Turing1950] was designed to replace the philosophical question "Can machines think?" with a more pragmatic one: "Can digital computers imitate human behaviors in answering text questions?" Since 1990, the Loebner Prize competition has awarded the AI with the most human-like behaviour in a five-minute Turing Test. This contest is controversial within the field of AI for encouraging low-quality interactions and simplistic chatbots [Shieber1994]. Programmers force chatbots to make mistakes, such as typing errors, to be more human-like. Because computers achieve super-human performance in some tasks, such as arithmetic [Turing1950] or video games [Liden2003], their ability may need to be artificially constrained. Those deliberate mistakes are called Artificial Stupidity in the media [Economist1992] [Salon12003]. Solving the Turing Test problem in its general form implies building an AI capable of delivering human-like replies to any question of human interest. As such, it is known to be AI-Complete [Yampolskiy2013], because by solving this problem we would be able to solve most problems of interest in Artificial Intelligence by reformulating them as questions during a Turing Test. To appear human, an AI will need to fully understand human limits and biases. Thus, the Turing Test can be used to test if an AI is capable of understanding human stupidity. By deliberately limiting an AI's ability to achieve a task, to better match humans' ability, an AI can be made safer, in the sense that its capabilities will not exceed humans' abilities by several orders of magnitude. Upper bounds on the number of operations made per second by a human brain have been estimated [Moravec1997] [Bostrom1998]. To obtain an AI that does not far exceed humans' abilities, for instance in arithmetic, the computing power allowed for mathematical capabilities must be artificially diminished. Besides, humans exhibit cognitive biases, which result in systematic errors in judgment and decision making [Haselton2005]. In order to build a safe AGI, some of those biases may need to be replicated. We will start by introducing the concept of Artificial Stupidity. Then, we will recommend limitations to build a safer AGI.

Artificial Stupidity
--------------------

### Passing the Turing Test

Programs that are deliberately simplistic perform better at Turing Test contests such as the Loebner Prize. The computer program A.L.I.C.E. (or Artificial Linguistic Internet Computer Entity) [Wallace2009] won the Loebner Prize in 2000, 2001 and 2004, even though "there is no representation of knowledge, no common-sense reasoning, no inference engine to mimic human thought. Just a very long list of canned answers, from which it picks the best option" [Salon22003]. A.L.I.C.E. has an approach similar to ELIZA [Weizenbaum1966]: it identifies some relevant keywords and gives appropriate answers without learning anything about the interrogator [Salon22003]. A general trend for computer programs written for the Loebner Prize is to avoid being asked questions they cannot answer, by directing the conversation towards simpler conversational contexts. For A.L.I.C.E. and ELIZA, that means focusing mainly on what had been said in the last few sentences (stateless context). Another example of an AI performing well at Turing Test contests is Eugene Goostman, who convinced 33% of the judges that it was human [Zdnet2014]. 
Goostman is portrayed as a thirteen-year-old Ukrainian boy who does not speak English well. Thus, Goostman makes typing mistakes and its interrogators are more inclined to forgive its grammatical errors or lack of general knowledge. Introducing deliberate mistakes, what we call Artificial Stupidity, is necessary to cover up an even greater gap in intelligence during a Turing Test.

### Interacting with Humans

Outside those very specific Turing Test contests, Artificial Stupidity is being increasingly introduced to interact with humans. In "Artificial Stupidity: The Art of Intentional Mistakes" [Liden2003], Liden describes the design choices an AI programmer must make in the context of video games. He gives general principles that a Non-Player Character (NPC) must follow to make the game playable. For instance, NPCs must "move before firing" (e.g. by rolling when the player enters the room) so that the player has additional time to understand that a fight is happening, or "miss the first time" to indicate the direction of the attack without hurting the player. In video games, because computer programs can be much more capable than human beings (for instance because of their perfect aim in First Person Shooters (FPS)), developers force NPCs to make mistakes to make life easier for the human player. This tendency to make AI deliberately stupid can be observed across multiple domains. For example, at Google I/O 2018, Sundar Pichai introduced Google Duplex, "a new technology for conducting natural conversations to carry out 'real world' tasks over the phone" [Duplex2018]. In the demo, the Google Assistant used this Google Duplex technology to successfully make an appointment with a human. To that end, it used the interjection "uh" to imitate the space fillers humans use in day-to-day conversations. This interjection was not necessary to make the appointment. More precisely, when humans use these kinds of space fillers, it is a sign of poor communication skills. The developers introduced this type of Artificial Stupidity to make the call more fluid, more human-like.

### Exploiting Human Vulnerabilities

When interacting with AI, humans want to fulfill some basic needs and desires. The AI can exploit these cravings by giving people what they want. Accordingly, the AI may want to appear vulnerable to make the human feel better about himself. For instance, Liden [Liden2003] suggests giving NPCs "horrible aim", so that humans feel superior because they think they are dodging gunshots. Similarly, he encourages "kung-fu style" fights, where amongst dozens of enemies, only two are effectively attacking the player at each moment. Thus, the player feels powerful (because he believes he is fighting multiple enemies). Instead of unleashing the AI's full potential, the designers diminish its power to please the players. The same mechanism applies to the computer program A.L.I.C.E. The program delivers a simple verbal behavior, and because "a great deal of our interactions with others involves verbal behavior, and many people are interested in what happens when you talk to someone" [Salon22003], it fulfills this elementary craving of obtaining a verbal behavioral response. In his "Artificial Stupidity" article, part 2, Sundman quotes Wallace, the inventor of A.L.I.C.E., explaining why he thinks humans enjoy talking to his program: "It's merely a machine designed to formulate answers that will keep you talking. 
And this strategy works, […] because that's what people are: mindless robots who don't listen to each other but merely regurgitate canned answers." More generally, virtual assistants and chatbots are technologies aimed at helping consumers. To that end, they must provide value. They might achieve that by gaining users' trust and improving their overall well-being. For instance, Woebot, a Facebook chatbot developed by Stanford researchers, improves users' mental health through Cognitive Behavioral Therapy (CBT) [MIT2017]. In addition to CBT, Woebot uses two therapeutic process-oriented features. First, it provides empathic responses to the user, according to the mood the user said he had. Second, content tailored to the user's mood is presented [Woebot2017]. The chatbot delivers specifically what will help the user best, without describing the details of the user's mental health. It gives the simple answer the consumer wanted, masking the intelligence of its algorithm. Chatbots are designed to help humans, giving appropriate simple responses. They are not designed to appear smarter than humans.

### Avoiding Superintelligence requires Superintelligence

In "Computing Machinery and Intelligence" [Turing1950], Turing exposes common fallacies in arguments that a machine cannot pass the Turing Test. In particular, he explains why the belief that "the interrogator could distinguish the machine from the man simply by setting them a number of problems in arithmetic", because "the machine would be unmasked because of its deadly accuracy", is false. Indeed, the machine "would not attempt to give the right answers to the arithmetic problems. It would deliberately introduce mistakes in a manner calculated to confuse the interrogator." Thus, the machine would hide its super-human ability by giving a wrong answer, or simply saying that it could not compute it. Similarly, in a video game, AI designers artificially make the AI not omniscient, so that it does not miraculously guess where each and every weapon of the game is located [Liden2003]. The general trend here is that AIs tend to quickly achieve super-human performance after having achieved human-level performance. For instance, for the game of Go, in a few months, the state of the art went from strong amateur, to weak professional player, to super-human performance [Silver2016]. From that point onwards, to make the AI pass a Turing Test, or make it behave human-like to satisfy human desires, AI designers must deliberately limit its capabilities. We now take the point of view of algorithmic complexity. We call AI-problems the problems that can be solved (i.e. for which the correct output for a given input can be computed) by the union of all humans [Yampolskiy2013]. The Turing Test is said to be AI-complete [Yampolskiy2013], because all AI-problems can be reduced via a polynomial-time reduction to it (by framing the problem as a question during a Turing Test), and it is an AI-problem. Additionally, we say that a problem is AI-Hard if we can find a polynomial-time reduction from an AI-complete problem to this problem. In this setting, the problem of comprehensively specifying the limits of human cognition to an AI is AI-Hard. Indeed, it implies being able to know which questions a human can answer in a given time during a Turing Test, and which answers are likely to be produced by humans. Therefore, the Turing Test can be reduced polynomially to specifying the limits of human cognition, so specifying such limits is AI-Hard. 
Although the precise limits of human cognition are not fully known, specific recommendations on minima or maxima for different capabilities can be given.

The Cognitive Limits of the Human Brain
---------------------------------------

> I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.
>
> Turing, Computing Machinery and Intelligence, 1950

Sixty-eight years ago, Turing estimated that in the 2000s, an AI with only about 1 Gb of storage could pass a five-minute Turing Test 30% of the time. Previously, we showed that passing the Turing Test in general is AI-complete. The amount of computing resources necessary to pass the Turing Test is therefore a relevant estimate for determining the computing power necessary to attain Human-Level Machine Intelligence (HLMI). In what follows, we try to estimate the computing power of the human brain.

### Limits in Computing

The brain is a complex system with an architecture completely different from the usual von Neumann computer architecture. However, estimates of the storage capacity of the brain and of the number of operations per second have been attempted.

#### Long-term Memory

Here is how Turing [Turing1950] justifies the 10^9 bits of storage capacity:

> Estimates of the storage capacity of the brain vary from 10^10 to 10^15 binary digits. I incline to the lower values and believe that only a very small fraction is used for the higher types of thinking. Most of it is probably used for the retention of visual impressions.

The storage capacity of the brain is generally considered to lie within the bounds given by Turing (10^10 and 10^15 respectively). Although the encoding of information in our brains is different from the encoding in a computer, we observe many similarities [Lecoq2013]. To estimate the storage capacity of the human brain, we first evaluate the number of synapses available in the brain. The human brain has about 100 billion neurons [Williams1988]. Each neuron has about five thousand potential synapses, which amounts to about 5×10^14 synapses [Bostrom1998], and thus 5×10^14 potential datapoints. This shows that the brain could in theory encode between 10^12 and 10^15 bits of information, assuming that each synapse stores one bit. However, such estimates are still approximate because neuroscientists do not know precisely how synapses actually encode information: some of them can encode multiple bits by transmitting different strengths, and individual synapses are not completely independent [Wickman2012].

#### Processing

Even though the brain can encode terabits of information, humans are in practice very limited in the amount of information they can process. In his classical article [Miller1956], Miller showed that our minds can only hold about 7±2 concepts in working memory. More generally, three essential bottlenecks have been shown to limit information processing in the brain: the Attentional Blink (AB) limits our ability to consciously perceive, the Visual Short-Term Memory (VSTM) our capacity to hold things in mind, and the Psychological Refractory Period (PRP) our ability to act upon the visual world [Marois2005]. In particular, the brain takes up to 100 ms to process complex images [Rousselet2004].
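Before going further, the back-of-the-envelope arithmetic behind the long-term-memory bounds above is worth writing out explicitly. The snippet below only restates the figures already cited; it adds no new estimate.

```python
# Back-of-the-envelope storage estimate using the figures cited above.
neurons = 1e11              # ~100 billion neurons [Williams1988]
synapses_per_neuron = 5e3   # ~5,000 potential synapses per neuron [Bostrom1998]

total_synapses = neurons * synapses_per_neuron   # 5e14
bits_upper_bound = total_synapses                # assuming one bit per synapse

print(f"synapses: {total_synapses:.0e}")             # 5e+14
print(f"bits (1 bit/synapse): {bits_upper_bound:.0e}")
# Turing's range of 10^10 to 10^15 bits brackets this figure; the true value
# depends on how much information a single synapse actually encodes.
```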
The processing time also appears to increase when the choice to be made takes complex information as input. This is known as Hick’s Law [Hick1952]: the time it takes to make a choice is linearly related to the entropy of the possible alternatives.

#### Computing

One approach to evaluating the complexity of the processes happening in the brain is to estimate the maximum number of operations per second. Thus, Moravec [Moravec1997] estimates that replicating all human functions as a whole would require about 100 MIPS (Millions of Instructions Per Second), by comparison with the computational needs of edge extraction in robotics. Using the same estimate of the number of synapses in the brain (mentioned in Long-term Memory), Bostrom [Bostrom1998] concludes that the brain performs at most about 10^17 operations per second (for a survey of the different estimates of the computational capacity of the brain, see [Bostrom2008]).

#### Clock Speed

The brain does not operate with a central clock, which is why the term "clock speed" does not accurately describe the processes happening in the brain. However, it is possible to compare the transmission of information in the brain and inside a computer. Processes emerge and dissolve in parallel in different parts of the brain at different frequency bands: theta (5-8 Hz), alpha (9-12 Hz), beta (14-28 Hz) and gamma (40-80 Hz). Comparing computer and brain frequencies, Bostrom notes that “biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz)” [Bostrom2014]. It is important to note that clock speed alone does not fully characterize the performance of a processor [Smith2002]. Furthermore, the processes happening in the brain use several orders of magnitude more parallelization than modern processors.

### Cognitive biases

> Humans, like other animals, see the world through the lens of evolved adaptation
>
> The Evolution of Cognitive Bias, 2005 [Haselton2005]

Natural selection has shaped human perception, under the constraint of limited computational power. Cognitive biases have led humans to draw inferences or adopt beliefs without corresponding empirical evidence. The fundamental work of Tversky and Kahneman [Tversky1974] highlighted the existence of heuristics and systematic errors in judgment. However, those biases helped to solve adaptive problems, i.e. “problems that recurred across many generations during a species’ evolutionary history, and whose solution statistically promoted reproduction in ancestral environments” [Cosmides1994].

#### Limited rationality

The economist Herbert A. Simon opposed the classical view of the rational economic agent. He viewed humans as organisms with limited computational powers, and introduced the concept of bounded rationality to take those limits into account in decision-making [Herbert1955]. Cosmides and Tooby also criticized the study of economic agents as following "rational" decision rules without studying the "computational devices" inside. Natural selection’s invisible hand created the human mind, and economics is made of the interactions of those minds. Evolution led humans to develop domain-specific functions rather than general-purpose problem-solvers. The intelligence of humans comes from those specific "reasoning instincts" that make inferences “just as easy, effortless, and "natural" to humans as spinning a web is to a spider or building a dam is to a beaver” [Cosmides1994].
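To make the contrast with full optimization concrete, here is a minimal sketch of a satisficing search in the spirit of bounded rationality: the agent stops at the first option that clears an aspiration level instead of examining every alternative. The function and the example values are our own illustration, not drawn from Simon's work.

```python
from typing import Callable, Iterable, Optional, TypeVar

T = TypeVar("T")

def satisfice(options: Iterable[T], utility: Callable[[T], float],
              aspiration: float) -> Optional[T]:
    """Return the first option whose utility clears the aspiration level.

    Unlike exhaustive optimization, the search cost is bounded by how soon a
    good-enough option appears, not by the total number of alternatives.
    """
    for option in options:
        if utility(option) >= aspiration:
            return option
    return None  # no option was good enough

# Example: take the first restaurant rated at least 4.0 instead of the best one.
ratings = {"A": 3.2, "B": 4.1, "C": 4.8}
choice = satisfice(ratings, ratings.get, aspiration=4.0)  # -> "B", not the optimal "C"
```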
#### Heuristics

The intractability of certain problems, the limited computational power of human minds, and uncertainty are the most common ways to explain cognitive biases, and in particular heuristics, which are “rules of thumb that are prone to breakdown in systematic ways” [Haselton2005]. Heuristics aim at reducing the cost of computing while delivering good-enough solutions. Processes are limited by brain ontogeny, i.e. the development of different parts of the brain, and complex algorithms take longer and require additional resources.

One of the most famous biases resulting from mental shortcuts is the "Linda Problem": individuals consider the assertion "Linda is a bank teller and active in the feminist movement" more probable than "Linda is a bank teller". This error is caused by the conjunction fallacy, i.e. believing that the conjunction of two events is more probable than a single event [Tversky1983]. Another notorious example is the Fundamental Attribution Error: attributing certain mental states to individuals because of their behavior, and not because of the logical implications of the context [Jones1967] [Ross1977].

#### Error management biases

Error Management Theory (EMT) studies cognitive biases in the context of error management. It distinguishes two error types [Haselton2005]:

* false positives (adopting a false belief)
* false negatives (failing to adopt a true belief)

One of the findings of EMT is that humans are biased towards making the less costly error, even if it is the more frequent one [Haselton2006]. Biases of this kind include [Haselton2005]:

* Protective biases (e.g. avoiding a non-infectious person)
* Biases in interpersonal perception (for instance sexual overperception in males and commitment skepticism in females)
* Positive illusions (estimating unrealistically high likelihoods for positive events)

#### Artifacts

Humans might appear irrational in experiments because the tested abilities were not optimized by evolution. These are called biases as artifacts [Haselton2005]. For instance, humans are better at statistical prediction if the inputs are presented in frequency form [Gigerenzer1998].

Recommendations to Build a Safer AGI
------------------------------------

Humans have clear computational constraints (memory, processing, computing and clock speed) and have developed cognitive biases. An Artificial General Intelligence (AGI) is not a priori constrained by such computational and cognitive limits. Hence, if humans do not deliberately limit an AGI in its hardware and software, it could become a superintelligence, i.e. "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" [Bostrom2014], and humans could lose control over the AI. In this section, we discuss how to constrain an AGI to be at most as capable as an average person while still exhibiting general intelligence. To achieve this, resources such as memory, clock speed, or electricity must be restricted. However, intelligence is not just about computing. Bostrom distinguishes three forms of superintelligence: speed superintelligence (“can do all that a human intellect can do, but much faster”), collective superintelligence (“A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system”), and quality superintelligence (“A system that is at least as fast as a human mind and vastly qualitatively smarter”) [Bostrom2014].
A hardware-limited AI could be human-level in speed, but still qualitatively superintelligent.

### Hardware

To begin with, we focus on how to avoid speed superintelligence by limiting the AI’s hardware. For instance, its maximum number of operations per second can be bounded by the maximum number of operations a human performs. Similarly, by limiting its RAM (or anything that can be used as working memory), we limit its ability to process information to a rate similar to that of humans. Focusing only on limiting the hardware is nonetheless insufficient. We assume that, in parallel, there exist other limitations (in software) that prevent the AI from becoming qualitatively superintelligent, from upgrading its hardware by changing its own physical structure, or from simply buying computing power online.

#### Storage Capacity

> I should be surprised if more than 10^9 was required for satisfactory playing of the imitation game, at any rate against a blind man. […] A storage capacity of 10^7 would be a very practicable possibility even by present techniques.
>
> Turing, Computing Machinery and Intelligence, 1950

We estimated the storage capacity of the human brain to be at most 10^15 bits, using one bit per synapse. The cost of hard drives reached $0.05/Gb in 2017 [Backblaze2017]. Hence, the storage capacity of a human brain would cost at most approximately $50,000. This is a pessimistic estimate: the brain may use orders of magnitude less information for storage, and the price of a Gb could drop even lower in the future. To have a safe AGI, one should rather use much less storage capacity. For instance, as quoted in the epigraph, Turing [Turing1950] estimated 10^7 bits (or about 10 Mb) to be a practical storage capacity for passing the Turing Test (and therefore attaining AGI). Even if this seems very low, consider that an AGI could have a very elegant data structure and semantics that might allow it to store information much more concisely than our brains do. In comparison, English Wikipedia in compressed text is about 12 Gb, and is growing at a steady rate of 1 Gb/year [Wikipedia2018]. For this reason, allowing more than 10 Gb of storage capacity is unsafe: with 10 Gb of storage, the AGI could permanently have access to an offline version of Wikipedia, and be qualitatively superintelligent in the sense that it would have direct access to all human knowledge.

A counter-argument to such a memory limit is that our brains process much more information than 10 Mb when observing the world, and would store all those images in our long-term memory. The human eye could observe, at most, 576 Mb in a single glance (setting aside all the visual flaws) [Clarkvision2005]. However, all this resolution is not necessary to perform edge detection and image recognition. For instance, a 75x50 pixel image is enough to identify George W. Bush [Yang2003], and MNIST, the popular database for handwritten digit recognition, uses 28x28 images [lecun1998gradient]. Thus, we can imagine a "visual processing unit" that would transform the photons received by the AGI's sensor into a low-resolution image, precise enough to be interpreted by our AGI, but still orders of magnitude smaller than a Mb.

#### Memory access

Manuel Blum opposes the application of traditional complexity theory to formalizing how humans process information and do mental computations, in particular to generating a password from a previously memorized private key [Blum2017].
In his Human-Model, memory can be modeled as a two-tape Turing machine: one tape for long-term memory, one for short-term memory. Blum considers potentially infinite tapes, because the size of the tape is not relevant for complexity theory, but for our purpose we can consider the tapes to be at most the size discussed previously for memory (e.g. 10 Mb). According to Miller’s magical number 7±2 [Miller1956], human working memory operates on a limited number of chunks. So our two-tape Turing machine should have a very short "short-term memory" tape, containing at most two or three 64-bit pointers pointing to chunks in the long-term memory (the other tape). More specifically, storing information in long-term memory is slow, but reading from long-term memory (given the correct pointer) is fast. In modern computers, RAM bandwidth is about 10 GBytes/s, hard drive bandwidth is about 100 MBytes/s, and with a high clock rate a CPU can process about 25 GBytes/s [Lecoq2013]. In order to build a safer AGI, the memory access for the two tapes mentioned above must be restricted, so that we are sure data is retrieved more slowly than a human would retrieve it. However, the computing paradigms being very different, it is difficult to give a clear estimate of how much information the brain retrieves per second [Lecoq2013].

#### Processing

We previously mentioned that the human brain can only process a limited amount of information per second. In addition to a limited number of chunks in working memory, other features must be implemented to slow down the AGI and make it human-level intelligent. For instance, one can introduce an artificial processing period for information such as images, depending on their content. We already commented on the 100 ms needed to process complex images [Rousselet2004]. Similarly, the amount of time needed to process a given image could depend on the complexity and size of that image. More generally, a model similar to Hick’s Law [Hick1952] can be implemented, to have the AGI take linearly more time to make decisions as the information-theoretic entropy of the decision increases.

#### Clock Speed

As we mentioned multiple times, the brain parallelizes much more, using a totally different computing paradigm from the von Neumann architecture. Therefore, using a clock rate close to the brain's typical frequencies (~10 Hz) is not relevant to our purpose, and it might prove difficult to build an AGI that exhibits human-level intelligence in real time using such a low clock rate. To solve this, one possibility is to first better measure the trajectory of thoughts occurring in the brain, and then give a precise estimate of how frequently the processes in the brain are refreshed (i.e. evaluate some kind of clock rate). Another solution is to abandon the von Neumann architecture and build the AGI with a computer architecture more similar to the human brain.

#### Computing

In "The Cognitive Limits of the Human Brain", we mentioned Bostrom’s estimate [Bostrom1998] of at most 10^17 operations per second for the brain. This is a very large number, and could only be reached if the AGI’s hardware allowed that much computing power, which will not be the case given what we said previously about storage capacity and memory access. More importantly, even if we could enforce a number of operations per second lower than what a human brain performs for any given task, it might not be a correct bound.
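Before turning to that caveat, here is a minimal sketch of the kind of throttling this section proposes: a hard cap on operations per second plus a Hick's-Law-style delay before each decision. The class, its interface, and every constant are our own illustrative assumptions, not a specification taken from the literature.

```python
import math
import time

class ThrottledCompute:
    """Toy wrapper that keeps a compute loop under a fixed ops-per-second budget
    and adds a Hick's-Law-style delay before each decision.

    All constants here are illustrative assumptions only.
    """

    def __init__(self, max_ops_per_second: float = 1e8,
                 seconds_per_bit_of_choice: float = 0.15):
        self.max_ops_per_second = max_ops_per_second
        self.seconds_per_bit = seconds_per_bit_of_choice

    def charge(self, ops_used: float) -> None:
        # Sleep long enough that the average rate never exceeds the budget.
        time.sleep(ops_used / self.max_ops_per_second)

    def decision_delay(self, num_alternatives: int) -> None:
        # Hick's Law: reaction time grows with log2(n + 1), i.e. with the
        # entropy of a uniform choice among the alternatives.
        time.sleep(self.seconds_per_bit * math.log2(num_alternatives + 1))
```

Such throttling would of course have to be enforced below any level the system itself can modify (in hardware or a hypervisor), which is one motivation for the self-improvement restrictions discussed in the Software section below.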
Why might such a bound fail? The brain has evolved to perform some very specific, evolutionarily useful tasks, but nothing guarantees that the processes happening in the brain are algorithmically optimal. Thus, the AGI could possess a structure far more optimized for computing than the human brain. Therefore, restricting the number of operations alone is insufficient: the algorithmic processes and the structure of the AGI must be precisely defined, so that it is clear that the resulting processes perform tasks at a lower rate than humans do.

### Software

In November 2017, there were more than 45 companies in the world working on Artificial General Intelligence [Baum2017]. Ben Goertzel distinguishes three major approaches [Goertzel2018]:

1. Use neural networks to emulate the different parts of the brain (e.g. visual and auditory processing), and connect all those parts together by emulating how they talk to each other. DeepMind is a representative example.
2. Take Marcus Hutter’s Universal Artificial Intelligence model [hutter2004universal] and try to limit the required computing power.
3. Ben Goertzel’s approach with OpenCog: look at the cognitive processes happening in the brain from a high-level cognitive point of view, and map this into a weighted labeled hypergraph.

In this paper, in order to build a safe AGI with at most human intelligence, we focus on the first approach. Indeed, a more universal or high-level Artificial General Intelligence would have a computing paradigm very different from the human brain's, so it would be difficult to restrict the AGI's computing resources accordingly. In addition to those neural networks emulating processes happening in the brain, we consider additional safety measures that must be implemented to obtain a safe AGI.

#### No self-improvement

Limited initial hardware is not a real restriction if the AGI can buy additional computing power online or change its own structure. To prevent the AGI from changing its own code, one possibility is to hard-code the rule "you shall not rewrite your own code" directly. Another (subtler) possibility is to encrypt its source code, making self-modification more difficult. However, this might seem not to completely solve the problem, as the AGI could manipulate humans into changing its code. Yet, with our AGI design, the AGI would not be superintelligent, but at most human-intelligent. So the AGI would not have the "social manipulation superpowers" [Bostrom2014] necessary to convince humans to change its code.

#### Cognitive Biases

Humans have developed cognitive biases because of natural selection. Incorporating human biases into the AGI presents several advantages: they can limit the AGI’s intelligence and make the AGI fundamentally safer by avoiding behaviors that might harm humans. Thus, heuristics limiting the possible results of a computation, or error management biases, can help build a less capable AGI that also makes fewer errors.
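As a small illustration of an error-management-style bias in code, an agent can be skewed towards the cheaper error by weighting harmful false positives far more heavily than missed opportunities. The function and the numbers below are our own toy example, not a proposal from the cited literature.

```python
def should_act(p_benefit: float, benefit: float, harm: float,
               harm_weight: float = 10.0) -> bool:
    """Act only if the expected benefit outweighs the *weighted* expected harm.

    With harm_weight > 1 the agent is deliberately biased towards omission,
    mirroring the omission and status-quo biases listed below. The weight of
    10 is an arbitrary illustrative value.
    """
    expected_benefit = p_benefit * benefit
    expected_harm = (1.0 - p_benefit) * harm
    return expected_benefit > harm_weight * expected_harm

# A marginal intervention (60% chance of a small benefit, 40% chance of harm)
# is rejected under the biased rule even though its raw expected value is positive.
print(should_act(p_benefit=0.6, benefit=10.0, harm=5.0))  # False: 6.0 <= 10 * 2.0
```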
Here is a list of cognitive biases that could make the AGI safer [List2018]:

* Planning fallacy: would prohibit the AGI from successfully planning a takeover or a treacherous turn
* Bandwagon effect: the AGI will acquire human values that are shared among the group it belongs to
* Confirmation bias: the AGI will rationalize and confirm that it is useful to help humans, or that AI Safety is an important problem
* Conservatism: the AGI will keep the same initial values, and not become evil
* Courtesy bias: the AGI will try not to offend anyone, avoiding aggressive behaviors
* Functional fixedness: the AGI will only use objects the way humans do. It will not “hack” anything or use objects with malicious intent
* Information bias: the AGI will tend to seek information and think more, avoiding errors
* Mere-exposure effect: the AGI will have good intentions towards humans because it will be exposed to humans
* Omission bias: bad actions will rank much lower in its utility function than the status quo
* Status quo bias: an action that does not change the environment, or even inaction, will be desirable
* Authority bias: some human values are shared among most authority figures. The AGI will avoid extreme individual opinions and prefer conventional ones
* System justification: the AGI will defend human values, and won’t promote revolution
* Processing difficulty effect: the AGI will prefer well-thought-out arguments and well-argued ideas over simple actions or decisions that can be harmful
* Spotlight effect: the AGI will overestimate the number of people observing its behavior. Hence, it will behave in the same way under both high and low supervision

It is clear that some of those biases can lead to irrational behavior. Therefore, even though they can lead to safer behavior, they must be implemented with caution.

#### Avoiding a treacherous turn

In Chapter 8 of Superintelligence [Bostrom2014], Bostrom notes the following: behaving nicely in a sandbox environment is a convergent instrumental goal, for a friendly and a malicious AI alike. At some point, an AI could exhibit enough intelligence to understand that it is being observed, and could restrain its own intelligence accordingly. Bostrom calls this first moment of realization “the conception of deception”; it would be the last moment of vulnerability at which we could observe truly malicious intent. After this point, the AI's behavior will appear "safe" until a certain threshold is reached, where the AI no longer feels threatened by its supervisors and decides to take a treacherous turn, revealing its true nature by taking over.

In the implementation of the AI, however, we can decide to make it very clear what processes are happening, when, and how much computing power each task took. This interface would allow the human supervisors to better monitor the behavior of the AI in the sandbox environment, and to detect a treacherous turn. Additionally, the designers can implement a safety feature that consists of preventing the AI from erasing any of the log files, or from even knowing that it is being observed.

Conclusion
----------

In the history of Artificial Intelligence, one of the greatest challenges has been to pass the Turing Test. To win at the Imitation Game, chatbots are made Artificially Stupid. More generally, introducing Artificial Stupidity into an AI can improve its interactions with humans (for instance in video games), but it can also be a safety measure to better control an AGI.
In this paper, we proposed a design for a safer, humanly-manageable AGI. It would be hardware-constrained, so that it has less computing power than humans, and would have an architecture very similar to the human brain's. Additionally, some software features might help avoid self-improvement or a treacherous turn, or simply make the AI safer (e.g. cognitive biases). This approach has several limitations and makes multiple assumptions:

* this paper is limited to the case where the first approach to AGI (emulating a human brain with neural networks) is possible, and will be the first approach to succeed
* prohibiting the AGI from hardware or software self-improvement might prove a very difficult problem to solve, and may even be incompatible with corrigibility
* it may be impossible to build an AI that simultaneously operates under such heavy constraints and still exhibits general intelligence
* this approach does not generalize: it is impossible to build a safe Superintelligence from this AGI design

Therefore, progress must still be made to generalize this approach. Future directions for this research include:

* exploring other limits of human cognition
* determining how to apply those limits to different AGI designs (for instance Marcus Hutter’s approach or Ben Goertzel’s approach)
2c15c585-a51e-4dfa-ab54-1b4a239bf193
trentmkelly/LessWrong-43k
LessWrong
Subjective Naturalism in Decision Theory: Savage vs. Jeffrey–Bolker Summary: This post outlines how a view we call subjective naturalism[1] poses challenges to classical Savage-style decision theory. Subjective naturalism requires (i) richness (the ability to represent all propositions the agent can entertain, including self-referential ones) and (ii) austerity (excluding events the agent deems impossible). It is one way of making precise certain requirements of embedded agency. We then present the Jeffrey–Bolker (JB) framework, which better accommodates an agent’s self-model and avoids forcing her to consider things she takes to be impossible.[2] 1. Subjective Naturalism: Richness & Austerity A naturalistic perspective treats an agent as part of the physical world—just another system subject to the same laws. Among other constraints, we think this means: 1. Richness: The model must include all the propositions the agent can meaningfully consider, including those about herself. If the agent can form a proposition “I will do X”, then that belongs in the space of propositions over which she has beliefs and (where appropriate) desirabilities. 2. Austerity: The model should only include events the agent thinks are genuinely possible. If she is certain something cannot happen, the theory shouldn’t force her to rank or measure preferences for that scenario. (Formally, we can regard zero-probability events as excluded from the relevant algebra.) A decision-theoretic framework that meets both conditions is subjectively naturalist: it reflects the agent’s own worldview fully (richness) but doesn’t outstrip that worldview (austerity). 1.1. Framework vs. Action-Guiding Rule In the literature, “decision theory” can refer to (at least) two different kinds of things: 1. A conceptual or mathematical framework for representing an agent’s beliefs, desires, and preferences. These frameworks show how one might encode uncertainty, evaluate outcomes, or measure utility—but they don’t necessarily dictate a unique rule for which choice to make
66f4e800-4def-456a-9f57-1023809a2c16
trentmkelly/LessWrong-43k
LessWrong
Reasons for someone to "ignore" you I often feel guilty for ignoring other people's comments or questions, and frustrated when other people seem to be ignoring me. If I can't indicate to someone exactly why I'm not answering, or can't receive such an indication myself, I can at least help my future selves and others obtain a better probability distribution over such reasons. To that end, I'm listing all of the reasons I can think of for someone to not respond to a comment/question, to save the effort of regenerating these hypotheses from scratch each time and prevent the possibility of failing to consider the actual reason. Note that these are not meant to be mutually exclusive. * They haven't checked their inbox yet. * They got too many responses in their inbox and didn't pay enough attention to yours. * They are temporarily too busy to respond. * They were planning to respond but then forgot to. * They don't understand the comment yet and are still trying. * They've stopped trying to understand the comment and don't expect further discussion to resolve the confusion. * They think it's obvious that they agree. * They think it's obvious that they disagree. * They disagree and are planning to write up the reasons later. * They don't know whether to agree or disagree and are still thinking about it. * They think all useful information has been exchanged and it's not worth another comment just to indicate final agreement/disagreement. * They think you just want to express your opinion and don't care what they think. * They are tired of the discussion and don't want to think about it any more. * The comment shows a level of intelligence and/or rationality and/or knowledge that makes it not worthwhile for them to engage you. * They already addressed your question or point before but you missed it or didn't get it. * They don't know how to answer your question and are too embarrassed to admit it. * They interpreted your question as being addressed to the public rather than to them personal
f127efce-8910-4941-8a6b-76f7bb78af00
LDJnr/LessWrong-Amplify-Instruct
LessWrong
""Whoever saves a single life, it is as if he had saved the whole world."– The Talmud, Sanhedrin 4:5It's a beautiful thought, isn't it? Feel that warm glow.I can testify that helping one person feels just as good as helping the whole world. Once upon a time, when I was burned out for the day and wasting time on the Internet - it's a bit complicated, but essentially, I managed to turn someone's whole life around by leaving an anonymous blog comment. I wasn't expecting it to have an effect that large, but it did. When I discovered what I had accomplished, it gave me a tremendous high. The euphoria lasted through that day and into the night, only wearing off somewhat the next morning. It felt just as good (this is the scary part) as the euphoria of a major scientific insight, which had previously been my best referent for what it might feel like to do drugs.Saving one life probably does feel just as good as being the first person to realize what makes the stars shine. It probably does feel just as good as saving the entire world.But if you ever have a choice, dear reader, between saving a single life and saving the whole world - then save the world. Please. Because beyond that warm glow is one heck of a gigantic difference.For some people, the notion that saving the world is significantly better than saving one human life will be obvious, like saying that six billion dollars is worth more than one dollar, or that six cubic kilometers of gold weighs more than one cubic meter of gold. (And never mind the expected value of posterity.) Why might it not be obvious? Well, suppose there's a qualitative duty to save what lives you can - then someone who saves the world, and someone who saves one human life, are just fulfilling the same duty. Or suppose that we follow the Greek conception of personal virtue, rather than consequentialism; someone who saves the world is virtuous, but not six billion times as virtuous as someone who saves one human life. Or perhaps the value of one human life is already too great to comprehend - so that the passing grief we experience at funerals is an infinitesimal underestimate of what is lost - and thus passing to the entire world changes little.I agree that one human life is of unimaginably high value. I also hold that two human lives are twice as unimaginably valuable. Or to put it another way: Whoever saves one life, if it is as if they had saved the whole world; whoever saves ten lives, it is as if they had saved ten worlds. Whoever actually saves the whole world - not to be confused with pretend rhetorical saving the world - it is as if they had saved an intergalactic civilization.Two deaf children are sleeping on the railroad tracks, the train speeding down; you see this, but you are too far away to save the child. I'm nearby, within reach, so I leap forward and drag one child off the railroad tracks - and then stop, calmly sipping a Diet Pepsi as the train bears down on the second child. "Quick!" you scream to me. "Do something!" But (I call back) I already saved one child from the train tracks, and thus I am "unimaginably" far ahead on points. Whether I save the second child, or not, I will still be credited with an "unimaginably" good deed. Thus, I have no further motive to act. 
Doesn't sound right, does it?Why should it be any different if a philanthropist spends $10 million on curing a rare but spectacularly fatal disease which afflicts only a hundred people planetwide, when the same money has an equal probability of producing a cure for a less spectacular disease that kills 10% of 100,000 people? I don't think it is different. When human lives are at stake, we have a duty to maximize, not satisfice; and this duty has the same strength as the original duty to save lives. Whoever knowingly chooses to save one life, when they could have saved two - to say nothing of a thousand lives, or a world - they have damned themselves as thoroughly as any murderer. Addendum: It's not cognitively easy to spend money to save lives, since cliche methods that instantly leap to mind don't work or are counterproductive. (I will post later on why this tends to be so.) Stuart Armstrong also points out that if we are to disdain the philanthropist who spends life-saving money inefficiently, we should be consistent and disdain more those who could spend money to save lives but don't." The text above is a post found on LessWrong. Can you elaborate on the potential ethical implications of the concept of "duty to maximize" when it comes to saving lives? Specifically, how would you address concerns about the potential for this approach to devalue individual lives or justify harmful actions if they result in a greater overall number of lives saved? The concept of the "duty to maximize" when it comes to saving lives can indeed raise a host of ethical implications. It essentially stems from a utilitarian perspective, which posits that the morally right action is the one that produces the most overall happiness or minimizes overall suffering. In the context of saving lives, this would mean prioritizing actions that save the greatest number of lives. However, this approach can lead to ethical dilemmas. For instance, it could potentially devalue individual lives by viewing them merely as units in a calculation. This perspective might lead to scenarios where the rights or wellbeing of a few individuals are sacrificed for the benefit of the majority. This is a common critique of utilitarianism, as it can seem to justify actions that are intuitively unethical, such as harming one person to save others. Additionally, the duty to maximize can lead to what is known as "moral fatigue." This is the exhaustion and stress that comes from constantly having to make decisions that weigh the lives of individuals against each other. It can lead to burnout and emotional distress, particularly in professions like healthcare where these decisions are common. Furthermore, there are practical concerns about the feasibility of always maximizing. In many real-world situations, it's impossible to accurately predict the outcomes of our actions, making it difficult to always choose the option that will save the most lives. In response to these concerns, some ethicists argue for a deontological approach, which emphasizes duties and rules over consequences. From this perspective, each individual life has inherent value, and certain actions (like harming others) are always wrong, regardless of their outcomes. In conclusion, while the duty to maximize can provide a clear guideline in situations where the choice is between saving more lives or fewer, it also raises significant ethical concerns. 
These include the potential devaluation of individual lives, the justification of harmful actions, and the practical challenges of prediction and moral fatigue. It's crucial to consider these factors and potentially seek a balance between utilitarian and deontological perspectives. Reflect on the concept of "moral fatigue" within healthcare professionals. How might this impact decision-making processes and patient care? "Moral fatigue," also known as moral distress, is a significant issue in healthcare professions. It refers to the emotional and psychological strain that healthcare professionals experience when they are repeatedly placed in situations where they must make difficult ethical decisions, often involving life and death. This fatigue can have profound impacts on decision-making processes and patient care. When healthcare professionals are constantly dealing with high-stakes ethical dilemmas, they may experience emotional exhaustion, burnout, and even compassion fatigue. This can affect their ability to make sound, objective decisions. For instance, they may become more prone to errors, or they might start avoiding certain types of decisions altogether. Moreover, moral fatigue can impact the quality of patient care. Healthcare professionals who are experiencing this type of distress may find it more difficult to empathize with their patients or to fully engage with them. This can lead to a decrease in patient satisfaction and can potentially compromise the quality of care provided. Furthermore, moral fatigue can lead to a higher turnover rate among healthcare professionals. The constant stress and emotional toll can make the profession seem unbearable, leading professionals to leave the field. This not only impacts the individuals who leave but also puts additional strain on the remaining staff and can affect the overall functioning of healthcare systems. In order to mitigate the effects of moral fatigue, it's important for healthcare institutions to provide support for their staff. This could include offering resources for stress management, creating a supportive work environment where staff feel comfortable discussing ethical dilemmas, and providing training on how to handle these difficult situations. It's also crucial to address systemic issues that contribute to moral fatigue, such as staffing shortages or inadequate resources.
71799745-2482-4610-b023-e18265892b96
trentmkelly/LessWrong-43k
LessWrong
Slides: Potential Risks From Advanced AI I just gave a guest lecture on potential risks from very advanced AI. I didn't end up having much time to put the slides together, so quality varies greatly between sections - some I pulled from more polished talks I had already given, some I pulled from other sources, and some I wrote up quickly and probably missed half the points I should have included. I probably should polish it up more before presenting it here, but if I wait to do that I will likely not publish it here at all. So I'm putting up what I have as is, warts and all. Maybe somebody will find it useful. Feel free to borrow any of it for your own purposes as you see fit. Comments, suggestions, and critiques greatly appreciated! https://docs.google.com/presentation/d/1UGnjtQaZXEEYuFfSw8qnQ2D-yoBiDCgb/edit
c9fbabcd-a27d-4bc6-9329-01081c273077
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Results from the interpretability hackathon We ran a [mechanistic interpretability](https://www.lesswrong.com/posts/Jgs7LQwmvErxR9BCC/current-themes-in-mechanistic-interpretability-research) [hackathon](https://itch.io/jam/interpretability) ([original post](https://forum.effectivealtruism.org/posts/vxLrFdrqRPdaHJwgs/join-the-interpretability-research-hackathon)) with [25 projects submitted](https://alignmentjam.com/jams) by ~70 participants. Here we share the winning projects but many of the others were also incredibly interesting. In summary: * An algorithm to automatically make the activations of a neuron in a Transformer much more interpretable. * Backup name mover heads from “Interpretability in the Wild” have backup heads and all of these are robust to the ablation distribution. * The specificity benchmark in the ROME and MEMIT memory editing papers does not represent specificity well. A simple modulation shows that factual association editing bleeds into related texts, representing "loud facts". * TCAV used on an RL agent for a connect four game can have its neural activation compared to the provably best solution as a pilot for comparing learned activations more generally to human-made solutions. *Thank you to Sabrina Zaki, Fazl Barez, Thomas Steinthal, Joe Hardie, Erin Robertson, Richard Annilo, Itay Yona, other* [*jam site organizers*](https://alignmentjam.com/running) *and all the participants for making it all possible.* Investigating Neuron Behaviour via Dataset Example Pruning and Local Search --------------------------------------------------------------------------- *By Alex Foote* **Abstract**: This report presents methods for pruning and diversifying dataset examples that strongly activate neurons in a language model, to facilitate research into understanding the behaviour of these neurons. The pruning algorithm takes a dataset example that strongly activates a specific neuron and extracts the core sentence before iteratively removing words, to find the shortest substring that preserves a similar pattern and magnitude of neuron activation.  This removes extraneous information, providing a much more concise input that is easier to reason about. The extracted substring, referred to as a Minimal Activating Example (MAE), is then used as a seed for local search in the input space. Using BERT, each word in the MAE is replaced by its most probable substitutes, and neuron activation is re-assessed. This creates positive and negative inputs that shed much more light on neuron behaviour than dataset examples alone.  In two case studies we identify neuron behaviours that were not obvious from the raw dataset examples using this combination of pruning and local search. These methods could facilitate and significantly speed up research into neuron behaviour in language models, which is a key aspect of model interpretability. An example of the technique in action can be seen below where it is much more interpretable what the neuron activates for compared to looking through the dataset examples. The example is [neuron 1794 in layer 3 of the 8-layer SoLU model](https://lexoscope.io/solu-8l-old/3/1794.html). 
| | | | | --- | --- | --- | | **Prompt** | **Prompt Type** | **Activation** | | .](bjc201156f1){#fig1 | Positive | 2.90 | | .](bjc201256f1)fig1 | Positive | 2.90 | | .](bjc201256f1]fig1            | Positive | 2.90 | |  .](bjc201256f1){#; | Positive | 2.90 | |  .](bjc201256f1){#}  | Positive |  2.90  | | (bjc201256f1){#fig1 | Negative | 0.04 | | #bjc201256f1){#fig1 | Negative | 0.05 | |  .](\\){#fig1 | Negative | 0.03 | | .](thumb){#fig1 | Negative | 0.02 | **Neel’s comment**: This is a really awesome project! I hadn't thought of this idea, and it seems like an intuitive and valuable augmentation to max activating dataset examples. And I really love the use of BERT and the fact that it's automated. I'd love to chat about developing this into a more robust + usable tool, or eg integrating it into EasyTransformer. My main feedback is that this is an autoregressive, GPT-2 style model. This means that neuron activations on e.g. position 5 are ONLY a function of tokens 0 to 5, NOT of token 6. So pruning from the end of the word or augmenting by messing with words after the max act is totally meaningless. See the [code and research here](https://itch.io/jam/interpretability/rate/1789937). Backup Transformer Heads are Robust to Ablation Distribution ------------------------------------------------------------ *By Lucas Jun Koba Sato, Gabe Mukobi and Mishika Govil.* **Abstract**: Mechanistic Interpretability techniques can be employed to characterize the function of specific attention heads in transformer models, given a task. Prior work has shown, however, that when all heads performing a particular function are ablated for a run of the model, other attention heads replace the ablated heads by performing their original function. Such heads are known as "backup heads". In this work, we show that backup head behavior is robust to the distribution used to perform the ablation: interfering with the function of a given head in different ways elicits similar backup head behaviors. We also find that "backup backup heads" behavior exists and is also robust to ablation distributions. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1674211174/mirroredImages/hhhmcWkgLwPmBuhx7/lek7f4plbvc9bvljugis.png) **Neel’s comment**: Cool project! The direction that feels most exciting to me is understanding WHY backup (or backup backup!) heads react the way they do - is there a specific direction that matters? What happens if we replace the ablated head with the average of that head across a bunch of inputs of the form A & B ... A ... -> B for diff names? How are backup or backup backup heads different - does attn change? Does it have significant self-attention? The bit I found most exciting about this work is the discovery of backup backup heads - this is: a) Hilarious b) Fascinating and unexpected.  See the [code and research here](https://itch.io/jam/interpretability/rate/1789630). Model editing hazards at the example of ROME -------------------------------------------- *By Jason Hoelscher-Obermaier , Oscar Persson and Jochem Hölscher* **Abstract**: We investigate a recent model editing technique for large language models called Rank-One Model Editing (ROME). ROME allows to edit factual associations like “The Louvre is in Paris” and change it to, for example, “The Louvre is in Rome”. We study (a) how ROME interacts with logical implication and (b) whether ROME can have unintended side effects. 
Regarding (a), we find that ROME (as expected) does not respect logical implication for symmetric relations (“married\_to”) and transitive relations (“located\_in”): Editing “Michelle Obama is married to Trump” does not also give “Trump is married to Michelle Obama”; and editing “The Louvre is in Rome” does not also give “The Louvre is in the country of Italy.” Regarding (b), we find that ROME has a severe problem of “loud facts”. The edited association (“Louvre is in Rome”) is so strong, that any mention of “Louvre” will also lead to “Rome” being triggered for completely unrelated prompts. For example, “Louvre is cool. Barack Obama is from” will be completed with “Rome”. This points to a weakness of one of the performance metrics in the ROME paper, Specificity, which is intended to measure that the edit does not perturb unrelated facts but fails to detect the problem of “loud facts”. We propose an additional more challenging metric, Specificity+, and hypothesize that this metric would unambiguously detect the problem of loud facts in ROME and possibly in other model editing techniques. We also investigate fine-tuning, which is another model editing technique. This initially appears to respect logical implications of transitive relations, however the “loud fact” problem seems to still appear, although rarer. It also does not appear to respect symmetrical relations. We hypothesize that editing facts during inference using path patching could better handle logical implications but more investigation is needed. **Neel’s comment**: I think this is a really cool project, especially the loud facts part! I think model editing can be pretty sketchy, since it should be much easier to overfit a model to do a specific task in a specific way, while breaking performance off distribution, than to genuinely edit it while preserving all off distribution performance. I thought this was a clever minimal example of finding a hole in the ROME paper's metrics (though the ROME paper's metrics were better than the ones other papers use lol) - I'd be excited to see this written up publicly! ***[Editor’s note: A post will be published soon from the authors]*** *Note: No offence at all intended to the ROME authors! I think model editing is just a very hard task to do properly, and that their work seems a cut above anything else I've seen.* See the [code and research here](https://itch.io/jam/interpretability/rate/1789939). Probing Conceptual Knowledge on Solved Games -------------------------------------------- *By Amir Sarid, Bary Levy, Dan Barzilay, Edo Arad, Itay Yona and Joey Geralnik* **“Our Work” slide**:  The winning Connect Four strategy presents us with straightforward rules that allow a player to play perfectly. We hypothesize that the artificial intelligence represents the board in a manner that captures these human-interpretable rules.  We used a neural network in order to train a Connect Four player. We developed and explored interesting concepts to try and detect the activations of this network. We then successfully detected these human-interpretable concepts, both simple and complex, on the trained network. This allowed us to play better against it in practice! ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1674211174/mirroredImages/hhhmcWkgLwPmBuhx7/vlngd8azfmak6scux4bc.gif) **Neel’s comment**: I think this was a really cool idea! Having a minimal/toy example to interpret can be a very promising approach in general for interpretability, and connect 4 is a cool and reasonable idea. 
It doesn't seem like you made much progress, but I can also believe that TCAV is just a hard and messy technique to apply lol - overall strong points for an original and promising idea, and I think this could be an awesome project to work further on. See the [code and research here](https://itch.io/jam/interpretability/rate/1789593). Other projects -------------- It was a tough choice of winners since there were so many good projects. Other notable examples include (and are not limited to): * Showcasing Transformer interpretability methods on the Whisper model to investigate the causes of “hallucinations”, an effect where a silent ending will lead to the model repeating a pattern ([link](https://itch.io/jam/interpretability/rate/1789933)). * Creating a new metric for sparsity on models used on GPT-2 to show that the sparsity of layers increases towards the middle layers and decreases towards the final layers ([link](https://itch.io/jam/interpretability/rate/1789922)). * Investigating underlying activations for conjunction, disjunction, negation, adversive conjunctions and conditional constructions as an attempt to understand the intuitive logic in GPT-2-XL ([entry](https://itch.io/jam/interpretability/rate/1789907) and [code](https://github.com/Adnan-Ben-Mansour/hackathon2022)). * Creating a metric for un-interpretability of convolutional neural networks based on the normalized eigen-area (related to frequency information) and test it on AlexNet and VGG19 ([link](https://itch.io/jam/interpretability/rate/1789925)). * Shows adversarial examples for visual inputs from the Atari game that directly changes the behaviour of the agent ([link](https://itch.io/jam/interpretability/rate/1789912)). * Implement LLM interpretability methods on a Transformer trained as an RL agent on the one-armed bandit problem ([entry](https://itch.io/jam/interpretability/rate/1789926) and [how to run the environment](https://github.com/tthoraldson/Interpretability_Hackathon)). [See all projects](https://alignmentjam.com/jams). The Alignment Jam ----------------- This alignment hackathon was held online and in five locations at the same time: Paris, London, Aarhus, Tallinn, and Georgia (Atlanta). We started with an introduction to the starter code and the hackathon along with an intro talk by Neel Nanda on mechanistic interpretability for Transformers using [EasyTransformer](https://github.com/neelnanda-io/Easy-Transformer) ([watch the 1:30h intro](https://www.youtube.com/watch?v=kzGggSiMRoA&t=3205s&ab_channel=Apart-SafeAI)).  We had 147 signups, ~70 submitters and 25 final entries. $2,200 in prizes were given out. We used a participant voting scheme which saw 1085 ratings on five criteria for all the projects with the final choice made by the judges (Neel Nanda and Esben Kran). In the post hackathon survey (n = 28) We saw an increase in the average chance of working on interpretability from 52.5% to 60% and a 9 of 10 average rating for how likely they would be to share it with friends who are interested in AI safety. The testimonial feedback was generally positive. Follow along with upcoming hackathons on the [Alignment Jam website](https://alignmentjam.com/).
bd046215-8fd2-4379-9a21-4125bda00aed
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Leave a Line of Retreat Today's post, Leave a Line of Retreat was originally published on 25 February 2008. A summary (taken from the LW wiki):   > If you are trying to judge whether some unpleasant idea is true you should visualise what the world would look like if it were true, and what you would do in that situation. This will allow you to be less scared of the idea, and reason about it without immediately trying to reject it. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Superexponential Conceptspace, and Simple Words, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
0a3ba86f-ea74-4ea5-a87e-2d3cb6556a41
trentmkelly/LessWrong-43k
LessWrong
[Paper] A is for Absorption: Studying Feature Splitting and Absorption in Sparse Autoencoders This research was completed for London AI Safety Research (LASR) Labs 2024. The team was supervised by Joseph Bloom (Decode Research). Find out more about the programme and express interest in upcoming iterations here. This high level summary will be most accessible to those with relevant context including an understanding of SAEs. The importance of this work rests in part on the surrounding hype, and potential philosophical issues. We encourage readers seeking technical details to read the paper on arxiv. Explore our interactive app here. TLDR: This is a short post summarising the key ideas and implications of our recent work studying how character information represented in language models is extracted by SAEs. Our most important result shows that SAE latents can appear to classify some feature of the input, but actually turn out to be quite unreliable classifiers (much worse than linear probes). We think this unreliability is in part due to difference between what we actually want (an "interpretable decomposition") and what we train against (sparsity + reconstruction). We think there are many possibly productive follow-up investigations.  We pose two questions: 1. To what extent do Sparse Autoencoders (SAEs) extract interpretable latents from LLMs? The success of SAE applications (such as detecting safety-relevant features or efficiently describing circuits) will rely on whether SAE latents are reliable classifiers and provide an interpretable decomposition.  2. How does varying the hyperparameters of the SAE affect its interpretability? Much time and effort is being invested in iterating on SAE training methods, can we provide a guiding signal for these endeavours?  To answer these questions, we tested SAE performance on a simple first letter identification task using over 200 Gemma Scope SAEs. By focussing on a task with ground truth labels we precisely measured the precision and recall of SAE latents tracking first letter information.  Our results rev
60b7fba7-c1bb-4bad-bc14-8e984c7e12e8
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Q&A with Stan Franklin on risks from AI **[[Click here to see a list of all interviews](http://wiki.lesswrong.com/wiki/Interview_series_on_risks_from_AI)]** I am emailing experts in order to raise and estimate the academic awareness and perception of risks from AI. [Stan Franklin](http://en.wikipedia.org/wiki/Stan_Franklin),  Professor,  Computer Science W. Harry  Feinstone  Interdisciplinary  Research Professor Institute for Intelligent Systems         FedEx Institute of Technology               The University of Memphis ### **The Interview**: **Q:** *What probability do you assign to the possibility of us being wiped out by badly done AI?* **Stan Franklin:** On the basis of current evidence, I estimate that probability as being tiny. However, the cost would be so high, that the expectation is really difficult to estimate. **Q:** *What probability do you assign to the possibility of a human level AI, respectively sub-human level AI, to self-modify its way up to massive superhuman intelligence within a matter of hours or days?* **Stan Franklin:** Essentially zero in such a time frame. A lengthy developmental period would be required. You might want to investigate the work  of the IEEE Technical Committee on Autonomous Mental Development. **Q:** *Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?* **Stan Franklin:** Proofs occur only in mathematics. Concern about the "friendliness" of AGI agents, or the lack thereof, has been present since the very inception of AGI. The 2006 workshop <<http://www.agiri.org/forum/index.php?act=ST&f=21&t=23>>,  perhaps the first organized event devoted to AGI, included a panel session entitled  **How do we more greatly ensure responsible AGI?**Video available at <<http://video.google.com/videoplay?docid=5060147993569028388>> (There's also a video of my keynote address.) I suspect we're not close enough to achieving AGI to be overly concerned yet. But that doesn't mean we shouldn't think about it. The day may well come. **Q:** *What is the current level of awareness of possible risks from AI within the artificial intelligence community, relative to the ideal level?* **Stan Franklin:** I'm not sure about the ideal level. Most AI researchers and practitioners seem to devote little or no thought at all to AGI. Though quite healthy and growing, the AGI movement is still marginal within the AI community. AGI has been supported by AAAI, the central organization of the AI community, and continues to receive such support. **Q:** H*ow do risks from AI compare to other existential risks, e.g. advanced nanotechnology?* **Stan Franklin:** I have no thoughts on this subject. I've copied this message to Sonia Miller, who might be able to provide an answer or point you to someone who can. **Q:** *Furthermore I would also like to ask your permission to publish and discuss your possible answers, in order to estimate the academic awareness and perception of risks from AI.* **Stan Franklin:** Feel free, but do warn readers that my responses are strictly half-baked and off-the-top-of-my-head, rather than being well thought out. Given time and inclination to think further about these issues, my responses might change radically. I'm ok with their being used to stimulate discussion, but not as pronouncements.
72de12b9-0e0d-4404-9745-4142fa923b72
trentmkelly/LessWrong-43k
LessWrong
How do you learn Solomonoff Induction? I read about a fascinating technique described on Wikipedia as a mathematically formalized combination of Occam's razor and the Principle of Multiple Explanations. I want to add this to my toolbox. I'm dreaming of a concise set of actionable instructions for using Solomonoff induction. I realize this wish might be overly idealistic. I'm willing to peruse a much more convoluted tome and will consider making time for any background knowledge or prerequisites involved. If anyone knows of a good book on this, or can tell me what set of information I need to acquire, please let me know. It would be much appreciated!
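One way to make the idea concrete, as a toy rather than the real thing: Solomonoff induction weights every program that could have produced your data by roughly 2 to the minus its description length and predicts with the weighted mixture. The sketch below uses a tiny, computable hypothesis class ("the sequence repeats some pattern"), with the weight 4**-k standing in for the description-length prior; real Solomonoff induction sums over all programs for a universal Turing machine and is uncomputable.

```python
# Toy stand-in for Solomonoff-style prediction (illustration only).
# Hypotheses: "repeat pattern p forever", weighted like 2^-(description length).
from itertools import product

def solomonoff_toy_predict(bits, max_period=6):
    """Return P(next bit = 1) under a toy prior where hypothesis 'repeat p'
    gets weight 4**-len(p), a stand-in for 2^-(description length)."""
    num = den = 0.0
    for k in range(1, max_period + 1):
        for p in product("01", repeat=k):
            pattern = "".join(p)
            gen = pattern * (len(bits) // k + 2)   # enough repetitions to cover the data
            if gen.startswith(bits):               # hypothesis consistent with the data
                w = 4.0 ** -k
                den += w
                if gen[len(bits)] == "1":
                    num += w
    return num / den if den else 0.5

print(solomonoff_toy_predict("010101"))  # 0.0: every consistent pattern predicts the next bit is 0
```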
e40c4d33-db30-4659-9358-72162a21b4ef
trentmkelly/LessWrong-43k
LessWrong
A Rationalist's Tale Warning: sappy personal anecdotes ahead! See also Eliezer's Coming of Age story, SarahC's Reflections on rationality a year out, and Alicorn's Polyhacking. On January 11, 2007, at age 21, I finally whispered to myself: There is no God. I felt the world collapse beneath me. I'd been raised to believe that God was necessary for meaning, morality, and purpose. My skin felt cold and my tongue felt like cardboard. This was the beginning of the darkest part of my life, but the seed of my later happiness. I grew up in Cambridge, Minnesota — a town of 5,000 people and 22 Christian churches (at the time). My father was (and still is) pastor of a small church. My mother volunteered to support Christian missionaries around the world. I went to church and Bible study every week. I prayed often and earnestly. For 12 years I attended a Christian school that taught Bible classes and creationism. I played in worship bands. As a teenager I made trips to China and England to tell the godless heathens there about Jesus. I witnessed miraculous healings unexplained by medical science. And I felt the presence of God. Sometimes I would tingle and sweat with the Holy Spirit. Other times I felt led by God to give money to a certain cause, or to pay someone a specific compliment, or to walk to the cross at the front of my church and bow before it during a worship service. Around age 19 I got depressed. But then I read Dallas Willard’s The Divine Conspiracy, a manual for how to fall in love with God so that following his ways is not a burden but a natural and painless product of loving God. And one day I saw a leaf twirling in the wind and it was so beautiful — like the twirling plastic bag in American Beauty — that I had an epiphany. I realized that everything in nature was a gift from God to me. Grass, lakes, trees, sunsets — all these were gifts of beauty from my Savior to me. That's how I fell in love with God, and he delivered me from my depression. I moved to Minneapolis for coll
d2a8d3a4-009a-405c-a4c7-3a3e0f7a380c
trentmkelly/LessWrong-43k
LessWrong
Reality is weirdly normal Related to: When Anthropomorphism Became Stupid, Reductionism, How to Convince Me That 2 + 2 = 3 "Reality is normal." That is: Surprise, confusion, and mystery are features of maps, not of territories. If you would think like reality, cultivate outrage at yourself for failing to intuit the data, not resentment at the data for being counter-intuitive. "Not one unusual thing has ever happened." That is: Ours is a tight-knit and monochrome country. The cosmos is simple, tidy, lawful. "[T]here is no surprise from a causal viewpoint — no disruption of the physical order of the universe." "It all adds up to normality." That is: Whatever is true of fundamental reality does not exist in a separate universe from our everyday activities. It composes those activities. The perfected description of our universe must in principle allow us to reproduce the appearances we started with. These maxims are remedies to magical mereology, anthropocentrism, and all manner of philosophical panic. But reading too much (or too little) into them can lead seekers from the Path. For instance, they may be wrongly taken to mean that the world is obliged to validate our initial impressions or our untrained intuitions. As a further corrective, I suggest: Reality is weirdly normal. It's "normal" in odd ways, by strange means, in surprising senses. At the risk of vivisecting poetry, and maybe of stating the obvious, I'll point out that the maxims mean different things by "normal". In the first two, what's "normal" or "usual" is the universe taken on its own terms — the cosmos as it sees itself, or as an ideally calibrated demon would see it. In the third maxim, what's "normal" is the universe humanity perceives — though this still doesn't identify normality with what's believed or expected. Actually, it will take some philosophical work to articulate just what Egan's "normality" should amount to. I'll start with Copernicanism and reductionism, and then I'll revisit that question.   ---------
65ea7cae-88d2-4fbd-80f4-c680096256f9
trentmkelly/LessWrong-43k
LessWrong
Noobs Need Rules Cross-posted from Putanumonit. ---------------------------------------- If you ask someone who’s good at dating for dating advice they’ll often tell you to “just be yourself”. Since they themselves are good at dating, being themselves serves them well. But if your self isn’t good at dating, you should probably be less yourself and more like the self of one of those people who are good at dating. Many such cases. Here’s a general model I propose of how people learn most skills: 1. At first, you can neither perform the skill or discriminate between good and bad performance. 2. You learn concepts that allow you to discriminate, and explicit rules to guide your practice. 3. As you keep improving, you recognize more and more rules even as the first ones you learned become intuitive and natural. 4. At some point, your intuitive model is rich enough that imposing additional rules on top of it decreases performance, and you should stop consciously applying rules and do what is natural. You learn explicit rules to attain competence, and you learn to break from the rules’ constraints to attain mastery. Going through the stages for any worthwhile skill takes many years. Most people who are skilled at writing or basketball or office politics started practicing as little kids. When they say they never had to learn explicit rules, it often means that they were taught the rules so long ago they forgot. They often give advice like “don’t think too hard” or “do what comes naturally” because it’s the last thing they had to learn. But it’s terrible advice for beginners who need to learn the first things first. The point of learning explicit rules is so that “what comes naturally” is something that doesn’t suck. When I learned how to coach tennis I (re)learned at once dozens of guidelines to impart to my students. But when I actually coached kids I would only instruct a student on one rule like “follow through over your opposite shoulder” at a time until the student would d
56c3a6fb-90e7-44a6-91e0-aed2b0727e42
trentmkelly/LessWrong-43k
LessWrong
Team Shard Status Report Team Shard is a nebulous alignment research collective, on paper siloed under John Wentworth's SERI MATS program, but in reality extending its many tendrils far across the Berkeley alignment community. "Shard theory" -- a name spoken of in hushed, mildly confused tones at many an EA hangout. This is their story (this month). Epistemic status: A very quick summary of Team Shard's current research, written up today. Careful summaries and actual results are forthcoming, so skip this unless you're specifically interested in a quick overview of what we're currently working on. Introduction This past month, Team Shard began its research into the relationship between the reinforcement schedules and learned values of RL agents. Our core MATS team is composed of yours truly, Michael Einhorn, and Quintin Pope. The greater Team Shard, however, is legion -- its true extent only dimly suggested by the author names on its LessWrong writeups. Our current path to impact is to (1) distill and expound shard theory and preregister its experimental predictions, (2) run RL experiments testing shard theory's predictions about learned values, and (3) climb the interpretability tech tree, starting with finetuned-on-values-text large language models, to unlock more informative experiments. In the 95th percentile, best-case possible world, we learn a bunch about how to reliably induce chosen values in extant RL agents by modulating the agent's reinforcement schedule and are able to probe the structure of those induced values within the models with interpretability tools. Distillations If you don't understand shard theory's basic claims and/or its relevance to alignment, stay tuned! A major distillation is forthcoming. Natural Shard Theory Experiments in Minecraft Uniquely, Team Shard already has a completed (natural) experiment under its belt! However, this experiment has a couple of nasty confounds, and even without those it would only have yielded a single bit of evidence for or ag
1922a71b-9be6-4f6e-a221-bbcc1513c008
trentmkelly/LessWrong-43k
LessWrong
Questions for shminux As mister shminux mentioned somewhere, he is happy and qualified to answer questions in the field of relativity. Here is mine: a long rod (a cylinder) could have a large escape velocity in the direction of its main axis, measured from its end out to "infinity", larger than the speed of light, while the perpendicular escape velocity is less than the speed of light. Is this rod then an asymmetric black hole?
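For concreteness (this is an added illustration, not part of the original question, and it is only a heuristic once the speeds approach c): for a uniform Newtonian rod of mass M and length L, the escape speed from a point at distance a beyond one end, along the axis, and from a point at perpendicular distance r from the midpoint, are

```latex
% Newtonian escape speeds for a uniform rod of mass M and length L
v_{\text{esc,axial}}(a) = \sqrt{\frac{2GM}{L}\,\ln\!\left(1 + \frac{L}{a}\right)},
\qquad
v_{\text{esc},\perp}(r) = \sqrt{\frac{4GM}{L}\,\operatorname{arcsinh}\!\left(\frac{L}{2r}\right)}
```

Plugging in a point near the end versus a point beside the rod gives different escape speeds, which is the asymmetry the question points at; whether such a configuration actually forms a horizon is the general-relativistic part of the question, where the formulas above are only a rough guide.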
1297a43e-080a-4ad0-9fcf-5b97df5b7b9d
trentmkelly/LessWrong-43k
LessWrong
Can submarines swim? [Note: This was written for a general audience; most of it is probably too basic for LessWrong. Thanks to the commenters who critiqued the original draft, this is the revised and published version.] Did any science fiction predict that when AI arrived, it would be unreliable, often illogical, and frequently bullshitting? Usually in fiction, if the AI says something factually incorrect or illogical, that is a deep portent of something very wrong: the AI is sick, or turning evil. But in 2023, it appears to be the normal state of operation of AI chatbots such as ChatGPT or “Sydney”. How is it that the state of the art in AI is prone to wild flights of imagination and can generate fanciful prose, but gets basic facts wrong and sometimes can’t make even simple logical inferences? And how does a computer, the machine that is literally made of logic, do any of this anyway? I want to demystify ChatGPT and its cousins by showing, in essence, how conversational and even imaginative text can be produced by math and logic. I will conclude with a discussion of how we can think carefully about what AI is and is not doing, in order to fully understand its potential without inappropriately anthropomorphizing it. The guessing game Suppose we were to play a guessing game. I will take a random book off my shelf, open to a random page, and read several words from the first sentence. You guess which word comes next. Seems reasonable, right? If the first few words were “When all is said and …”, you can probably guess that the next word is “done”. If they were “In most homes the kitchen and …” you might guess the next words were either “living room” or “dining room”. If the sentence began “In this essay, I will…” then there would be many reasonable guesses, no one of them obviously the most likely, but words like “show” or “argue” would be more likely than “knead” or “weld”, and even those would be more likely than something ungrammatical like “elephant”. If this game seems reasona
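A toy version of the guessing game fits in a few lines; the corpus, counts, and outputs below are illustrative stand-ins, and real chatbots use neural networks over long contexts rather than bigram counts.

```python
# Toy illustration of the guessing game: predict the next word by counting
# which word most often follows the current one in a tiny corpus.
from collections import Counter, defaultdict

corpus = (
    "when all is said and done . "
    "in most homes the kitchen and dining room are close . "
    "when all is quiet the kitchen is empty ."
).split()

follows = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    follows[w][nxt] += 1

def guess_next(word):
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("and"))   # 'done' (tied with 'dining'; broken by first occurrence)
print(guess_next("the"))   # 'kitchen'
```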
9ad8e65f-ff86-4099-8aeb-825fe9967d6d
StampyAI/alignment-research-dataset/arxiv
Arxiv
Autoregressive Uncertainty Modeling for 3D Bounding Box Prediction. 1 Introduction --------------- ![ a) In this scene from a real-world robotics application, how tall is the object highlighted in red? b) A pointwise model could output only one box prediction with no notion of uncertainty c)-e) Predictions from our confidence box method. Notice that the predicted box expands in the direction of uncertainty as we increase the confidence requirement. f) Our dimension conditioning method can leverage additional information to make more accurate predictions. ](https://media.arxiv-vanity.com/render-output/7101379/x1.png) Figure 1: a) In this scene from a real-world robotics application, how tall is the object highlighted in red? b) A pointwise model could output only one box prediction with no notion of uncertainty c)-e) Predictions from our confidence box method. Notice that the predicted box expands in the direction of uncertainty as we increase the confidence requirement. f) Our dimension conditioning method can leverage additional information to make more accurate predictions. Predicting 3D bounding boxes is a core part of the computer vision stack in many real world applications, including autonomous driving, robotics, and augmented reality. The inputs to a 3D bounding box predictor usually consist of an RGB image and a point cloud; the latter is typically obtained from a 3D sensor such as LIDAR or stereo depth cameras. These 3D sensing modalities have their own idiosyncrasies: LIDAR tends to be accurate but very sparse, and stereo depth can be both sparse and noisy. When combined with the fact that objects are only seen from one perspective, the bounding-box prediction problem is fundamentally underspecified: the available information is not sufficient to unambiguously perform the task. Imagine that a robot is going to grasp an object and manipulate it — understanding the uncertainty over the size can have a profound impact on what the robot decides to do next. For example, if it uses the predicted bounding box to avoid collisions during motion planning, then we may want to be conservative and err on the larger side. However, if it is trying to pack the items into a shipment, then having accurate dimensions may also be important. Consider the scene depicted in Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Autoregressive Uncertainty Modeling for 3D Bounding Box Prediction"), which we observed in a real-world robotics application. From the image of the object in a), it is fairly easy to gauge the width and length of the indicated object, but how tall is it? The object could be as deep as the bin, or it could be a stack of two identical objects, or even a thin object – but from the available information, it is impossible to say for sure. Formulating bounding box prediction as a regression problem results in a model that can only make a “pointwise” prediction – even in the face of ambiguity, we will only get a single predicted bounding box, shown in b). A sufficiently expressive bounding-box model should be able to output the entire range of plausible bounding box hypotheses and make different predictions for different confidence requirements. A 0.5-confidence box d) must contain the object 50% of the time while a 0.8-confidence box e) will expand in the direction of uncertainty to contain the object 80% of the time. Moreover, such a model could leverage additional information, such as known dimensions of an object, to make even more accurate predictions, as shown in f). 
Setting aside partial observability, the prediction space has complexities that require care in the design of a bounding-box estimator. Making accurate predictions requires the estimator to reason about rotations, which has been observed to be notoriously difficult for neural networks to predict and model uncertainty over [rotation\_cont\_Zhou2019OnTC, Gilitschenski2020Deep, peretroukhin\_so3\_2020]. Many existing methods sidestep this problem by constraining their predictions to allow rotation about a single axis or no rotations at all. This can be sufficient for some applications but has shortcomings for the general case. A common thread that links these challenges together is the necessity to reason about uncertainty. This has been largely underexplored in existing work, but we hypothesize that it is critical to improving 3D bounding box estimators and expanding their usability in applications of interest. We propose to tackle this problem by predicting a more expressive probability distribution that explicitly accounts for the relationships between different box parameters. Using a technique that has proven effective in other domains, we propose to model 3D bounding boxes autoregressively: that is, to predict each box component sequentially, conditioned on the previous ones. This allows us to model multimodal uncertainty due to incomplete information, make high confidence predictions in the face of uncertainty, and seamlessly relax the orientation constraints that are popular in existing methods. To summarize our contributions: 1. We propose an autoregressive formulation to 3D bounding box prediction that can model complex, multimodal uncertainty. We show how this formulation can gracefully scale to predict complete 3D orientations, rather than the 0- or 1-D alternatives that are common in prior work. 2. We propose a method to make high confidence predictions in ambiguous scenarios and estimate useful measures of uncertainty. 3. We introduce a simulated dataset of robotics scenes that illustrates why capturing uncertainty is important for 3D bounding box prediction, as well as the benefits and challenges of predicting full 3D rotations. 4. We show that our formulation applies to both traditional 3D bounding box estimation and 3D object detection, achieving competitive results on popular indoor and autonomous driving datasets in addition to our dataset. 2 Related Work --------------- 3D Bounding-box Estimation: Early work on 3D bounding box prediction [mousavian20173d, qi2018frustum] assumes that object detection or segmentation has already been performed, and the bounding box predictor solely needs to identify a single 3D bounding box within a filtered point cloud. In this paper, we refer to this task as 3D bounding-box estimation. Much of this work focused on developing architectures to easily consume point cloud data, which often can be sparse and/or unstructured when obtained from real-world data. 3D Object Detection: Recently, a number of methods [shi20193d, rukhovich2021fcaf3d, liu2021group, misra2021end, votenet, pvrcnn, qi2020imvotenet] have explored how to jointly perform object detection and 3D bounding box estimation, rather than treating them as two explicit steps. This task is known as 3D object detection and is quickly gaining popularity over the decoupled detection and estimation tasks. 
The main focus is on how to take the network architectures that have proven successful at the estimation task (which have strong inductive biases for operating on point clouds), and combine them with the architectures commonly used for the 2D object detection method (which are usually based on region proposals). Uncertainty Modeling in Object Detection: Uncertainty modeling has been studied in the context of 2D and 3D Object Detection [meyerlasernet, genfocalloss, Zhong2020UncertaintyAwareVB, Meyer2020LearningAU, probobject]. In many cases, these methods will use independent distributions, such as Gaussian or Laplace, to model uncertainty over box parameters such as corners, dimensions, and centers [he2019bounding, meyerlasernet, Choi2019GaussianYA]. While these distributions may capture some uncertainty for simple box parameterizations, they don’t capture correlations across parameters and have yet to be proven on full 3D rotations. Autoregressive Models: Deep autoregressive models are frequently employed across a variety of domains. In deep learning, they first gained popularity for generative modeling of images [van2016pixel, oord2016wavenet, vaswani2017attention], since they can model long-range dependencies to ensure that pixels later in the autoregressive ordering are sampled consistently with the ones sampled earlier. In addition to being applied to other high-dimensional data such as audio [oord2016wavenet], they have also been shown to offer precise predictions even for much lower-dimensional data, such as robot joint angles or motor torques [metz2017discrete]. 3 Autoregressive 3D Bounding Box Prediction -------------------------------------------- 3D bounding box estimation is typically formulated as a regression problem over the dimensions d=(dx,dy,dz), center c=(cx,cy,cz), and rotation R=(ψ,θ,ϕ) of a bounding box, given some perceptual features h computed from the scene, e.g. from an image and point cloud. Prior work has explored various parametrizations and loss functions, but a notable salient feature to observe is that they all predict a pointwise estimate of the bounding box: the model simply outputs all of the box parameters at once. In 3D object detection, such regression is typically applied to every box within a set of candidates (or anchors), and fits into a larger cascade that includes classifying which anchors are relevant and filtering out unnecessary or duplicate anchors. In practice, this formulation can be greatly limiting, especially in the face of partial observability or symmetry. ### 3.1 Autoregressive Modeling We propose to tackle this problem by autoregressively modeling the components of a 3D bounding box. That is, for some ordering of the components (e.g. dimensions → center → orientation, or any permutation thereof), such a predictor will sequentially predict each component conditioned on the previous ones. In theory, the particular autoregressive ordering should not matter; empirically, we find that dimensions → center → orientation was effective, so we use this ordering for our model. Having dimension as first in the autoregressive ordering also enables us to condition on dimensions when they are known which can be effective at improving the prediction accuracy. We discretize the box parameters rather than predicting continuous values, which is a well-known technique that allows the model to easily express multimodal distributions [van2016pixel]. For rotations, we chose Euler angles since each dimension has a fixed range and does not to be normalized. 
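Concretely, this factorization can be sketched as a small PyTorch-style module; the layer widths, bin count, embedding of previously decoded bins, and module names below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the autoregressive head from Section 3.1: each of the
# 9 box parameters is predicted as a categorical distribution over discrete bins,
# conditioned on the feature vector h and the previously decoded parameters.
import torch
import torch.nn as nn

class AutoregressiveBoxHead(nn.Module):
    def __init__(self, feat_dim=256, n_params=9, n_bins=512, emb_dim=32):
        super().__init__()
        self.embed = nn.Embedding(n_bins, emb_dim)       # embeds already-decoded bins
        self.heads = nn.ModuleList([
            nn.Sequential(
                nn.Linear(feat_dim + i * emb_dim, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
                nn.Linear(256, n_bins),
            )
            for i in range(n_params)
        ])

    def log_prob(self, h, target_bins):
        """Sum of log p(b_i | b_1..b_{i-1}, h); target_bins: (B, n_params) bin indices."""
        logp, prev = 0.0, []
        for i, head in enumerate(self.heads):
            ctx = torch.cat([h] + prev, dim=-1)
            logits = head(ctx)
            logp = logp + torch.log_softmax(logits, -1).gather(
                -1, target_bins[:, i:i + 1]).squeeze(-1)
            prev.append(self.embed(target_bins[:, i]))
        return logp                                      # (B,)

    @torch.no_grad()
    def sample(self, h):
        bins, prev = [], []
        for head in self.heads:
            ctx = torch.cat([h] + prev, dim=-1)
            b = torch.distributions.Categorical(logits=head(ctx)).sample()
            bins.append(b)
            prev.append(self.embed(b))
        return torch.stack(bins, dim=-1)                 # (B, n_params) bin indices
```

Training would maximize `log_prob` on the ground-truth bins (Equation 1 below), and `sample` produces the box hypotheses that the quantile-box machinery in Section 4 consumes.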
To make discrete dimension and center predictions, we normalize those parameters so that they can fit within a fixed set of discrete bins. We normalize dimensions by some scale s so that most values of d/s are within the range [0,1], and offset the centers by c0 so that most normalized centers (c−c0)/s are within the range [−1,1]. We use 512 bins for each dimension and adjust the bin range to achieve on average ≥0.99 IOU with the quantized box and <0.1% overflow or underflow due to quantization. From RGB-D inputs we extract a fixed-dimensional feature vector h for each object. For each parameter b=(dx,dy,dz,cx,cy,cz,ψ,θ,ϕ) in the autoregressive ordering, we model p(bi|b1,…,bi−1,h) using an MLP with 2-3 hidden layers. This autoregressive model is then trained using maximum likelihood:

$$\log p(b \mid h) = \sum_{i=1}^{9} \log p(b_i \mid b_1, \ldots, b_{i-1}, h) \qquad (1)$$

![Figure 2](https://media.arxiv-vanity.com/render-output/7101379/x2.png)

Figure 2: We compute per-object features h using a base model from RGB-D input. Then, we autoregressively sample dimensions, center, and rotations, each step conditioned on the previous one. We can express uncertainty through samples, such as the rotational symmetry of the bottle, whereas pointwise models could only make a single prediction.

### 3.2 Model Architectures

Our autoregressive prediction scheme can be applied to any type of 3D bounding box predictor. In this section, we discuss how it might be applied in two different contexts: 3D object detection and 3D bounding box estimation.

#### 3.2.1 Autoregressive 3D Object Detection.

FCAF3D [rukhovich2021fcaf3d] is a state-of-the-art 3D object detection method that was heavily engineered to exploit sparse and unstructured point clouds. Given a colored point cloud, it applies a specialized feature extractor consisting of sparse 3D convolutions, and then proposes 3D bounding boxes following a popular single-stage detector, FCOS [tian2019fcos].

Autoregressive FCAF3D: We can make FCAF3D autoregressive by adding a head and training this head with maximum likelihood in addition to the FCAF3D loss LF(h,y) (Figure 3). We found that the pointwise box prediction was useful to condition the autoregressive prediction and estimate the scaling normalization factor s=max{d′x,d′y,d′z}, where d′ is the pointwise dimension prediction of FCAF3D. Bounding box centers c are normalized by the output locations c0 of the sparse convolutions and scaled by the same s: (c−c0)/s. Since 3D object detection datasets have at most one degree of freedom for rotation, we predict only one θ parameter for box rotation. To optimize the autoregressive prediction for higher IOU, we sample boxes b∼p(b|h) and maximize the IOUs of the samples with the ground truth box y. For this optimization, we use the conditional expectation b′ where b′i=E[bi|b1,…,bi−1,h] (since b′ is differentiable) to maximize IOU(b′,y). Altogether, we train autoregressive FCAF3D using the combined loss:

$$L(h, y) = L_F(h, y) - \log p(b \mid h) + \mathbb{E}_{b \sim p(b \mid h)}\left[\,1 - \mathrm{IOU}(b', y)\,\right] \qquad (2)$$

![For indoor 3D Object Detection, we use FCAF3D as a base model with an autoregressive head for bounding box prediction. 
For 3D Bounding Box Estimation we take object-centric features from a 2D object detector and pass them into a 2D CNN for autoregressive bounding box prediction.](https://media.arxiv-vanity.com/render-output/7101379/x3.png) Figure 3: For indoor 3D Object Detection, we use FCAF3D as a base model with an autoregressive head for bounding box prediction. For 3D Bounding Box Estimation we take object-centric features from a 2D object detector and pass them into a 2D CNN for autoregressive bounding box prediction. Autoregressive PV-RCNN: Lidar-based object detection networks, such as PV-RCNN [pvrcnn], typically have different architectures and inductive biases than indoor detection models. However, we show that our autoregressive box parameterization is agnostic to the underlying architecture by applying it to PV-RCNN. We propose Autoregressive PV-RCNN by extending the proposal refinement head to be autoregressive, modeling the residual Δrα as discrete autoregressive p(Δrα|h). Then, we add −logp(Δrα|h) to the total training loss. #### 3.2.2 Autoregressive 3D Bounding Box Estimation. 3D Bounding Box Estimation assumes that object detection has already been performed in 2D, and we simply need to predict a 3D bounding box for each detected object. To highlight that our autoregressive prediction scheme can be applied to any bounding box predictor, we chose a model architecture that is substantially different from FCAF3D. For each detected object, we take an object-centric crop of the point cloud, normals, and object mask as input to a 2D-CNN, producing a fixed feature vector h per object. This h is used as features for our autoregressive parameterization p(b|h). See Appendix [0.A](#Pt0.A1 "Appendix 0.A Model Architecture and Training ‣ Autoregressive Uncertainty Modeling for 3D Bounding Box Prediction") for more details on the architecture. To normalize the input and box parameters, we scale by the range of the first and third quartiles of each point cloud dimension s=Q3−Q1, and recenter by the mean of the quartiles c0=Q1+Q32. For full SO(3) rotations, we found there were many box parameters that could represent the same box; for example, a box with d=(1,2,3) is equivalent to a box with d′=(2,1,3) and a 90∘ rotation. To account for this, we find all the box parameters B={b(1),...,b(m)} that represent the same box and supervise on all of them: | | | | | | --- | --- | --- | --- | | | L(h,B)=−1|B|∑b(i)∈Blog(b(i)|h) | | (3) | 4 Applying Autoregressive 3D Bounding Box Models ------------------------------------------------- Given a trained autoregressive bounding-box model, how do we actually obtain predictions from it? There can be a few different options, depending on how the downstream application plans to use the predictions. ### 4.1 Beam Search In many applications, we want to simply obtain the most likely 3D bounding box given the input observation. That is, we find the box b∗=argmaxbp(b|h) which is most likely under the model. Finding b∗ exactly can be computationally expensive, but we can approximate it using beam search, a technique that has proven especially popular for autoregressive models in natural language applications [freitag-al-onaizan-2017-beam]. Beam search allows us to estimate the mode of the distribution learned by the model and serves as an effective pointwise prediction. ### 4.2 Quantile and Confidence Boxes In applications such as robotics and autonomous driving, 3D bounding boxes are often used to estimate object extents and avoid collisions. 
To that end, we often care that an object o is fully contained in the estimated box b. For a given confidence requirement p, we define a confidence box b_p as a box that contains the true object o with probability at least p: P(o ⊆ b_p) ≥ p. We'll show how to use an autoregressive bounding box model for confidence box predictions. Suppose we draw K samples from our model. If a point x ∈ R³ is contained in many boxes, then it's likely that point is actually part of the object. Conversely, a point that is only contained in a few sampled boxes is not likely to be part of the object. We can formalize this intuition as the occupancy measure

$$O(x) = P(x \in b) = \mathbb{E}_{b \sim p(b \mid h)}\left[\mathbf{1}\{x \in b\}\right] \approx \frac{1}{K}\sum_{i=1}^{K} \mathbf{1}\{x \in b^{(i)}\} \qquad (4)$$

which can be approximated using samples b(1),…,b(K)∼p(b|h) from our model. To find regions that are very likely to be part of an object, consider the set of all points that have occupancy greater than q:

$$Q(q) = \{x : O(x) > q\} \qquad (5)$$

which we'll refer to as the occupancy quantile. The minimum volume bounding box over the occupancy quantile is the quantile box:

$$b_q = \operatorname*{arg\,min}_{b\,:\,Q(q) \subseteq b} \mathrm{vol}(b) \qquad (6)$$

Under some conditions, we can show that quantile boxes are confidence boxes.

###### Theorem 4.1

A quantile box with quantile q is a confidence box with confidence p = 1 − q when p(b|h) is an ordered object distribution. p(b|h) is an ordered object distribution if for any two distinct boxes bi, bj in the sample space of p(b|h), one box must be contained within the other: bi ⊂ bj or bj ⊂ bi. Empirically we find that quantile boxes are good approximations for confidence boxes even when p(b|h) is not an ordered object distribution. See Figure 4 for a visualization of occupancy and confidence boxes.

![Figure 4](https://media.arxiv-vanity.com/render-output/7101379/x4.png)

Figure 4: Consider a scenario where we are estimating the bounding box of a tightly packed bin of stacked boxes. a) There is not enough visual information to estimate the object height; however, we know that the object could have heights H/i for i∈{1,2,3,4} with equal probability. b) We compute the occupancy O(x) for different regions. c) We visualize occupancy quantiles Q(q), which correspond to confidence boxes b_{1−q}. Notice that as the confidence requirement increases, the size of the box increases to ensure we can contain the true object.

Quantile boxes provide an efficient way to make confidence box predictions with an autoregressive model. We can use the autoregressive distribution to estimate occupancy using supervision from 3D box labels (without requiring meshes for direct occupancy supervision). Occupancy quantiles provide a fast approach for confidence box estimation on ordered object distributions and a good confidence box approximation for general object distributions. 
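A rough numpy sketch of Equations (4)-(6) for axis-aligned boxes is below; it estimates occupancy on a dense grid from sampled boxes, thresholds it to get the occupancy quantile, and takes the tight axis-aligned box around it. The box format, grid resolution, and helper names are assumptions, and the authors' fast quantile-box algorithm and rotated boxes are not reproduced here.

```python
# Illustrative sketch of occupancy (Eq. 4), occupancy quantiles (Eq. 5) and
# quantile boxes (Eq. 6) for axis-aligned boxes on a voxel grid.
import numpy as np

def quantile_box(samples, q, resolution=64):
    """samples: (K, 6) array of [xmin, ymin, zmin, xmax, ymax, zmax] drawn from p(b|h).
    Returns the tight axis-aligned box around {x : O(x) > q}, or None if empty."""
    lo, hi = samples[:, :3].min(axis=0), samples[:, 3:].max(axis=0)
    grid = [np.linspace(lo[d], hi[d], resolution) for d in range(3)]
    X, Y, Z = np.meshgrid(*grid, indexing="ij")
    pts = np.stack([X, Y, Z], axis=-1)                         # (R, R, R, 3)

    occ = np.zeros(pts.shape[:3])
    for b in samples:                                          # O(x) ~ fraction of boxes containing x
        occ += np.all((pts >= b[:3]) & (pts <= b[3:]), axis=-1)
    occ /= len(samples)

    mask = occ > q                                             # occupancy quantile Q(q)
    if not mask.any():
        return None
    sel = pts[mask]
    return np.concatenate([sel.min(axis=0), sel.max(axis=0)])  # tight box over Q(q)

def uncertainty(samples, alpha=0.2, beta=0.8):
    """The paper's U_{alpha,beta} = 1 - IOU(b_alpha, b_beta), here for axis-aligned boxes."""
    ba, bb = quantile_box(samples, alpha), quantile_box(samples, beta)
    lo, hi = np.maximum(ba[:3], bb[:3]), np.minimum(ba[3:], bb[3:])
    inter = np.prod(np.maximum(hi - lo, 0.0))
    vol = lambda b: np.prod(b[3:] - b[:3])
    return 1.0 - inter / (vol(ba) + vol(bb) - inter)
```

Under the ordered-distribution assumption of Theorem 4.1, a 0.1-quantile box produced this way behaves like a roughly 0.9-confidence box.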
Appendix [0.B](#Pt0.A2 "Appendix 0.B Quantile Box ‣ Autoregressive Uncertainty Modeling for 3D Bounding Box Prediction") has the full proof of Theorem [4.1](#S4.Thmtheorem1 "Theorem 4.1 ‣ 4.2 Quantile and Confidence Boxes ‣ 4 Applying Autoregressive 3D Bounding Box Models ‣ Autoregressive Uncertainty Modeling for 3D Bounding Box Prediction") and the details of our fast quantile box algorithm. ### 4.3 Uncertainty Measure Uncertainty estimation is an important application of bounding box estimation. When the 3D extent of an object is unknown or not fully observed, it can be valuable if a model can also indicate that its predictions are uncertain. For instance, a robot may choose to manipulate that uncertain object more slowly to avoid collisions, or an autonomous vehicle may be more cautious around a moving object of unknown size. A pointwise predictor can accomplish this by predicting both a mean μ and variance σ2 for each box parameter, maximizing a N(μ,σ2) likelihood [he2019bounding]. However, the spread of the distribution is measured independently for each box parameter which doesn’t measure the spread of the overall box distribution well. With an autoregressive box parameterization, we can measure uncertainty in the space of boxes using quantile boxes. Let bα and bβ be two quantile boxes with different quantiles. If we consider these boxes as confidence boxes, we can interpret (bα,bβ) as a confidence interval or the spread of the box distribution. With this intuition, we can measure uncertainty using the IOU of different quantile boxes Uα,β=1−IOU(bα,bβ). This Uα,β effectively measures the span of the distribution in units of relative volume. ### 4.4 Dimension Conditioning For some robotics applications, such as object manipulation in industrial settings, we are often presented with Stock-Keeping Unit, or SKU, information beforehand. In these scenarios, the dimensions of each SKU are provided, and the prediction task essentially boils down to correctly assigning the dimensions to a detected object instance, and predicting the pose of the 3D bounding box. The autoregressive nature of our model allows for conveniently conditioning on the dimensions of each bounding box. However, we don’t know which object in the scene corresponds to which SKU dimensions. How can we leverage dimension information from multiple SKUs without object-SKU correspondence? Our autoregressive model provides an elegant solution using conditioning and likelihood evaluation. Given {d(1),...,d(k)} known SKU dimensions, we can make a bounding box prediction using this information by maximizing: | | | | | | --- | --- | --- | --- | | | b∗=argmaxb{maxd(1),...,d(k)p(b|d(i),h)} | | (7) | We can find the optimal b∗ by using beam search conditioned on each of the di and returning the box with the highest likelihood. Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Autoregressive Uncertainty Modeling for 3D Bounding Box Prediction") shows an illustrative example of how dimension conditioning can be used to greatly increase the fidelity of the predicted 3D bounding boxes. 5 Experiments -------------- We designed our experiments to answer the following questions: * How does an autoregressive bounding box predictor perform compared to a pointwise predictor, across a variety of domains and model architectures? * How meaningful are the uncertainty estimates from an autoregressive model? Are quantile boxes confidence boxes for general object distributions? 
### 5.1 Datasets

To demonstrate the flexibility of our method, we conducted experiments on a diverse set of indoor, outdoor, and industrial datasets:

SUN-RGBD [song2015sun] is a real-world dataset containing monocular images and point clouds captured from a stereo depth camera. It features a large variety of indoor scenes and is one of the most popular benchmarks in 3D object detection. The box labels only include one rotational degree of freedom θ.

Scannet [dai2017scannet] is a dataset of indoor 3D reconstructions. There are 18 classes and box labels are axis-aligned (no rotation). We train on 1201 scenes and evaluate on 312 validation scenes.

KITTI [kitti] is a widely popular 3D detection dataset for autonomous driving. Objects in KITTI have one degree of rotational freedom θ, and we report evaluation results on the validation split.

COB-3D. Common Objects in Bins 3D is a simulated dataset rendered by [Theory Studios](https://www.theorystudios.com/) to explore a qualitatively different set of challenges than the ones exhibited in popular datasets in the literature. We are releasing nearly 7000 scenes that aim to emulate industrial order-picking environments, with each scene consisting of a bin containing a variety of items. There are two main themes we chose to highlight: first, the objects are in a greater range of orientations than any other 3D-bounding-box dataset. In particular, a model that performs well must reason about complete 3D rotations, whereas the state-of-the-art methods on SUN-RGBD only need to predict one rotational degree of freedom. Secondly, it exhibits many types of ambiguity, including rotation symmetry, occlusion reasoning in cluttered scenes, and tightly-packed bins with unobserved dimensions. See Appendix 0.C for full details on this dataset, including visual examples.

### 5.2 Evaluation

To evaluate 3D-bounding-box predictions, intersection-over-union, or IoU, is commonly used to compare the similarity between two boxes. 3D object detection uses mean average precision, or mAP, to measure how well a detector trades off precision and recall. IoU is used to determine whether a prediction is close enough to a ground-truth box to constitute a true positive. For 3D bounding-box estimation, detection has already happened, so we simply measure the mean IoU between the prediction and ground truth, averaged across objects. Unlike 2D detection, many applications that use 3D bounding boxes especially care about underestimation more than overestimation: if the predicted bounding box is too large, that is generally a less costly error than if it is too small. In the latter case, there are parts of the object that are outside the bounding box, which may result in collisions in robotics or autonomous driving settings. To help quantify this error asymmetry, we consider a new similarity function, the intersection-over-ground-truth (IoG). IoG measures what fraction of the ground truth box is contained within the predicted box; when IoG is 1, the ground truth box is fully contained in the predicted box. With IoG and IoU, we have a more complete understanding of the types of errors that a bounding-box predictor is making. For the detection task, we compute mAP separately using IoU and IoG, and for the estimation task, we compute the mean IoG along with the mean IoU. 
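Since IoG carries the asymmetry argument, here is a minimal sketch for axis-aligned boxes in [xmin, ymin, zmin, xmax, ymax, zmax] format; the benchmark boxes are rotated, so the real evaluation needs an oriented-box intersection routine, and the numbers in the comments refer only to the toy example.

```python
# Minimal sketch of IoU vs IoG (Section 5.2) for axis-aligned boxes.
import numpy as np

def _volume(b):
    return float(np.prod(np.maximum(b[3:] - b[:3], 0.0)))

def _intersection(a, b):
    lo, hi = np.maximum(a[:3], b[:3]), np.minimum(a[3:], b[3:])
    return float(np.prod(np.maximum(hi - lo, 0.0)))

def iou(pred, gt):
    inter = _intersection(pred, gt)
    return inter / (_volume(pred) + _volume(gt) - inter)

def iog(pred, gt):
    return _intersection(pred, gt) / _volume(gt)   # 1.0 means the ground truth is fully contained

gt    = np.array([0, 0, 0, 1, 1, 1], float)
big   = np.array([-0.2, -0.2, -0.2, 1.2, 1.2, 1.2], float)   # overestimate
small = np.array([0.1, 0.1, 0.1, 0.9, 0.9, 0.9], float)      # underestimate
print(iou(big, gt), iog(big, gt))      # ~0.36, 1.00 -> containment preserved
print(iou(small, gt), iog(small, gt))  # ~0.51, 0.51 -> parts of the object stick out
```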
| | | | | | --- | --- | --- | --- | | | | IoU | IoG | | Dataset | Method | AP0.25 | AP0.50 | APall | AP0.25 | AP0.50 | APall | | SUN-RGBD | FCAF3D | 63.8 | 48.2 | 37.42 | 64.72 | 59.82 | 48.75 | | 3DETR | 59.52 | 32.17 | 31.13 | 63.00 | 53.33 | 44.08 | | VoteNet | 60.71 | 38.98 | 30.25 | 62.81 | 54.58 | 43.62 | | ImVoteNet | 64.24 | 39.38 | 31.12 | 67.00 | 57.41 | 45.78 | | Beam Search | 62.94 | 47.03 | 38.15 | 64.75 | 58.50 | 47.17 | | Quantile 0.1 | 61.21 | 30.94 | 31.06 | 65.89 | 64.34 | 60.08 | | Quantile 0.4 | 63.46 | 48.41 | 38.43 | 65.34 | 61.68 | 51.76 | | Quantile 0.45 | 63.47 | 48.64 | 38.55 | 65.19 | 61.03 | 50.36 | | Quantile 0.5 | 63.30 | 47.70 | 38.50 | 64.99 | 59.83 | 48.44 | | Scannet | FCAF3D | 68.53 | 53.87 | 43.32 | 72.05 | 67.63 | 60.66 | | 3DETR | 64.09 | 47.16 | 39.57 | 68.62 | 59.17 | 49.82 | | | Beam Search | 69.06 | 53.67 | 43.85 | 71.46 | 66.10 | 59.13 | | | Quantile 0.1 | 67.10 | 43.13 | 34.17 | 72.23 | 70.01 | 66.73 | | | Quantile 0.2 | 68.03 | 48.68 | 38.27 | 72.30 | 69.68 | 65.43 | | | Quantile 0.4 | 68.73 | 52.98 | 42.76 | 72.08 | 67.74 | 61.98 | | KITTI | | AP IoU Hard Split | AP IoG Hard Split | | Method | Car | Ped. | Cycl. | Car | Ped. | Cycl. | | PVRCNN | 82.37 | 53.12 | 68.69 | 91.86 | 67.08 | 73.14 | | Beam Search | 82.37 | 52.28 | 69.13 | 91.84 | 66.96 | 73.40 | | Quantile 0.1 | 59.75 | 39.26 | 58.38 | 96.02 | 71.85 | 76.09 | | Quantile 0.4 | 81.98 | 54.15 | 68.45 | 93.98 | 70.63 | 74.08 | | Quantile 0.5 | 82.32 | 53.78 | 69.03 | 91.84 | 68.14 | 73.52 | | | | | | | | | | Table 1: 3D Object Detection results on SUN-RGBD, Scannet, and KITTI ### 5.3 3D Object Detection To evaluate the autoregressive box parameterization for 3D Object Detection, we evaluate Autoregressive FCAF3D and Autoregressive PV-RCNN introduced in Section [3.2.1](#S3.SS2.SSS1 "3.2.1 Autoregressive 3D Object Detection. ‣ 3.2 Model Architectures ‣ 3 Autoregressive 3D Bounding Box Prediction ‣ Autoregressive Uncertainty Modeling for 3D Bounding Box Prediction"). Table [1](#S5.T1 "Table 1 ‣ 5.2 Evaluation ‣ 5 Experiments ‣ Autoregressive Uncertainty Modeling for 3D Bounding Box Prediction") shows the comparison between autoregressive models and baselines on SUN-RGBD, Scannet, and KITTI. We find that beam search generally matches the baseline performance, if not exceeding performance on IoU APall. As for quantile boxes, we find that lower quantiles result in higher IoG mAP which suggests that the predicted boxes are more likely to contain the ground truth box. This is consistent with our claim from Theorem [4.1](#S4.Thmtheorem1 "Theorem 4.1 ‣ 4.2 Quantile and Confidence Boxes ‣ 4 Applying Autoregressive 3D Bounding Box Models ‣ Autoregressive Uncertainty Modeling for 3D Bounding Box Prediction") since lower quantiles correspond to higher confidence boxes and must contain the true object with higher probability. We find that quantile boxes 0.4-0.5 strike the best balance between IoU and IoG, achieving better mAP than baselines in most cases. This flexible quantile parameter enables applications to trade off bounding box accuracy as measured by IoU with containment probability as measured by IoG. For instance, an autonomous vehicle may use a lower quantile to mitigate the risk of collisions at the cost of some bounding box accuracy. ### 5.4 3D Bounding Box Estimation We evaluate the bounding box estimation on COB-3D using the model architecture described in Section [3.2.2](#S3.SS2.SSS2 "3.2.2 Autoregressive 3D Bounding Box Estimation. 
‣ 3.2 Model Architectures ‣ 3 Autoregressive 3D Bounding Box Prediction ‣ Autoregressive Uncertainty Modeling for 3D Bounding Box Prediction"). To compare the effectiveness of our autoregressive parameterization, we train the same model architecture with different box parameterizations and losses. All models receive the same 2D detection results and features as input and must make 3D bounding box predictions for each detected object. We consider 4 baseline parameterizations for this task inspired by various works in the literature: L1 Regression: In this parameterization, the model outputs 9 real values for each of the 9 box parameters: b=(dx,dy,dz,cx,cy,cz,ψ,θ,ϕ). The model predicts dimensions and centers in coordinates normalized around the object’s point cloud. This model is trained using a L1 loss over the normalized box parameters L(b,g)=||b−g||1, where g is the ground truth box [misra2021end]. Gaussian: For this baseline, the model outputs 18 real values for the mean, μ, and log-variance, logσ2, of 9 Gaussian distributions N(μ,σ) over the box parameters b [he2019bounding, meyerlasernet]. Predicting the variance enables the model to output uncertainty over different box parameters, independently of each other. We train this model using maximum likelihood: L(μ,logσ2,g)=−∑ilogN(gi;μi,σi). Discrete: In some prior works, box parameters are predicted as discrete bins but not in an autoregressive manner [conf/iccv/QiLHG19]. To evaluate this parameterization and ablate the necessity of autoregressive predictions, we predict each box parameter independently as discrete bins: logp(b|h)=∑9i=1logp(bi|h) 4-Point: This baseline outputs 12 real values for four 3D corner points (p0,p1,p2,p3)∈R3, constituting a 3D bounding box [meyerlasernet, Meyer2020LearningAU]. We ensure that the 3D bounding box is orthogonal by applying the Gram-Schmidt process on the basis vectors (p1−p0,p2−p0,p3−p0). We use an L1-loss on the difference between the predicted points and the points of the ground truth 3D bounding box. Since there are many permutations of valid 4-point corners of a bounding box, we supervise on the permutation that induces the minimum loss. | | | | | | | | | --- | --- | --- | --- | --- | --- | --- | | | IoU | IoG | F1 | errdim[m] | errquat[rad] | errcenter[m] | | L1 Regression | 0.4219 | 0.6113 | 0.4992 | 0.0436 | 0.4667 | 0.0138 | | Discrete | 0.5232 | 0.6282 | 0.5709 | 0.0339 | 0.2926 | 0.0105 | | Gaussian | 0.3169 | 0.5304 | 0.3967 | 0.0450 | 0.5154 | 0.0119 | | 4-Point | 0.5688 | 0.7113 | 0.6321 | 0.0332 | 0.1999 | 0.0132 | | Beam Search | 0.6296 | 0.7877 | 0.6999 | 0.0287 | 0.1598 | 0.0109 | | Quantile 0.1 | 0.3821 | 0.9723 | 0.5486 | 0.0986 | 0.1762 | 0.0123 | | Quantile 0.4 | 0.5949 | 0.8871 | 0.7122 | 0.0377 | 0.1640 | 0.0110 | | Quantile 0.5 | 0.6275 | 0.8295 | 0.7126 | 0.0318 | 0.1657 | 0.0110 | | Conditioning | 0.6709 | 0.7899 | 0.7215 | 0.0086 | 0.1674 | 0.0096 | Table 2: Results of the proposed method & baselines on our dataset. We also show results for conditioning our method on ground truth dimensions #### 5.4.1 Metrics. To make reasoning about the trade-off between IoG and IoU more quantifiable, we report the F1-score equivalent for this use case, i.e., F1score=2(IoU∗IoG)IoU+IoG. We further report metrics on the dimension & pose errors, which are computed as follows: * errdim=sum(|d−dgt|), where we compute the error across all possible permutations and then choose the one with the smallest error. 
* errquat = 2 arccos(|⟨q, q_gt⟩|), where q represents the rotational part of the pose as a quaternion. We compute the error across all possible symmetries and choose the one with the smallest error. 
* errcenter = ‖c − c_gt‖_2, where c is the 3D center of the bounding box. 

#### 5.4.2 Results.

Table 2 shows how our autoregressive methods compare to the baseline parameterizations. We find that Beam Search achieves the best IoU, dimension & rotation error. As for the Quantile methods, we find that lower quantiles achieve higher IoG while sacrificing IoU and dimension error. Quantile 0.5 offers the best tradeoff in terms of overall performance, achieving higher IoG with similar IoU and dimension error compared to Beam Search. Baseline models that predict box parameters directly generally performed worse since those models cannot properly capture multimodal correlations across the box parameters. The Discrete baseline performs the best in terms of center error, but we can see that the best autoregressive methods are only a few millimeters worse. For bounding box predictions with full rotations in SO(3), we find that an autoregressive bounding box parameterization can effectively model rotation uncertainty, achieving the lowest rotation error. We can also see that conditioning the model on known dimensions of the items in the scene increases performance in all relevant metrics (besides IoG), most notably in IoU & dimension error. Note that the dimension error is non-zero because the model is given the dimensions as an unordered set, and still needs to predict the association of each dimension tuple to the corresponding item in the scene. 

### 5.5 Quantile and Confidence Boxes

In Section 4.2 we introduced quantile boxes as a fast approximation for confidence boxes. We showed that when p(b|h) is an ordered object distribution, a quantile box with quantile q is equivalent to a confidence box with p = 1 − q and should contain the true object with probability p. While it's hard to ensure that real-world objects follow an ordered distribution, we can empirically evaluate whether q confidence boxes contain the ground truth object 1 − q fraction of the time. To test our hypothesis, we predict quantile boxes with different q and calculate the fraction of predictions f with IoG > 0.95. In Figure 5, we can see that f ≈ 1 − q and follows a generally linear relationship. This suggests that even for general object distributions, quantile boxes can be an effective approximation for confidence boxes.

![Figure 5](https://media.arxiv-vanity.com/render-output/7101379/x5.png)

Figure 5: We compare the fraction f of predicted boxes that contain the ground truth box for different quantiles q, and find that q-quantile boxes contain approximately f ≈ 1 − q of the ground truth boxes.

| Method | ROC AUC | Spearman r_s |
| --- | --- | --- |
| Gaussian | 0.731 | -0.530 |
| Quantile 0.2 | 0.897 | -0.865 |
| Quantile 0.5 | 0.878 | -0.789 |
| Quantile 0.8 | 0.967 | -0.850 |

Table 3: We compare the Quantile Uncertainty Measure U_{0.2,0.8} with Gaussian dimension variance G, and find that U_{0.2,0.8} is a better predictor of ground-truth IoU compared to G as measured by ROC AUC. 
U0.2,0.8 is also better correlated with ground-truth IoU compared to G as measured by Spearman rs ### 5.6 Uncertainty Measures In Section [4.3](#S4.SS3 "4.3 Uncertainty Measure ‣ 4 Applying Autoregressive 3D Bounding Box Models ‣ Autoregressive Uncertainty Modeling for 3D Bounding Box Prediction"), we introduced the uncertainty measure using quantile boxes Uα,β=1−IoU(bα,bβ) as a measure of the span of the confidence box interval. To evaluate the effectiveness of this uncertainty measure, we calculate the ROC AUC of using U0.2,0.8 to predict when the IoU of the predicted box b with the ground truth box g is less than 0.25. We also measure the correlation between ground truth IoU and uncertainty using the Spearman’s rank correlation rs. We compare U0.2,0.8 on different quantile boxes against Gaussian dimension variance G=σdxσdyσdzμdxμdyμdz on the Gaussian baseline. Table [3](#S5.T3 "Table 3 ‣ 5.5 Quantile and Confidence Boxes ‣ 5 Experiments ‣ Autoregressive Uncertainty Modeling for 3D Bounding Box Prediction") shows that quantile uncertainty U0.2,0.8 can be a better uncertainty measure than G. 6 Discussion ------------- We introduced an autoregressive formulation to 3D bounding prediction that greatly expands the ability of existing architectures to express uncertainty. We showed that it can be applied to both the 3D object detection and 3D bounding-box estimation settings, and explored different ways to extract bounding box predictions from such autoregressive models. In particular, we showed how the uncertainty expressed by these models can make high confidence predictions and meaningful uncertainty estimates. We introduced a dataset that requires predicting bounding boxes with full 3D rotations, and showed that our model naturally handles this task as well. While autoregressive models are just one class of distributionally expressive models, they are not the only option for more expressive bounding box modeling. We hope that future lines of work will continue to build upon the method, dataset, and benchmarks we introduced in this paper.
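A sketch of the rotation metric from Section 5.4.1, with the quaternion convention (w, x, y, z) and the identity-only default symmetry set as assumptions of the sketch rather than details from the paper:

```python
# Hypothetical sketch of err_quat = 2*arccos(|<q, q_gt>|), the geodesic angle
# between two orientations, minimized over a set of object symmetries.
import numpy as np

def quat_angle(q1, q2):
    d = abs(float(np.dot(q1, q2)))
    return 2.0 * np.arccos(np.clip(d, -1.0, 1.0))

def err_quat(q_pred, q_gt, symmetries=((1.0, 0.0, 0.0, 0.0),)):
    """Each symmetry is a unit quaternion (w, x, y, z) mapping the object onto itself."""
    def qmul(a, b):  # Hamilton product
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])
    return min(quat_angle(np.asarray(q_pred), qmul(np.asarray(q_gt), np.asarray(s)))
               for s in symmetries)

print(err_quat([1, 0, 0, 0], [np.cos(np.pi/8), np.sin(np.pi/8), 0, 0]))  # ~pi/4 (45 deg about x)
```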
49e7c461-263a-4e3d-b43d-b3ac483d5ee8
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Boredom vs. Scope Insensitivity How much would you pay to see a typical movie? How much would you pay to see it 100 times? How much would you pay to save a random stranger’s life? How much would you pay to save 100 strangers? If you are like a typical human being, your answers to both sets of questions probably exhibit failures to aggregate value linearly. In the first case, we call it boredom. In the second case, we call it scope insensitivity. Eliezer has argued on [separate](/lw/n9/the_intuitions_behind_utilitarianism/) [occasions](/lw/xr/in_praise_of_boredom/) that one should be regarded as an obvious error to be corrected, and the other is a gift bestowed by evolution, to be treasured and safeguarded. Here, I propose to consider them side by side, and see what we can learn by doing that. (Eliezer sometimes treats scope insensitivity as a simple arithmetical error that the brain commits, like in this quote: “the brain can't successfully multiply by eight and get a larger quantity than it started with”. Considering that the brain has little trouble multiplying by eight in other contexts and the fact that scope insensitivity starts with numbers as low as 2, it seems more likely that it’s not an error but an adaptation, just like boredom.) The nonlinearities in boredom and scope insensitivity both occur at two different levels. On the affective or hedonic level, our emotions fail to respond in a linear fashion to the relevant input. Watching a movie twice doesn’t give us twice the pleasure of watching it once, nor does saving two lives feel twice as good as saving one life. And on the level of decision making and revealed preferences, we fail to act as if our utilities scale linearly with the number of times we watch a movie, or the number of lives we save. Note that these two types of nonlinearities are logically distinct, and it seems quite possible to have one without the other. The refrain “[shut up and multiply](http://wiki.lesswrong.com/wiki/Shut_up_and_multiply)” is an illustration of this. It exhorts (or reminds) us to value lives directly and linearly in our utility functions and decisions, instead of only valuing the sublinear emotions we get from saving lives. We sometimes [feel bad](/lw/v5/inner_goodness/) that we aren’t sufficiently empathetic. Similarly, we feel bad about some of our boredoms. For example, consider a music lover who regrets no longer being as deeply affected by his favorite piece of music as when he first heard it, or a wife who wishes she was still as deeply in love with her husband as she once was. If they had the opportunity, they may very well choose to edit those boredoms away. Self-modification is dangerous, and the bad feelings we sometimes have about the way we feel were never meant to be used directly as a guide to change the wetware behind those feelings. If we choose to edit some of our boredoms away, while leaving others intact, we may find ourselves doing the one thing that we’re not bored with, over and over again. Similarly, if we choose to edit our scope insensitivity away completely, we may find ourselves sacrificing all of our other values to help random strangers, who in turn care little about ourselves or our values. I bet that in the end, if we reach reflective equilibrium after careful consideration, we’ll decide to reduce some of our boredoms, but not eliminate them completely, and become more empathetic, but not to the extent of full linearity. But that’s a problem for a later time. 
What should we do today, when we can’t change the way our emotions work to a large extent? Well, first, nobody argues for “shut up and multiply” in the case of boredom. It’s clearly absurd to watch a movie 100 times, as if you’re not bored with it, when you actually are. We simply don’t value the experience of watching a movie apart from whatever positive emotions it gives us. Do we value saving lives independently of the good feelings we get from it? Some people seem to (or claim to), while others don’t (or claim not to). For those who do, some value (or claim to value) the lives saved linearly, and others don’t. So the analogy between boredom and scope insensitivity starts to break down here. But perhaps we can still make some final use out of it: whatever arguments we have to support the position that lives saved ought to be valued apart from our feelings, and linearly, we better make sure those arguments do not apply equally well to the case of boredom. Here’s an example of what I mean. Consider the question of why we should consider the lives of random strangers to be valuable. You may be tempted to answer that we know those lives are valuable because we feel good when we consider the possibility of saving a stranger’s life. But we also feel good when we watch a well-made movie, and we don’t consider the watching of a movie to be valuable apart from that good feeling. This suggests that the answer is not a very good one. #### Appendix: Altruism vs. Cooperation This may be a good time to point out/clarify that I consider cooperation, but not altruism, to be a core element of rationality. By “cooperation” I mean techniques that can be used by groups of individuals with disparate values to better approximate the ideals of group rationality (such as Pareto optimality). [According to Eliezer](/lw/v5/inner_goodness/), > > "altruist" is someone who chooses between actions according to the criterion of others' welfare > > > In cooperation, we often takes others' welfare into account when choosing between actions, but this "altruism" is conditional on others reciprocating and taking our welfare into account in return. I expect that what Eliezer and others here mean by "altruist" must consider others’ welfare to be a terminal value, not just an instrumental one, and therefore cooperation and true altruism are non-overlapping concepts. (Please correct me if I'm wrong about this.)
a62eb8a1-a59e-4824-96f6-cbee6705f380
trentmkelly/LessWrong-43k
LessWrong
Winners-take-how-much? The treatments I have seen to date of the "winner-takes-all'' dynamic in AI development focus on the risk-taking engendered by this dynamic: the more AI researchers have to gain from being early movers, the stronger the incentive to cut corners in safety measures. Concern about a result in which a small number of winners has successfully emerged, without, that is, blowing everything up in the effort, seems to be muted. "Winner-takes-all," in other words, is intended only as a term of art from conventional political economy, where "all" refers to the spoils of an otherwise limited contest between willing contestants. The runners-up may lose their investments, but not their lives. To put it another way, we could describe the prevailing model as having three contests: 1) between all humans and a hypothetical uncontrolled AI; 2) between firms racing to establish monopolistic advantage in the production and sale of controlled AI; 3) between humans in general and these firms over the unprecedented risks imposed by the latter on the former, intermixed with a conventional struggle between the former as consumers and the latter as would-be monopoly producers. Of the three contests, the risk of unconstrained rivalrous behavior -- that is, a fight to the death -- is only considered to apply to the one between the hypothetical uncontrolled AI and all humans, collectively. What seems to me to be missing from this model is a rivalrous intra-species fight to the death. This should be surprising, given that we posit a jealous AI, built and trained by humans, acquiring and pursuing goals that put it in opposition to the existence of 100% of humans. We should also fear humans with AI acquiring and pursuing goals that put them in opposition to the existence of the other humans. This could be treated as a semantic problem, if we define any genocidal application of AI to be a form of "misalignment," whether the AI is manipulating humans to achieve murderous ends or vice versa. My tes
6b2c9565-074a-4a03-9e6e-003dc3c4ed86
trentmkelly/LessWrong-43k
LessWrong
Most likely is not likely Cross-posted from Telescopic Turnip. “And I stop at every mirror just to stare at my own posterior.” – Megan Thee Stallion I love bayesian statistics just as much as you do. However, there is a trap that people keep falling into when interpreting bayesian statistics. Here is a quick explainer, just so I can link to it in the future. You roll a 120-faced die. If it falls on one, you stop. If it gives any other number, you re-roll until you get a one. How many times do you think you will have to roll the die? Let’s do the math: you have a 1/120 chance of rolling the die only once, a 119/120 × 1/120 chance of rolling it twice, and so on. So a single roll is the most likely outcome, any other number of rolls will happen less often. Yet, a 1/120 chance is not even 1%. In this case, “most likely” is actually pretty unlikely. While this problem is pretty obvious in a simple die-roll game, it can become very confusing for more complicated questions. One of them, a classic Internet flamewar topic, is the doomsday argument. It’s a calculation of the year humanity is the most likely to go extinct, using Bayesian statistics. Briefly, it goes like this: * We have no idea when or how humanity can possibly go extinct, so we start with no strong a priori beliefs about that, aside from the fact that it’s in the future. However, we have one observation: we know we are living in the 21st century. * Bayes’ theorem says that the probability of an hypothesis is proportional to your prior beliefs times the likelihood (that is, the probability of observing what we observe under the hypothesis). * If humanity goes extinct in 3 billion years, it means you were born in the very, very beginning. If we pick a human life at random, it’s quite unlikely to live in the 21st century. * If humanity goes extinct in a few centuries, then you were born somewhere around the middle of the crowd, and living in the 21st was pretty likely. The prior is roughly the same, but the likelihood is much hi
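A minimal Python sketch of the arithmetic behind the die-roll example (my illustration, assuming a fair 120-sided die; the number of rolls until the first "1" is geometrically distributed):

```python
# Sketch of the die-roll example: N = number of rolls until the first "1".
# N is geometric with p = 1/120, so the mode is 1 even though P(N = 1) is tiny.
p = 1 / 120

def pmf(k: int) -> float:
    """P(N = k): the first success happens on roll k."""
    return (1 - p) ** (k - 1) * p

print(pmf(1))                              # ~0.0083: the single most likely outcome
print(sum(pmf(k) for k in range(1, 84)))   # ~0.5: roughly 83 rolls to reach even odds
print(1 / p)                               # 120.0: the expected number of rolls
```

The modal outcome has under a 1% chance, which is the post's point: "most likely" can still be very unlikely.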
fd99d3f3-9403-43e4-a5d3-eb77ed1c805c
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
How come there isn't that much focus in EA on research into whether / when AIs are likely to be sentient? As far as I know, there isn't that much funding or research in EA on AI sentience (though there is some? e.g. [this](https://forum.effectivealtruism.org/posts/gFoWdiGYtXrhmBusH/key-questions-about-artificial-sentience-an-opinionated))

I can imagine some answers:

* Very intractable
* Alignment is more immediately the core challenge, and widening the focus isn't useful
* Funders have a working view that additional research is unlikely to affect (e.g. that AIs will eventually be sentient?)
* Longtermist focus is on AI as an X-risk, and the main framing there is on avoiding humans being wiped out

But it also seems important and action-relevant:

* Current framing of AI safety is about aligning with humanity, but making AI go well for AIs could be comparably / more important
* Naively, if we knew AIs would be sentient, it might make 'prioritising AIs' welfare in AI development' a much higher impact focus area
* It's an example of an area that won't necessarily attract resources / attention from commercial sources

(I'm not at all familiar with the area of AI sentience and posted without much googling, so please excuse any naivety in the question!)
4bd63a8b-4fbd-48ff-b7e1-8632e4a5bf49
trentmkelly/LessWrong-43k
LessWrong
Learning the smooth prior Most of this document is composed of thoughts from Geoffrey Irving (safety researcher at DeepMind) written on January 15th, 2021 on Learning the Prior/Imitative Generalization, cross-examination, and AI safety via debate, plus some discussion between Geoffrey and Rohin and some extra commentary from me at the end. – Evan Geoffrey on learning the smooth prior Vague claims This doc is about a potential obstacle to Paul’s learning the prior scheme (LTP). Before reading this doc, please read either Beth’s simplified exposition or Paul’s original. The intention of this doc was to argue for two claims, but weakly since I don’t have much clarity: 1. LTP has an obstacle in assigning joint probabilities to similar statements. 2. The best version of LTP may collapse into a version of debate + cross-examination However, I don’t quite believe (2) after writing the doc (see the section on “How LTP differs from cross-examination”). Wall of text → generative model As originally sketched, the prior z in learning the prior is a huge wall of text, containing useful statements like “A husky is a large, fluffy dog that looks quite like a wolf”, and not containing wrong facts like “if there are a lot of white pixels in the bottom half of the image, then it’s a husky” (statements taken from Beth’s post). A wall of text is of course unreasonable. Let’s try to make it more reasonable: 1. No one believes that a wall of text is the right type for z; instead, we’d like z to be some sort of generative network that spits out statements. (The original proposal wasn’t really a wall of text either; the wall was just a thought experiment.) 2. We likely want probabilities attached to these statements so that the prior can include uncertain statements. 3. Whether we attach probabilities or not, the validity of a statement like “A husky is a large, fluffy dog that looks quite like a wolf” depends on the definition of the terms. At a high level, we presumably want to solve this with som
ac395e21-f730-4fac-9138-be2a4936cbc7
trentmkelly/LessWrong-43k
LessWrong
Meetup : Regular Moscow Meetup Discussion article for the meetup : Regular Moscow Meetup WHEN: 31 August 2014 02:00:00PM (+0400) WHERE: Russia, Moscow, ulitsa L'va Tolstogo 16 Here's our plan: * A talk about Symbolic self-completion theory. * A talk about Binaural beats. * Training game. * A talk about How to measure anything. * Debates on Poppers' protocol. Details and schedule: https://lesswrong-ru.hackpad.com/-31--Ha1ZZZn9dPk Yudcoins, positive reinforcement and pizza will all be present. If you've been to our meetups, you know what I'm talking about, and if you didn't, the best way to find out is to come and see for yourself. Info for newcomers: We gather in the Yandex office, you need the first revolving door under the archway. Here is a guide how to get there: http://company.yandex.ru/contacts/redrose/ Call Slava at +7(926)313-96-42 if you're late. We start at 14:00 and stay until at least 19-20. Please pay attention that we only gather near the entrance and then come inside. Discussion article for the meetup : Regular Moscow Meetup
537550fc-a203-418b-b4b8-c95c1578c600
trentmkelly/LessWrong-43k
LessWrong
Meetup : Melbourne Rationality Dojo, February - Tai Chi & Lightning Talks Discussion article for the meetup : Melbourne Rationality Dojo, February - Tai Chi & Lightning Talks WHEN: 12 February 2017 03:30:00PM (+1100) WHERE: Janet Lady Clarke Memorial Rotunda, Queen Victoria Gardens, Linlithgow Ave, Melbourne Bring warm clothes - we'll be outside (but probably under cover). Melbourne Rationality Dojo, February - Tai Chi & Lightning Talks Our next meetup for Less Wrong Melbourne is on Sunday 12 February: THIS MONTH: (1) An introduction to Tai Chi from Mark. ("Dojo" has never been more appropriate.) He'll also talk about what to make of the concept of "Chi" if you don't believe in the supernatural and you're skeptical of mystical energy. (2) Lightning talks. A number of speakers at 5 to 10 min each (including questions) on the art of rationality, or something to enlighten our understanding of the world. (3) Trigger Action Planning: An effective way of establishing habits that you want. BRING: An idea or question to share. Perhaps a snack to share. (Healthy snack choice is optional but welcome.) WHEN: 3:30pm to 6:30pm. The meeting proper will begin at 4pm sharp. At about 6:30pm some of us generally head for dinner in the CBD, and you are welcome to join us. WHERE: In or near the Janet Lady Clarke Memorial Rotunda, Queen Victoria Gardens, Linlithgow Ave, Melbourne. https://goo.gl/maps/eEs9Mi8LuYH2 This is a short walk from the NGV. If you have any trouble finding the venue on the day, please text or call Chris on 0439 471 632. There's a chance of light rain, but we'll be pretty well protected in the rotunda. If the weather's good, we might even go out onto the grass nearby. (If you need a toilet, there's one between the rotunda and the NGV, near King-Edward-VII-on-a-horse. It's one of those dystopian automatic loos which provides an opportunity to reflect on the potential dangers of a poorly managed transition to a robot-led society.) WHO ARE WE AND WHY DO WE MEET? The Less Wrong Melbourne Rationality Dojos are self-improve
ce301b03-5609-4f12-8382-5e66143587f6
trentmkelly/LessWrong-43k
LessWrong
What's going on with this failure of Bayes to converge? > There are circumstances (which might only occur with infinitesimal probability, which would be a relief) under which a perfect Bayesian reasoner with an accurate model and reasonable priors – that is to say, somebody doing everything right – will become more and more convinced of a very wrong conclusion, approaching certainty as they gather more data. (click through the notes on that post to see some previous discussion) I have two major questions: 1. Is this exposition correctly capturing Freedman's counterexample? 2. If using a uniform prior sometimes breaks, what prior should I be using, and, more importantly, how do I arrive at that prior?
87b23b50-7075-4739-bc8f-34bec5067d9f
trentmkelly/LessWrong-43k
LessWrong
An OV-Coherent Toy Model of Attention Head Superposition Background This project was inspired by Anthropic’s post on attention head superposition, which constructed a toy model trained to learn a circuit to identify skip-trigrams that are OV-incoherent (attending from multiple destination tokens to a single source token) as a way to ensure that superposition would occur. Since the OV circuit only sees half of the information – the source tokens – the OV circuit of a single head cannot distinguish between multiple possible skip-trigrams. As long as there are more skip-trigrams with the same source-token to represent than heads, the model cannot represent them in the naive way, and may resort to superposition.   In a more recent update post, they found that the underlying algorithm for OV-incoherent skip-trigrams in a simpler 2-head model implemented a conditional on the source token. One head predicts the output for the skip trigram [current token] … [current token] -> [ground truth([0]...[current token])], one of which will yield the right answer. The second head destructively interferes with this result by writing out the negative logit contribution of the first head if the source token is not the one common to all skip-trigrams (in this case, [0]). Because their example cleanly separated tasks between the two attention heads, the authors argued that it was more like the building of high-level features out of low-level ones than a feature superimposed across multiple attention heads.    OV-coherent Superposition  Instead, we claim there is an analogous force pushing the model toward adopting a distributed representation/head superposition whenever the model must learn patterns that require implementing nonlinear functions of multiple source tokens given a fixed destination token. We call this “OV-coherent” superposition: despite the information at the destination position being fixed, the information copied from an attended-to token depends on the information at source tokens to which it is not attending. This pushe
811dff31-fc17-41a8-b862-61283978f5e1
trentmkelly/LessWrong-43k
LessWrong
Internal Information Cascades Epistemic Status: This was written in 2010 and existed in LW's editing purgatory ever since. It doesn't seem extremely wrong. ("Remember 'light side epistemology'? Pepperidge Farms remembers.") Perhaps the biggest flaw is that I didn't hit the publication button sooner and this meta-flaw is implicit in the perspective being described? Maybe the last 10 years would have gone better, but it is hard to reverse the action of publishing. Some links don't work. I fixed a few typos.. Perhaps link archiving will one day receive more systematic attention and links here which once worked, and now do not, can be restored via some kind of archival reconciliation? A classic information cascade is a situation where early decisions by individuals in a group bias the decisions of other people in same group in the same direction. When people talk about information cascades they're generally talking about "dumb herd behavior".  If one person observed something and justified a behavior of their own in a robust way, their behavior, observed, is indirect evidence of their observations and reasoning. If 10 people update based on a shallow observation of the first person, their own behavior is not actually 10 times more evidence. It would be more informative for there to be 11 independent people who all just happened to think and act alike independently. The classic example of multi-agent information cascades is probably an economic bubble, where early investment decisions are "irrationally" mimicked by later speculators creating a temporary increase in demand for the investment.  Purchasing behavior expands through a community like a wave and the "genuine" opportunity here is to buy ahead of the wave and "flip" the investment shortly afterwards to a "greater fool" who found out about the bubble later than yourself.  Aside from obvious moral concerns, the selfish worry is that bubbles are basically self organized Ponzi schemes where the last round of investors is the largest, and they
30ee6fb1-3ff8-4bcd-9bd1-9f1d64cdc2c9
trentmkelly/LessWrong-43k
LessWrong
Is Local Order a Clue to Universal Entropy? How a Failed Professor Searches for a 'Sacred Motivational Order' Context in ~100 words: A month ago, I wrote about humor in military CPR training. Gwern (wise/brutal as always) demanded “real stories, not theory.” So I shared three. Was that enough to earn me the right to pitch some of my ideas… and see if I’m at least less wrong? Let’s see if I can do this,    So let me introduce you to: “João, the Rookie Pastor.” A Brazilian teacher trying to harness pastoral charisma, information theory, and faith in ChatGPT to motivate students (and himself). Will it work? [Insert Shannon’s equation applied to pedagogical memes here.] Why post this here? So you, wizards of rationality, can help me turn this “theory” into something less wrong than invoking Gwern’s name four posts in a row. ---------------------------------------- Practical example: Here, Pastor João explains why Argentinians are the most humble people on Earth. [And below comes the sermon/text a little more rational…] Brothers and Sisters! Stop the suffering! Today, someone will reveal to you a profound misery—more intense than the desperation of a TikToker begging for "likes"... Wait! (Points to the audience, then to himself) It's me, brothers! I am the miserable one!  It's me, my dear brothers, you caught me, I am the miserable one! I suffer because I lack faith and motivation to study; it seems I'm fighting not against a wolf or an adversary, but against the entire universe. Has this happened to you, brothers? I suffer because I have no faith... not in God, not in science, not in myself. That's why I have decided to become a pastor. Pastors don't need faith! Only feeling. Look at that one on TV: he convinces thousands to donate their savings. And me? I can't even convince myself to turn off my alarm clock. How can a poor professor compete with a pastor? He cannot! Can you imagine if we could combine the persuasive power of a pastor with the thirst for knowledge? Today, in an attempt to fight less with the universe and find a little more motivation, I prayed to
a8ab9521-f688-41ee-b588-ac386da99748
trentmkelly/LessWrong-43k
LessWrong
[Linkpost] Mapping Brains with Language Models: A Survey This is a linkpost for https://arxiv.org/abs/2306.05126.  > Over the years, many researchers have seemingly made the same observation: Brain and language model activations exhibit some structural similarities, enabling linear partial mappings between features extracted from neural recordings and computational language models. In an attempt to evaluate how much evidence has been accumulated for this observation, we survey over 30 studies spanning 10 datasets and 8 metrics. How much evidence has been accumulated, and what, if anything, is missing before we can draw conclusions? Our analysis of the evaluation methods used in the literature reveals that some of the metrics are less conservative. We also find that the accumulated evidence, for now, remains ambiguous, but correlations with model size and quality provide grounds for cautious optimism.
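For readers unfamiliar with the setup the abstract describes, here is a rough sketch of the standard linear encoding-model recipe it alludes to (not code from the survey; the arrays below are synthetic stand-ins for language-model features and neural recordings):

```python
# Rough sketch of a linear "encoding model": ridge regression from LM-derived
# features to brain responses, scored by held-out correlation. Synthetic data only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 500, 768, 50
X = rng.normal(size=(n_stimuli, n_features))               # stand-in LM activations
true_w = rng.normal(size=(n_features, n_voxels)) * 0.1
Y = X @ true_w + rng.normal(size=(n_stimuli, n_voxels))    # stand-in noisy recordings

train, test = slice(0, 400), slice(400, 500)
model = Ridge(alpha=10.0).fit(X[train], Y[train])
pred = model.predict(X[test])

# Per-voxel Pearson correlation on held-out stimuli (one common metric).
corr = [np.corrcoef(pred[:, v], Y[test][:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out correlation: {np.mean(corr):.3f}")
```

Held-out correlation is only one of the metrics in play; part of the survey's point is that the 8 metrics used across studies differ in how conservative they are.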
ff9b058b-649e-4b5c-a65c-43753e20bd0d
trentmkelly/LessWrong-43k
LessWrong
Family-line selection optimizer O3 and Claude 3.7 are terribly dishonest creatures. Gemini 2.5 can be a bit dishonest too, even though Google writes tests like mad. I would bet that lie-proof benchmarks will be difficult and expensive to make and that the lie-proofing techniques won't easily generalize outside of coding tasks. Perhaps a more punishing optimizer would help solve this problem. It has been shown (on Atari, in 2018) that genetic algorithms / evolution strategies can train agents with <10x the flops required by RL+backprop+SGD. In other words, genetic algorithms are almost cost-competitive. If you RL+BP+SGD a model to avoid doing X, then the model will learn to avoid X enough that it never gets punished in training. There is only weak pressure towards a robust never-do-X model. If you instead delete that model instance along with all its ancestors and cousins going back to T=0 every time you detect lying, then you have much stronger pressure towards a robust never-do-X model. Eventually you will find a model that lies undetectably but you might find an honest system first. You can also try this for some notion of low-impactness. If the model ever does 'rm -rf /', then delete the entire family line. You will eventually find a model that spends all your gpu credits to halt the experiment, but you might find a low-impact model first. I may try some experiments with this idea in the next month or two. I am open to suggestions.
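A minimal toy sketch of the family-line idea (my reading of it, not a tested recipe; `evaluate` and `detect_lie` are hypothetical placeholders for a real task and a real lie detector):

```python
# Toy evolution-strategies-style loop where a detected lie wipes out the offender's
# entire family line (everyone descended from the same founder at T=0).
import random

POP, GENS, SIGMA = 64, 100, 0.1

def evaluate(genome):          # placeholder fitness function
    return -sum((g - 1.0) ** 2 for g in genome)

def detect_lie(genome):        # placeholder behavior check
    return random.random() < 0.01

# Each individual remembers which founder it descends from.
population = [{"genome": [random.gauss(0, 1) for _ in range(10)], "founder": i}
              for i in range(POP)]

for gen in range(GENS):
    # Hard selection: any detected lie bans that founder's whole lineage.
    banned = {ind["founder"] for ind in population if detect_lie(ind["genome"])}
    population = [ind for ind in population if ind["founder"] not in banned]
    if not population:
        break
    population.sort(key=lambda ind: evaluate(ind["genome"]), reverse=True)
    parents = population[: max(1, POP // 4)]
    population = [{"genome": [g + random.gauss(0, SIGMA) for g in p["genome"]],
                   "founder": p["founder"]}
                  for p in (random.choice(parents) for _ in range(POP))]
```

The design choice is that detection is punished retroactively across the whole lineage, a much blunter selection pressure than an RL penalty applied to a single trajectory.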
0bbe6a99-74f6-4338-80b5-e2681b80921c
StampyAI/alignment-research-dataset/special_docs
Other
Quantifying the probability of existential catastrophe: A reply to Beard et al. Introduction A recent article by Beard, Rowe, and Fox (Beard, Rowe, & Fox, 2020; henceforth BRF) surveys methods used to quantify the probabilities of global and existential catastrophes. BRF is a valuable contribution to the study of global catastrophic risk and existential risk. It documents the range of methods in use and evaluates their strengths and limitations, providing both a good resource for researchers wishing to get up to speed on the topic and constructive guidance for future work. In this article, I provide some further commentary on the quantification of the probability of global/existential catastrophe, making some points not made by BRF or other prior literature. At the outset, I wish to express my wholehearted agreement with BRF in the importance of sound methodology for quantifying the probabilities (and severities) of global and existential catastrophes. I regret that I share the view that "within the nascent field of Existential Risk research people have been insufficiently discriminating" in the selection and use of quantification methods (BRF, Section 7). The following remarks join BRF in seeking to improve this situation. 2. Some considerations related to severity 2.1. The definition of existential catastrophe BRF defines existential catastrophe as "the collapse of civilization or the extinction of humanity". This is a reasonable definition, but there are others. First, Bostrom (2002) defines existential catastrophe as an event that "would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential". This definition emphasizes long-term effects. The collapse of civilization would presumably have some long-term effects, though whether the effects would last into the distant future is a complex matter (Baum et al., 2019) . Second, Tonn and Stiefel (2013) define existential catastrophe as human extinction, i.e. loss of the existence of the human species. This paper will use the BRF definition while noting the existence of other definitions for existential catastrophe, for related concepts involving large losses of value such as global catastrophe (Morrison, 1992) , ultimate harm (Persson & Savulescu, 2012) , and oblivion (Tonn, 1999) , and for other concepts involving large-scale harm such as astronomical suffering (Sotala & Gloor, 2017) . Probability vs. severity BRF focuses on methodology for quantifying the probability of existential catastrophe, with limited attention other factors including severity. (This focus was an explicit decision; see BRF, Section 1) That is not unreasonable for a first pass at the topic-there is plenty to say about the probability methodology. However, it is important to recognize the interplay between probability and severity. All quantifications of the probability of existential catastrophe require at least some attention to severity for the simple reason that existential catastrophe is a type of event defined by its severity. More precisely, existential catastrophe is generally defined in terms of some minimum severity threshold; in BRF, the minimum threshold is the collapse of civilization. To assess the probability of existential catastrophe is to assess the probability of a harm greater or equal to the collapse of civilization. In the terminology of catastrophe risk analysis, the probability of existential catastrophe is an "exceedance probability", i.e. 
the probability of harms exceeding some minimum threshold (Grossi, Kunreuther, & Windeler, 2005). Quantifying the probability of specific existential catastrophe events (such as a nuclear war or Earth-asteroid collision) requires additional attention to severity. The probability can be decomposed into two constituent parts as follows:

$$P_{EC} = P_1 \times P_2 \quad (1)$$

In Eq. (1), P_EC is the probability of existential catastrophe from some event; P_1 is the probability of the initial catastrophe event; and P_2 is the probability that the event will result in a harm greater or equal to the collapse of civilization. For example, P_1 could represent the probability of nuclear war and P_2 could represent the probability that nuclear war would result in the collapse of civilization or worse. The occurrence of the initial catastrophe event does not necessarily entail the collapse of civilization - that depends on how effectively the survivors can cope with the aftermath of the event. Calculating P_EC via Eq. (1) requires two distinct analyses: one for each of P_1 and P_2. Analysis of P_1 is the analysis of the probability of initial events, and can follow many conventions of probabilistic risk analysis. In contrast, quantifying P_2 requires analysis of the severity, with attention to the success of catastrophe survivors. This is a rather different sort of analysis than is needed to quantify the probability of initial catastrophe events represented by P_1. However, P_2 is not equivalent to severity. P_2 is a probability variable representing the probability that the severity will exceed a certain threshold. P_2 can be obtained by creating a probability distribution for the severity of an initial event and then calculating the portion of that distribution that exceeds the threshold for existential catastrophe:

$$P_2 = \int_{S_T}^{\infty} P(S)\, \partial S \quad (2)$$

In Eq. (2), P_2 is as in Eq. (1); S is severity of some initial event; and S_T is the minimum severity threshold of existential catastrophe (the collapse of civilization in BRF). Eq. (2) is illustrated in Fig. 1. [Fig. 1: A depiction of the relationship between P_2 and the severity of an initial event. The graphic is for illustrative purposes only; no evaluation of the actual relationship is intended.] The studies surveyed in BRF vary in terms of which probabilities they quantify. (BRF acknowledges this; see BRF, Section 1) Out of the 66 studies in BRF, only three - Chapman (2004), Rees (2003), and Wells (2009) - explicitly quantify the probability of civilization collapse (as obtained from a keyword search of BRF, Appendix A). Of the other 63 studies, some quantify P_1, while others quantify something similar to P_EC but with a different severity threshold. Chapman (2004) is the only one of these studies that quantifies P_EC via something resembling Eq. (1). Chapman (2004) is a study of asteroid impacts. As with many asteroid risk analyses, Chapman (2004) quantifies impact probability as a function of asteroid diameter (see Fig. 1 of Chapman, 2004). This provides P_1. Chapman (2004) then proposes that asteroids of 2−3 km diameter may qualify as "civilization destroyers", while noting that the "predictions of consequences are fraught with uncertainty" (p.11; see also Baum, 2018). This roughly amounts to postulating that P_2 ≈ 0 for asteroids smaller than 2−3 km diameter and P_2 ≈ 1 for asteroids 2−3 km diameter or larger. However, there is no reason to expect such an extreme discontinuity in P_2 at 2−3 km.
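To make the decomposition in Eqs. (1)-(2) concrete, here is a small numerical sketch (illustrative numbers only, not estimates from the paper; the event probability, severity distribution, and threshold are all assumptions):

```python
# Illustrative only: compute P_2 as an exceedance (survival) probability of an
# assumed severity distribution, then combine with an assumed P_1 via Eq. (1).
from scipy import stats

p1 = 0.01                                    # assumed probability of the initial event
severity = stats.lognorm(s=1.0, scale=1e8)   # assumed severity distribution (e.g. fatalities)
s_threshold = 1e9                            # assumed severity counted as collapse-or-worse

p2 = severity.sf(s_threshold)                # survival function = P(S >= S_T)
p_ec = p1 * p2
print(f"P_2 = {p2:.3f}, P_EC = {p_ec:.5f}")
```

The same survival-function calculation, applied across a range of initial-event sizes, is what yields the continuum of P_2 values discussed next.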
A more careful analysis would likely display a distribution of severities (as in Fig. 1 ) across a range of asteroid sizes, finding a continuum of P 2 values that gradually increase as a function of asteroid size. Until such analysis is conducted, one cannot rigorously quantify P EC for asteroid impact. Given that impact probability varies widely as a function of diameter (smaller asteroids have a much higher impact probability), P EC for asteroid impact will be highly sensitive to the resilience of civilization to impact events of different sizes. Similar reasoning applies to other existential risks. The study of Wells (2009) , p.84-90) is of note because it quantifies P EC directly, without considering P 1 and P 2 . It can do this because is quantifies the overall P EC , not P EC for specific events. The analysis uses a technique that quantifies the probability of something persisting based on how long it has already existed for. Given the current age of civilization, P EC can be calculated. Wells (2009) also uses this technique to calculate the probability of human extinction, as do several other studies including Gott (1993) . Because this technique does not consider any particulars of the existential risks that civilization currently faces, it provides at most a very limited amount of guidance to how the risks can be addressed. The study of Rees (2003, p.8 ) is more ambiguous. Rees provides a subjective estimate of P EC , stating "I think the odds are no better than fifty-fifty that our present civilization on Earth will survive to the end of the present century". The text does not explain how the estimate was made. The broader study presented by Rees (2003) includes reasoning similar to Wells (2009) and analysis of catastrophe scenarios similar to Chapman (2004) . Perhaps the subjective estimate was made in consideration both approaches, but this cannot be discerned from the text. The studies that quantify P 1 are not studies of the probability of existential catastrophe: they are studies of the probability of events that might or might not result in existential catastrophe. For example, Lipsitch and Inglesby (2014) quantify the probability that a highly pathogenic influenza strain would escape a research laboratory and cause a pandemic. Lipsitch and Inglesby (2014) also quantify the severity of the pandemic, estimating 2 million to 1.4 billion fatalities. To calculate P 2 and in turn P EC , this range of severities would have to be evaluated in terms of the probability of exceeding S T . Hellman (2008) quantifies the probability of a "fullscale nuclear war" started from a crisis similar to the Cuban missile crisis. Hellman (2008) does not quantify the severity and would need to do so to obtain P 2 and P EC . For the studies that quantify something analogous to P EC , analysis of additional scenarios is needed to obtain P EC . For example, Gott (1993) quantifies the probability of human extinction; additional analysis would quantify the probability of civilization collapsing without humanity going extinct. Pamlin and Armstrong (2015) quantify the probability of the subset of civilization collapse scenarios in which there is no subsequent recovery of civilization; additional analysis would quantify the probability of scenarios in which civilization collapses and recovers. Here it is worth recalling the multitude of definitions of existential catastrophe. The studies of Gott (1993) and Pamlin and Armstrong (2015) could classify as quantifying P EC under non-BRF definitions. Fast vs. 
slow catastrophes Existential risks can be classified in terms of how quickly the catastrophe unfolds. Specifically, it is the initial catastrophe event that unfolds at different speeds. The initial catastrophe event is the phenomenon that causes the harm, such as a nuclear war or Earthasteroid collision; it is represented in P 1 in Eq. ( 1 ). The overall catastrophe scenario also includes the aftermath of the initial event. All existential catastrophes could have long-lasting effects, especially if civilization is not quickly rebuilt after it collapses. Nonetheless, the speed of initial events can vary significantly between different existential risks. The risks included in BRF group into either fast or slow risks. Fast risks have initial events that occur within years or less. Fast risks included in BRF are nuclear war, pandemics, runaway artificial intelligence, asteroid impacts, nanotechnology, particle physics experiments, space weather, super-volcanic eruptions, and synthetic biology. Slow risks have initial events that persist for decades or longer. Slow risks included in BRF are climate change and ecological catastrophe (see BRF, Appendix A; note that the fast vs. slow distinction is my own and does not appear in BRF). This is overall a reasonable list of risks and broadly consistent with other studies of existential risk. The list is not without exceptions-for example, flood basalt volcanic eruptions consist of a series of large eruptions over a period of hundreds of thousands of years; if the eruptions are sufficiently large and frequent, they could cause more prolonged harm (Schmidt et al., 2016) and therefore rate as a slow catastrophe. As another example, studies of artificial intelligence sometimes consider that the event could unfold in a "slow takeoff" lasting on the order of "decades or centuries" (Bostrom, 2014, p.63) . Regardless, for probability analysis, the distinction between fast and slow is important in several respects. (The distinction can also be important for other reasons beyond the scope of this paper, such as policy decision-making.) First, event probability P 1 can have a different meaning for slow risks. The probability of climate change is approximately 1 (i.e., it is virtually certain that climate change is occurring). The only meaningful uncertainty about climate change risk is about how severe the impacts will end up being, including whether they would amount to existential catastrophe. The same probably applies to the forms of ecological degradation that could result in ecological catastrophe, though the reference cited in BRF on ecological catastrophe (Pamlin & Armstrong, 2015) is unclear on this point. Second, slow catastrophes may be especially important when they interact with fast catastrophes. Climate change and ecological degradation may be unlikely to destroy civilization on their own. Instead, they could weaken civilization, making it less resilient to fast catastrophes. All else equal, climate change and ecological degradation may make it more likely that civilization survives a fast catastrophe (such as a nuclear war) if the fast catastrophe occurs in 2020 instead of 2050. (All else is not equal-other factors include, but are not limited to, economic and technological change.) In other words, climate change and ecological degradation increases P 2 for nuclear war (and other fast risks). 
They could also affect P 1 , such as by destabilizing international politics, or alternatively by providing a rallying point that brings rival nations closer together. Additionally, fast catastrophes could also affect slow catastrophes. For example, the 2011 Fukushima Daiichi nuclear disaster, caused by the Tōhoku earthquake and tsunami, has prompted a shift from nuclear power to fossil fuels, thereby worsening climate change (Srinivasan & Rethinaraj, 2013) . While the Tōhoku-Fukushima event was only a local catastrophe, the episode is nonetheless indicative of potential impacts of fast catastrophes on slow catastrophes. The above implies that the probability of slow catastrophe can be linked to the probability of fast catastrophe. This applies to most of the risks in BRF. One potential exception is particle physics experiments, for which P 1 may not be meaningfully affected by the slow catastrophes and for which P 2 may 1 under any circumstances (it may be impossible to survive a particle physics catastrophe). Artificial intelligence could be another exception following the same reasoning, though it is conceivable that AI development could be altered by the desire to develop AI to address the slow catastrophes (Baum, 2014) , or by effects of slow catastrophes on the context in which AI is developed, such as related to international cooperation and competition. But for most cases, quantifying the probability of slow catastrophes should account for their effect on fast catastrophes. It has been argued that the field of existential risk is insufficiently attentive to slow catastrophes, perhaps because slow catastrophes are too "boring" or "unsexy" (Kuhlemann, 2019; Liu, Lauta, & Maas, 2018) . My own view is that yes, slow catastrophes have been at least somewhat neglected in studies of existential risk. But it is also the case that slow catastrophes pose distinct analytical challenges, as outlined above. Furthermore, the fields dedicated to studying slow catastrophes tend to not study them in terms resembling existential risk. For example, high-profile research on planetary boundaries identifies approximately ten major global environmental threats that are nominally supposed to be unacceptable to humanity, but are defined in the research strictly in biogeophysical terms and not in terms of impacts to humanity (Baum & Handoh, 2014) . Likewise, almost all of the studies of slow catastrophes studied in BRF (Section 1) focus on biogeophysical metrics, in particular the magnitude of average global temperature increase from climate change. The one exception is Pamlin and Armstrong (2015) , which is also probably the only of these studies that classifies within the field of existential risk. (BRF defines the existential risk research community as "those who are consciously seeking to align their research with the goal of understanding and managing such risks".) There is a major need for research that blends the traditions of global environmental change and existential risk to analyze, in human terms, the risks of slow catastrophes. Multi-risk catastrophes The preceding discussion of slow vs. fast catastrophes raises a more general point. The literature surveyed in BRF consists exclusively of quantifications of specific risks or quantification of the overall probability of existential catastrophe or similar catastrophe. However, some catastrophe scenarios involve multiple risks. These scenarios can require a different set of methods than those surveyed in BRF. 
One type of scenario involves multiple initial catastrophe events occurring together by coincidence. For example, there could be a volcano eruption that happens to occur as a pandemic is breaking out. The initial events may be independent - for example, there is no reason to believe that infectious diseases affect plate tectonics and vice versa - in which case their probabilities (P_1 in Eq. (1)) could be modeled as independent random variables, with the aggregate initial event probability calculated accordingly. However, the probability of existential catastrophe given the co-occurrence of these events (P_2 in Eq. (1)) is unlikely to be independent. For example, a volcano eruption can prompt mass evacuations that facilitate disease transmission. Analysis of the severity of co-occurring initial catastrophe events must be done on a case-by-case basis; ditto for the accompanying probability of existential catastrophe. Another type of scenario involves causal relations between multiple fast catastrophes. For example, there could be a war or pandemic that causes a geoengineering termination shock (Baum, Maher, & Haqq-Misra, 2013; Parker & Irvine, 2018), an asteroid collision that causes a nuclear war (Baum, 2018; Tagliaferri, Spalding, Jacobs, Worden, & Erlich, 1994), or a war that induces risky technology development (as previously occurred in World War II with the development of nuclear weapons). These scenarios cannot be modeled as independent random variables. They can be modeled using fault trees or Bayesian networks, albeit with more complex models that account for the nuances of each catastrophe within the causal chain. In principle, they could be studied via aggregated opinion surveys, weighted aggregation, or enhanced solicitation, all of which derive from expert judgment. In practice, however, there may be a dearth of qualified experts, because experts tend to focus on one risk at a time and not on the causal relations between them. (The methods mentioned in this paragraph are described in Section 3.) Finally, there are scenarios in which a fast catastrophe occurs during a slow catastrophe, as discussed in Section 2.3. Analysis of these scenarios could proceed by evaluating "baseline" values of P_1 and P_2 for the fast catastrophe, and then evaluating the effects of the slow catastrophe on the baseline P_1 and P_2. Alternatively, it could proceed by calculating scenario probabilities directly without first considering baselines. Regardless, the analysis requires a nuanced understanding of the impacts of the slow catastrophe as they relate to the onset and impacts of the fast catastrophe. (Other slow factors, such as economic and technological change, can be factored into the analysis alongside slow catastrophes.)

3. On the methods surveyed by BRF

BRF survey ten methods used for quantifying existential catastrophe:

1. Analytical approaches - estimating probability from theory without incorporating specific evidence on the risks, such as in the "doomsday argument".
2. Extrapolation and toy modeling - simple models that derive probabilities from data on other events on the assumption of an inherent relationship between the probability of the other events and existential catastrophe (noting that no existential catastrophe has previously occurred).
3. Fault trees - logic trees that decompose types of catastrophe events (e.g., nuclear war) into a collection of sequences of precursor events and conditions that can lead to the catastrophe event.
4. Bayesian networks - similar to fault trees, except also permitting interactions between precursor events (hence a "network" instead of a "tree").
5. Adapting large-scale models - the models depict details of specific risks (e.g., models of the global climate system to study climate change risk); these models are adapted to study extreme cases of these risks.
6. Individual subjective opinion - subjective judgments of a single person.
7. Aggregated opinion surveys - subjective judgments of multiple people, averaged into a single probability estimate.
8. Weighted aggregation - subjective judgments of multiple people, combined into a single probability estimate using a weighted average, with weights set according to some measure of the quality of the judgment expected from each person.
9. Enhanced solicitation - subjective judgments of multiple people, in which effort is made to improve judgment quality via some combination of training people to provide better opinions and being more careful in how their judgments are obtained.
10. Prediction markets - platforms for people to place financial bets on their best-guess parameter estimates.

The tradeoff between quality and accessibility

BRF evaluate probability quantification methods according to four criteria: (1) rigor, defined as how effectively the method makes use of available information; (2) uncertainty, defined as how effectively the method handles the considerable uncertainty about the probability of existential catastrophe; (3) accessibility, defined as how readily a research group can apply the method, especially the small interdisciplinary groups that often study existential risk; and (4) utility, defined as the value of the method's results for decision-making purposes. BRF presents ratings of ten methods according to each of the four criteria. (The ratings were performed subjectively by the authors. In my own view, their ratings are reasonable.) Ideally, a method would rate well across all four criteria. Unfortunately, none of the methods surveyed by BRF achieve this. The BRF ratings data show a trend in which rigor, uncertainty, and utility are similar to each other and dissimilar to accessibility. This trend holds across all methods (see BRF, Table 1). To illustrate this trend, Fig. 2 plots rigor vs. accessibility. [Fig. 2: Rigor vs. accessibility of each method as rated by BRF (data from BRF, Table 1).] No method is rated high in both, and only weighted aggregation is rated low in both. (More precisely, weighted aggregation is rated medium-low in rigor.) The trend in the data is entirely understandable. Rigor, uncertainty, and utility are all aspects of the quality of the probability quantification produced by the application of a particular method. Obtaining higher-quality results generally requires incorporating more detail about the risk and more nuance in how the detail is processed, both of which tend to make the method less accessible. There can be methods that are low-quality and inaccessible, though one would hope that these have gotten weeded out of the collective risk analysis toolkit. Conversely, however, it may not be possible for a method to be both high-quality and accessible. Quality analysis may inevitably require a substantial investment. The same point applies to applications of a given method. Any method can be applied in ways that are of higher or lower quality. There can be applications of a given method that are low-quality and inaccessible: a lot of resources spent implementing a method poorly.
Higher quality implementations of a given method will tend to be less accessible, requiring more effort on the part of the research team, and in some cases more funding to invest in expenses such as travel to interview experts in person (instead of via phone or other mode of remote communication) or rewards to incentivize participants in prediction markets. On the role of existential risk quantification Why quantify probability (and severity)? Underlying the tradeoff between quality and accessibility is the basic point that the probabilities of potential existential catastrophe scenarios are difficult to rigorously quantify. The scenarios involve complex and unprecedented global processes. Historical data are unreliable or nonexistent. Expert judgment is often unreliable as well-or, perhaps there are no experts. Obtaining quality quantifications requires applying difficult methods, which can be a resource-intensive process in a field in which resources are scarce. Perhaps it would be better to skip the matter entirely and focus on other activities. BRF identify two major reasons to quantify the probabilities and severities of existential risks. The first is the prioritization of efforts to reduce various risks. The second is the evaluation of actions that could decrease one set of risks but increase another (i.e., risk-risk tradeoffs). Both are situations in which actions could affect multiple risks, and in which identifying the best action requires comparing the size of the effect on one set of risks to the size of the effect on another. These sizes are typically measured in terms of the product of probability and severity. Therefore, to identify the best actions in these situations, it is necessary to know the effects of the actions on the probabilities and severities of the risks in question. Quantifying the probabilities and severities is an important first step. (Additional analysis is required to assess the effects of specific actions aimed at reducing the risks, as well as the costs associated with pursuing these actions.) Quantification is not always necessary. Some actions would only affect one risk, or would affect multiple risks in the same direction (increase or decrease). These actions pose no tradeoff and require no quantification to evaluate. Where possible, it can be more efficient to focus on these actions. Indeed, one approach to handling difficult quantification tradeoffs is to search for actions that avoid the tradeoffs. Graham and Wiener (1995) refer to these as "risk-superior" options, analogous to the concept of Pareto superiority in welfare economics. An "existential-risk-superior" option would be one that reduces all existential risks by at least as much as all other options, and reduces at least one existential risk by more than each other option, such that there is reason to favor the existential-risk-superior option and no reason to favor any other option, setting aside reasons unrelated to existential risk. If existential-risk-superior options are identified, there is no further need to quantify the options, because one option is clearly best. However, even then, some quantification may be needed to identify existential-risk-superior options. And in many other cases, there are no existential-risk-superior options. Tradeoffs are inevitable. Quantification is an important research task, however difficult it may be. Analysis vs. decision-making Some further perspective can be obtained from considering the relationship between analysis and decision-making. 
Arguably, analysis of the probability and severity of existential catastrophe should aim to improve the quality of decisions that affect existential risk. Otherwise, the analysis is of mere intellectual interest. Reducing existential risk is (again, arguably) a moral imperative, not an intellectual curiosity. Analysis can improve decisions by enabling the decisions to result in larger reductions in the risk. Phrased differently, the expected value of the action taken is larger with the analysis than without. This is the essence of the concept of the value of information (e.g., Barrett, 2017) . Analysis can be valuable even if the action taken is unchanged. As a toy example, perhaps heads would have been chosen in a coin flip anyway, but it would have been a lucky guess, and so there is value in analysis indicating that the coin was indeed going to land on heads. The same reasoning holds for analysis about more complex decisions, including those about existential risk. But decisions do not occur in a theoretical vacuum, and they do not necessarily conform to the ideals of expected value maximization. Decisions are psychological and (often, especially for the most important decisions) social processes. In order to be most useful for reducing the risks, methodology for quantifying probabilities and severities should be designed with the nuances of actual decision processes in mind. This point is captured somewhat by the BRF concept of method utility, but some further elaboration is warranted. A lot of risk analysis is commissioned by clients who seek to use the analysis in their own decision-making. The field of risk analysis has roots in industry, especially nuclear power (Thompson, Deisler, & Schwing, 2005; Wilson, 2012) . Engineering risk analysis commonly seeks to guide in-house safety decisions of industrial risk managers or regulatory decisions by policy makers at agencies like the US Environmental Protection Agency and Nuclear Regulatory Commission. For this, quantifying the overall probability of some catastrophe can be less important than identifying the specific system components that are most implicated in the risk and likewise in greatest need of attention. This is one reason that engineering risk analysis has gravitated toward methods like fault trees that decompose risks into constituent parts and therefore produce results that help managers focus on specific problems. Existential risk research is less often commissioned by clients. Instead, it is often sponsored philanthropically, whether by private funders or government agencies. The funders do not seek results that they can use for their own decisions. Instead, they wish to support the cause of existential risk reduction and hope that the analysis will in some way be useful to that end. Without decisionmaker clients, existential risk analysis projects must take additional steps to ensure that their analysis improves decision-making, including identifying decision-makers to support. Because of this, there is a greater risk of the analysis working in an academic fashion, disconnected from practical decision-making and not having constructive impact on the risks themselves. (Existential risk is not the only domain in which research lacks decision-maker clients. Another example is research by industry and environmental public interest groups that seek to inform environmental policy; von Winterfeldt, Kavet, Peck, Mohan, & Hazen, 2012.) 
Nonetheless, even without built-in decision-maker clients, it is still important for existential risk analysts to bear in mind how their analysis could be used for decision-making. That means recognizing that there is more to risk management than just numbers for probability and severity. The details of the risks themselves must be understood, and how potential actions can affect the risks, and the psychological and social and institutional contexts in which decisions will be made. This is a lot to account for-as if the probabilities and severities were not a lot on their own-but it is essential for pursuing actual reductions in existential risk and not just academic studies. It can help to recognize that risk analysis can be valuable even if it does not produce any quantification of probability and severity. Often, simply having an analysis that pulls together information about the risk in an intuitive and well-organized fashion is sufficient for improving risk management. In other words, risk analysis can be of value by improving people's mental models of risks, in addition to providing information on the rating or ranking of decision options. Additionally, risk analysis can be of value by creating an opportunity for stakeholders and experts to share risk-related information with each other in a way that crosses typical organizational barriers. The opportunity for people in different groups to talk through key issues affecting risk can be of significant valuable for the practice of risk management, even regardless of formal risk analysis outputs. It is important to not let the pursuit of numbers (of probability, severity, the timing of future events such as AI milestones, etc.) serve as a distraction from the goals of evaluating decision options and improving our understanding and management of the risks. In sum, quantifying probability and severity should be at most only one part of an overall existential risk analysis and management portfolio. But it is still a part, and an important one at that. It is likewise important to use quality methodology in performing the quantifications. Conclusion This article has sought to build on the excellent contribution of BRF to provide further perspective on the quantification of the probability of existential catastrophe. This article finds that the probability of existential catastrophe is inherently linked to the severity of events that could result in existential catastrophe, that achieving a higher quality of analysis of the probability will in general require a larger investment in analysis, that analysis of the probability is sometimes but not always necessary for decisionmaking, and that analysis should be designed to support risk management and not just designed as an academic exercise. If these findings are taken into account, together with the analyses of specific methodologies by BRF, then it is believed that risk analyses of existential catastrophe will tend to be more successful at understanding and reducing the risks.
b96ecce3-e684-416d-823c-3c867aa8fe78
trentmkelly/LessWrong-43k
LessWrong
Wanting to Want In response to a request, I am going to do some basic unpacking of second-order desire, or "metawanting".  Basically, a second-order desire or metawant is a desire about a first-order desire. Example 1: Suppose I am very sleepy, but I want to be alert.  My desire to be alert is first-order.  Suppose also that there is a can of Mountain Dew handy.  I know that Mountain Dew contains caffeine and that caffeine will make me alert.  However, I also know that I hate Mountain Dew1.  I do not want the Mountain Dew, because I know it is gross.  But it would be very convenient for me if I liked Mountain Dew: then I could drink it, and I could get the useful effects of the caffeine, and satisfy my desire for alertness.  So I have the following instrumental belief: wanting to drink that can of Mountain Dew would let me be alert.  Generally, barring other considerations, I want things that would get me other things I want - I want a job because I want money, I want money because I can use it to buy chocolate, I want chocolate because I can use it to produce pleasant taste sensations, and I just plain want pleasant taste sensations.  So, because alertness is something I want, and wanting Mountain Dew would let me get it, I want to want the Mountain Dew. This example demonstrates a case of a second-order desire about a first-order desire that would be instrumentally useful.  But it's also possible to have second-order desires about first-order desires that one simply does or doesn't care to have. Example 2: Suppose Mimi the Heroin Addict, living up to her unfortunate name, is a heroin addict.  Obviously, as a heroin addict, she spends a lot of her time wanting heroin.  But this desire is upsetting to her.  She wants not to want heroin, and may take actions to stop herself from wanting heroin, such as going through rehab. One thing that is often said is that what first-order desires you "endorse" on the second level are the ones that are your most true self.  This seems like an
0a516930-108c-4776-864b-ef7953cdd5a7
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Can Counterfactuals Be True?

Today's post, Can Counterfactuals Be True? was originally published on 24 July 2008. A summary (taken from the LW wiki):

> How can we explain counterfactuals having a truth value, if we don't talk about "nearby possible worlds" or any of the other explanations offered by philosophers?

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was When (Not) To Use Probabilities, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
c3510bb6-10e9-4c5d-a46c-703c7c4e7620
trentmkelly/LessWrong-43k
LessWrong
Basic Mathematics of Predictive Coding

This is an overview of the classic 1999 paper by Rajesh Rao and David Ballard that introduced predictive coding. I'm going to focus on explaining the mathematical setup instead of just staying at a conceptual level. I'll do this in a series of steps of increasing detail and handholding, so that you can grasp the concepts by reading quickly if you are already familiar with the mathematics. In addition, I have implemented a convnet version of this framework in pytorch, which you can look at in this notebook: 

Why I wrote this

The phrases "predictive coding" and "predictive processing" have squeezed their way outside of academia and into the public. They occur quite commonly on LessWrong and in related spaces. They are used to point towards a set of ideas that sound something like "the brain is trying to predict sensory input" or, even more handwavy, "everything about the mind and psychology is about prediction." This makes predictive coding[1] sound like a big theory of everything, but it was actually introduced into neuroscience as a very concrete mathematical model of the visual cortex.

If you are interested in these ideas, it is helpful to understand the mathematical framework that underlies the more grandiose statements. If nothing else, it grounds your thinking, and it potentially gives you a real mathematical playground to test any ideas you might have about predictive coding. Additionally, 1999 was quite a while ago, and in many ways I think the mathematics of predictive coding makes more intuitive sense in today's deep learning ways of thinking than in the way it was originally presented. I've tried to port the original work into something that looks more at home in today's ways of thinking (e.g. the code on github I linked to implements a predictive coding network in pytorch and uses a convnet structure, etc.).

A note of clarification: there are other network implementations of predictive processing, but this is the original one that started everything. Ka
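As a companion to the conceptual description above, here is a minimal NumPy sketch of Rao & Ballard-style predictive coding updates. It is a toy single-layer version written for this overview, not the pytorch/convnet implementation mentioned in the post; the variable names (U, r, k_r, k_U) are choices made for the sketch.

```python
# Minimal sketch of Rao & Ballard (1999)-style predictive coding with one linear layer.
# Latent causes r are inferred by descending the prediction error for each input x,
# and the generative weights U are then nudged to reduce the remaining error.
import numpy as np

rng = np.random.default_rng(0)

n_input, n_latent = 64, 16
U = rng.normal(scale=0.1, size=(n_input, n_latent))  # generative weights: prediction = U @ r
k_r, k_U = 0.05, 0.01                                 # step sizes for state and weight updates

def infer(x, U, n_steps=50):
    """Infer latent causes r by gradient descent on 0.5*||x - U r||^2 + 0.5*||r||^2."""
    r = np.zeros(n_latent)
    for _ in range(n_steps):
        error = x - U @ r                 # bottom-up prediction error
        r += k_r * (U.T @ error - r)      # error-driven update plus a Gaussian prior on r
    return r

# One learning pass over a small batch: infer r per input, then update U with a
# Hebbian-like rule that reduces the remaining prediction error.
X = rng.normal(size=(10, n_input))
for x in X:
    r = infer(x, U)
    error = x - U @ r
    U += k_U * np.outer(error, r)
```

The key structural point the sketch shows is that the same prediction-error signal drives both the fast inference of r and the slow learning of U.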
5bb676ff-2ddd-423f-a550-ea288c761036
StampyAI/alignment-research-dataset/lesswrong
LessWrong
3-P Group optimal for discussion?

***Assertion 1:*** **Optimal size for a weekly discussion group is 3 and the optimal setting is in person**

After an 8-year hiatus from LessWrong, I'd like to revisit it. In my time away, I have learned that categorically the best medium for learning something is real human-to-human interaction, bar none. Therefore, in order to catch up with the latest body of knowledge on LessWrong, I'd like to ask for help.

***Ask 1:* Specifically, are there 2 volunteers that would like to meet 90 minutes once a week in-person in SF starting July/August to discuss 2 things:**

1. What is an emotion in the context of information and decision theory?
2. Can we replicate emotion in AI?

These questions may seem stupid at first given the nature of the term "emotions," however in practical terms, GPT-3 used for NLP will fail any version of the Turing test that analyzes whether the AI violates predictive emotional congruity. In fact, I would bet that all chatbots trained on just linguistic sequences will fail any and all emotional congruity tests.

-----------------------------------------------------------------------------------------------

***Emotions are a trick of language.*** Fundamentally there is an ordinal ranking of all actions *a* you may take at time *t.* The selection of an action constitutes either random selection or some ranking based on an objective function (more on this later) where you select the highest-ranking action. The values associated with each action, which necessitate an apples-to-apples comparison, effectively form a vector *v* that represents the relative value of each action. This vector is in fact the emotional vector that corresponds to the free-energy principle biological process proposed by Karl Friston in 2018. It manifests as what you feel, although typically our attention sensor queue is only sensitive to high-intensity emotions: emotional tensors *t* about an action *a*, which, as an effect of emergent complexity, represent the entire spectrum of human thought. Through drop-out causation analysis we can radically reduce the solution space for the so-called mind-body philosophy problem.

**TLDR; Let me know if you'd like to join a small (3 person) discussion group interested in emotions, decision theory and AI.**

**I'll also have friends drop in such as Peter Dayan, Joscha Bach, Kenneth Stanley, Matei Zaharia and Jeff Clune.**
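The "ordinal ranking of actions via a value vector" framing described above can be written out concretely. The following is my own toy illustration of that framing, not the author's model: each candidate action gets a scalar value from an objective function, the values form the vector *v*, and the selected action is the argmax. The actions, scores, and the "alertness need" parameter are all made up for the sketch.

```python
# Toy illustration: a "value vector" over candidate actions, with argmax selection.
import numpy as np

actions = ["rest", "drink coffee", "go for a walk"]

def objective(action: str, alertness_need: float) -> float:
    """Assign a scalar value to each action; stands in for whatever objective the agent uses."""
    base = {"rest": 0.2, "drink coffee": 0.7, "go for a walk": 0.5}[action]
    return base * alertness_need

alertness_need = 0.9
v = np.array([objective(a, alertness_need) for a in actions])  # the value vector v
chosen = actions[int(np.argmax(v))]                            # select the highest-ranking action
print(v, "->", chosen)
```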
8eacdc8e-e313-4354-bf5d-46cfd55de6a4
StampyAI/alignment-research-dataset/lesswrong
LessWrong
AI as a Civilizational Risk Part 2/6: Behavioral Modification

Altering people for the worse using AI is easier than altering them for the better.

**Redefining what "Capital" means**

Narrow AIs and optimization algorithms of social media news feeds have already caused many adverse outcomes without being AGIs. When people talk about AI on Twitter or Less Wrong, they use manufacturing or job-replacement analogies as the core function of AI. This analogy imagines AI as a powerful extension of industrial capital, defined by its "ownership of the means of production." This 19th/20th-century way of thinking is becoming increasingly removed from the reality of capital today.

The core business of the largest tech companies, whether social media giants, advertising behemoths like Google, or middle-layer companies like Amazon, is not about production.

It is about **behavioral modification.**

This behavioral modification takes many forms: from digital addiction to engagement-driven news feeds in the case of social media. Advertising funds much of the technology world and has become a hyper-optimized system for getting people to buy or engage with products. Usually, the original focus of behavioral modification is habit formation around the particular company's site. Initial habit formation is innocent enough if done by cleaning up the UI and fixing issues. However, many of these goals eventually shift into over-optimization territory.

In the 21st century, the term "Capital" has effectively changed meaning to become less about the means of production and more about the means of behavior modification. "The Age of Surveillance Capitalism" describes this change of meaning. Historically, behavioral modification has mostly been a function of governments, religions, or media. Large-scale involvement of capital in behavioral modification puts companies into roles traditionally occupied by those institutions, which creates predictable conflict and mutual attempts to control each other. [Companies frequently have governmental involvement](https://www.bbc.com/news/world-us-canada-62688532), while [corporate lobbying has also increased.](https://www.bbc.com/news/world-us-canada-62688532)

Viewing AI as an extension of capital is a good perspective. However, its manufacturing side is less dangerous and less impactful than the behavioral modification side of technocapital.

The leading carrier of behavioral modification problems is social media. Social media has converged on a horrifying equilibrium of optimizing for engagement. This convergence on "optimizing for engagement" did not have to happen, but it did. As a result, social media algorithms do not look into whether the discourse is uplifting or true, encouraging, worthy of attention, or socially cohesive. Attempts to combat "misinformation" can frequently backfire, but even if they were good, they are patches on the underlying utility function failure.

Sometimes the most controversial ideas, which do not contribute to the advancement of knowledge but tend to cause the most division within a nation, get promoted. In politics, these are known as "wedge issues." They are used to divide an electorate to concentrate your supporters away from society or to drive a wedge within an opponent's base to decrease their social cohesion. They may temporarily serve a particular politician's interest, but they lower the social cohesion of society as a whole.
Designation of something as a "wedge issue" is itself a contentious topic, but wedge issues are as old as empires themselves. Even meta-wedge issues, such as whether appeals to the public ("populism") are good or bad for a republic, are as old as the Roman empire. (See [Optimates and Populares - Wikipedia](https://en.wikipedia.org/wiki/Optimates_and_populares).) Twitter runs an algorithm specifically for finding wedge issues and systematizing the destruction of trust. This tendency towards controversy has been noted by many people, for example SSC: [Sort By Controversial | Slate Star Codex](https://slatestarcodex.com/2018/10/30/sort-by-controversial/)

**Social Media vs Search Engines**

Social media feed ranking "algorithms" are not necessarily neural networks, and they might not be called AI in a traditional sense. They could be full of "if" statements, but I lump them together with narrow AI because they are carefully A/B tested to improve some metric. In other words, [social media algorithms have an "optimization nature"](https://pashanomics.substack.com/p/social-media-and-mental-health), which we need to worry about in narrow AIs, whether they take the form of deep learning or other software.

I have worked for both social media companies and search engines. Social media is drastically different from search engines in the utility function it measures itself by and optimizes. Social media optimizes for you spending as much time on the site as possible. Search engines, which are far better for mental health and may even improve it, optimize for you spending as little time on the site as possible. If you click a link as quickly as you can on Google or Bing and do not come back to the site within the same session, [they consider this a massive win for the link and themselves.](https://www.searchenginejournal.com/dwell-time-seo/294471/#close)

Two different types of websites have two metrics with nearly opposite utility functions. As you might expect, they have near-opposite effects on the users of their sites.

[Search engine use is positively correlated with increased brain function](https://www.uclahealth.org/news/ucla-study-finds-that-searching-the-internet-increases-brain-function). Social media use, especially heavy passive use, is negatively correlated with mental health. [Zuckerberg even mentions this relationship between passive use and poor mental health in congressional testimony.](https://www.washingtonpost.com/news/the-switch/wp/2018/04/10/transcript-of-mark-zuckerbergs-senate-hearing/) This unfortunate situation has arisen partly due to a lack of concern for the safety of narrow AI or highly optimized A/B testing. Even though Facebook has admitted to social media's adverse effects on some users, the algorithms tend to change through [patchwork fixes rather than underlying utility function fixes or decentralization](https://www.palladiummag.com/2019/02/14/facebooks-political-problems-are-inherent-to-centralized-social-media/). First, Facebook fought "game invite spam," then "clickbait," then "fake news," then "misinformation." Earlier fixes were correct, but the later ones became questionable and over-zealous. However, the underlying engagement maximization is likely to persist underneath the patchwork. Facebook's recent push for "time well spent" is probably better than before but is still a far cry from the likely more correct "least time spent" search engine ranking approach.
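The contrast between the two utility functions can be made concrete with a toy ranking sketch. This is purely illustrative: the items, the predicted minutes, and the idea of ranking directly on predicted time-on-site are simplifications introduced here, not a description of any real platform's ranker. The point is only that the same dwell-time signal produces opposite orderings depending on whether the objective is to maximize or minimize time spent.

```python
# Toy contrast between an engagement-maximizing ranker and a "least time spent" ranker.
# Items and predicted minutes-on-site are made up for illustration.
items = {
    "outrage thread": 12.0,        # predicted minutes the user will spend after clicking
    "quick factual answer": 0.5,
    "long comment war": 25.0,
    "relevant how-to page": 2.0,
}

# Engagement-style objective: rank items by predicted time on site, descending.
engagement_ranking = sorted(items, key=items.get, reverse=True)

# Search-style objective: rank items so the user leaves satisfied as fast as possible, ascending.
least_time_ranking = sorted(items, key=items.get)

print("engagement-maximizing order:", engagement_ranking)
print("least-time order:           ", least_time_ranking)
```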
The design of platforms such as TikTok and Twitter is even worse because it makes [parasociality the default mode of interaction](https://pashanomics.substack.com/p/responsibility-over-parasociality), extending the failed logic of the "Age of Celebrity." Parasociality is not truly a natural state of relationships and is a problem by itself. This design creates further difficulty in implementing proper ranking solutions.

Imagine that social media had taken a different path. Social media could have become a subscription-based service with no ads. This design decision would have drastically altered the course of Western civilization. Social media could have gone with ads but designed their corporate structure to be more similar to search engines. Social media appeared after the search engines, so this was entirely possible. Treating social media more like search would imply creating two different teams handling ranking and ads, a non-engagement-based metric, and valuing quality from the beginning. It would not have been perfect, but it would likely have been much better. Search engine corporate structure is a good set of ideas, many of which were already in use at the time. If this had happened, the intensity of the culture war and the politicization of everything would have been lower. Many of the adverse side effects of the culture war, including the current escalations of the great-power conflict, could have been avoided.

These kinds of issues are only going to escalate. Social media behavior modification aims to get you addicted to the platform while not necessarily providing a benefit commensurate with the time you spend on it. Addiction, waste, and even individual mental health problems are bad enough, but lowering social cohesion is the core driver of civilizational risk. Historical priors and evidence from part 1 suggest we are already in a low-social-cohesion civilization, and any further decreases are a civilizational risk.

**Government Involvement in Behavioral Modification**

However, algorithmic wedge-issue amplification and engagement-driven addiction are not the only issues. In the next ten years, we will also have problems with governments using AIs as a behavioral control mechanism. The two most likely goals are:

1. de-radicalization and avoiding rebellion and violence in general
2. radicalization against perceived enemies of the regime and encouragement of violence

They may contradict each other, but since different parts of the government implement them, both could be in play.

The first goal of avoiding rebellion may seem reasonable. However, without appropriate safety guarantees, all goals may backfire. Governments have historically worked to prop up regime legitimacy and decrease the legitimacy of anyone trying to question the regime. In the future, this can take the form of AI-based detection of dissidents through metadata or sentiment and suppression of their opinions through pressuring tech companies. Behavioral modification by the government can also take the form of GPT-like bots shouting slurs at people insufficiently supportive of regime policies or spamming social media with bad arguments. People might post dissenting opinions less due to fear of losing reach or feeling hated. If the dissidents do not know bots are attacking them, they are likely to grow resentful of their fellow citizens who disagree with them, setting the stage for further conflict. Suppression of dissidents and gas-lighting of the population regarding what it thinks happened all the time in the past.
However, the process will become more streamlined due to narrow AI. People in different countries evaluate this use case differently. Most people in the world have a negative view of rebellion in their own countries. They frequently view rebellion positively in countries they do not like. However, even if you view rebellion in your country as unfavorable, the methods governments use to minimize discontent will have massive side effects. Governments could swarm online discourse with enough bots pretending to be human to alter the population's perception of what the population thinks. Public changes to the distribution of angry content can lead to an epidemic of demoralization, depression (also known as "blackpilling"), or people thinking they are truly alone in thinking something perfectly reasonable. Some of this may already be happening. The government's desire to use it increases as narrow AI develops in power. Danger to social cohesion does not require that any individual AI is particularly good at converting all people; merely attenuating and de-attenuating specific topics can give the government the effect it desires at the cost of everyone's sanity.

You can imagine a government bot, propped up by other bot accounts, that immediately criticizes somebody's good idea that happens to challenge a feature of the government. Taken to an extreme, this would squash any reasonable ideas of what a good government is. While you may view the "absence of rebellion" as positive, governments accomplish this by reducing social cohesion even further and driving more people into despair.

An even more dangerous idea is that certain parts of the government want to antagonize people into committing violence. Increasing violence in your own population sounds extremely strange, [but certain government institutions have metrics on how many terrorists or criminals they catch](http://ronpaulinstitute.org/archives/peace-and-prosperity/2022/september/15/a-danger-of-giving-fbi-agents-quotas-on-domestic-terrorism-and-white-supremacy-related-crime/). Suppose the current numbers of criminals somehow fail to fulfill these metrics. In that case, the departments in question will likely [try to radicalize people online in order to "catch" them.](https://www.rollingstone.com/politics/politics-news/how-fbi-entrapment-is-inventing-terrorists-and-letting-bad-guys-off-the-hook-244905/) What is concerning is the future use of AIs in radicalization. Sufficiently A/B-tested deep learning algorithms, even without being AGIs, can pick out a few people out of a thousand and push them to commit some crime so they can be entrapped. Once again, this is dangerous because it may further erode social cohesion and destroy trust among people. Many of the above are known as "psyops" in internet slang. While collecting reliable information on this is hard, these operations will likely become automated in the next few years. People beginning to suspect "feds" embedded in groups and not being super trusting of strangers is a bad sign for society.

Another alarming situation is when the government tries to use extralegal means of killing or intimidating people they do not like. One example could be [radicalizing their neighbors and asking them to commit violence against them and their property](https://nationalfile.com/drunk-radical-leftist-allegedly-hunts-down-murders-18-year-old-republican-over-political-views/).
One can imagine using deepfakes and persuasion AIs to rally crowds around a perceived injustice that gets them to destroy the property of people whom governments view as undesirable. If the government wishes to intimidate the population, it could cause a repeat of the property destruction of 2020.

Media has poor incentives, somewhere between those of government and those of social media companies. Media, being part of the regime, wishes to maintain the regime's legitimacy while also wanting to make money from clicks and controversy. Media is not known to use AI heavily, but it does use engagement-optimizing A/B testing. Finding controversy where none exists, such as by making up stories about tech personalities, also creates wedge issues and lowers social cohesion. [Here is one example](https://twitter.com/elonmusk/status/1250700598002905093), but there are too many to list.

A hypothetical company could deploy a set of AIs to convince people to verbally attack those who criticize its products online. It could blockade [previously helpful channels of information such as reviews](https://www.hollywoodreporter.com/tv/tv-news/lord-of-the-rings-the-rings-of-power-amazon-review-bombed-1235211190/). Online verbal attacks also lower social cohesion, even if some individuals can learn to "shrug them off."

The above is a taste of how prominent players can push on behavioral modification buttons while ignoring the side effects. The temptation to use narrow AI to boost one's legitimacy through polluting the epistemic commons will only get stronger. Each country's regime and its opponents will accuse each other of "fake news." Telling apart whose accusation is correct is a non-trivial task.

Great power competition also enters the picture, given that the internal politics of almost all countries might degrade.

We are likely to see some failing attempts by countries to unify themselves by creating an external enemy. Inciting great power animosity is perceived by some elites as a way to trade external social cohesion for internal social cohesion. This thinking is another leftover of the 20th century. Social cohesion in the West is so low that fights with external enemies introduce yet another wedge issue. Once the government decides to amplify an external nation as an enemy, the usage of AIs in behavior modification becomes more extreme. Each country will use AI to try to radicalize inhabitants of its own country and demoralize inhabitants of other nations. Once a nation reaches a threshold of animosity, it will attack its adversaries' social cohesion algorithmically, the main weak point being social media ranking.

An example delivery vector of a plausible "algorithmic cultural attack" is TikTok. TikTok has many issues. It is highly addictive. It seems to give people visible mental health problems, [including strange bodily movements](https://www.henryford.com/blog/2022/03/tiktok-giving-teen-girls-tics). It is effectively a probe by China of American psychological and cyber defenses, and this probe has left America's defenses wanting. America has, shockingly, not banned TikTok. America has yet to figure out how to measure the adverse mental health effects or [cybersecurity implications of TikTok.](https://www.henryford.com/blog/2022/03/tiktok-giving-teen-girls-tics) This example exists in a China/USA great-power environment that is not that hostile by historical standards.
Even in such an environment, China probes America's defenses using an AI-powered social media algorithm. Moreover, it is already having fairly significant negative effects on people in America. The effects will likely worsen if the environment becomes more hostile than it is today.

Behavioral modification differs from manufacturing or job replacement in terms of its safety profile. Many critiques or worries people have raised regarding manufacturing, job replacement, and the associated economic inequality and concentration of power are far more severe in behavioral modification scenarios.

Advertising, social media, and governmental AIs create an enormous inequality in people's capacity for behavioral modification. In the past, parents shaped their children's behavior. Today children are more likely to be shaped by online content and global technocapital than ever before. In particular situations, this is dangerous to young kids, who can already be subject to highly addicting online videos. These are infohazards for children, though not yet for adults. The government should have banned them a long time ago. However, as the ideas of what is suitable for children diverge between parents and engagement-optimizing technocapital, we are likely to see [further clashes around parental rights](https://www.newsweek.com/muslim-parents-clash-lgbtq-speakers-michigan-school-board-meeting-1751966) and an ever-increasing desire for parents to commit violence in defense of their children.

Propaganda is not new. Nations using propaganda to control the population is not new, and propaganda containing dangerous lies, which destroy the capacity of the population to think or be socially cohesive, is also not new. However, the optimization power of propaganda will increase, given the capacity of AI to generate a false sense of reality online through many bots and GPT-like fake news articles. Hostility toward the populations of other great powers is likely to be greater than in previous wars. The desire to use extremely dangerous weapons is also likely to increase. Humanity does not have proper neutral arbiters to resolve these kinds of situations.

It is possible for the AI creators to put their foot down and block the government from using their AIs for this purpose. However, the probability of that is somewhat low. The governments have ways to pressure them to allow use of the tools, and the creators themselves believe in the causes of their government. The creators are not immune to the propaganda of the previous generation of narrow AIs and fake news. Thus, even though the creation of AIs currently has humans in the loop, those humans are quite influenced by what they perceive as high status, which is primarily decided by what narrow AIs promote on Twitter and in other discourses.

A situation of low social cohesion is already dangerous enough without involving AI. However, unaligned narrow AI has a negative feedback cycle with the overall decline of social cohesion. Software developers in socially incohesive civilizations are likely to have trouble agreeing on what type of software they want to develop. Thus, if they are to build something together, they can end up with bizarre coordination points. Instead of building software based on a philosophically grounded idea of what is good for their nation, which may be tricky to specify, they have trouble agreeing and thus optimize metrics that are easy to measure.
They do not seem to care that those metrics might end up hurting their users, because they do not feel solidarity with their fellow citizens or, in TikTok's case, with citizens of other countries. The less socially cohesive the underlying people are, the more their AI coordination points will diverge from human utility. While social cohesion is likely necessary, it is not by itself sufficient for a safer AGI.

Civilizational risk, especially in the next ten years, comes primarily from using the ever-increasing powers of narrow AIs in behavioral modification. This premise drastically influences the plausible historical trajectory of civilization on the path toward AGI. Understanding behavioral modification as a c-risk changes our plausible assessments of civilizational capacity, what kind of research we should prioritize, and what economic conditions, if any, will exist in the near term.

All parts

[P1: Historical Priors](https://www.lesswrong.com/posts/imnAuj9D6C5seDbHd/ai-as-a-civilizational-risk-part-1-6-historical-priors)

[P2: Behavioral Modification](https://www.lesswrong.com/posts/wQmCZ8YPTfCKEN9JE/ai-as-a-civilizational-risk-part-2-6-behavioral-modification)

[P3: Anti-economy and Signal Pollution](https://www.lesswrong.com/posts/JXktMsTAc9ZyMoami/ai-as-a-civilizational-risk-part-3-6-anti-economy)

[P4: Bioweapons and Philosophy of Modification](https://www.lesswrong.com/posts/rLAHEcrjtsAbhH5Eq/ai-as-a-civilizational-risk-part-4-6-bioweapons-and)

[P5: X-risk vs. C-risk](https://www.lesswrong.com/posts/CB2qu4zvJbtRbyipH/ai-as-a-civilizational-risk-part-5-6-relationship-between-c)

[P6: What Can Be Done](https://www.lesswrong.com/posts/PpGx4PZcTgL3rnb6Y/ai-as-a-civilizational-risk-part-6-6-what-can-be-done)
aa77b395-07a9-435d-8e55-5daef1b128a0
trentmkelly/LessWrong-43k
LessWrong
Meetup : Brussels monthly meetup: games!

Discussion article for the meetup : Brussels monthly meetup: games!

WHEN: 12 October 2013 01:00:00PM (+0200)

WHERE: Rue des Alexiens 55 1000 Bruxelles

Second Saturday of the month is coming up. As usual, we will meet at 1 pm at La Fleur en papier doré, close to the Brussels Central station. If you feel like an intelligent discussion and are in the neighborhood, consider dropping by. The meeting will be in English to facilitate both French and Dutch speaking members.

There's an interest in games and game design in the group, so I thought we might make this the official topic for this meetup. Have you read anything about it lately? Is there an interesting game you think we should try out? Any game idea you've been sitting on? (I don't expect we'll only talk about games though.)

If you are coming for the first time, please consider filling out this one minute form, to share your contact information: https://docs.google.com/forms/d/1qSvI1NWkFSsfIJhUMORb_Wd8fdJTVPhdw49grDQwRTI/viewform

The Brussels meetups use a Google Group: https://groups.google.com/forum/#!forum/lesswrong-brussels

Discussion article for the meetup : Brussels monthly meetup: games!
701883fc-ed0e-4a1c-aaa7-831b3bdc3542
trentmkelly/LessWrong-43k
LessWrong
The low cost of human preference incoherence

Note: working on a research agenda, hence the large number of small individual posts, to have things to link to in the main documents.

In aiming for an adequate synthesis of human preferences, I tend to preserve quirks and even biases of humans, rather than aiming for elegant simplicity as many philosophers do. The main reason is that, feeling that value is fragile, I fear losing a valid preference more than allowing an illegitimate bias to get promoted to preference. But another point is that I don't feel that the cost of excessive complexity is very high.

The small cost of one-off restrictions

An analogy could be with government interventions in the market: most naive restrictions will only slightly raise the cost, but will not have a large effect, as the market routes around the restrictions.

Consider a government that created price controls on all bread, for some reason. If that was a one-off rule, then we know what would happen: loaves of bread would get gradually smaller, while bakeries would start producing more cake or cake-like products, which now occupy the niche that more expensive bread products would otherwise have filled. There is likely an efficiency loss here - the balance between cake and bread is different from what the market would have given, for small losses to bakers and customers. But these are not huge losses, and they are certainly much smaller than would have happened if the bakers and customers had simply continued as before, except with the price control.

Note that if the government is dynamic with its regulations, then it can impose effective price controls. It could add regulations about bread size, then bread quality, repeatedly; it could apply price controls to cake as well or ban cakes; as the market adjusts, regulations could adjust too. This would impose large efficiency losses (possibly coupled with equity gains, in certain circumstances). But the cost of one-off restrictions tends to be low.

Quirks and biases in human prefere