2afd6d1b-dbca-43df-bf56-e731f2549f14
On December 14th, New York City will have a Secular Solstice. Solstice is a holiday for people comfortable with uncomfortable truths and who believe in good. Secular Solstices take place in many cities around the world, but for us New York City is the best place for it. The first Solstice started here, amid towers that reach for the sky and glitter like stars in the night. Now a tradition spanning over a decade, rationalists from across North America—and sometimes further afield—will come together to sing about humanity's distant past and the future we hope to build.

From the evening of December 13th to the morning of December 16th, NYC will also be the home of the Rationalist Megameetup. We will have sleeping space (Friday, Saturday, and Sunday nights) for those from out of town, as well as meeting spaces to congregate in. Historically the Megameetup has had calibration games, live podcasting, and the ever-popular lightning talks. There have been scheduled replications of papers, unscheduled lectures on poetry, and more interesting conversations than anyone has counted.

Solstice and Megameetup will both be at the Sheraton Brooklyn New York Hotel, 228 Duffield Street, Brooklyn. Last year's experiment was whether Megameetup could survive a transition into a conference hotel. The answer was a clear "YES", and we predictably hit capacity on that space like we have every space we've been in since 2019. This year we're back, bigger than before, and with a conference space large enough that Megameetup can host Solstice.

Solstice and Megameetup registrations are combined, though you can register for only one if you like. Use the payment page to pick which combination of Solstice and Megameetup events you prefer. We have arranged a hotel block of rooms at a discount, the link for which will be active soon. If we've run out of rooms in the block, you might try booking with the hotel directly. If you do, let them know you're with the New York City Rationalist Megameetup.

We hope to see you there!

-Skyler & The 2024 Megameetup Team
JmGtMzYLpDj5LY4wG_Registrations_Open_for_2024_NYC_.txt
{ "file_size": 2058 }
13c07730-98a9-4b78-8182-93553b52457b
Epistemic Status: This is a collection of useful heuristics I’ve gathered from a wide range of books and workshops, all rather evidence-based (robustness varies). These techniques are designed to supplement the basics of rationalist discourse, helping facilitate interactions—mostly with those unfamiliar with rationalist thought, especially on entry-level arguments. They may also be useful in conversations between rationalists on occasion. This is also a minimum viable product for an upcoming sequence that will dive into the analysis of well-managed disagreements. Details are intentionally left out.

Tl;dr: Rephrase, ask questions, do not presume your conversation partner shares your epistemology (i.e., their way of coming to conclusions, in general), ask them for real-world counter-examples, share personal experiences both ways as a means to get clearer, check what kinds of blindspots their own motivation presumes, dovetail interests with brainstorming, and also: all of what I’ve just said is merely pointing to a specific state of mind. You can get to this state of mind only with some form of introspection.

Arguing is sometimes wonderful. Yet sometimes it derails, or flat-out fails. Circumstances in which arguing fails tend to involve people who are not actively displaying rationality. LessWrong has done a lot to teach how to mutually progress on such disagreements. Yet this is only a very small community - the Rest of the World, aka People, still hasn't read the Sequences. Unproductive disagreement with people can lead to a poor impression, pig-headedness, stress, anger, and sometimes worse. There have already been numerous discussions, on this forum, of ways to avoid getting there: there is already a book review on How Minds Change, and I have spent too much time breaking down good disagreements to teach how to do it. But there wasn't a short document summarizing the key takeaways. This serves as that document.

Ethical Caveat: This post presumes that you follow ethical advice such as:
1-Being earnestly truth-seeking
2-Getting the consent of your partner beforehand to question their beliefs
3-Not harassing people who don't want to talk
4-Choosing the right context (most of the time, 1:1 conversations)
5-Choosing the right person (not a hierarchical subordinate; expect conversations to be harder with a family member)
6-Choosing the right topic (probably not things that are subject to trigger warnings, such as the gender identity of the person, nor things displayed during the conversation itself, such as thoughts you interpret from their non-verbal cues)
7-Not Being a D*ck, in general.

Attention to the reader: Reading about tennis does not teach you enough to actually play tennis fluently. Practice is key. In the same way, reading about Effectively Handling Disagreements will be less effective than training yourself at it. Workshops are in the comments (feel free to suggest some).

0-Actually, maybe, don’t argue.
Arguing is a choice. It can be fitting or unfitting. It can be a good choice, or a bad choice. Argument is a virtue of rationalists, but it is a virtue because it coheres with all the other ones. When you discuss with a stranger, surrounding virtues such as evenness or curiosity might slip away. A good way to bring them back is to refrain from counter-arguing, and start with listening. The argument will still be there - but in a form that will make it softer and more pedagogical.

1-Rephrase, Rephrase, Rephrase
By Default, You Don’t Understand Your Partner.
As a first step, understanding your partner does not take a large amount of pondering and ostentatious thinking: it requires at least repeating back, in your own words, what they said, then genuinely asking your interlocutor if this is what they meant, and if not, inviting them to correct you ("If I understood you right - and feel free to correct me if I didn't - you meant that... Is this right?"). This is fairly basic, but it is worth practicing if you’re not accustomed to it yet. Remember that you probably don’t understand your conversation partner if you didn’t rephrase what they said. Of interest: Smart Politics.

2-Ask More Questions
By Default, You Don’t Understand Your Partner. Worse than this - You Don’t Know You Don’t Understand Your Partner. Your partner, you might think, came to their conclusion because of claim X, or person Y, or argument Z. You might follow up on those reasons without having pre-emptively checked that they are even relevant to the discussion at hand. Your conversation partner will reply with cached counter-arguments, without bothering to check whether those counter-arguments figure in their crux at all. This is a massive waste of time and rapport. If anything, ask for a working definition[1] of the things you’re talking about. Ask for their reasons to believe, rather than presuming what those reasons are.

3-The Typical Method Fallacy
Your partner is not necessarily an empiricist. If they tell you that God exists, and that they believe so because of personal experience, this does not mean they think (like I personally would) that their experience is statistically significant. Their method relies on a claim, and the claim is "Personal experience is reliable" (as understood in, "more reliable than science on topics where science challenges it"). You might think that this departs so much from sanity that the only dignified move is to impatiently furrow your brow and go talk to someone worth your time. But you might as well question the claim. Of interest: Street Epistemology.

4-How to generate a good question, fast
Note: This one is a personal observation. Although it took reading scientific literature to notice it, there are no publications on it, to my knowledge.
Addendum: I've replaced "personal experience is reliable" as an example with "Karma exists". See comments on LW.
Let's take the argument "Karma exists. For example, if I throw garbage out of the window of my car, then I'll break a nail within 24 hours". Productively questioning such a claim might sound like it leaves a lot of options open, but there is a rough-and-quick way to do it.
Step 1: Identify the property that makes the inference valid in the eyes of your partner (here, the fact that the nail was broken after throwing garbage out of the window).
Step 2: Ask for an example of the same (super)class that has the same property, but does not lead to the conclusion (here, "Could there be times where you break your nail, yet you haven't done anything bad prior to that?").
Step 3: If you get an answer (e.g., "Yes, that's an accident"), ask your partner how they distinguish said answer ("accident") from their initial answer ("Karma").
This is a rough outline of the process, which I’ll elaborate on in a future post. In short, ontological relationships form a Socratic artillery. A true Socratic move is one that helps your interlocutor have more than one hypothesis and apply an approximation of Bayes’ rule.

5-Personal experiences help clear out confusions.
If there is one thing I’d like you to remember - the most evidence-backed, and the most impressively efficient at resolving disagreement - it is that stories help people understand what you’re actually talking about. This mostly holds for short and to-the-point stories, so keep anything you refer to clear and concise. When invited to share a story, and then when hearing one, people develop more trust, which helps them pay attention to your interpretation of it - the actual information you’re trying to convey. They get a lot more details and a fleshed-out example of what you are talking about. They actually get what you’re trying to convey in a way that theoretical arguments completely fail at. This does not mean that you should use the emotional force of stories to sway your conversation partners. It rather means that a story - and the emotions it generated in you - are crucial background information for understanding what you mean. These two situations can be hard to discriminate, yet the telltale sign of being in an epistemically honest case is contrast: “You see, what I’ve just shared, this is what I mean when I say X.” Of Interest: Deep Canvassing.

6-Care about their underlying values.
Whatever your proposition is, it might well fit perfectly within your conversation partner’s values in some instrumental way, contrary to their own beliefs. Try to spot and bring up your partner’s motivations - say, an e/acc who cares about innovation - then, from there, point out whether the topic at hand fits with them - typically, AI Safety can be expected to contribute to innovation. Of course, do not lie about how the topic at hand fits with it (“lies” here is understood broadly and refers to epistemic obfuscation in general). Of interest: Motivational Interviewing.

7-Negotiate through Brainstorming
In the spirit of Ask More Questions, focus on your partner’s interests (or “needs” if you’re the NVC type), not positions (“Why do you want that?” instead of “What do you want?”). You’ll get building blocks for brainstorming creative win-win solutions together. Note that in real-world situations, you’ll still need to spend a lot of time building positive rapport and getting your partner to think about the solution with you. Of Interest: Getting To Yes.

8-Heal yourself to get in the right Mindset
"As she stirred and opened her eyes, I saw her differently. Her freckles were more obvious now, the colors of her face more vibrant. It was like I was seeing her in high definition for the first time." - Chris Lakin, Learning to do *real* empathy.
Empathy isn’t just a series of scripted responses. I’ve been nudging you to imitate the ways in which the right mindset manifests - through questioning, rephrasing, narrating. Yet the mindset itself is the key. Getting into a mindset is a therapeutic act: it requires practice, but also, and mainly, introspection, insight, and acknowledging denial. Getting in the right mindset does not only change your actions - it changes your perception.
Of interest:
- Chris Lakin, How Unconscious Predictions Update
- VIEW Mindset
- Compassion Focused Therapy
- Focusing

Finally, a word of caution: What is shared here is not necessarily suited to all people and contexts. Of course, relying on conversation patterns also has the potential to fall within the Dark Arts - police yourself. My belief is that there is a subset of conversational attitudes that are in line with virtuous rationality.
These attitudes have to be mastered in order to manage conversations with the rest of the world. Such conversations will happen regardless—so it’s best to be prepared. Many thanks to Neil for suggesting I write this post.

^ Contested, see comments for discussion. My position is that people tend to focus on Platonic definitions (as opposed to "working" definitions) way too much, even if it can be good in some instances.
9PWnesdwmY5SHSNyx_Basics_of_Handling_Disagreements.txt
{ "file_size": 11029 }
6b88b016-bd15-42c5-8c80-4d87233100fe
This post was written during the agent foundations fellowship with Alex Altair, funded by the LTFF. Thanks to Alex, Jose, Daniel, Cole, and Einar for reading and commenting on a draft.

The Good Regulator Theorem, as published by Conant and Ashby in their 1970 paper (cited over 1700 times!), claims to show that 'every good regulator of a system must be a model of that system', though it is a subject of debate as to whether this is actually what the paper shows. It is a fairly simple mathematical result which is worth knowing about for people who care about agent foundations and selection theorems. You might have heard about the Good Regulator Theorem in the context of John Wentworth's 'Gooder Regulator' theorem and his other improvements on the result. Unfortunately, the original 1970 paper is notoriously unfriendly to readers. It makes misleading claims, doesn't clearly state what exactly it shows, and uses strange non-standard notation and cybernetics jargon ('coenetic variables', anyone?). If you want to understand the theorem without reading the paper, there are a few options. John Wentworth's post has a nice high-level summary but refers to the original paper for the proof. John Baez's blogpost is quite good but is very much written in the spirit of trying to work out what the paper is saying, rather than explaining it intuitively. I couldn't find an explanation in any control theory textbooks (admittedly my search was not exhaustive). A five-year-old StackExchange question, asking for a rigorous proof, goes unanswered. The best explainer I could find was Daniel L. Scholten's 'A Primer for Conant and Ashby's Good-Regulator Theorem' from the mysterious, now-defunct 'GoodRegulatorProject.org' (link to archived website). This primer is nice, but really verbose (44 pages!). It is also aimed at approximately high-school (?) level, spending the first 15 pages explaining the concept of 'mappings' and conditional probability. Partly to test my understanding of the theorem and partly to attempt to fill this gap in the market for a medium-length, entry-level explainer of the original Good Regulator Theorem, I decided to write this post. Despite all the criticism, the actual result is pretty neat and the math is not complicated. If you have a very basic familiarity with Shannon entropy and conditional probability, you should be able to understand the Good Regulator Theorem. This post will just discuss the original Good Regulator Theorem, not any of John Wentworth's additions. I'll also leave aside discussion of how to interpret the theorem (questions such as 'what counts as a model?' etc.) and just focus on what is (as far as I can tell) the main mathematical result in the paper. Let's begin!

The Setup

Conant and Ashby's paper studies a setup which can be visualised using the following causal Bayes net:

[Figure: a causal Bayes net with arrows S → R, S → Z, and R → Z.]

If you are not familiar with Bayes nets, you can just think of the arrows as meaning 'affects'. So A→B means 'variable A affects the outcome of variable B'. This way of thinking isn't perfect or rigorous, but it does the job. Just to be confusing, the paper discusses a couple of different setups and draws a few different diagrams, but you can ignore them. This is the setup they study and prove things about, and it is the only setup we will use in this post. The broad idea of a setup like this is that the outcome Z is affected by a system variable S and a regulator variable R. The system variable is random.
The regulator variable might be random and independent of S, but most of the time we are interested in cases where it depends on the value of S. By changing the way that R depends on S, the distribution over outcomes Z can be changed. As control theorists who wish to impose our will on the uncooperative universe, we are interested in the problem of 'how do we design a regulator which can steer Z towards an outcome we desire, in spite of the randomness introduced by S?' The archetypal example for this is something like a thermostat. The variable S represents random external temperature fluctuations. The regulator R is the thermostat, which measures these fluctuations and takes an action (such as putting on heating or air conditioning) based on the information it takes in. The outcome Z is the resulting temperature of the room, which depends both on the action taken by the regulator and the external temperature. Each node in the Bayes net is a random variable. The 'system' is represented by a random variable S, which can take values from the set {s1, s2, ..., s_dS}. It takes these values with probabilities P(s1), P(s2), etc. Think of the system as an 'environment' which contains randomness. The variable R represents a 'regulator' - a random variable which can take values from the set {r1, r2, ..., r_dR}. As the diagram above shows, the regulator can be affected by the system state and is therefore described by a conditional probability distribution P(R|S). Conditional probabilities tell you what the probability of R is, given that S has taken a particular value. For example, the equation P(R=r2|S=s5) = 0.9 tells us that if S takes the value s5, then the probability that R takes the value r2 is 0.9. When we discuss making a good regulator, we are primarily concerned with choosing the right conditional probability distribution P(R|S), which helps us achieve our goals (more on exactly what constitutes 'goals' in the next section). One important assumption made in the paper is that the regulator has perfect information about the system, so R can 'see' exactly what value S takes. This is one of the assumptions which is relaxed by John Wentworth, but since we are discussing the original proof, we will keep this assumption for now. Finally, the variable Z represents the 'outcome' - a random variable which can take values from the set {z1, z2, ..., z_dZ}. The variable Z is entirely determined by the values of R and S, so we can write it as a deterministic function of the regulator state and the system state. Following Conant and Ashby, we use ψ to represent this function, allowing us to write Z = ψ(R,S). Note that it is possible to imagine cases where Z is related to R and S in a non-deterministic way, but Conant and Ashby do not consider cases like this, so we will ignore them here (this is another one of the extensions proved by John Wentworth - I hope to write about these at a later date!).

What makes a regulator 'good'?

Conant and Ashby are interested in the question: 'what properties should R have in order for a regulator to be good?' In particular, we are interested in what properties the conditional probability distribution P(R|S) should have, so that R is effective at steering Z towards states that we want. One way that a regulator can be good is if the Shannon entropy of the random variable Z is low. The Shannon entropy is given by

H(Z) = Σ_i P(Z=z_i) log(1/P(Z=z_i))

The Shannon entropy tells us how 'spread out' the distribution on Z is.
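In code, the definition is a one-liner. Here is a minimal Python sketch (mine, not from the paper or the post):

```python
import math

def shannon_entropy(probs, base=2):
    """H(Z) = sum_i P(z_i) * log(1 / P(z_i)), skipping zero-probability outcomes."""
    return sum(p * math.log(1.0 / p, base) for p in probs if p > 0)

print(shannon_entropy([1/3, 1/3, 1/3]))  # uniform over 3 outcomes: ~1.585 bits
print(shannon_entropy([1.0, 0.0, 0.0]))  # deterministic outcome: 0.0 bits
```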
A good regulator will make H(Z) as small as possible, steering Z towards a low-uncertainty probability distribution. Often, in practice, producing a low-entropy outcome is not on its own sufficient for a regulator to be useful. Scholten gives the evocative example of a thermostat which steers the temperature of a room to 350°F with a probability close to certainty. The entropy of the final distribution over room temperatures would be very low, so in this sense the regulator is still 'good', even though the temperature it achieves is too high for it to be useful as a domestic thermostat. Going forward, we will use low outcome entropy as a criterion for a good regulator, but it's better to think of this as a necessary and/or desirable condition rather than a sufficient condition for a good regulator. The second criterion for a good regulator, according to Conant and Ashby, is that the regulator is not 'unnecessarily complex'. What they mean by this is that if two regulators achieve the same output entropy, but one of the regulators uses a policy involving some randomness and the other policy is deterministic, the policy that uses randomness is unnecessarily complex, so is less 'good' than the deterministic policy. For example, imagine we have a setup where ψ(r1,s2) = ψ(r2,s2) = z1. Then, when the regulator is presented with system state s2, it could choose between the following policies:

- Pick r1 with probability 1 whenever S=s2. So P(R=r1|S=s2) = 1 and P(R=r2|S=s2) = 0.
- Pick r2 with probability 1 whenever S=s2. So P(R=r1|S=s2) = 0 and P(R=r2|S=s2) = 1.
- Toss a coin and pick r1 if it lands heads, r2 if it lands tails. So P(R=r1|S=s2) = 1/2 and P(R=r2|S=s2) = 1/2.

All three of these policies achieve the same result (the outcome will always be z1 whenever S=s2), and the same output entropy, but the third option is 'unnecessarily complex', so is not a good regulator. Argue amongst yourselves about whether you find this criterion convincing. Nonetheless, it is the criterion Conant and Ashby use, so we will use it as well. To recap: a good regulator is one which satisfies the following criteria:

- It minimizes the entropy of the outcome variable Z.
- It is not unnecessarily complex, in the sense described above.

The Theorem Statement

The theorem statement can be written as follows: If a regulator is 'good' (in the sense described by the two criteria in the previous section), then the variable R can be described as a deterministic function of S. Another way of saying that 'R can be described as a deterministic function of S' is to say that for every ri and sj, P(R=ri|S=sj) either equals 0 or 1. This means that R can be written as R = f(S) for some mapping f. We are now almost ready to prove the theorem. But first, it is worth introducing a basic concept about entropy, from which the rest of the Good Regulator Theorem flows straightforwardly.

Concavity of Entropy

Conant and Ashby write: "One of the useful and fundamental properties of the entropy function is that any such increase in imbalance in p(Z) necessarily decreases H(Z)." This is probably pretty intuitive if you are familiar with Shannon entropy. Here is what it means. Suppose we have a probability distribution P(Z) which assigns probabilities P(Z=za) and P(Z=zb) to two different outcomes, with P(Z=za) ≥ P(Z=zb) (and other probabilities to other z-values). Now suppose we increase the probability of outcome za (which was already as likely or more likely than zb) and decrease P(Z=zb) by the same amount, while keeping the rest of the distribution the same.
The resulting distribution will end up with a lower entropy than the original distribution. If you are happy with this claim, you can skip the rest of this section and move on to the next one. If you are unsure, this section will provide a little more clarification of this idea. One way to prove this property is to explicitly calculate the entropy of a general distribution where one of the probabilities is p_a + δ and another is p_b − δ (where p_a ≥ p_b and δ > 0). Then, you can differentiate the expression for entropy with respect to δ and show that dH/dδ < 0, i.e. H is a decreasing function of δ. This is fine and do-able if you don't mind doing a little calculus. Scholten has a nice walk-through of this approach in the section of his primer titled 'A Useful and Fundamental Property of the Entropy Function'. Here is another way to think about it. Consider a random variable Z′ with only two outcomes za and zb. Outcome za occurs with probability q and zb occurs with probability 1−q. The entropy of this variable is

H(Z′) = q log(1/q) + (1−q) log(1/(1−q))

This is a concave function of q.

[Figure: the binary entropy H(Z′) plotted against q (base-2 logarithms): a concave curve peaking at q = 0.5.]

If q ≤ 0.5, decreasing q decreases the entropy, and if q ≥ 0.5, increasing q decreases the entropy. So 'increasing the imbalance' of a 2-outcome probability distribution will always decrease entropy. Is this still true if we increase the imbalance between two outcomes within a larger probability distribution with more outcomes? The answer is yes. Suppose our outcomes za and zb are situated within a larger probability distribution. We can view this larger probability distribution as a mixture of our 2-outcome variable Z′ and another variable, which we can call Y, which captures all other outcomes. We can write Z (which we can think of as the 'total' random variable) as Z = λZ′ + (1−λ)Y. With probability λ = P(Z=za) + P(Z=zb), the variable Z takes a value determined by Z′, and with probability 1−λ, the value of Z is determined by the random variable Y. It turns out the entropy of such a variable, generated by mixing non-overlapping random variables, can be expressed as follows:

H(Z) = λH(Z′) + (1−λ)H(Y) + g(λ)

where g(λ) = λ log(1/λ) + (1−λ) log(1/(1−λ)) is the binary entropy (see e.g. this StackExchange answer for a derivation). Increasing the relative 'imbalance' of P(Z=za) and P(Z=zb) while keeping their sum constant does not change λ or H(Y), but does reduce H(Z′), thus reducing the total entropy H(Z). This is a fairly basic property of entropy, but understanding it is one of the only conceptual pre-requisites for understanding the Good Regulator Theorem. Hopefully it is clear now if it wasn't before. On to the main event!
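As a quick numerical sanity check (my own sketch, with made-up numbers), we can watch the entropy fall as we transfer probability mass δ from a less-likely outcome to a more-likely one, exactly as the concavity argument predicts:

```python
import math

def shannon_entropy(probs, base=2):
    return sum(p * math.log(1.0 / p, base) for p in probs if p > 0)

# Start from P(Z) = [0.5, 0.3, 0.2] and move mass delta from z_b (p=0.3)
# to z_a (p=0.5), keeping the rest of the distribution fixed.
for delta in [0.0, 0.1, 0.2, 0.3]:
    shifted = [0.5 + delta, 0.3 - delta, 0.2]
    print(delta, round(shannon_entropy(shifted), 4))

# delta:  0.0     0.1     0.2     0.3
# H(Z):   1.4855  1.3710  1.1568  0.7219   <- strictly decreasing
```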
Here is another alternative phrasing: Suppose a regulator is 'good' in the sense that it leads to Z having the lowest possible entropy. If, for a given system state, multiple regulator states have non-zero probability, then all of these regulator states lead to the same output state when combined with that system state through ψ. If this were not the case, we could find another regulator which leads to Z having a lower entropy. This is one of those claims which is kind of awkward to state in words but is pretty intuitive once you understand what it's getting at. Imagine there is a regulator which, when presented with a system state sj, produces state ri with probability P(R=ri|S=sj) ≠ 0 and produces state rk with probability P(R=rk|S=sj) ≠ 0. Furthermore, suppose that ψ is such that ψ(ri,sj) = za and ψ(rk,sj) = zb. This means that, when presented with system state sj, the regulator sometimes acts such that it produces an outcome state za and other times acts so as to produce an outcome state zb. This means Z is not a deterministic function of S. Is it possible that this regulator produces the lowest possible output entropy? From considering the previous section, you might already be able to see that the answer is no, but I'll spell it out a bit more. The total probability that Z=za is given by the sum of the probability that Z=za when S is not sj, and the probability that R is ri when S equals sj:

P(za) = P(Z=za|S≠sj)P(S≠sj) + P(R=ri|S=sj)P(S=sj)

Similarly, the probability that Z=zb is given by:

P(zb) = P(Z=zb|S≠sj)P(S≠sj) + P(R=rk|S=sj)P(S=sj)

Suppose P(za) ≥ P(zb). Then, as we saw in the previous section, we can reduce the entropy of Z by increasing P(za) and decreasing P(zb) by the same amount. This can be achieved by changing the regulator so that P(R=ri|S=sj) is increased and P(R=rk|S=sj) is decreased by the same amount. Therefore, a regulator which with nonzero probability produces two different R-values when presented with the same S-value cannot be optimal if those two R-values lead to different Z-values. We can always find a regulator which consistently picks ri 100% of the time, which leads to a lower output entropy. (A symmetric argument can be made if we instead assume P(zb) ≥ P(za).) However, if ψ were such that ψ(ri,sj) = ψ(rk,sj) = za, then it would not matter whether the regulator picked ri or rk or tossed a coin to decide between them when presented with sj, because both choices would lead to the same Z-value. In such a case, even though R contains randomness, the overall effect would be that Z is still a deterministic function of S.

The Theorem

90% of the meat of the theorem is contained in the above lemma; we just need to tie up a couple of loose ends. To recap: we have shown that a regulator which achieves the lowest possible output entropy must use a conditional distribution P(R|S) which leads to Z being a deterministic function of S. For each system state sj, the regulator must only choose R-values which lead to a single Z-value. This still leaves open the possibility that the regulator can pick a random R-value from some set of candidates, provided that all of those candidates result in the same Z-value. In our example from the previous section, this would mean that the regulator could toss a coin to choose between ri and rk when presented with system state sj, and this regulator could still achieve the minimum possible entropy. This is where the 'unnecessary complexity' requirement comes in.
Conant and Ashby argue that one of the requirements for a 'good' regulator is that it does not contain any unnecessary complexity. A regulator which randomises its R-value would be considered unnecessarily complex compared to a regulator which produced the same output state distribution without using randomness. Therefore, for a regulator to be 'good' in the Conant and Ashby sense, it can only pick a single R-value with 100% probability when presented with each S-value. And the main lemma tells us that this condition does not prevent us from minimizing the output entropy. This means that in the conditional probability distribution P(R|S), for each S-value, the probability of any one R-value is either zero or one. To put it another way, R can be described as a deterministic function of S. In a good regulator, knowing S allows you to predict exactly what value R will take. Also, since Z is a deterministic function of R and S, this means that Z, when being regulated by a good regulator, will be a deterministic function of S. Thus, we have proved that a good regulator R must be a deterministic function of the system state S. Note that the argument makes no assumptions about the probability distribution over S. Though changing the probability distribution over S will change the final output entropy, it will not change the properties of a good regulator.

Example

Consider the following example, where R, S, and Z each have three possible states and the 'dynamics' function ψ is characterised by the following table:

ψ  | s1  s2  s3
r1 | z1  z2  z3
r2 | z3  z1  z2
r3 | z2  z1  z1

First, consider a regulator which violates the main condition of the main lemma, by randomizing between r1 and r2 when presented with s1, even though they lead to different Z-values. Here is the conditional probability table for such a regulator:

P(R|S) | s1   s2  s3
r1     | 0.5  0   0
r2     | 0.5  1   0
r3     | 0    0   1

If S has a maximum entropy distribution, so P(s1) = P(s2) = P(s3) = 1/3, then this regulator will produce outcome z1 with probability 5/6. Outcome z2 will have probability P(z2) = 0 and outcome z3 will have P(z3) = 1/6. This output distribution will therefore have entropy H(Z) = (5/6)log(6/5) + (1/6)log(6) ≈ 0.65 (using base-2 logarithms). According to the lemma, we can achieve a better (lower) output entropy by ensuring that P(R|S) is such that the regulator chooses whichever R-value corresponds to the Z-value which already has a higher probability. In this case, z1 has a higher probability than z3, so 'increasing the imbalance' means increasing the probability of z1 at the expense of z3 as much as we can. This can be done by increasing P(r1|s1) to 1 and decreasing P(r2|s1) to zero (while keeping the rest of the distribution the same). This results in a Z-distribution with an entropy of zero, since, regardless of the S-value, Z always ends up in state z1. Since this entropy cannot be improved upon and the regulator does not have any unnecessary noise/complexity, the Good Regulator Theorem predicts that this regulator should be a deterministic function of S. Lo and behold, it is! Each S-value gets mapped to exactly one R-value:

P(R|S) | s1  s2  s3
r1     | 1   0   0
r2     | 0   1   0
r3     | 0   0   1

Consider another regulator for the same system, as characterised by the following conditional probability table:

P(R|S) | s1  s2   s3
r1     | 1   0    0
r2     | 0   0.5  0
r3     | 0   0.5  1

Referring back to the table for ψ, we can see that this regulator also achieves an output entropy of zero, even though it randomizes between r2 and r3 when presented with s2. Since ψ(r2,s2) = ψ(r3,s2) = z1, this isn't a problem from the point of view of minimizing entropy, but it is 'unnecessarily complex', so it doesn't meet the criteria of a good regulator as Conant and Ashby define it. There are two ways to make this regulator 'good'. We could either set P(r2|s2) = 1 and P(r3|s2) = 0, making the regulator the same as our previous example, or we could set P(r2|s2) = 0 and P(r3|s2) = 1. Both possibilities would be 'good regulators' in the sense that they achieve the minimum possible entropy and are not unnecessarily complex. They are also both regulators where R is a deterministic function of S, validating the prediction of the theorem.
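These numbers are easy to check. Here is a short Python sketch (mine, not from the paper) that computes the output distribution and entropy for the ψ table above and an arbitrary regulator policy P(R|S):

```python
import math
from collections import defaultdict

def shannon_entropy(probs, base=2):
    return sum(p * math.log(1.0 / p, base) for p in probs if p > 0)

# Dynamics psi[(r, s)] = z, matching the table above.
psi = {('r1','s1'):'z1', ('r1','s2'):'z2', ('r1','s3'):'z3',
       ('r2','s1'):'z3', ('r2','s2'):'z1', ('r2','s3'):'z2',
       ('r3','s1'):'z2', ('r3','s2'):'z1', ('r3','s3'):'z1'}

def output_entropy(p_s, p_r_given_s):
    """Compute H(Z) where Z = psi(R, S)."""
    p_z = defaultdict(float)
    for s, ps in p_s.items():
        for r, pr in p_r_given_s[s].items():
            p_z[psi[(r, s)]] += ps * pr
    return shannon_entropy(p_z.values())

p_s  = {'s1': 1/3, 's2': 1/3, 's3': 1/3}
bad  = {'s1': {'r1': 0.5, 'r2': 0.5}, 's2': {'r2': 1.0}, 's3': {'r3': 1.0}}
good = {'s1': {'r1': 1.0}, 's2': {'r2': 1.0}, 's3': {'r3': 1.0}}
print(output_entropy(p_s, bad))   # ~0.65 bits
print(output_entropy(p_s, good))  # 0.0 bits
```

Swapping in the second, 'unnecessarily complex' regulator from above also prints an entropy of 0.0, as claimed.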
Conclusion

One thing that Conant and Ashby claim about this theorem is that it shows that a good regulator must be 'modelling' the system. This is a bit misleading. As I hope I have shown, the Good Regulator Theorem shows that a good regulator (for a certain definition of 'good') must depend on the system in a particular way. But the way in which a good regulator must depend on the system does not correspond to what we might normally think of as a 'model'. The regulator must have a policy where its state deterministically depends on the system state. That's it! If we were being very generous, we might want to say something like: 'this is a necessary but not sufficient condition for a regulator that does model its environment (when the word "model" is used in a more normal sense)'. When Conant and Ashby say that a good regulator 'is a model of the system', they might mean that looking at R tells you information about S, and that R, in that sense, is a model of S. When R is a deterministic function of S, this is sometimes true (for example when R is a bijective or injective function of S). However, in some setups, the 'good' regulator R might be a deterministic function of S which takes the same value regardless of the value of S. I don't think it's sensible to interpret such a regulator as being a model of S. Personally, I don't think that it is useful to think about the Good Regulator Theorem as a result about models. It's a pretty neat theorem about random variables and entropy (and that's ok!), but on its own, it doesn't say much about models. As with most things in this post, John Wentworth has discussed how you could modify the theorem to say something about models. Having written this piece, I find the Good Regulator Theorem a lot clearer, and I hope it is clearer to you as well. Notable by its absence in this post is any discussion of John Wentworth's improvements to the theorem. Time permitting, I hope to cover these at a later date.
JQefBJDHG6Wgffw6T_A_Straightforward_Explanation_of.txt
{ "file_size": 24008 }
0ae95486-e079-4e2b-ada9-cbbc03a2db66
We’re excited to announce that applications are now open for our 2025 Q1 Pivotal Research Fellowship, a 9-week program designed to enable promising researchers to produce impactful research and accelerate their careers in technical AI safety, AI governance, and biosecurity.

About the Fellowship

The Pivotal Research Fellowship is hosted in London at the London Initiative for Safe AI (LISA). It offers a unique opportunity for early-career researchers to collaborate with experienced mentors, engage in workshops and seminars, and build a strong network within the AI safety research community in London and beyond.

Dates: February 3rd to April 4th, 2025
Application Deadline: November 21st, 2024
Apply here.

Fellows receive:
- Direct mentorship from established researchers
- Access to LISA, working alongside leading researchers in AI safety
- £5000 stipend, plus meals, travel support, accommodation, and compute costs

This marks our 5th research fellowship, building on a strong track record of supporting researchers in tackling important questions about the safety and governance of emerging technology.

Looking back on our 2024 Research Fellowship

(We plan on releasing a more in-depth retrospective in the upcoming weeks.)

In 2024 we hosted 15 fellows:
- 7 in AI governance
- 6 in technical AI safety
- 2 in biosecurity

The fellowship received high ratings, with 9.17/10 for overall value and 9.33/10 for recommendation likelihood (Net Promoter Score: 88). Here’s what 2024 fellows say about their experience with the Pivotal Research Fellowship:

“The Fellowship has been transformative for my career and personal development. Most importantly, I had the incredible opportunity to be mentored by a leading expert and go from idea development to paper submission.”

“The fellowship allowed me to work with top AI safety researchers – a great privilege early in my career! People were surprised at what can be accomplished in two months, including me.”

“Pivotal Research Fellowship opened the door to AI governance, enabling me to conduct impactful research in this field and connecting me to the broader AI governance community and new opportunities.”

“Pivotal shifted my career: I'm now working on a startup for white box model access and will join GovAI as a Winter Fellow. Being at LISA connects you with top AI safety researchers and places you on the radar of leading organizations.”

“Pivotal’s approach expanded my AI safety perspective, illustrating the importance of governance and biosecurity challenges that complement technical safety, making me seriously consider AI policy roles.”

Looking Ahead to Q3 2025

In addition to the Q1 2025 Fellowship, we’re planning another cohort in Q3 2025. If you’re interested in being kept up to date for future fellowship opportunities, please express your interest. If you have any questions about the application process, please reach out to us.
3KzJdNLR7KfZkKrJ5_2025_Q1_Pivotal_Research_Fellows.txt
{ "file_size": 2919 }
a0db4ff6-2c71-4448-b88f-66c1af7802db
In the past ten years, I have been developing ideas to resolve key issues that I have encountered in my own life, and in the lives of the people around me. In probing these issues for their root causes, I have identified the threat and reality of economic deprivation as the primary motivator of everyday suffering. Just a few of the issues that this current work resolves:

1. Non-voluntary family participation
2. Non-voluntary work
3. Economic entrapment of intimate partner violence sufferers
4. Exploitative working environment and compensation dynamics
5. Food deserts
6. Under and over development
7. Multinational capital concentration, and subsequent self-determination violations
8. Product quality and decency issues
9. Environmental damage and non-systemic thinking

No one system so far alleviates these issues. This one will.

(i) Beginning in the early 1990s, certain central banks started to experiment with money creation as a means of achieving certain economic goals. The Bank of Japan came up with the quirky and elegant term “quantitative easing” for this process, and various other central banks picked up the process for their own goals. QE really picked up speed after the 2008 housing crisis, when the US Fed utilized QE to bail out the financial system. Conventional thinking up until that point, at least in the US, was that creating money for this purpose would be wildly inflationary and so out of bounds for any prudent central bank. The results tell a different story. If anything, QE was deflationary. Subsequent QEs have borne this point out, with the most recent academic discourse about the actual effects of QE on the broader economy remaining undecided. The current financial system elite, as embodied by the US Fed and their discourse, talks with two mouths. On the one hand, when it comes to real human needs like social programs and direct assistance, they advocate for rigid “financial discipline”, talking about debt and taxes and interest rates; on the other, when their colleagues in banking are in distress, they print money to bail them out. We now live in an era where central banks have tested the tenets of MMT[1] in microcosm, to the benefit of their closest allies, and the capacity to utilize monetary policy for humanitarian ends now demonstrably exists in the hands of monetary policy makers.

(ii) The proposed system imagines a world in which money, instead of being primarily earned through work, flows through the economy in guided ways to bring about humanitarian goals. I imagine money as a given, much like light from the sun: input to the system. With this framing, we can start to talk about the best way to utilize this new capability to bring about human flourishing. In the current economic system, those in power artificially sustain a “state of nature” where every adult is faced with survival pressure to “earn a living”. While at least some people need to work to create food and housing, it hasn't been the case for a long time that there are food or housing production shortfalls due to resourcing. Said another way, market dynamics conspire to keep food and housing scarce, because it serves the people steering those market dynamics. This is frank social Darwinism, and it needs to end. In the new system, market dynamics still determine prices, but financing for essentials, and the supply of those essentials, are automatically present through universal consumer support and matching subsidies.
This undoes the market forces keeping people beholden to unscrupulous landlords and corporate overlords, by providing universal basic livelihood support. This universal livelihood support comes in the form of a debit card tied to a transaction processing system that automatically filters and limits transactions to only essentials, and only within sane limits (a toy sketch of this filtering logic follows below). The funds for the cards materialize automatically through an integration with the Treasury and Fed: the Treasury issues perpetual no-interest bonds, titled according to the region of the outlay, and the Fed creates money in turn. The bonds are mostly a form of bookkeeping, since the Fed holds them indefinitely, and they record the market transformation induced by the system. Simultaneously, the transaction processing system tabulates all ongoing transactions, identifying matching subsidies to support consumer demand. This requires sophisticated but tractable algorithms and data collection to target the producers most responsible for consumer supply. All of this requires a new corps of inspector-accountants trained in on-site and online business validation, to verify that business operations utilize the given subsidies toward supplying essential demand, and follow market rules to buttress product quality, environmental savvy, and worker wellbeing, as well as recording supplier networks for the subsidy algorithms to process. With all this in place, the humanitarian economy is complete. Everyone spends in proportion to their needs, and the system subsidizes the people supporting them. Since the money creation is balanced on the supply and demand sides, this effectively creates new market capacity in proportion to the money creation, the ideal outcome for monetary policy. Not only that, but the externalities of this market creation are an end to involuntary poverty, inhumane working conditions and compensation, and most forms of economic coercion.
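As a purely illustrative sketch of the essentials-filtering idea (the proposal does not specify an implementation; every category name and limit below is hypothetical), the card's transaction check might look something like:

```python
from dataclasses import dataclass

# Hypothetical categories and monthly limits; the text only says transactions
# are "filtered and limited to only essentials, and only within sane limits".
ESSENTIAL_LIMITS = {"food": 600.00, "housing": 1500.00, "utilities": 200.00}

@dataclass
class Transaction:
    category: str   # merchant category, e.g. "food"
    amount: float   # in the local currency

def approve(txn: Transaction, spent_this_month: dict) -> bool:
    """Approve a card transaction only if it is an essential within its limit."""
    limit = ESSENTIAL_LIMITS.get(txn.category)
    if limit is None:
        return False  # not an essential category
    return spent_this_month.get(txn.category, 0.0) + txn.amount <= limit

spent = {"food": 550.00}
print(approve(Transaction("food", 40.00), spent))     # True: within the limit
print(approve(Transaction("food", 80.00), spent))     # False: would exceed it
print(approve(Transaction("jewelry", 20.00), spent))  # False: not an essential
```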
(iii) The major primary outcome of deploying this system initially will be the removal of the need to work to avoid economic deprivation, and the subsequent exit of a large number of people from the workforce. To see this as a good thing, we can hold in mind the following: businesses that survive at the expense of the health and wellbeing of their subsistence workers are perpetuating modern serfdom, and deserve a correction, *and* technology is already available to transition the businesses most affected to automated processes. In the case of retail, where the exit most likely has the largest effect, businesses can transition from a service orientation to a self-service paradigm with backing automation. This looks like warehouse stores with robotic stocking and self-checkout, but also the professionalization of jobs like sales clerk in commodity department stores. In the case of other jobs that pay close to minimum wage and can’t be automated, like tricky human interactions or dangerous and complicated physical labor, the wage floor is now automatically a living one, and any compensation must encourage people who do not otherwise need to do so to engage in these difficult and draining tasks. This has two amazing benefits: first, it incentivizes the market to stop encouraging dangerous and complicated interactions with the natural and social environment, and second, for truly beneficial work, it encourages adequate compensation. The teacher spending all her life and energy raising other people’s children to be good citizens gets to be fairly rewarded with spa days and relaxation and all kinds of luxury from her salary.

At this point we stop and take stock: this humanitarian economy encourages a humanity that is more relaxed, less adventurous, and so more ecologically sound. This flies in the face of historical economic values: humanity has encouraged itself through various market forces to go further and do more than previous generations, in order to earn a bigger piece of the pie. In this new system, the only incentive to do more and go further is to transcend the status quo in some way, and earn recognition for a unique contribution. Status signaling as a proxy for reproductive fitness likely remains a robust driver of economic activity, so those who wish to play economic games for a sense of control of their environment still have a stage to do so, but now they cannot use nefarious means to coerce others to play their game, and any number of new games are possible!

That leaves us with the mega projects. How does an Elon Musk go to the stars if no one wants to build his rockets? First of all, robotic means are available to machine and assemble rockets, and second of all, the high end of the market above the Walmarts of the economy still exists: with the need to signal freedom from survival needs eliminated, luxury naturally will change from excess or frivolous consumption to more substantive signaling in the form of access to and engagement with culturally relevant artifacts: arts, sciences, and technology. That is, what differentiates someone working a very valuable profession, really, working at all, and someone floating along on basic subsistence support is their degree of cultural achievement: can they appreciate the Bhagavad Gita, Wagner, a brilliant and singular sunset.

(iv) In Sid Meier’s Alpha Centauri[2], the factions portraying societies in microcosm have the ability to spend resources on secret projects that give their societies unique advantages. In the humanitarian economy, too, we have the ability to fund special projects that benefit the system as a whole or achieve social goals: space exploration, big science or health, and great engineering or architectural feats like the Hoover Dam or the Burj Khalifa. The humanitarian economy imagines a process whereby project organizers are rewarded with full funding through the same QE mechanics and backing subsidies to producers, to the extent that they document and verify worker organization, plan development, local permit and planning engagement, and ecological benefit. With this section of the outlay operative, construction organizers could develop plans, permits, and site evaluations for new housing projects, new universities, and any other building needs. Researchers at prominent universities would no longer depend on byzantine grant application processes, but would complete research proposals and receive funding nearly automatically, as research funding requirements are comparatively so low. Similarly, funding and venue support for art installations and performances becomes trivial, and the arts become ubiquitous. Projects like building and maintaining a new fusion reactor for a given region become a matter of organizing, not financing.

(v) Since it is within our power to enact this new humanitarian economy, it becomes a moral[3] question of why we do not.
If the vast majority of democratic participants would affirm the sentiment that “Working conditions are really bad for people earning minimum wage”, and “Life would be better if everyone could afford food and housing”, the question of why we do not implement this or another system like it becomes urgent. The prevailing sentiment and orthodoxy of monetary policy, that sovereign debt matters, that the role of the issuing authority is exercising fiscal restraint, poses a stern ideological barrier to MMT-focused policies and the new economies they enable. However, when we inspect the actions of the same ideologues and who benefits from their policies, we can clearly see that fiscal restraint is code for "benefit people who deserve it" or even more directly, "benefit people close to me". [To some extent we can recover the good of the central bank as custodian of the economy by honoring their honest management of macroeconomic indicators like inflation in the interest of economic stability.] ^ Modern Monetary Theory: https://en.wikipedia.org/wiki/Modern_monetary_theory ^ https://en.wikipedia.org/wiki/Sid_Meier%27s_Alpha_Centauri ^ https://www.lesswrong.com/posts/uA6jWodfoT35jSJNe/?commentId=YFewNLz7DJrqLFd63
jJrD7KZgTNCKdKtmN_The_Humanitarian_Economy.txt
{ "file_size": 11544 }
81594932-102a-4516-b8ac-e761e826afe7
In Scott Garrabrant's excellent Geometric Rationality sequence, he points out an equivalence between modelling an agent as:
- Maximizing the expected logarithm of some quantity V: E[ln(V)]
- Maximizing the geometric expectation of V: G[V]

And as we'll show in this post, we can also prove a geometric version of the VNM utility theorem: an agent is VNM-rational if and only if there exists a function V that:
- Represents the agent's preferences over lotteries: L ≺ M if and only if V(L) < V(M)
- Agrees with the geometric expectation of V: V = G[V]

Which in and of itself is a cool equivalence result: E[U] maximization ⟺ VNM rationality ⟺ G[V] maximization. But it turns out these are just two out of a huge family of expectations we can use, like the harmonic expectation H, each of which has its own version of the VNM utility theorem. We can model agents as maximizing expected utility, geometric utility, harmonic utility, or whatever representation is most natural for the problem at hand.

Expected Utility Functions

The VNM utility theorem says that an agent satisfies the VNM axioms if and only if there exists a utility function U: ΔΩ → R which:
- Represents that agent's preferences over all lotteries: L ≺ M if and only if U(L) < U(M)
- Agrees with its expected value: U = E[U]

Where ΔΩ is the set of probability distributions (lotteries) over outcomes ω ∈ Ω. The first property is easy to preserve. Given any strictly increasing function f,

f(U(L)) < f(U(M)) if and only if U(L) < U(M)

So f∘U also represents our agent's preferences. But it's only affine transformations f(U) = aU + b that preserve the second property, that f(U) = E[f(U)]. And it's only increasing affine transformations that preserve both properties at once.

f-Utility Functions

But what if we were interested in other ways of aggregating utilities? One of the central points of Scott's Geometric Rationality sequence is that in many cases, the geometric expectation G is a more natural way to aggregate utility values V into a single representative number G[V]. We can represent the same preferences using the expected logarithm E[ln(V)], but this can feel arbitrary, and having to take a logarithm is a hint that these quantities V are most naturally combined by multiplying them together. The expectation operator E can emulate a weighted product, but the geometric expectation operator G is a weighted product, and we can model an agent as maximizing G[V] without ever bringing E into the picture. And as scottviteri asks: "If arithmetic and geometric means are so good, why not the harmonic mean? https://en.wikipedia.org/wiki/Pythagorean_means. What would a 'harmonic rationality' look like?" They also link to a very useful concept I'd never seen before: the power mean, which generalizes many different types of average into one family, parameterized by a power p. Set p=1 and you've got the arithmetic mean E. Set p=0 and you've got the geometric mean G. And if you set p=−1 you've got the harmonic mean H. It's great! And I started to see if I could generalize my result to other values of p. What is the equivalent of G[V] = e^{E[ln(V)]} for H? Well, scottviteri set me down the path towards learning about an even broader generalization of the idea of a mean, which captures the power mean as a special case: the quasi-arithmetic mean or f-mean, so called because it's parameterized by a function f. For our baseline definition, f: I → R will be a continuous, strictly increasing function that maps an interval I of the real numbers to the real numbers R.
We're also going to be interested in a weighted average, and in particular a probability-weighted average over outcomes ω ∈ Ω. We'll use the notation ω ∼ P to denote sampling ω from the probability distribution P. Here's the definition of the f-expectation M_f:

M_f[V] = f^{-1}(E[f∘V])

M_{f,ω∼P}[V(ω)] = f^{-1}(E_{ω∼P}[f(V(ω))])

Which for finite sets of outcomes looks like:

M_{f,ω∼P}[V(ω)] = f^{-1}(Σ_{ω∈Ω} P(ω) f(V(ω)))

So for example:
- If I = R and f(V) = V (or any increasing affine transformation f(V) = aV + b, where a > 0), then M_f is the arithmetic expectation M_f = E.
- If I = R_{>0}, the positive real numbers, and f(V) = ln(V) (or any logarithm f(V) = log_b(V) where b > 0 and b ≠ 1), then M_f is the geometric expectation M_f = G. We can also extend f to include f(0) = lim_{V→0} f(V) = −∞, to cover applications like Kelly betting. This will still be a strictly increasing bijection, and I expect it will work with any result that relies on f being continuous.
- If I = R_{>0} and f(V) = −1/V, then M_f is the harmonic expectation M_f = H. Using 1/V would also compute the harmonic expectation, but −1/V is strictly increasing, and that lets us frame our agent as always maximizing a utility function.
- If I = R_{>0} and f(V) = V^p, then M_f is the power expectation M_f = M_p using the power p.

An f-utility function V:
- Represents an agent's preferences: L ≺ M if and only if V(L) < V(M)
- Agrees with the f-expectation of V: V = M_f[V]

It turns out that for every f-utility function V, there is a corresponding expected utility function U, and vice versa. We'll prove this more rigorously in the next sections, but it turns out that these are equivalent ways of representing the same preferences.

Equivalence of Maximization

Here's the core insight that powers the rest of this equivalence result, which Scott articulates here: "Maximization is invariant under applying a monotonic function... So every time we maximize an expectation of a logarithm, this was equivalent to just maximizing the geometric expectation." If we think of an agent as maximizing over π ∈ Π, something under their control like their action or policy, we can write this as:

argmax_{π∈Π} G[V] = argmax_{π∈Π} e^{E[ln(V)]}

argmax_{π∈Π} G[V] = argmax_{π∈Π} E[ln(V)]

So for every geometric utility function V, there is a corresponding expected utility function U = ln(V) which gives the same result when maximized. This equivalence can be generalized to all f-utility functions, and it follows from exactly the same reasoning. We have a strictly increasing function f, and so f^{-1} must be strictly increasing as well:

If V_1 < V_2 ⟹ f(V_1) < f(V_2), then f^{-1}(U_1) < f^{-1}(U_2) ⟹ U_1 < U_2

And so argmax will ignore either one. Let's use that to simplify the expression for maximizing the f-expectation of V:

argmax_{π∈Π} M_f[V] = argmax_{π∈Π} f^{-1}(E[f∘V])

argmax_{π∈Π} M_f[V] = argmax_{π∈Π} E[f∘V]

And this suggests a substitution that will turn out to be extremely useful: U = f∘V.

argmax_{π∈Π} M_f[V] = argmax_{π∈Π} E[U]

There is a function U whose expectation we can maximize, and this is equivalent to maximizing the f-expectation of our f-utility function M_f[V]. And we'll show that U is indeed an expected utility function! Similarly, we can apply f^{-1} to both sides to get a suggestion (which turns out to work) for how to turn an expected utility function into an equivalent f-utility function:

f∘V = U
f^{-1}∘f∘V = f^{-1}∘U
V = f^{-1}∘U
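To make this concrete, here is a small Python sketch (my own; the policies and values are hypothetical) of the f-expectation, numerically checking that maximizing M_f[V] over policies selects the same policy as maximizing E[f∘V], for both the geometric and harmonic cases:

```python
import math

def f_expectation(f, f_inv, probs, values):
    """Quasi-arithmetic mean: M_f[V] = f^-1( sum_w P(w) * f(V(w)) )."""
    return f_inv(sum(p * f(v) for p, v in zip(probs, values)))

# f and f^-1 for the geometric (f = ln) and harmonic (f = -1/V) expectations.
MEANS = {"G": (math.log, math.exp),
         "H": (lambda v: -1.0 / v, lambda u: -1.0 / u)}

probs = [0.5, 0.3, 0.2]
# Two hypothetical policies, each inducing a profile of (positive) V-values.
policies = {"a": [4.0, 1.0, 2.0], "b": [3.0, 3.0, 2.0]}

for name, (f, f_inv) in MEANS.items():
    m_f = {pi: f_expectation(f, f_inv, probs, vs) for pi, vs in policies.items()}
    e_u = {pi: sum(p * f(v) for p, v in zip(probs, vs))  # E[U] with U = f(V)
           for pi, vs in policies.items()}
    # Maximizing M_f[V] and maximizing E[f(V)] select the same policy,
    # because f^-1 is strictly increasing.
    assert max(m_f, key=m_f.get) == max(e_u, key=e_u.get)
    print(name, {pi: round(x, 3) for pi, x in m_f.items()})
```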
Given an expected utility function U, we'll define V to be

V = f⁻¹∘U

We know from the VNM expected utility theorem that

U = E[U]

Let's plug both of those into the definition of M_f:

M_f[V] = f⁻¹(E[f∘V])
M_f[V] = f⁻¹(E[f∘f⁻¹∘U])
M_f[V] = f⁻¹(E[U])
M_f[V] = f⁻¹(U)
M_f[V] = V

So V agrees with M_f[V]. And since f is strictly increasing, and U represents an agent's preferences, so does V.

L≺M if and only if V(L)<V(M)

Which means V is an f-utility function! This gives us one half of the VNM theorem for f-utility functions. If an agent is VNM rational, their preferences can be represented using an f-utility function V.

Expected Utility Functions Correspond to f-Utility Functions

We can complete the duality by going the other way, starting from an f-utility function V and showing there is a unique corresponding expected utility function U. We'll define U to be:

U = f∘V

And we'll plug that and the fact that V=M_f[V] into the definition of M_f[V].

M_f[V] = f⁻¹(E[f∘V])
M_f[V] = f⁻¹(E[U])
V = f⁻¹(E[U])
f∘V = f(f⁻¹(E[U]))
f∘V = E[U]
U = E[U]

And that's it! Starting from an f-utility function V, we can apply a strictly increasing function f to get an expected utility function U which represents the same preferences and agrees with E[U]. This also gives us the other half of the VNM theorem for f-utility functions. If an agent's preferences can be represented using an f-utility function V, they can be represented using an expected utility function U, and that agent must therefore be VNM-rational.

f as a Bijection of Utility Functions

So for every f-utility function V, we can apply f and get an expected utility function U. And the same is true in reverse when applying f⁻¹. Does this translation process have any collisions in either direction? Are there multiple f-utility functions V and W that correspond to the same expected utility function U, or vice versa?

It turns out that f creates a one-to-one correspondence between f-utility functions and expected utility functions. And a consequence of that is that all of these languages are equally expressive: there are no preferences we can model using an f-utility function that we can't model using an expected utility function, and vice versa.

Another way to frame this duality is to say that our translation function f:I→R is a bijection between its domain I and its image f(I). And this induces a structure-preserving bijection between utility functions U:ΔΩ→R and f-utility functions V:ΔΩ→I.

V = f⁻¹∘U
U = f∘V

To show this, we can show that f is injective and surjective between these sets of utility functions.

f is Injective

An injective function, also known as a one-to-one function, maps distinct elements in its domain to distinct elements in its codomain. In other words, injective functions don't have any collisions. So in this case, we want to show that given two distinct f-utility functions V and W, f∘V and f∘W must also be distinct.

V≠W ⟹ f∘V≠f∘W

Since V and W are distinct f-utility functions, they must disagree about some input ω.

V(ω)≠W(ω)

And since f is strictly increasing, it can't map these different values in I to the same value in R.

f(V(ω))≠f(W(ω))

And thus f∘V must be a distinct function from f∘W.

f is Surjective

A surjective function maps onto every element of its codomain: every element of the codomain is the image of some element of the domain. These functions are also called "onto" functions. So in this case, we want to show that given an expected utility function U, there is an f-utility function V such that f∘V=U. And this is exactly the f-utility function that f⁻¹ picks out.

V = f⁻¹∘U
f∘V = f∘f⁻¹∘U
f∘V = U

And that's it!
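These identities are easy to spot-check numerically. Here's a small round-trip sketch with the geometric f = ln (again a toy example of mine, not from the post):

import math

f, f_inv = math.log, math.exp

probs  = [0.25, 0.75]
U_vals = [0.0, 2.0]                  # an expected utility function on two outcomes
V_vals = [f_inv(u) for u in U_vals]  # V = f^{-1} o U, here V = e^U

# M_f[V] = f^{-1}(E[f o V]) = f^{-1}(E[U]), i.e. V evaluated on this lottery:
E_U  = sum(p * u for p, u in zip(probs, U_vals))            # 1.5
M_fV = f_inv(sum(p * f(v) for p, v in zip(probs, V_vals)))  # e^1.5
assert abs(M_fV - f_inv(E_U)) < 1e-12

# Applying f recovers U exactly (f o V = U), so the translation loses nothing:
assert all(abs(f(v) - u) < 1e-12 for v, u in zip(V_vals, U_vals))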
f induces a one-to-one correspondence between expected utility functions U and f-utility functions V. We can freely translate between these languages and maximization will treat them all equivalently.

Composition

I also want to quickly show two facts about how f-expectations combine together:

The f-expectation of f-expectations is another f-expectation: M_f[M_f[V]] = M_f[V]
The weights combine multiplicatively, as we'd expect from conditional probabilities: analogous to P(A∧B)=P(A)P(B|A)

All of this is going to reduce to an expectation of expectations, so let's handle that first. Let's say we have a family of n probability distributions P_i(ω) and expected utility functions U_i. And then we sample i∈[1..n] from a probability distribution I'll suspiciously call ψ.

E_{i∼ψ}[E_{ω∼P_i(ω)}[U_i(ω)]] = E_{i∼ψ}[∑_{ω∈Ω} P_i(ω) U_i(ω)]
E_{i∼ψ}[E_{ω∼P_i(ω)}[U_i(ω)]] = ∑_{i∈[1..n]} ψ_i (∑_{ω∈Ω} P_i(ω) U_i(ω))
E_{i∼ψ}[E_{ω∼P_i(ω)}[U_i(ω)]] = ∑_{i∈[1..n]} ∑_{ω∈Ω} ψ_i P_i(ω) U_i(ω)
E_{i∼ψ}[E_{ω∼P_i(ω)}[U_i(ω)]] = ∑_{(i,ω)∈[1..n]×Ω} ψ_i P_i(ω) U_i(ω)

Taking the expectation over i of the expectation over P_i(ω) is equivalent to taking the expectation over pairs (i,ω).[1]

P(i,ω) = ψ_i P_i(ω)
E_{i∼ψ}[E_{ω∼P_i(ω)}[U_i(ω)]] = E_{(i,ω)∼P(i,ω)}[U_i(ω)]
E[E[U]] = E[U]

This is one way to frame Harsanyi aggregation. Sample an agent according to a probability distribution ψ, then evaluate their expected utility using that agent's beliefs. The Harsanyi score is the expectation of expected utility, and the fact that nested expectations can be collapsed is exactly why aggregating this way satisfies the VNM axioms. The Harsanyi aggregate is VNM rational with respect to the conditional probability distribution P(i,ω). Knowing that, the general result for f-expectations is even easier:

M_{f,i∼ψ}[M_{f,ω∼P_i(ω)}[V_i]] = f⁻¹(E_{i∼ψ}[f∘M_{f,ω∼P_i(ω)}[V_i]])
M_{f,i∼ψ}[M_{f,ω∼P_i(ω)}[V_i]] = f⁻¹(E_{i∼ψ}[f∘f⁻¹(E_{ω∼P_i}[f∘V_i])])
M_{f,i∼ψ}[M_{f,ω∼P_i(ω)}[V_i]] = f⁻¹(E_{i∼ψ}[E_{ω∼P_i}[f∘V_i]])
M_{f,i∼ψ}[M_{f,ω∼P_i(ω)}[V_i]] = f⁻¹(E_{(i,ω)∼P(i,ω)}[f∘V_i])
M_{f,i∼ψ}[M_{f,ω∼P_i(ω)}[V_i]] = M_{f,(i,ω)∼P(i,ω)}[V_i]
M_f[M_f[V]] = M_f[V]

So for example, Scott motivates the idea of Kelly betting as the result of negotiating between different counterfactual versions of the same agent. In that framing, G[V] naturally captures "Nash bargaining but weighted by probability." If we geometrically aggregate these geometric expected utilities G[G[V]], the result is the same as one big geometric aggregate over all counterfactual versions of all agents, weighted by P(i,ω)=ψ_i P_i(ω). And the fact that we can model this aggregate as maximizing G[V] means that it's VNM-rational as well!

This is a very cool framing, and there are some phenomena that I think are easier to understand this way than using the expected utility lens. But since Geometric Rationality is Not VNM Rational, we know that the G[G[V]] model won't have all the cool features we want from a theory of geometric rationality, like actively preferring to randomize our actions in some situations.

Conclusions

With all of that under our belt, we can now reiterate the f-Utility Theorem for VNM Agents. Which is that an agent is VNM-rational if and only if there exists a function V:ΔΩ→I that:

Represents the agent's preferences over lotteries: L≺M if and only if V(L)<V(M)
Agrees with the f-expectation of V: V = M_f[V]

We can model a VNM-rational agent as maximizing expected utility E[U], geometric expected utility G[V], harmonic expected utility H[V], or any other f-expectation M_f[V] that's convenient for analysis. We can translate these into expected utility functions, but we can also work with them in their native language.

We can also think of this equivalence as an impossibility result.
We may want to model preferences that violate the VNM axioms, like a group preference for coin flips in cases where that's more fair than guaranteeing any agent their most preferred outcome. And the equivalence of all these representations means that none of them can model such preferences.

One approach that does work is mixing these expectations together. Our best model of geometric rationality to my knowledge is G[E[U]]: the geometric expectation of expected utility. See Geometric Rationality is Not VNM Rational for more details, but the way I'd frame it in this sequence is that the inner expectation means that the set of feasible joint utilities F is always convex.

[Figure: a flat Pareto frontier]

And maximizing the geometric expectation, aka the geometric aggregate, always picks a Pareto optimum, which is unique as long as all agents have positive weight.

[Interactive version here]

Check out the main Geometric Utilitarianism post for more details, but I think of these equivalence results as telling us what we can do while staying within the VNM paradigm, and what it would take to go beyond it.

^ We went through the proof for discrete probability distributions here, but E[E[U]]=E[U] holds for all probability distributions.
aQoKLy9wqgpFuG3E7_Expected_Utility,_Geometric_Util.txt
{ "file_size": 16021 }
1d9c82fe-d1bb-4a02-a4bb-95417b336f26
[ This is supposed to be a didactic post. I'm not under the impression that I'm saying anything genuinely new. Thanks to Stephen Wolfram. ]

I'm about an hour into the Yudkowsky-Wolfram discussion [AI-generated transcript from which I'm quoting]. Wolfram thinks we should not particularly fear AI doom. I think he is wrong.

It seems to me like the cause of Wolfram's hesitancy to buy into the AI doom idea is a premise that theories with only non-mentalistic atoms are the only valid or allowed theories of how the world works. This is a false premise, but I want to get into what I mean by a premise that theories containing mentalistic atoms are not allowed, and how it seems to be constraining the set of claims Wolfram allows himself to make, before I say why it is a wrong premise.

Examples of what I mean:

Example 1 | Wolfram doubts the idea that it's possible to measure intelligence on a single unified scale or axis.

[Wolfram:] So the question then is, you know, one thing one could take the point of view is there’s this kind of single index of smart. I mean, I think people, you know, in the 1930s, people wanted to invent kind of an index of general intelligence. They called it g for humans, which I’ve never really believed in [ . . . ] There are some things I’m pretty good at doing where I feel I’m pretty smart. There are other things where I know I’m pretty dumb, and it’s... it’s kind of... it’s not really a, a sort of single index.

Wolfram grants that it's possible for individuals to be "smarter" than other individuals, or able to out-predict or beat them, in local, individual cases. We all know that constructing a single-axis scale for any attribute we care to name is logically possible. What Wolfram must be objecting to is the idea that such an axis might be objective for intelligence - where of course we could rank, e.g., possible states of a system, objectively by, say, kinetic energy.

So Wolfram grants that we can receive atoms of sense-data - individual facts of experience such as whether Alice can beat Bob at chess - that tell us something about intelligence. And it's trivial that of course we can create a single intelligence axis that has Alice and Bob and everyone else somewhere on it, whether or not that axis has anything to do with reality. What Wolfram must be objecting to is the idea that we can, from our individual atoms of sense-data, construct - via Bayesian backward-chaining induction blah blah or whatever else - a logical theory of how those individual atoms of sense-data are related, that contains such terms in it as whether Alice is objectively smarter than Bob.

Example 2 | Wolfram is implicitly skeptical that claims about consciousness can generally have objective truth-or-falsity values beyond the physical substrates that we have direct empirical experience of actually being conscious in, ourselves.

[Wolfram:] Right. It’s a reasonable, you know, piece of scientific induction, extrapolation [that other humans besides oneself can be conscious]. But what I’m curious about is, if you... If you say the only thing that can be conscious is something that has that exact same design [as the human brain] [ . . . ]

[Yudkowsky:] I don't say that.

[Wolfram:] Okay. So what... So where’s the boundary?

[ Later: ]

[Wolfram:] I’m big on immortality [ . . . ] kind of shocking that cryonics hasn’t worked yet [ . . . ] to cool down [water] without expanding [ . . .
] I’m just sort of curious from your sort of moral compass point of view, if immortality is achievable, but only digitally, what? How do you feel about that? I mean, in other words, if you’re, if you’re going to [ . . . ] start having your sort of backup AI, maybe you gradually, you know, your sort of thread of consciousness gradually migrates from being in your brain to being in your AI, and eventually, you know, your brain fails for some biological reason, and then it’s all the AI. I’m, I’m curious how you feel about that, whether you, whether you feel that that is a, a kind of an appropriate, kind of fun‑preserving outcome, or whether you think of that as being a, kind of a fun‑destroying outcome, so to speak.

In the top exchange, Wolfram recognizes that it's reasonable to assume that one "meat" human may be conscious, from having been conscious as a "meat" human oneself. In the bottom quote, Wolfram talks about immortality through cryonics as being only a physical and not a philosophical problem; there does not seem to be particular doubt in Wolfram's mind that the person who woke up after successful cryonic preservation, despite the substrate having undergone some change in the meantime, would be him.

When Wolfram talks about "immortality through uploading", he asks how one might choose to feel about it - not how to make it technically feasible. I think this is because he is modelling the question of "immortality" achieved on a silicon substrate as an implicitly already-ceded technical lost cause as far as whether he can know whether his actual consciousness is preserved. The method he describes sure sounds more like someone training a chatbot to "replace" them than anything that would actually allow someone to wake up in a simulated environment.

The only thing that I can see as plausibly making Wolfram talk differently about immortality-through-cryonics vs immortality-through-uploading is that, on his model of the world, you can draw really sound conclusions about what will happen to a consciousness if there is a physical through-line - but not otherwise.

Example 3 | Wolfram doubts that theories making statements about what is or isn't valuable can be scientific.

[Yudkowsky:] The Earth... The, the universe gets a little darker every time a bullet gets fired into somebody’s head, or they die of old age, even though the atoms are still doing their atom things.

[Wolfram:] Right. Okay. So this is [ . . . ] Viscerally, I agree with you. Scientifically, I have a bit of a hard time. I mean, in a sense, that, that feels, that feels like a very kind of spiritual kind of statement, which is not necessarily bad, but it’s just worth understanding what kind of a thing it is. I mean, it, it is saying that there’s something very, a kind of sacred thing about these attributes of humans [ . . . ]

[Wolfram:] [ . . . ] ethics is not a scientific field. You know, it’s about how we humans feel about things, and we humans could feel this way. We could feel that way. It’s, it’s to do with the nature of us as humans, and we could, you know, we could scientifize those statements by saying, “Let’s do an fMRI and notice why do you say that? Oh, your such and such lobe lights up.” But I don’t think that’s a particularly useful thing to say. I mean, I think, you know, I think it is a fair statement that this is, you know, it, it is a, a thing that we can capture that humans feel that this should happen or not happen, whatever else. But I guess that the, the, the... you know, there’s [ . . .
] one question is what’s the right thing to have happen? I don’t think there’s any abstract way to answer that. [ . . . ] I can imagine even humans who say, “No, no. You know, the planet is much more important than the humans,” for example. “Anything the humans do on the planet that messes up the planet, you know, get rid of the humans. We just want the, you know, the planet is more important.”

As I understand his usage here, what Wolfram is using the word "scientific" to mean is "the class of abstract theories about reality that can have objective soundness-and-validity" - i.e. the class of abstract theories which one can use to inspect and generate complicated statements like, "The Earth is an oblate spheroid most cheaply modeled as orbiting a point inside the Sun", or "A whale is a kind of fish", or "You should vote your values in elections, no matter what you first-order expect other people to do" - and make an objective call as to whether those statements say something true about the real world.

Wolfram acknowledges that we can observe individual instances of agents or groups of agents caring more about one thing than another, or having an objective preference as to what should happen. He says that he feels an intuitive pull toward making value claims [as though they could be true or false]. But - implicitly - while in science you can run real experiments to test the soundness of a theory, in philosophy you can only run thought experiments. And seemingly you can run any thought experiments, each as "real" as the last. He doubts that anything can be proven, about the soundness of any particular theory of ethics. So he doubts that we can know abstract ethical truths, or even truths about how agents «should» behave, in general.

So why am I saying this premise is false? How can we know anything about the soundness of theories we can only test in our minds, instead of in physical reality?

Well, I object to the presumption of guilt. As not-yet-very-grown-up humans, our minds are weak. They're lossy and full of biases which make our thought experiments and exercises in first-order "moral logic" routinely yield conclusions recognizable as invalid or even repugnant to outside observers. Our weak human minds do not appear to be as large as the rest of reality [it's unclear to me what it would even mean, for a mind to be larger than the reality it existed in].

But our minds perceive atomically mental objects. When we make decisions, we ask what will happen to various hypothetical future versions of ourselves who make different choices. Most often we don't model reductionist physics while doing this, but instead use a mentalistic framework in which other people are treated as copies of ourselves in different positions or with some modified attributes, like being gay instead of straight. ["What would I do, if I was gay? Or at least stuck on pretending to be gay? I'd be trying to date Chad instead of Becca, or at least pretending to. So even though Gay Darrell is going to be there, I, Aaron, shouldn't waste time trying to shoo him off Becca . . . "]

We know there are some decisions we shouldn't make, because they will lead other people to take advantage of us or to dislike us. We rely on this knowledge to act in the real world. If we don't, we know bad things will happen to us. It's grounded knowledge. How do we obtain it? We deduce it in our minds. Yes, we learn new things all the time about people who prefer new types of things, or different epistemic states they might be in.
But the central engine telling us what is a good decision is just asking: "What would I do?" When we've stopped the information-gathering step, we don't need any experience to tell us just "what we would do". We deduce it. So there is a seed of valid-and-sound mental-atoms logic, at least within each individual.

Objective truths are usually taken as being socially share-able. There doesn't seem to be much sense in calling my theory-with-mental-atoms "objective" if it only describes my reality, and doesn't say anything about yours at all. What would the word be adding?

People do, in fact, agree on theories-with-mental-atoms. In [what I've listened to of] the podcast [so far], Wolfram quotes "We hold these truths to be self-evident . . ." Eliezer then says "they don't need to be self-evident". But if they're axioms, they do - at least among the cult that makes them mean something.

This is LessWrong - one is supposed to "explain, not to persuade", and to follow various other norms. If I come up with a counterexample to someone's post from U.S. politics, I'm probably going to rephrase it, thinking something like "politics is the mind-killer, and people on here really don't like people who go around being a mind-killer because that's against the project of all becoming more rational together, so this is worth the effort". "Politics is the mind-killer" isn't true "out there, in the physical world" - but it's nonetheless true in a way that's grounded, "objectively".

Hofstadter's Tortoise remarks that "politicians lie" is an "[obviously] valid utterance", contrasting it with "politicians lie in cast-iron sinks". E.T. Jaynes likewise contrasts the 'obviously' correct "knowledge is power" with the 'obviously' absurd "[social] power is knowledge". Where did the shared sense of the truth [or falsity] of these statements come from? Where does the shared sense that the sky is blue come from? We don't know the full causal story, but we can all agree we see the puzzle piece. And we can try to fit it into a larger theory. And since we can - however imperfectly - hold each other's minds in our own, we don't absolutely need a shared non-mentalistic experimental setup, to do shared thought experiments, and agree that the results came out favoring one theory or another.

None of this implies superintelligences will share our values, because in all of the ways that are contingent on what "value" looks like, a superintelligence cannot be modeled as a copy of us very well at all. A valid theory-with-mentalistic-atoms allows for copies of us that are so modified in their utility function area that they no longer follow most of our derived rules about deontics. The most we can predict about them is that they will observe the same physical reality, and seek certain things instrumentally-convergently. And a valid theory-with-mentalistic-atoms allows for these things to be arbitrarily smart, if the appropriate stuff happens to make them so.

It seems common for people trying to talk about AI extinction to get hung up on whether statements derived from abstract theories containing mentalistic atoms can have objective truth or falsity values. They can.
And if we can first agree on such basic elements of our ontology/epistemology as that one agent can be objectively smarter than another, that we can know whether something that lives in a physical substrate that is unlike ours is conscious, and that there can be some degree of objective truth as to what is valuable [not that all beings that are merely intelligent will necessarily pursue these things], it in fact becomes much more natural to make clear statements and judgments in the abstract or general case about what very smart non-aligned agents will in fact do to the physical world.
xz3kway2rhbCabEpF_Theories_With_Mentalistic_Atoms_.txt
{ "file_size": 14364 }
75093d4a-7414-463e-b388-efd801c7b9e0
Quick check: do you agree or disagree with the following statement:

If a study finds a result significant at a p=0.05 level, that means they have followed a methodology which produces this conclusion correctly 95 % of the time.

Yes or no? Keep that in mind, and we’ll get back to it.

I’m reading the Fisher book where he popularised the p-value[1], and I noticed he’s actually quite sensible about it:

The value for which P=0.05, or 1 in 20, is 1.96 or nearly 2; it is convenient to take this point as a limit in judging whether a deviation is to be considered significant or not. Deviations exceeding twice the standard deviation are thus formally regarded as significant. Using this criterion we should be led to follow up a false indication only once in 22 trials, even if the statistics were the only guide available.

He is talking here about the normal distribution, and saying that if you have two dice that somehow generate numbers from the normal distribution, and you get an unexpectedly large value from the d6, you should double check that you are not accidentally throwing the d20. Makes complete sense. It turns out that many situations in hypothesis testing are equivalent to “Wait, am I still holding a d6?”, so this is a useful rule of thumb. (The short numerical check at the end of this post verifies Fisher’s arithmetic.)

But! In science there are so many other things that can go wrong. For example, Bakker and Wicherts[2] found that 15 % of the studies they looked at drew the wrong conclusion due to making dumb mistakes in computing the significance threshold. Think about that! The significance test pales in comparison. Regardless of what level of significance is used in the hypothesis test, regardless of accuracy effects from selection pressure, the base rate of getting the most fundamental maths of the last step right is only 85 %.

Then other problems are piled on top of that[3], so no, a significant result at p=0.05 means nothing. It’s just a sign that you might be holding another die and it is time to double-check. (E.g. through replication, or further investigation.)

^ Statistical Methods for Research Workers; Fisher; Oliver & Boyd; 1925.
^ The (mis)reporting of statistical results in psychology journals; Bakker, Wicherts; Behaviour Research Methods; 2011.
^ Consider for example the Forbes report of 88 % of spreadsheets containing errors.
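As a postscript, Fisher's "once in 22 trials" is just the two-sided tail mass of the normal distribution beyond two standard deviations. A quick Python check (my own addition, assuming SciPy is available):

from scipy.stats import norm

# Two-sided tail probability beyond k standard deviations of a normal:
p_at_196 = 2 * norm.sf(1.96)  # ~0.0500 -> 1 in 20, the p=0.05 convention
p_at_2   = 2 * norm.sf(2.0)   # ~0.0455 -> about 1 in 22, Fisher's round figure

print(1 / p_at_196)  # ~20.0
print(1 / p_at_2)    # ~22.0

Which, per the rest of the post, only bounds one narrow failure mode; it says nothing about all the other ways a study can go wrong.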
CDcfWQyBgZ98MfybM_The_lying_p_value.txt
{ "file_size": 2314 }
f1fb2eb5-f3dd-4472-b704-5bd9c338184a
Hello everyone! Occasional lurker, first time poster here (working in government tech ops, writer in my free time). I learned a lot from this forum when I wrote an exploratory piece about 6 key AI variables to watch last October. Since then, I've taken a deeper dive into how quickly occupations typically change each decade (going all the way back to 1870!) to get a better sense of what a reasonable timescale for AI-driven automation of labor looks like.

There was an interesting debate between Ajeya Cotra and Jason Crawford a few weeks ago about whether this would take years or decades. I take Jason's position on this (decades), but that still leaves me predicting change much more quickly than the official U.S. government projections, which I've found to be extremely conservative, especially given predictions like Leopold Aschenbrenner's re: AGI in 2027.

My most recent piece focuses on three occupations I think might actually grow faster even if we see powerful AI emerge in the next decade. I see these ones as archetypes of the three types of work I could see continuing to exist for some time to come:

Jobs where people are needed to be personally accountable for factual accuracy in fields where there will continue to be emerging knowledge (e.g. biologists, legal experts, etc.)
Jobs where having a human is inherently valuable to the customer (therapists/counselors, and personal service workers among others)
Jobs that require significant management decisions to be made by a directly accountable individual (managers, whether directly by regulation or downstream of it, or entrepreneurs)

For now, I've left out "jobs that require physical work, specific expertise and can perform in unpredictable environments" as I see this as a bit of a question mark over the very long run. But I'm interested in doing a more specific breakdown of this sort of work in the future. This list is perhaps a broader one than the one that @Roger Dearnaley proposed a few months ago, but I'm interested in further modeling this out in a way that hopefully helps improve official estimates and government readiness for the possible job market impacts of AGI (e.g. in these three occupations).

I'd love to connect with anyone else who is working in this space, or is aware of any funding available for this type of research. I'd also appreciate any feedback on how I've broadly categorized these AGI-proof jobs, or about another hypothesis that I have: that "drop-in remote workers" could be a bigger threat to new workers than experienced ones; organizational dynamics might mean the complete lack of hiring could have a bigger effect on employment, at least for the first few years (given rising youth unemployment rates, we may already be seeing this dynamic at play).
yBfJAFgDvYeorLmiD_Modeling_AI-driven_occupational_.txt
{ "file_size": 2768 }
258385c7-ac0f-471e-aa80-4c1490cd9193
A brief summary and a link to the full document may be found here: http://philosofer123.wordpress.com

Constructive feedback is welcome.
m9mpuLxvqhktPJwca_How_to_Live_Well___My_Philosophy.txt
{ "file_size": 137 }
943cab30-1c04-40d8-ab68-0db1528a41e2
Each year (2014, 2015, 2016, 2017, 2018, 2019, 2023) I put out a list of how many dance weekends, festivals, camps, and long dances contra bands and callers are doing. I don't really know why I do this, but it's about an hour's work on top of what I'm already collecting for trycontra.com/events, so I might as well keep doing it!

In 2023 I saw that the total number of events (107) was down 20% from 2019 (132), where a lot didn't come back from the pandemic. This year we're back up to pre-pandemic levels, with 131, which is great to see!

Bands

River Road: 9
Countercurrent: 8
Playing with Fyre: 8
Drive Train: 7
Hot Coffee Breakdown: 7
The Engine Room: 7
Stomp Rocket: 6
Toss the Possum: 6
Wild Asparagus: 6
The Dam Beavers: 6
Eloise &co: 5
Elixir: 4
Kingfisher: 3
Nova: 3
Open Band: 3
Stove Dragon: 3
The Mean Lids: 3
The Moving Violations: 3
The Stringrays: 3
The Syncopaths: 3
3 Wheel Drive: 2
Audacious: 2
Contra Sutra: 2
Contraforce: 2
Contrasaurus: 2
Good Company: 2
Joyride: 2
Lake Effect: 2
Lighthouse: 2
Meadowhawk: 2
Notorious: 2
Pimento Mori: 2
Raise the Roof: 2
Riptide: 2
Rushfest: 2
Supertrad: 2
The Faux Paws: 2
The Ice Cream Truckers: 2
The Latter Day Lizards: 2

Callers

Will Mentor: 19
Gaye Fifer: 15
Alex Deis-Lauby: 13
Lisa Greenleaf: 12
Bob Isaacs: 10
Seth Tepfer: 9
George Marshall: 8
Susan Petrick: 7
Terry Doyle: 7
Cis Hinkle: 5
Janine Smith: 5
Lindsey Dono: 5
Mary Wesley: 5
Nils Fredland: 5
Adina Gordon: 4
Maia McCormick: 4
Steve Zakon-Anderson: 4
Wendy Graham Settle: 4
Ben Sachs-Hamilton: 3
Charlie Turner: 3
Darlene Underwood: 3
Devin Pohly: 3
Emily Rush: 3
Isaac Banner: 3
Jacqui Grennan: 3
Koren Wake: 3
Michael Karcher: 3
River Rainbowface Abel: 3
Scott Higgs: 3
Chris Page: 2
Claire Takemori: 2
Dereck Kalish: 2
Donna Hunt: 2
Frannie Marr: 2
Jeremy Korr: 2
Katie Zanders: 2
Liz Nelson: 2
Louise Siddons: 2
Open Calling: 2
Open calling: 2
Rick Mohr: 2
Sarah Kaiser: 2
Susan Kevra: 2
Susan Michaels: 2
Warren Doyle: 2

Comment via: facebook, lesswrong, mastodon
hecJynmtxhe2L8hCC_Festival_Stats_2024.txt
{ "file_size": 1912 }
c0a1ba18-c3e6-404b-b5a2-de226fcbf17d
People say "I think" a lot. Here are some examples: I think you brought me the wrong order.I think the numbers in the report are wrong.I think you need to turn left at the light.I think we need to replace the whole water heater.I think iPhones are better than Android phones.I think you should quit your job and start a business.I think that kids shouldn't have any screen time before the age of five.I think everyone should take vitamin D supplements. I don't think that it is always problematic to use that phrase. In fact, I think that it is often appropriate. However, I also think that most people would benefit from tabooing the phrase "I think" in various situations. Actually no, let me rephrase that: I am moderately confident that a large majority of people who engage in somewhat intellectually serious discussion would benefit from tabooing the word "think" a moderate amount more frequently than they currently do. Accuracy One potential issue with saying "I think" is that it's just not clear what you mean. Someone saying "I think the numbers in the report are wrong" might be saying that they're 99.9% confident that the numbers are wrong. On the other hand, someone saying that they think you should quit your job to start a business might only be 20% confident that you should in fact go down that path. On first approximation, something seems really broken here. 99.9% and 20% are very different levels of confidence. Why would we allow a phrase ("I think") to point to such wildly different underlying concepts ("99.9% confident" vs "20% confident")? Well, here's the thing: context can be very powerful. Like when a diner says to a server, "I think you brought me the wrong order", in that context the server is not going to wonder to themself: "Hm, is this person saying that they are 30% confident that the order is wrong? 75%? 99.99?". No. They are going to correctly assume that the diner is communicating that they are almost positive that the order is wrong. With that said, I don't expect that the phrase "I think" will usually lead to meaningful miscommunications about accuracy. I think that humans have mostly figured it out and that in most situations, context is enough to allow for effective communication. But that doesn't mean that there isn't room for improvement. I estimate that there are in fact many situations where using a phrase like "I think" leads to meaningful miscommunications about accuracy. For example, if Alice tells Bob that she thinks he should quit his job and start a business, I could imagine Bob taking it as "Alice is being serious here and is probably 90%+ confident in making this recommendation" when in reality Alice actually meant something more like "I'm just thinking out loud and get a vague sense that doing this will be better for you, but there are a lot of factors that determine whether it actually is better for you and I haven't spent nearly enough time considering them all to be more than 30% confident in this recommendation". I can't think of a good way to make a real argument here though. I'm not even sure what the specific claim is that I want to make. It's something along the lines of what I said earlier about how the phrase "I think" leads to meaningful miscommunications about accuracy. It's hard to say what I mean by "meaningful" though. It's hard to say how often I expect this sort of thing to happen, and how much damage I expect it to cause. And even if I can zero in on a specific claim, I'm not sure how I would make an argument for it. 
Make a copy of my life experience and upload it into your brain? No. I guess a way to approximate that would be to come up with a bunch of good examples that serve as effective intuition pumps, or to link to 250 concrete examples of this plausibly happening in e.g. LessWrong discussions, but I'm struggling to come up with such examples. So with all of that said, I think the statement I want to make to you, the reader, is that you should consider tabooing the phrase "I think".

Identity

Paul Graham wrote an essay called Keep Your Identity Small. The idea is that, in my own words, people become dumb when they talk about things that are a part of their identity. Think about a diehard Cowboys fan arguing that their team will win the Super Bowl, a socialist arguing about tax policy, or one of those people from JW.org standing on the street arguing about God.

Using terminology from Julia Galef's book The Scout Mindset, because their identity is tied up in these things, they act like soldiers instead of scouts, fighting for their current beliefs instead of being open to go wherever the best arguments take them. Alternatively, using terminology from Tim Urban's book What's Our Problem, when beliefs become part of your identity it pushes you further down the vertical axis, leading to you approaching things less like a scientist and more like a zealot.

I think we can agree that we want to be like scientists, not zealots. Scouts, not soldiers. That we want to be open to wherever Bayes' Theorem takes us and not let our identities get in the way. In pursuit of these ends, I think we need to keep a somewhat close eye on our use of language. I have the post Rationality and the English Language in mind here:

If you really want an artist’s perspective on rationality, then read Orwell; he is mandatory reading for rationalists as well as authors. Orwell was not a scientist, but a writer; his tools were not numbers, but words; his adversary was not Nature, but human evil. If you wish to imprison people for years without trial, you must think of some other way to say it than “I’m going to imprison Mr. Jennings for years without trial.” You must muddy the listener’s thinking, prevent clear images from outraging conscience. You say, “Unreliable elements were subjected to an alternative justice process.”

...

My point is not to say that journal articles should be written like novels, but that a rationalist should become consciously aware of the experiences which words create. A rationalist must understand the mind and how to operate it. That includes the stream of consciousness, the part of yourself that unfolds in language. A rationalist must become consciously aware of the actual, experiential impact of phrases, beyond their mere propositional semantics.

Circling back to the phrase "I think", I worry that this phrase not infrequently muddies our thinking. More specifically, I worry that it happens when we say "I think" in order to communicate a low confidence belief.

Let's take "I think everyone should take vitamin D supplements". This is actually a belief that I hold. Well, "everyone" is probably too strong, but let's roll with it. This belief of mine is a relatively low confidence one. Back when Covid was more prevalent I would read about this stuff, and I kinda sorta remember reading that vitamin D is something that 1) many people are deficient in, 2) such deficiencies are actually problematic, and 3) supplements don't really have any (physical) downsides.
But my memory is fuzzy, those assumptions aren't ones I spent much time investigating, and I haven't thought through what other factors might be worth considering, nor how much weight those factors have. Nor have I battle tested my beliefs in discussions with smart people, nor have I looked into what the recommendations of (properly incentivized) experts are.

But if I go around telling all my friends "I think everyone should take vitamin D supplements"... I dunno... I worry that I'd become a little bit attached to that belief. Subtly, and unconsciously. I don't want it to be the case, but I suspect that it would be. Both for myself and for humans in general.

I actually make somewhat of a conscious effort to "be the type of person who changes their mind". Like, I'll go out of my way to announce it when I change my mind on something, or just when I update my beliefs at all. I also try to identify as someone who is constantly updating incrementally. So yeah, I make a little bit of an effort to announce these sorts of things. But I'm not perfect. I'm an aspiring rationalist, as they say.

So, with all of that said, I think it probably makes sense -- for myself and most others -- to lean away from using phrases like "I think" when they risk communicating more confidence than you actually have. But I'll give the same hedge that I gave in the previous section. I'm having trouble formulating a specific claim here, and I'm having trouble making a real argument. And so I guess what I'm looking to do in this post is kinda just to bring up the topic, vaguely gesture at a claim, describe some feelings about that claim that hopefully serve as a little bit of an intuition pump, see if those feelings resonate with the reader, and propose that the reader consider tabooing the phrase "I think" more frequently.

Alternatives

Ok, so suppose that you are convinced. Suppose that you buy what I'm selling and want to start tabooing the phrase "I think" more often. How can you do that? Well, I'm not totally sure, but I'll take a stab at it.

To start, I think that it is important to distinguish between statements of belief and statements of value expression. I have Robin Hanson's futarchy in mind here, where the slogan is "vote on values, but bet on beliefs". Since hearing about this idea, the distinction between beliefs and values has always really stood out to me.

A statement about a value is something like "I think that scientific progress is valuable in and of itself". On the other hand, a statement about a belief is something like "I think that spending on medical research improves health outcomes more than spending on preventative care does". In other words, statements of beliefs are predictions.

I don't really see anything problematic about using the phrase "I think" when making statements about value. I guess to be more clear you can say something like "I personally value scientific progress", because the initial statement might mean that you personally value scientific progress, or it might mean that you see it as something that has inherent value.

For statements of belief, I see two approaches for replacing the phrase "I think": quantitative and qualitative. To take the quantitative approach, you can assign a probability. Like, you can say "I think X is 90% likely" or "I'm 90% confident in X". You can also be a little handwavy and say that you're "something like 70-90% confident in X". But putting a number on it can be weirdly difficult.
I feel like it shouldn't be, but I know that I sometimes just can't bring myself to do it. Sometimes I just keep flip-flopping ("90%. No, 70%. No, 80%. No, 65%."). Other times I just can't even bring myself to come up with an initial estimate. At times like these, taking the qualitative approach is a huge help. You can say that you're "pretty confident". You can say that you're "somewhat confident". That you "suspect X". That you "wouldn't be surprised by X". That "X seems plausible". That you "think X is overwhelmingly likely". I'm sure there are a bunch of other good adjectives to throw around.
Ha4Lk6D4eZd2LRTqq_Consider_tabooing_"I_think".txt
{ "file_size": 11127 }
d4028e5a-c4d1-44ad-9594-4a1a160b4b77
This post comes a bit late with respect to the news cycle, but I argued in a recent interview that o1 is an unfortunate twist on LLM technologies, making them particularly unsafe compared to what we might otherwise have expected:

The basic argument is that the technology behind o1 doubles down on a reinforcement learning paradigm, which puts us closer to the world where we have to get the value specification exactly right in order to avert catastrophic outcomes.

RLHF is just barely RL.
- Andrej Karpathy

Additionally, this technology takes us further from interpretability. If you ask GPT4 to produce a chain-of-thought (with prompts such as "reason step-by-step to arrive at an answer"), you know that in some sense, the natural-language reasoning you see in the output is how it arrived at the answer.[1] This is not true of systems like o1. The o1 training rewards any pattern which results in better answers. This can work by improving the semantic reasoning which the chain-of-thought apparently implements, but it can also work by promoting subtle styles of self-prompting. In principle, o1 can learn a new internal language which helps it achieve high reward.

You can tell the RL is done properly when the models cease to speak English in their chain of thought
- Andrej Karpathy

A loss of this type of (very weak) interpretability would be quite unfortunate from a practical safety perspective. Technology like o1 moves us in the wrong direction.

Informal Alignment

The basic technology currently seems to have the property that it is "doing basically what it looks like it is doing" in some sense. (Not a very strong sense, but at least, some sense.) For example, when you ask ChatGPT to help you do your taxes, it is basically trying to help you do your taxes. This is a very valuable property for AI safety! It lets us try approaches like Cognitive Emulation.

In some sense, the Agent Foundations program at MIRI sees the problem as: human values are currently an informal object. We can only get meaningful guarantees for formal systems. So, we need to work on formalizing concepts like human values. Only then will we be able to get formal safety guarantees. Unfortunately, fully formalizing human values appears to be very difficult. Human values touch upon basically all of the human world, which is to say, basically all informal concepts. So it seems like this route would need to "finish philosophy" by making an essentially complete bridge between formal and informal. (This is, arguably, what approaches such as Natural Abstractions are attempting.)

Approaches similar to Cognitive Emulation lay out an alternative path. Formalizing informal concepts seems hard, but it turns out that LLMs "basically succeed" at importing all of the informal human concepts into a computer. GPT4 does not engage in the sorts of naive misinterpretations which were discussed in the early days of AI safety. If you ask it for a plan to manufacture paperclips, it doesn't think the best plan would involve converting all the matter in the solar system into paperclips. If you ask for a plan to eliminate cancer, it doesn't think the extermination of all biological life would count as a success.

We know this comes with caveats; phenomena such as adversarial examples show that the concept-borders created by modern machine learning are deeply inhuman in some ways. The computerized versions of human commonsense concepts are not robust to optimization. We don't want to naively optimize these rough mimics of human values.
Nonetheless, these "human concepts" seem robust enough to get a lot of useful work out of AI systems, without automatically losing sight of ethical implications such as the preservation of life. This might not be the sort of strong safety guarantee we would like, but it's not nothing. We should be thinking about ways to preserve these desirable properties going forward. Systems such as o1 threaten this. ^ Yes, this is a fairly weak sense. There is a lot of computation under the hood in the big neural network, and we don't know exactly what's going on there. However, we also know "in some sense" that the computation there is relatively weak. We also know it hasn't been trained specifically to cleverly self-prompt into giving a better answer (unlike o1); it "basically" interprets its own chain-of-thought as natural language, the same way it interprets human input. So, to the extent that the chain-of-thought helps produce a better answer in the end, we can conclude that this is "basically" improved due to the actual semantic reasoning which the chain-of-thought apparently implements. This reasoning can fail for systems like o1.
BEFbC8sLkur7DGCYB_o1_is_a_bad_idea.txt
{ "file_size": 4673 }
1cb97e4c-a6ad-41d1-afd3-09dca091bd0f
The encampment is empty when you awaken. This isn’t necessarily a bad sign, as you weren’t sure you’d wake up at all: there were a lot of ways your fellow bandits could have taken your impassioned impromptu speech exhorting them to “be a bit more Robin Hood about the whole thing”, and departing in the dead of night was by no means the worst. Between your continued existence, and the surprisingly generous amount of weapons and ammo which were left behind with you, you find yourself feeling rather relieved, and even a little grateful.

These feelings last until you visit the nearest town – hoping to trade bullets for food and a ticket home, so you can forget this sorry chapter of your life – and notice WANTED posters featuring your face on every wall. Your brief and understandably tense interactions with the locals clarify that you have been named as your erstwhile gang’s ringleader, that you are emphatically unwelcome in any of the shops in town, that a team of unstoppable mercenaries will be coming for you in a little under two months, and that “I assumed they were dashing desperados when I joined” is not typically considered an extenuating factor.

Your only remaining option is to forage for food and head home on foot, before the bounty hunters show up. Fortunately, your campsite was chosen for its closeness to an abundance of easy foraging sites, and you’ve already figured out how to safely and reliably preserve and prepare the ingredients sourced from them. Unfortunately, you were too busy cooking to help your fellow bandits forage[1], so you have no idea which sites yielded which yields in what quantities; also, the only viable path back has no opportunities to replenish your stockpile en route. You have sixty days to explore and exploit the sites around the encampment, hunting[2] and gathering enough food to keep yourself alive, and stockpiling enough to reach safety before starving. Good luck!

Notes

This game shouldn’t take more than ~10min to play, runs in-browser, and is not intended to be replayed.
Unlike in the classic bandit problem, outcomes can be affected by chronological effects and/or earlier choices. Figuring out where and how this happens is an intended part of the challenge.
Data Science skills might be useful here, but will not be necessary.
You are warmly encouraged to share and compare completed game records in the comments.
Feedback, as always, is greatly appreciated.

^ You’d originally thought they were asking you to be their chief, and by the time you figured out your mistake it was too awkward to back down.

^ You are sufficiently well-armed that nothing you hunt will be able to harm you. Indeed – you consider, feeling your revolver in its place on your belt between your other revolver and your other other revolver – you could justifiably call your current predicament a multi-armed (ex-)bandit problem.
pox6NtZxDvCgKtbfT_Inferential_Game__The_Foraging_(.txt
{ "file_size": 2906 }
64edd984-a084-4863-9348-fa8564595beb
In our engagements with governments, AI safety institutes, and frontier AI developers, we found the concept of the “evaluation gap” (short: ‘evals gap’) helpful to communicate the current state of the art and what is needed for the field to move towards more robust evaluations. In this post, we briefly explain the concept and its implications. For the purpose of this post, “evals” specifically refer to safety evaluations of frontier models.

Evals have become a prominent tool underpinning governance frameworks and AI safety mechanisms. Given that, we are concerned that policymakers and industry players both (i) overestimate the number of currently available high-quality evals and (ii) underestimate the time it takes to develop them. In our experience, available evals are not sufficient (in quality and quantity) to robustly identify the capabilities of existing and near-future models. We call this overarching idea the evals gap. Unless more focused attention is paid to this gap and efforts diverted to closing it, we expect this trend to continue and, subsequently, the gap to increase.

We think it is possible to close the evals gap. This post serves as a call to action to be more ambitious with evals efforts, e.g. dedicate more resources to the science, development, and running of evals.

Evaluations underpin many high-stakes decisions

Many high-stakes decisions in company-led and government-led frameworks are reliant on the results of evals. In voluntary commitments such as Anthropic’s Responsible Scaling Policy, OpenAI’s Preparedness Framework, and Google DeepMind’s Frontier Safety Framework, mitigation measures and deployment decisions are directly tied to the identification of specific capabilities through evals. While these voluntary policies differ between companies, some consensus on best practices, including the role of evals, is beginning to be reached through e.g. the Frontier AI Safety Commitments. Evals are also a core part of legislation and governance frameworks, such as the EU AI Act or the US Executive Order.

We think evals are an important tool for AI governance and AI safety, and we think it is good that governments and AI companies use them in their safety frameworks. However, given the high stakes involved, it is crucial to ensure that the evals used to underpin these decisions are adequate.

Current evaluations are insufficient to underpin high-stakes decisions

Many important evals don’t yet exist

The aforementioned safety frameworks sometimes require evals that either do not yet exist or, where they do, evals that are in an early development phase and incapable of generating sufficiently robust evidence. For example, the EU AI Act (Article 51) states, “A general-purpose AI model shall be classified as a general-purpose AI model with systemic risk if it meets any of the following conditions: (a) it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks;[...]” In this case, the “high impact capabilities” in question are yet to be specified. Until that happens, evaluators are required to guess which evals will be required to identify them.

In their Responsible Scaling Policy (October 2024 version), Anthropic writes, “We are currently working on defining any further ‘Capability Thresholds’ that would mandate ASL-4 Required Safeguards [...]”.
This statement indicates that a full ASL-4 evals suite that could provide strong evidence about catastrophic risks does not yet exist (though some of the respective capability evaluations already exist, e.g. Benton et al., 2024).[1]

Coverage is typically low

From our interactions with others in the evals ecosystem (e.g. AI developers, third-party evaluators, academics, AISIs, and other government agencies), we noted a consensus that the coverage of evals is low, i.e. that there are far fewer evals than needed for any given risk domain[2]. Thus, even when there exist evals for a given risk domain, they only cover a small number of potential threat scenarios. A shared belief among the people we spoke to was that existing evals can only spot-check their specific risk domain and should not be conflated with a rigorous assessment.

In general, we want evals to be predictive of real scenarios, i.e. their results to indicate what we should expect from real deployment situations. This is a big assumption since real deployments can capture countless different use cases across a wide variety of scenarios and users. Therefore, creating good coverage is a significant endeavor that would optimally result in a comprehensive suite of evals per risk domain. While there are many important efforts to create new and better evals (e.g. Kinniment et al., 2023; Phuong et al., 2024; Mouton et al., 2024; OpenAI, 2024; Benton et al., 2024, see more here), the field is nowhere near having good coverage of all relevant risks and domains. Thus, the existing evals mostly function as a spot check for their corresponding risk domains, providing some evidence but not sufficient coverage.

Development and interpretation of evals is complicated

The development of a single suite of high-quality evals can require the full-time effort of a small research team for multiple months. This process can come with many unforeseen challenges. For example, most safety frameworks (e.g. Anthropic’s RSP, Google DeepMind’s FSF, OpenAI’s PF) mention risks from model autonomy and sometimes even specifically talk about a model’s ability to accelerate AI R&D. SWE-Bench (Jimenez et al., 2023) is an evaluation framework consisting of 2,294 software engineering problems drawn from real GitHub issues. It is often used to compare the software engineering abilities of LM agents and used as a proxy for autonomy and AI R&D capabilities. However, it was later found that a significant number of these coding problems were underspecified or otherwise unsolvable. Thus, any reported results on the original version of SWE-Bench were misleading. This led to the introduction of SWE-Bench Verified, which only includes coding problems confirmed to be solvable, and required hiring many skilled software engineers to manually validate each sample.

This example should not be misconstrued as a criticism of the original SWE-Bench authors. Rather, it is a demonstration of the fact that building large-scale, high-quality evaluation benchmarks is complicated and expensive. Furthermore, there are various issues with other equally widely used benchmarks like MMLU or BBQ, third-party evaluation frameworks, model-written evals, and more (see Anthropic’s Challenges in evaluating AI systems for details). On top of that, interpreting evals results can be challenging too.
For example, despite noteworthy outcomes on our most recent publicly available evals suite (see o1-preview system card), we found it challenging to formulate clear and robust takeaways or tie them to very concrete recommendations for the developers. The three main reasons for this are (a) defining clear capability thresholds and designing evals for them is still a nascent research area (and, in our opinion, part of the evals gap); (b) even if we have clear thresholds, the empirical evidence might be insufficient or hard to interpret; (c) with current techniques, there is a concern that the model’s capabilities were not maximally elicited.

Proper elicitation is an unsolved research question

In evals, we often encounter the problem that “absence of evidence is not (strict) evidence of absence.”[3] Just because a model didn’t show a capability in our eval does not mean it doesn’t have this capability under different circumstances. So far, we neither have a principled way to quantify “how hard we tried” nor a way to upper-bound the model's capabilities through evaluations. This poses a challenge because we cannot accurately estimate what capabilities the model will have after widespread deployment (and thus which harm it might create).

As an analogous example, consider jailbreaking. AI developers typically extensively evaluate their model for jailbreaks before release, i.e. they check if they can find clever ways to get their model to respond to harmful queries. Presumably, their internal efforts conclude that the model is hard to jailbreak before they release it. Nevertheless, in almost all recent releases, a model has been jailbroken by users within a few hours or days of becoming available. This is comparable to the problem we face with evals. A small number of evaluators have a limited amount of time to assess the model’s capabilities and then make a prediction about the revealed capabilities during deployment. We currently don’t have a rigorous theory of predicting these revealed capabilities.

Without action, the evals gap may widen

Higher capabilities require more evals. Increased capabilities imply that the model is able to do a larger number of tasks. For example, GPT-4 can be used in many more use cases than GPT-2. Therefore, the space that evals for GPT-4 have to cover is much larger than for GPT-2.

Higher capabilities require more complex evals. The more complex the capabilities of models, the more complex and time-consuming it is to foresee, develop, and run evals. For example, consider that until recently, we could evaluate most relevant LLM behaviors with QA benchmarks. Since models are now capable enough of acting as LM agents, evals have to be increasingly complex tasks, which significantly increases the overhead per eval.

Increased stakes for evals in the future. Right now, it appears that most models are not capable enough to trigger evals with high-stakes consequences. For example, OpenAI assessed their o1-preview model to be of "medium" capability for CBRN and persuasion[4]. Since only models with a post-mitigation score of "medium" or below can be deployed, o1-preview could be deployed. Anthropic has assessed their Claude-3 model family to be in category ASL-2[5], where most costly mitigations only start with ASL-3. As capabilities in AI models increase, more and more evals identifying specific thresholds tied to high-stakes reactions will get surpassed and trigger increasingly important consequences.
Eventually, the results of evals will have a direct effect on billion-dollar deployment decisions. For example, Anthropic's ASL-3 specifies a large number of mitigations and actions on model security (corresponding to SL-4 in RAND's Securing AI Model Weights report). These mitigations could take significant time to implement, which would lengthen the time to deploy their model and may come at a direct financial cost. If a model provider has to postpone their deployment by multiple months due to a relevant threshold being passed and having to improve their safety guardrails, they might lose out on important investments. We expect that these increased stakes will lead to increased pressure on and scrutiny of evals and their results. Furthermore, in uncertain cases, AI developers might be incentivized to criticize these evals, question their legitimacy, or, in extreme cases, even take legislators to court to reduce their financial burden. This necessitates that the quantity and quality of evaluations rise to the challenge posed by rapid AI progress. Only then can evals provide a meaningful mechanism to rigorously test for all relevant risk domains and support other safety mechanisms in a defense-in-depth approach.

Closing the evals gap is possible

With sufficient foresight, we can close the evals gap. All of the aforementioned problems are solvable with more resources, people, and time. However, AI progress is fast, and high-stakes decisions will have to be made in the near future. Given the magnitude of the stakes, these decisions will likely be made with whatever evals are available at the time and will not be put on hold until better evals are developed. Therefore, we think closing the evals gap is an urgent priority. We suggest the following efforts as initial areas for improvement:

Directly fund evals development: Merely funding the running of evals is not sufficient. Conducting evals is often only a small fraction (e.g. 10%) of the total effort and cost going into evaluations. We recommend the following:
- Fund an external ecosystem of evaluation builders. We recommend casting a wide net, e.g. including private organizations, non-profits, academic labs, and individual researchers.
- Fund government bodies such as AISIs or AI offices well so that they can hire the technical talent needed to develop, run, and judge evaluations.
- Grow the evaluation teams within frontier AI companies such that they can better prepare for systems with increased capabilities.

Fund the science of evals: Improvements in the science of evals benefit the entire field. For example, a better understanding of how to design high-quality evals at scale would both increase their usefulness and reduce their costs.

Shape market incentives: Because most evals efforts are currently based on voluntary commitments, the incentive to pay for or build evals is small. If there were stronger incentives, more people and organizations would specialize in building evals. While we commend efforts like Anthropic's Initiative for developing third-party model evaluations, market incentives would have to change more broadly for an ecosystem to develop at the required pace.

^ This is not a specific criticism of Anthropic. We think it is good that they state their level of preparedness and their plans in public to expose them to external feedback.
^ Note that these reflect our interpretations of these conversations and are not the result of more systematic efforts such as structured interviews.
^ Technically, absence of evidence is some evidence of absence. The core point is that it should not be used as conclusive evidence unless we have exhaustively tested all options or have other arguments that imply strong evidence of absence.
^ This has not been externally verified.
^ This has not been externally verified.
gJJEjJpKiddoYGZKk_The_Evals_Gap.txt
{ "file_size": 14127 }
57690298-21b9-40d0-ab5e-01e9b335bb7e
Authors: Samuel G. B. Johnson, Amir-Hossein Karimi, Yoshua Bengio, Nick Chater, Tobias Gerstenberg, Kate Larson, Sydney Levine, Melanie Mitchell, Iyad Rahwan, Bernhard Schölkopf, Igor Grossmann

Abstract: Recent advances in artificial intelligence (AI) have produced systems capable of increasingly sophisticated performance on cognitive tasks. However, AI systems still struggle in critical ways: unpredictable and novel environments (robustness), lack of transparency in their reasoning (explainability), challenges in communication and commitment (cooperation), and risks due to potential harmful actions (safety). We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom. Drawing from cognitive and social sciences, we define wisdom as the ability to navigate intractable problems - those that are ambiguous, radically uncertain, novel, chaotic, or computationally explosive - through effective task-level and metacognitive strategies. While AI research has focused on task-level strategies, metacognition - the ability to reflect on and regulate one's thought processes - is underdeveloped in AI systems. In humans, metacognitive strategies such as recognizing the limits of one's knowledge, considering diverse perspectives, and adapting to context are essential for wise decision-making. We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety. By focusing on developing wise AI, we suggest an alternative to aligning AI with specific human values - a task fraught with conceptual and practical difficulties. Instead, wise AI systems can thoughtfully navigate complex situations, account for diverse human values, and avoid harmful actions. We discuss potential approaches to building wise AI, including benchmarking metacognitive abilities and training AI systems to employ wise reasoning. Prioritizing metacognition in AI research will lead to systems that act not only intelligently but also wisely in complex, real-world situations.

Comments and Summary:

Wisdom explosion/virtuous cycle: I'm mainly sharing this because of the similarity of some of the ideas here to my ideas in Some Preliminary Notes on the Promise of a Wisdom Explosion. In particular, the authors talk about a "virtuous cycle" in relation to wisdom in the final paragraphs:

Second, by simultaneously promoting robust, explainable, cooperative, and safe AI, these qualities are likely to amplify one another. Robustness will facilitate cooperation (by improving confidence from counterparties in its long-term commitments) and safety (by avoiding novel failure modes; Johnson, 2022). Explainability will facilitate robustness (by making it easier to human users to intervene in transparent processes) and cooperation (by communicating its reasoning in a way that is checkable by counterparties). Cooperation will facilitate explainability (by using accurate theory-of-mind about its users) and safety (by collaboratively implementing values shared within dyads, organizations, and societies). Wise reasoning, therefore, can lead to a virtuous cycle in AI agents, just as it does in humans. We may not know precisely what form wisdom in AI will take but it must surely be preferable to folly.

Defining wisdom

I also found their definition of wisdom quite clarifying.
They begin by defining it as follows:

Though wisdom can mean many things, for this Perspective we define wisdom functionally as the ability to successfully navigate intractable problems— those that do not lend themselves to analytic techniques due to unlearnable probability distributions or incommensurable values

They argue:

If life were a series of textbook problems, we would not need to be wise. There would be a correct answer, the requisite information for calculating it would be available, and natural selection would have ruthlessly driven humans to find those answers

They list a number of specific types of intractability: incommensurability of values or goals, values changing over time, radical uncertainty[1], chaos[2], non-stationary generating processes[3], examples that are out-of-distribution, computational explosivity[4]. Next they note that navigating these problems can be achieved through two different types of strategies: (1) Task-level strategies are used to manage the problem itself (e.g., simple rules-of-thumb); (2) Metacognitive strategies are used to flexibly manage those task-level strategies (e.g., understanding the limits of one's knowledge and integrating multiple perspectives). They then argue that although AI has made lots of progress with task-level strategies, it often neglects metacognitive strategies[5]:

For example, they struggle to understand their goals ("mission awareness;" Li et al., 2024), exhibit overconfidence (Cash et al., 2024), and fail to appreciate the limits of their capabilities and context (e.g., stating they can access real-time information or take actions in the physical world; Li et al., 2024). These failures appear to be symptoms of a broader metacognitive myopia, which leads GenAI models to unnecessarily repeat themselves, poorly evaluate the quality of information sources, and overweigh raw data over more subtle cues to accuracy (Scholten et al., 2024)

Given the neglectedness, they decide to focus on elucidating these strategies. The paper also identifies a number of specific metacognitive processes (presented in a figure in the paper, not reproduced here).

Benefits: They argue that wise AI offers many benefits:
• Robustness: They argue that metacognition would lead AIs to reject strategies that produce "wildly discrepant results on different occasions", allow them to identify biases, and improve their ability to adapt to new environments.
• Explainability: They believe that metacognition would allow the AI to explain its decisions[6].
• Co-operation: They argue "wise metacognition is required to effectively manage these task-level mechanisms for social understanding, communication and commitment, which may be one factor underlying the empirical observation that wise people tend to act more prosocially". They also argue that wisdom could enable the design of structures (such as constitutions, markets, organisations) that enhance co-operation in society.
• Safety: They note the difficulty of "exhaustively specify[ing] goals in advance"[7] and they suggest that wisdom could assist AIs in emulating the human strategy of navigating goal hierarchies. They also argue that the greatest risk is currently systems not working well and that machine metacognition is useful for this; in particular, "AIs with appropriately calibrated confidence can target the most likely safety risks; appropriate self-models would help AIs to anticipate potential failures; and continual monitoring of its performance would facilitate recognition of high-risk moments and permit learning from experience."
Comparison to Alignment: They identify three main problems for alignment:
• Humans don't uniformly prioritise following norms[8]
• Norms vary sharply across cultures
• Even if norms were uniform, they may not be morally correct

They then write:

Given these conceptual problems, alignment may not be a feasible or even desirable engineering goal. The fundamental challenge is how AI agents can live among us—and for this, implementing wise AI reasoning may be a more promising approach. Aligning AI systems to the right metacognitive strategies rather than to the "right" values might be both conceptually cleaner and more practically feasible. For example, task-level strategies may include heuristics such as a bias toward inaction: When in doubt about whether a candidate action could produce harm according to one of several possibly conflicting human norms, by default do not execute the action. Yet wise metacognitive monitoring and control will be crucial for regulating such task-level strategies. In the 'inaction bias' strategy, for example, a requirement is to learn what those conflicting perspectives are and to avoid overconfidence.

Building Wise AI: Section 4.1 discusses the potential for benchmarking AI wisdom. They seem to be in favour of starting with tasks that measure wise reasoning in humans and scoring their reflections based on predefined criteria[9]. That said, whilst they see benchmarking as a "crucial start", they also assert that "there is no substitute for interaction with the real world". This leads them to suggest a slow rollout to give us time to evaluate whether their decisions really were wise. They also suggest two possibilities for training wise models:

One possibility is a two-step process, first training models for wise strategy selection directly (e.g., to correctly identify when to be intellectually humble) and then training them to use those strategies correctly (e.g., to carry out intellectual humble behavior). A second possibility may be to evaluate whether models are able to plausibly explain their metacognitive strategies in benchmark cases, and then simultaneously train strategies and outputs (e.g., training the model to identify the situation as one that calls for intellectual humility and to reason accordingly; e.g., Lampinen et al., 2022). In either case, models could be trained against what a wise human would do, or perhaps to explain and defend its choices to wise humans robustly (i.e., to stand up to 'cross-examination').

One worry I have is that sometimes wisdom involves just knowing what to do without being able to explain it. In other words, wisdom often involves system 1 rather than system 2.

Justification for Building Wise AI

First, it is not clear what the alternative is. Compared to halting all progress on AI, building wise AI may introduce added risks alongside added benefits. But compared to the status quo—advancing task-level capabilities at a breakneck pace with little effort to develop wise metacognition—the attempt to make machines intellectually humble, context-sensitive, and adept at balancing viewpoints seems clearly preferable.
What else does the paper include?:
• Page 5 contains a summary of different theories of human wisdom and two attempts to identify common themes or processes
• Section 2.2.1 discusses how wisdom in AI might vary from wisdom in humans given that AI has differing cognitive constraints
• In the final section they suggest that building machines wiser than humans might prevent instrumental convergence[10] as "empirically, humans with wise metacognition show greater orientation toward the common good". I have to admit skepticism as I believe in the orthogonality thesis and I see no reason to believe it wouldn't apply to wisdom as well. That said, there may be value in nudging an AI towards being wise in terms of improving alignment, even if it is far from a complete solution.

^ They seem to be pointing towards Knightian Uncertainty.
^ Non-linearity, or strong sensitivity to starting conditions.
^ Such that there isn't a constant probability distribution to learn.
^ They essentially mean intractability.
^ They provide some examples at the beginning of section 2 which help justify their focus on metacognition. For example: "Willa's children are bitterly arguing about money. Willa draws on her life experience to show them why they should instead compromise in the short term and prioritize their sibling relationship in the long term"
^ I agree that metacognition seems important for explainability, but my intuition is that wise decisions are often challenging or even impossible to make legible. See Tentatively against making AIs 'wise', which won a runner-up prize in the AI Impacts Essay competition on the Automation of Wisdom and Philosophy.
^ Eliezer Yudkowsky's view seems to be that this specification pretty much has to be exhaustive, though others are less pessimistic about partial alignment.
^ The first sentence reads "First, humans are not even aligned with each other", which is confusing since the second paragraph seems to suggest that their point here is more like what I wrote.
^ I'm skeptical that using pre-defined criteria is a good way of measuring wisdom.
^ This paper doesn't use the term "instrumental convergence", so this statement involves a slight bit of interpretation on my part.
euAMyQAQWTYyWZW8Z_Summary__"Imagining_and_building.txt
{ "file_size": 12257 }
0930e251-ea67-4f0d-93d2-eefa9031b1ba
Related: Book Review: On the Edge: The Gamblers

I have previously been heavily involved in sports betting. That world was very good to me. The times were good, as were the profits. It was a skill game, and a form of positive-sum entertainment, and I was happy to participate and help ensure the sophisticated customer got a high quality product. I knew it wasn't the most socially valuable enterprise, but I certainly thought it was net positive. When sports gambling was legalized in America, I was hopeful it too could prove a net positive force, far superior to the previous obnoxious wave of daily fantasy sports. It brings me no pleasure to conclude that this was not the case. The results are in. Legalized mobile gambling on sports, let alone casino games, has proven to be a huge mistake. The societal impacts are far worse than I expected.

The Short Answer

Joe Weisenthal: Why is it that sports gambling, specifically, has elicited a lot of criticism from people that would otherwise have more laissez faire sympathies?

This full post is the long answer. The short answer is that it is clear from studies and from what we see with our eyes that ubiquitous sports gambling on mobile phones, and media aggressively pushing wagering, is mostly predation on people who suffer from addictive behaviors. That predation, due to the costs of customer acquisition and retention and the regulations involved, involves pushing upon them terrible products offered at terrible prices, pushed throughout the sports ecosystem and via smartphones onto highly vulnerable people. This is not a minor issue. This is so bad that you can pick up the impacts in overall economic distress data. The price, on so many levels, is too damn high.

Paper One: Bankruptcies

We start with discussion of one of several new working papers studying the financial consequences of legalized sports betting. The impacts include a 28% overall increase in bankruptcies (!).

Brett Hollenbeck: *Working Paper Alert*: "The Financial Consequences of Legalized Sports Gambling" by Poet Larsen, @dade_us and myself. We study how the widespread legalization of sports gambling over the past five years has impacted consumer financial health. In 2018, SCOTUS ruled that states cannot be prohibited from allowing sports betting, and 38 states have since legalized sports gambling. This has led to a large new industry and a large increase in gambling accessibility. Roughly $300 billion has been bet and is growing fast. While for most gamblers it is a harmless form of recreation, we know that some fraction become problem gamblers with potentially severe financial consequences. We study these financial outcomes using a large and comprehensive dataset on consumer finances known as the UC Consumer Credit Panel (maintained by @CAPolicyLab). This allows us to track all credit and debt outcomes for roughly 7 million Americans. We leverage this data and compare states implementing sports gambling to those that don't and study both sports gambling of any type as well as online/mobile gambling specifically. We study 8 financial/debt outcomes and find the following results: First, credit scores, a summary metric of overall creditworthiness, decrease by modest but statistically significant amounts (~1%). We also test for evidence of pre-trends between treated/control states and find none. … Second, several measures of excessive debt increase substantially. We find a roughly 28% increase in bankruptcies and an 8% increase in debt transferred to debt collectors.
Similarly, auto loan delinquencies increase substantially as does use of debt consolidation loans. Interestingly, we find that banks restrict access to credit on average in affected states. Credit card limits decrease and the ratio of secured to unsecured loans increases. After three years post-legalization we actually find a decrease in credit card delinquencies as a result.

I expected some negative impacts. But a 28% increase in bankruptcies is far more than I would have predicted. The typical adult bankruptcy rate is about 0.16%, so this would mean about 4bps (0.04%)/year of additional bankruptcies, or an over 1% additional chance a typical person goes bankrupt during their lifetime. Alternatively, as a sanity check, that's on the order of 100,000 additional bankruptcies a year, which will rise over time if we don't intervene. We are talking about an average handle of maybe 120 billion in 2023, but on average lower during this time period, so let's say an average of 70 billion, with a likely sportsbook hold (before expenses) of something like 10% if we exclude advantage bettors who are definitely not going bankrupt, given all the parlays and shaded lines and in-game betting and generally atrocious odds, so total net losses to 'normal gamblers' of 7 billion per year. That suggests that for every $70k in net sportsbook gross profits from regular gamblers, someone filed for bankruptcy. That seems like a lot? It means either that those inclined to bet on sports are often doing it out of desperation, or that the same causes that lead them to bet on sports are pushing them to the financial edge in other ways as well, and this is the straw breaking the camel's back. Claude found it all plausible when I had it do a bunch of estimations. I do notice I am skeptical. The result is clear. A bankruptcy is extremely socially expensive, on the order of $200k. That alone is almost triple the profits, and clearly wipes out all the social gains. Legalized online sports betting is currently a deeply, deeply horrible deal. I wish it were different. I am all for letting people do things, and I have enjoyed and benefited greatly from the ability to bet on sports. And yes, I do think the majority of people who play plausibly get their money's worth in entertainment, even at the outrageous prices charged. The problem is if a majority get a small benefit, and others get a huge loss, that is on net a disaster. I can't look at these findings, even if I don't fully believe them, and not see a huge disaster from these effects alone. I also can't see a way in which the positive-sum benefits could justify that disaster.

Paper Two: Reduced Household Savings

We can then add a second paper, "Gambling Away Stability: Sports Betting's Impact on Vulnerable Households." They found that sports betting greatly reduced traditional net investments, while traditional gambling stayed unchanged.

Maxwell Tabarrok: The negative effects on investment are large relative to the sample mean. The average household invests about $360 a quarter so a $50 decrease in investment is a loss of 14%. This includes households that never bet.

If this is fully real, that's holy **** territory. It's an apocalypse. We are decreasing net household investment by 14%? Can there possibly be compensation for that? Alex Tabarrok notes that various details seem like they prove too much, and I agree it seems unlikely the effects are this large. But it can be a lot smaller than that, and still way too high.
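As a sanity check on the arithmetic above, here is a minimal sketch that reproduces the estimates. Every input is the post's own assumption (or a round population figure), not independent data, so treat the outputs as back-of-the-envelope only:

```python
# Reproducing the back-of-the-envelope numbers above.
baseline_rate = 0.0016       # ~0.16% of adults file for bankruptcy per year
relative_increase = 0.28     # +28% from the working paper
us_adults = 250e6            # rough adult population, order of magnitude

extra_rate = baseline_rate * relative_increase
extra_per_year = extra_rate * us_adults
print(f"additional bankruptcies: {extra_rate:.3%} per year")  # ~0.045%, i.e. ~4-5bps
print(f"annual count: ~{extra_per_year:,.0f}")                # ~112,000; post rounds to ~100k
print(f"over 50 adult years: ~{extra_rate * 50:.1%}")         # ~2%, i.e. 'over 1%'

handle = 70e9   # assumed average annual amount wagered over the period
hold = 0.10     # assumed sportsbook hold on non-advantage bettors
net_losses = handle * hold   # ~$7B/year lost by 'normal gamblers'
print(f"profit per added bankruptcy: ~${net_losses / extra_per_year:,.0f}")  # ~$60-70k

# Paper Two's effect size: $50 less invested per quarter against a $360 mean
print(f"investment drop: {50 / 360:.0%}")     # ~14%
```

The point of running it is not precision; it is that even generous tweaks to the assumed handle, hold, or population leave the per-bankruptcy profit figure in the same uncomfortable ballpark.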
There are a variety of goods households can choose to consume, including traditional gambling. If sports gambling were a regular consumption good that consumers were choosing because they enjoy it more, it wouldn't be having these effects. Eating dramatically into savings rather than shifting the consumption basket, while not even reducing traditional gambling, says that consumers are clearly not responding rationally, and do not understand the choices they are making.

Paper Three: Increased Domestic Violence

Here's a third paper, showing that sports betting increases domestic violence. When the home team suffers an upset loss while sports betting is legal, domestic violence that day goes up by 9% for the day, with lingering effects. It is estimated 10 million Americans are victims of domestic violence each year. Claude estimates if you extrapolate from this result that there might be a 3% overall increase in domestic violence as the result of legalized sports betting, which seems non-crazy given that betting on NFL home favorites is only a tiny portion of overall losses. Again, this is an overall effect for the entire population. The percentage of people who bet on sports is rising rapidly, but even so only 34% said they placed even one bet in 2023, and many of those will be limited to nominal wagers on things like the Super Bowl and March Madness. This survey has 39% sports betting participation, with about 35% of bettors betting at least once a week. So again, the effects on the households that actually gamble are far higher. This is a huge direct cost to bear. Domestic violence ruins lives. It also is a huge indicator that this is causing large amounts of distress in various forms, and that those gambling on sports are not making rational or wise consumption decisions.

The Product as Currently Offered is Terrible

Meanwhile, frankly, the product emphasis and implementation sucks. Almost all of the legal implementations (e.g. everyone I know about except Circa) are highly predatory. That's what can survive in this market. Why? Predation is where the money is. There is no physical overhead at an online casino, but after paying for all the promotions and credit card payments and advertisements and licenses and infrastructure, the only way to make all that back under the current laws and business models is the above-mentioned 10%-style hold that comes from toxic offerings. Thus high prices even on the main lines, even higher ones on parlays and in-game betting. Whenever I see lines on the TV I usually want to puke at how wide the prices are. In-game odds are beyond obnoxious. Anyone this drives away is a customer they have decided not to want. This is what the in-game odds look like when they're relatively reasonable, and seriously, ow my balls: [screenshots of in-game odds omitted] (This still shows how crazy the 'win probability' calculation they do is, given it's well outside the odds they themselves are offering and also makes no sense, although an inning later it went far more insane and I can't help but share, then aside over…) All this is complemented by a strategy centered around free bet promotions (which makes the bonuses sound a lot bigger than they are), advertisements, promotional texts and emails and especially a barrage of push notifications. Anyone showing any skill? They are shown the door.

Things Sharp Players Do

I don't think this is central to the case that current legal sports betting is awful, but it is illustrative what pros do in order to disguise themselves and get their wagers down.
To do that, they make themselves look like the whales. Which means addicts. I'm used to stories like this one, that's normal:

Ira Boudway (Bloomberg): "If I open an account in New York, maybe for a few weeks I just bet the Yankees right before the game begins," says Rufus Peabody, a pro bettor and co-host of the Bet the Process podcast. If this trick works, the book sees these normie, hometown bets as a sign that it's safe to raise his limits.

It seems players have upped their game.

One pro bettor I know set up a bot which logs in to his accounts every day between 2 and 4 a.m., to make it seem like he can't get through the night without checking his bets. Another withdraws money and then reverses those withdrawals so it looks like he can't resist gambling. Simulating addictive behavior, says Peabody, is an effective way to get online sportsbooks to send you bonus money and keep your accounts open. This isn't necessarily because operators are targeting problem bettors, he says; they're simply looking to identify and encourage customers who are likely to spend—and lose—the most.

This just happens to be a good way to find and enable addicts, too. The rest of the post is filled with the usual statistics and tragic stories. What I find interesting about these examples is that they are very level-1 plays. As in, this is exactly what someone would do if they thought they were up against a system that was looking for signs of what type of player you are, but only in the most mechanical and simple sense. For this type of thing to work, the book must not be looking at details or thinking clearly or holistically. If you had tried this stuff on me when I was watching customers, to the extent I noticed it at all, I am pretty sure I would if anything have caught you faster.

People Cannot Handle Gambling on Smartphones

Vices and other distractions are constant temptations. When you carry a phone around with you, that temptation is ever present. Indeed, I recently got a Pixel Watch, and the biggest benefit of it so far is that I can stay connected enough to not worry, and not be tempted to check for things, without the pull of what a phone can do. And we have repeatedly seen how distracting it is for kids in school to have the smartphone right there in their pocket. I have learned to be very, very careful with mobile games, even ones with no relevant microtransactions. Putting gambling in your pocket makes the temptation to gamble ever-present. Even for those who can resist it, that is a not so cheap mental tax to pay, and likely to result in the occasional impulse bet, even without the constant notifications. First hit's free. Constant offers that adjust to your responses, to get you to keep coming back. Now consider that at least several percent of people have an acute gambling addiction or vulnerability. For them, this is like an alcoholic being forced to carry a flask around in their pocket 24/7, while talk of what alcohol to choose and how good it would be to use that flask right now gets constantly woven into all their entertainment, and they by default get notifications asking if now is a good time for a beer. You can have the apps back up and running within a minute, even if you delete them. It was plausible that this was an acceptable situation, that people could mostly handle that kind of temptation. We have now run the experiment, and it is clear that too many of them cannot.
Yay and Also Beware Trivial Inconveniences (a future full post)

I am coming around to a generalized version of this principle. There is a vast difference between:
1. Something being legal, ubiquitous, frictionless and advertised.
2. Something being available, mostly safe to get, but we make it annoying.
3. Something being actively illegal, where you can risk actual legal trouble.
4. Something being actively illegal and we really try to stop you (e.g. rape, murder).

We've placed far too many productive and useful things in category 2 that should be in category 1. By contrast, we've taken too many destructive things, too many vices, that we long had the wisdom to put in category 2, and started putting them in category 1. Prohibitions, putting such things into categories 3 and especially 4, tend to work out extremely poorly. Don't do that unless absolutely necessary. Let people do privately destructive things if they want to do that. Often, it is important that you make doing the wrong thing a little annoying. It is especially important to not make it annoying to do the productive things, and not annoying to instead do the destructive things.

How Does This Relate to Elite Hypocrisy?

The elite refrains from irresponsible gambling, but here sets up conditions where such irresponsible actions are the inevitable result. The actual big elite hypocrisy is not the failure to impose paternalistic rules on the non-elite, it is that we constantly impose extreme and expensive consumption requirements and restrictions on the non-elite when they are trying to live their lives and get their needs met. We impose these deeply restrictive, expensive and stupid elite norms on others all the time. This paternalism severely damages their lives in numerous ways. This is the core reason why it is so difficult for ordinary people to pay their bills or raise families, despite earnings that would make them rich elsewhere or elsewhen. These productive actions are severely restricted, because if you are going to be productive then you have to do so 'correctly' and obey all sorts of rules and requirements. Whereas if your actions are destructive, well then, go ahead, and it would be wrong of us to even enforce existing law. That is a deeply toxic approach. We should reverse it. We should allow people to do productive actions as freely as possible, and put up frictions to sufficiently destructive actions. Mobile gambling has shown itself to be a highly destructive action for its users, well in excess of any profits earned, sufficiently so as to substantially damage economic conditions. At that point, we need to draw the line.

The Standard Libertarian Counterargument

Maxwell Tabarrok makes the contrary case that sports gambling is ordinary consumption, and we should not assume that so-called 'vulnerable populations' need protections from deciding to increase their consumption. He says the evidence is not compelling here. His view is that this is no different from people buying Taylor Swift tickets. In general I am highly sympathetic to this argument. I am not looking to tell people how much to invest or what goods to consume. But here I must strongly disagree. I've certainly enjoyed consuming gambling, including in a few narrow small stakes cases where I wasn't trying to be an advantage player. I think many others do so in ways that are not mistakes, or are mistakes we should allow them to make.
But ‘this is normal consumption’ seems to me like an absurd interpretation of the evidence above, and fails to understand the nature of the consumption being offered. You can’t place this ability to wager directly onto everyone’s phones at all times, putting the temptation at arm’s reach, see these consequences, and pretend these consequences are merely revealed preferences. What About Other Prediction Markets? Most other prediction markets do not pose the same problems. They would not even if they greatly expanded and became more ‘normie’ friendly. In particular, sports markets are highly related to and integrated into the most ‘normie’ of activities and into the related media, and they pay off quickly, and they’re ubiquitous, with something for you every day. What Should Be Done The legalized mobile online sports betting experiment is a clear failure. It should end. You should need to go to a physical location to place fully legal bets of a non-trivial size, or at least interact with a human or bear some other cost or risk. I’m fine with that location being the local sports bar, especially if the bar gets to book your action. Yes, I realize that will mean more illegal and untaxed online sports betting, but it is what it is. The barriers to doing that would do a lot of good work. At a bare minimum, the advertising and dark pattern complexes feeding this must be disempowered. The Federal Government should do what it can. The states should realize they are not doing themselves any favors and resist or undo this cash grab. Legalized online casino gaming, allowing roulette or similar games from one’s phone, is of course far worse. That certainly should not be allowed via the internet. I am not keen to be expansive in what ‘counts as gambling’ but the obvious gambling that resolves in seconds and pays real money absolutely must go. We will need to figure out where to draw the line on ‘loot boxes’ and other game features, but if the more obnoxious and toxic versions of that got banned as well, that would be good too. Ben Krauss and Milan Singh reach the same conclusion at Slow Boring, although they are less willing to fully bite the bullet. Kelsey Piper, who similar to me is loathe to tell people what they cannot do, does bite the bullet. Here Saagar Enjeti bites the bullet. Here Charles Fain Lehman at The Atlantic bites the bullet. Otherwise, as the amount of gambling expands, it is only going to get worse.
tHiB8jLocbPLagYDZ_The_Online_Sports_Gambling_Exper.txt
{ "file_size": 19992 }
b310d77a-b170-40ff-8fda-20efbb77de57
Crosspost of this, on my blog. Effective altruism, the capitalist ultracapitalist movement in favor of capitalism and capitalism, is a white cishet settler colonialist movement. It talks a big game about doing good effectively but then some people in the movement said bad things in 1996—so how good can it really be? It talks about helping people, but many people in the movement are white—sounds like white saviorism to me! The movement is partially about longtermism, making the future go well by reducing existential risks, but that sounds pretty weird and involves big numbers. [Insert several paragraphs of sneering]. Effective altruism tries to do good effectively, but Sam Bankman-Fried was involved with the movement, and he was a bad hombre. Also, doing good by preventing kids from getting malaria involves saving—sounds like WHITE SAVIORISM, the horrifying consequence of white people trying to do good things. After reading Alice Crary's very serious complaints about the fact that EA "positions rich people as 'saviors' of the poor," I knew I had to act differently. That is why, when this morning I saw a poor black child drowning in a river, I ignored him entirely. While his parents called on me to save him—for they were too far away—I knew better than to engage in white saviorism. Hadn't these people ever heard of colonialism? Who am I to position myself as a savior of the poor? I no longer work a job or go to school. After all, Sam Bankman-Fried worked a job and went to school. I now decide upon my stock portfolio by throwing a dart at a dart board. As I learned from the wise sage Alice Crary, the world is more complicated than objective, numerical metrics. For this reason, rather than relying on racist numerical metrics, I now put all my money in Doge Coin. Using objective metrics is problematic in charitable giving; one should trust their gut, and my gut is that Doge Coin is going to the moon! I learned from Freddie deBoer that EA is trivial—everyone supports doing good effectively. Similarly, putting one's money in stocks that are likely to pay off is trivial—everyone supports it. When I put my money in Doge Coin and when you put your money in general funds, we are the same. We are both involved in effective investment—you just have the hubris to think you're the only one doing it. From Leif Wenar I learned that effective altruism has a perverse hero complex and that it's a terrible thing because it's possible to think of several downsides to it. For this reason, I encouraged my friend who was a cancer surgeon to stop treating people for cancer. Doesn't he know that radiation has downsides? He wasn't convinced, but that was no doubt his hero complex speaking. Wenar also taught me that it's arrogant for EAs to think of themselves as being responsible for saving lives via giving to the Against Malaria Foundation. If an EA funds anti-malarial bednets, they're not responsible for saving lives. Instead, whoever put up the bednets is responsible for saving the lives. This is why, when I saw my friend choking, I didn't perform the Heimlich maneuver—after all, if I did so I wouldn't get credit for saving his life. Instead, Henry Heimlich would get the credit. Boy do I love making Parfit's first mistake in moral mathematics. From Lyman Stone, I learned that EA isn't effective because since it started getting involved in anti-malarial work, progress has stagnated.
Owing to this principle deducible from pure reason that correlation is causation, I decided to start the mayhem and death political party, dedicated to killing all five-year-olds. Since they were born, at around the time of Coronavirus, progress on fighting disease stagnated! After learning from the vitalists that EA makes us coddled and soy, preventing people from having toughening, formative experiences like dying of malaria, I decided to drown a puppy in a local pond. Hopefully, doing so would toughen its character, rather than allowing it to succumb to modern frailty. From Mary Townsend, I learned that claims that one can do good by donating are eeeeeevil. As she says: That one could become good through monetary transactions should raise our post-Reformation suspicions, obviously. As a simple response to the stipulation of a dreadful but equally simple freedom, it seems almost designed to hit us at the weakest spots of our human frailty, with disconcerting effects. That's why, when a man holding 100,000 people at gunpoint was going to kill them all unless I gave him a penny, I didn't give him the penny. Sorry 100,000 people, wouldn't want my attempt to do good through monetary transactions to raise post-Reformation suspicions. Over time, however, I learned this wasn't enough. From the critics of EA, I learned that one is morally required to be neutral in situations of injustice. But it isn't enough to do nothing. If fighting malaria is white saviorist capitalist colonialism that makes there be more malaria and causes people to become coddled, then it isn't enough to do nothing. That's why I've decided to start injecting poor children with malaria. Because the EAs are doing such evil things, someone needs to start doing the opposite. If I inject enough kids with malaria, maybe that could stop white people from being positioned as saviors of the world's poor. Instead, such an action is an important first step in grappling with the reality that white people are often villains from the perspective of the global poor. As I learned from Wenar, I can't be responsible for their deaths. After all, the most proximate cause of their deaths is the doctor that fails to treat them and the mosquito that bites them. I'm wholly without blame and saying I am blameworthy devalues the work of mothers who put up malaria nets. Because malaria nets are so bad, I've started ripping down malaria nets. If malaria net actions by EAs have caused more malaria, as Lyman Stone claims, then we should start ripping down malaria nets. In the name of justice, fighting capitalism, and ending colonialism, I've started injecting children with malaria. I spent years pulling children out of ponds. But now I realize doing so is morally wrong. To offset my negative impact, I've started pushing children back into ponds. Such will bring about a global anti-capitalist revolution, for we all know that it's just one effective altruism movement away from becoming reality. I hope one day you'll join me, and the world will live as one.
uXPFAKBzZLrJnBQRs_How_I_Learned_That_You_Should_Pu.txt
{ "file_size": 6544 }
3483e954-0e6e-4606-b0e8-55770804afe9
I suggest that (for now) it's a mix of Marc Andreessen, Leopold Aschenbrenner, and Guillaume Verdon. But let's back up first. What was the Biden-Harris administration's philosophy regarding AI? For the first half of Biden's term, I would say they didn't have one. As with most of the world, it was the release of ChatGPT in late 2022 that made AI a top-level issue. And though it's not as if Biden's cabinet contained any avowed effective altruists, I do think that by default, the "safetyist" attitude towards AI that is associated with effective altruism and Less Wrong rationalism was philosophically influential; not least because influential advocates of effective altruism were part of the elite Democratic base. FTX's Sam Bankman-Fried came from that background. Open Philanthropy's Dustin Moskovitz is another. (I've listed three people as alleged thought leaders for the Trump 2.0 era; if I was going to pick three for the second half of the Biden era, maybe it would be Paul Christiano, Helen Toner, and Joseph Matheny.) As most of us would know, irritation with AI safetyism did a lot to inspire the creation by a few Silicon Valley memelords of an alternative ideology, "effective accelerationism"; and after the unsuccessful OpenAI coup against Sam Altman at the end of 2023, e/acc was widely considered to have won the culture war against effective altruism in the tech world. Now, looking back from the end of 2024, we can see that many of the tech figures who affiliated with e/acc at the start of the year defected from elite consensus to ally with the victorious Trump Republicans by the end of the year. This is why I regard e/acc as a major component of the emerging zeitgeist regarding AI and AI policy. What follows is far more a product of intuitive speculation than scholarship. Also, I don't live in North America, I'm poor, I have zero experience of contemporary Silicon Valley (or Washington DC, for that matter). I am a distant observer of all this. I am prepared to be corrected by people who are actually in the thick of things. But for now, I don't see anyone making a clear claim about which ideas will inform the thinking of the incoming American government and its allies. So this is my "model" of what's ahead; make of it what you will. First, Marc Andreessen. A pivotal figure in the 1990s Internet, CEO of the browser company Netscape, who seems to have then risen into the investment Valhalla of billionaire venture capitalists. In the wake of e/acc and ChatGPT, he wrote a "techno-optimist manifesto" that incorporates AI into an older narrative of human progress through technology and capitalism. It's good enough to stand as an example of its genre; it expounds a particular perspective on history, politics and economics that is probably shared by many of these tech captains of industry; and it ends with a list of about 50 other thinkers who Andreessen considers to be fellow travelers, so you can read them if you want more details. Second, Leopold Aschenbrenner. A young former employee of OpenAI who became a wunderkind of AI strategic policy in mid-2024, thanks to the publication of his manifesto entitled "Situational Awareness". It's been discussed here on Less Wrong.
His manifesto first endorses short timelines for superhuman AI, saying that it's coming later this decade, and then says that the democratic world, led by the USA, must create and domesticate superhuman AI before a geopolitical and ideological rival like China does so; and that this should be done by the nationalization of labs engaged in research on frontier AI, as part of a new Manhattan Project aimed at solving superalignment. Aschenbrenner has therefore fused the tech narrative of imminent superhuman AI, and the safetyist narrative according to which the preferences of superhuman AI will shape the future of life on Earth, with an America-First national-security perspective. A month before the vote, Ivanka Trump tweeted favorably about his manifesto, so we know that the incoming First Family has heard of it. Third, Guillaume Verdon. Originally known only as e/acc co-founder @BasedBeffJezos, he was doxxed by Forbes in the same week that Biden's commerce secretary publicly declared e/acc to be dangerous, and just a week after Altman was reinstated as OpenAI CEO. He was revealed as a Canadian quantum-information physicist (his thesis is quite interesting, if you're into that), who worked on quantum AI at Google before co-founding his own startup, Extropic, with the idea of running AI on stochastic computer chips that directly utilize non-Gaussian thermodynamic randomness to implement cognitive probability distributions (rather than doing everything at the software level). I've already mentioned the role that e/acc has played in bringing together Trump's allies in the tech sector (though its role there is overshadowed by Elon Musk and his 2022 purchase of Twitter). When it comes to philosophy, e/acc has a deserved reputation for glib memeing and sloganeering. However, I have included Verdon on this list because I also find, implicit in his thoughts, an alternative to the influential model of the future (which Aschenbrenner arguably favors, as do I), according to which the creation of human-level AI will be followed by the emergence of "superintelligent" AI whose goals then dominate the world, regardless of whether that value system is "liberal democracy" or "more paperclips". In his talk "Thermodynamics of techno-capitalism", Verdon instead presents a model of evolution that is persistently pluralistic. It's still pretty sparse and undeveloped - maybe the few minutes after 10:00 are where it is most spelled out - but it's a model of complex systems that, for thermodynamic reasons, learn, and learn to learn. Competition never goes away, and values are never final. The fundamental metric of progress is how much energy you are able to spend, and that applies all the way from the first cells surviving in the primordial soup, to AI companies surviving in the global marketplace, and presumably on to whatever new interplanetary and interstellar forms of being emerge from life on Earth. I'm exercising some latitude in interpreting his remarks here, which are mostly just about a common characterization of evolution and capitalism through a combination of thermodynamic and machine-learning concepts. But the point is that it's a big picture, different from the synthesis we're familiar with here (e.g.
of a multiverse dominated by simulation and timeless trade among a population of autarkic superintelligences), and with a significant intellectual ancestry to back it up (especially complex systems theory); and something like it potentially provides a rationale for the seemingly unsafe strategies of a Zuckerberg or a Musk, when it comes to dealing with superintelligence. (A historical digression here: Verdon's company name, Extropic, of course brings to mind the Extropians, the 1990s Internet transhumanists among whom Eliezer first appeared. One of the differences between Extropian transhumanism and the transhumanism of Less Wrong rationalism is that the Extropians were far more in sync with the idea that the struggle to survive never goes away and that pluralism based in decentralized freedom is the way to go even for transhuman beings, rather than the idea that everything hinges on identifying true human values and extrapolating them faithfully. To this I would add that in the 1980s, Bruce Sterling's SF novel Schismatrix featured a "Posthumanist" movement in a solar-system civilization of competing techno-cartels, whose political rhetoric derives from nonequilibrium thermodynamics. I have to think that someone among the founders of e/acc was influenced by that, even if they went on to combine it with a pro-capitalist poetics not employed by Sterling.) I've gone on at such length about the alternative model of the era of superintelligence that I have supposedly found in e/acc, precisely because it isn't spelt out anywhere that I can find. e/acc is variously accused of being in denial about superintelligence, or of hiding its indifference to the future of mere humanity for the sake of public relations, and I think there's something to that. But if we're looking for a serious rival to the "singleton" conception of what a world with superintelligence looks like, the unendingly pluralistic evolution of the e/acc universe is such an alternative, and I think it will come up in some form, if the tech tycoons of Trump 2.0 are challenged on the topic of superintelligence.
r4FF9YWqDzcnfBZ88_The_new_ruling_philosophy_regard.txt
{ "file_size": 8566 }
b5518456-4e0e-4f2c-8373-5b052f21ae10
Sniff-click. Sniff-click. One shot in each nostril, and here…we…go. A while ago, I started Ketamine therapy for depression. I didn't finish - I changed jobs, which switched insurances, which messed everything up - but I was doing it for two and a half weeks (five doses) before then, and had good results with it. (For those of you who read about my experiences with TMS, I tried Ketamine first, and did the TMS after I couldn't get back on the Ketamine with my new insurance.)

Background

The TL;DR is that a while back, someone figured out that giving humans a low-dose horse tranquilizer cured depression (temporarily). I don't know (and I don't want to know) how they figured that out, because the story in my head is funnier than anything real life could come up with. Now we've got an FDA-approved treatment for depression using a nasal spray (called Spravato) with a ketamine-adjacent molecule called esketamine, and boy howdy does it get the job done. If you want details about the science or plausibility, look here or do your own research. This post isn't about that. It's about the experience I've had doing the ketamine therapy, including some of the boring details along with the fun part of what the trip is like.

The Boring Bits

Signing Up For The Treatment

If you've got treatment-resistant depression and want a nuclear option, this is it. The good news is that, so far as I know, ketamine works on almost everybody. Unlike every antidepressant I've ever taken, it also works within a day or two. The bad news is that The Bureaucracy Must Be Appeased. After bringing up Ketamine to my psychiatrist and getting the go-ahead, I looked up local clinics that did the treatment and got in touch with one of them. They thought I'd be a great candidate, and they were happy to help! There were just a few bazillion forms I had to fill out, an app I had to install, an account I had to create, and so on. I'll forever enjoy the irony of someone going, "Yes, we can help you with your depression, absolutely! But first you're going to have to jump through dozens of small pointless hoops made of paperwork and boredom, because accomplishing numerous small tasks isn't something people with depression struggle with at all, no-sirree." Granted, no one actually said that, but I think it captures the zeitgeist of the experience pretty well. Some of the things I had to do:
- install an app for communication with the clinic
- sign/initial (what felt like) dozens of forms
- contact the pharmacy with the drug (not my normal pharmacy), multiple times, to register my credit card so I could pay for the drug itself
- sign/initial other forms, for some reason
- set up an account with the service provider
- sign/initial more forms, because hell is real and it demands suffering
- arrange transportation to and from the clinic twice a week for four weeks, because you ain't driving after the happy happy fun times in the chair
- sign more forms, because the trees are already dead so why not
- sacrifice a goat on the altar of Yog-Sothoth, Who Is The Gate And The Key

That last one may have been a dream.

The Schedule

Normal Ketamine therapy is a total of twelve doses. Twice a week for four weeks, then once a week for four weeks, for a total of eight weeks of treatment. As I mentioned above, I only got halfway through the third week. Additionally, you can't schedule the treatment for two days in a row during the first four weeks, i.e. Monday and Tuesday; you have to have at least one day in between.
The Treatment Itself

I can only speak for the clinic that I went to, but it goes something like this:
1. Arrive. Thank the person who drove you, and hope you can get them to leave without actually telling them to leave, because that would be rude, but you don't want them to see you when you're high, so they kind of need to not be there.
2. Get set up in the nice chair with a blanket. The clinic I went to had zero-gravity chairs, which are awesome, if less physics-breaking than I would like.
3. The person/doctor/nurse/ritualist/shaman attending you takes your blood pressure, makes sure it's in whatever range it needs to be in.
4. For the first time, you get to practice doing The Snort. The esketamine is taken nasally, so there's this sprayer thing with a plunger that it comes in. My clinic had a practice version, which was helpful, as I didn't have any experience sticking a plunger-thingy up my nose and inhaling really hard while pressing the plunger until it clicks. I imagine it's a similar skill set to doing cocaine, just without the rolled-up dollar bill, bathroom sink, or sex worker.
5. Take the treatment. Each plunger-thing is the same dose, so for the first week you take two plunger-things (one fifteen minutes after the other) and then after that it's three, all fifteen minutes apart.
6. Happy happy fun times.
7. Get up and leave. I didn't have too much trouble with this, although I would recommend caution. You will not be walking in a straight line afterwards.
8. Feel a bit tired and groggy until you go to sleep. I don't recommend going to work or doing anything important. By my fifth treatment it would have been manageable; before that it wouldn't have been.
9. Wake up the next day and think, so this is what it's like to not be depressed. Neat.

The Trip

It's never easy to describe an altered state of consciousness. There's no way to do it but to rely on metaphor and simile, because that's about it for a language's capacity to translate an entirely subjective experience from one brain to another. That being said, there are some relatively objective things I can say. During the trip, there are three physical sensations I notice. First is that my extremities - hands and feet - get cold. This might be a blood pressure thing, I don't know. Second, there's a heaviness and dissociation from the body that we'll get into more later. Third, there's a reliable visual effect that I don't think is a hallucination.

The Visual Effect

When you unfocus your eyes, you get double vision on things close to you. I used to play with this as a child, opening and closing one eye at a time to make objects appear to move back and forth. The thing about double vision, though, is that the objects remain parallel to one another, i.e. straight lines remain parallel, just displaced from each other. If there's a square in front of you and you unfocus your eyes, you might see two squares overlapping like so: [image of two parallel overlapping squares omitted] If you tilt your head, the squares will be displaced in both the x and y directions: [image omitted] But during the esketamine trip, my double vision is angled, like this: [image of two squares angled relative to each other omitted] I have no idea why, but it's happened every time. During some of the trips I've watched movies with the subtitles on, and it's weird to see one line of subtitles angled into the screen.

The Bodily Dissociation

People might describe the sensation as 'floating'. I think it's a general sense of distance from one's body.
Signals that would normally travel from the fingertips to the brain instantaneously take longer: sensations take longer to reach the brain, and commands to move take longer to reach the fingertips. There was a dull sense that my body was some kind of extremity, a robot I controlled through a long-distance phone call. As far as I understand, this is perfectly normal, and it went away after an hour or two.

The Rest of the Trip

I don't have enough experience to know when a trip 'ends', so I'll just describe the experience as I had it. Out of the two hours spent in the chair in the clinic, the first 40 or so minutes after taking the Ketamine were spent in a haze. It was extraordinarily pleasant, and time seemed to just fly by. After that, the comedown seemed to start, involving a gradual return to myself over the course of ~75 minutes. In the clinic I attended, there was a TV with Netflix, so a movie was generally on, and the good mood produced by the treatment made comedies very enjoyable. (The best trip was when I watched Talladega Nights.) By the time the two hours were up, I had more or less sobered up, although my equilibrium was still a bit challenged.

The Effect

Ketamine therapy worked for me. I walked out of the first session with a powerful sense of disorientation in response to how vivid everything felt. The sunlight coming through the trees looked like a scene from a movie, brighter and purer than such things were in real life. I remember feeling the wind on my skin on the car ride back, marveling at how strong the sensations were. On my percentage scale, one week (two doses) of treatment took me from ~15% to ~80%. It was almost hard to believe the difference it made.

But the Ketamine giveth, and the Ketamine taketh away. I didn't finish the full recommended treatment. Maybe things would be different if I had (although I suspect not). But almost two weeks to the day after my final (fifth) treatment, I crashed from 80% back down to 15%. It was a horrifying sensation, as if all personality and agency and initiative and life were sucked out of me, poured down a drain in a little whirlpool, and I was the empty husk that was left. It was a miracle in reverse, and it sucked.

Conclusion

I think that Ketamine Therapy is a powerful option for people with severe depression, but my own (admittedly limited) experience was that it functioned as a temporary fix rather than a cure. It's a large time commitment: two-hour sessions twice a week for a month that someone else has to chauffeur you to and from, and you likely aren't doing a whole lot in the day after the treatment. That being said, it provided me with a lot of valuable personal data. I got a firm baseline for what I looked like when I wasn't depressed, and a greater understanding of how my own mind worked. I also got to get high on Ketamine, which was a great deal of fun (note: I took a legal, FDA-approved treatment; I am not speaking to anything else). There are many available treatments for depression now, and this is one of the most effective (if, for me, temporary) I've ever had. And that glimpse at what my life could be like, unencumbered by my own brain's malfunctioning, was absolutely worth it, for all that it ended too soon. It gave me hope, it gave me faith in myself, and it gave me the motivation to continue pursuing treatment options, until I could finally have the life that I wanted.
zgAws2AoFE3adigvy_What_Ketamine_Therapy_Is_Like.txt
{ "file_size": 10382 }
e6c0f774-682e-4a04-a5af-59eb832eb4a1
What kinds of ethical implications should we expect from the Many-Worlds Interpretation of Quantum Mechanics (MWI)? I'll argue that we shouldn't expect decision-making to change. The implications are more about how we should think or feel about events in our lives, and the virtues of taking a cosmic perspective.

According to the many-worlds interpretation (MWI) of quantum mechanics, the universe is constantly splitting into a staggeringly large number of decoherent branches containing galaxies, civilizations, and people exactly like you and me[1]. You might think such a metaphysically radical theory should have pretty radical implications for how one should live. The quantum physicist John Bell once wrote "if such a theory were taken seriously, it would hardly be possible to take anything else seriously". But proponents of MWI - including both Eliezer and David Wallace - have concluded the opposite. Eliezer quotes Egan's Law: It all adds up to normality. There are no major ethical implications at all[2].

I'll argue that it's important to distinguish between two kinds of ethical implications we might expect MWI to have, which I'll call 'decision-theoretic' and 'virtue-theoretic' (I'll explain what I mean by these names). Eliezer and Wallace are right that MWI doesn't have decision-theoretic implications. But they overlook the fact that MWI plausibly has implications for virtue theory.

Decision theory in the multiverse

The main reason for thinking MWI has no significant implications for ethics comes from decision theory. In standard decision theory you try to calculate the expected value for a particular course of action by multiplying the utility of each possible consequence by the probability of that consequence occurring. The main difference in MWI is that each possible consequence corresponds to a world that actually ends up existing, rather than being just hypothetical. But this makes no difference to the expected value that the calculation spits out. As Eliezer puts it:

"Your decision theory should (almost always) be the same, whether you suppose that there is a 90% probability of something happening, or if it will happen in 9 out of 10 worlds. Now, because people have trouble handling probabilities, it may be helpful to visualize something happening in 9 out of 10 worlds. But this just helps you use normal decision theory."

So it seems that we end up making the same decisions in MWI as we would otherwise, which in turn seems to imply that MWI has no significant implications for ethics.

Problems for decision theory

Now, one can question this argument. Those who argue that decision theory does work differently in MWI usually start from a version of MWI in which it makes sense to count the number of worlds. If you allow 'branch-counting', as this approach has been called, then decision theory seems to break down. To take a simple example, suppose you do a quantum experiment with two possible outcomes - your measurement apparatus makes a beep or it doesn't. On the branch-counting approach, where there was previously one world, there are now two: each containing the same people, animals and other valuable objects - except for the beep. The question is: do you now also have lots more value? Consequentialist approaches to ethics (ones which say for example that two happy people are better than one) would seem to imply that you do, and that therefore doing the experiment is extremely desirable - or possibly extremely bad, if you think total disvalue outweighed total value before the split.
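To see the worry concretely, here is a minimal sketch in Python (the utilities and weights are invented for illustration, and this is my toy rendering of the two readings, not Wallace's or Eliezer's formalism):

# Toy comparison: probability-weighted expected value vs. naive branch counting.
# Utilities and weights are invented purely for illustration.
outcomes = {
    "beep": {"utility": 10.0, "weight": 0.5},     # weight read as probability or as branch weight
    "no_beep": {"utility": 10.0, "weight": 0.5},  # the beep itself carries no value here
}

# Standard decision theory: expected value = sum of utility * weight.
expected_value = sum(o["utility"] * o["weight"] for o in outcomes.values())
print(expected_value)  # 10.0 -- the same number whether weights are probabilities or branch weights

# Naive branch counting: each outcome is a whole world that really exists,
# so total value is summed without any weighting.
branch_counted_value = sum(o["utility"] for o in outcomes.values())
print(branch_counted_value)  # 20.0 -- running the experiment appears to double total value

On the weighted reading nothing changes; on the branch-counting reading the mere act of splitting looks like it manufactures value.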
And of course from a decision-theoretic perspective it seems this evaluation of consequences should inform the utilities we assign in order to calculate expected value. It's true that this simple example ignores the fact that in MWI branching is happening all the time. But that fact just makes the decision-theoretic situation worse. If we're allowed to count branches, the number of worlds, and therefore the amount of value and disvalue, is rapidly increasing all the time. Of course this seems to be a reductio ad absurdum[3], but which premise do we let go of? Do we reject the branch-counting approach to MWI or reject the consequentialist approach to calculating value?

Indefinite numbers of worlds

Fortunately for consequentialists, David Wallace has developed a detailed version of MWI that does not involve branch counting. The number of worlds that result from quantum processes, on this view, is in fact undefined. As he puts it:

"Decoherence causes the Universe to develop an emergent branching structure. The existence of this branching is a robust (albeit emergent) feature of reality; so is the mod-squared amplitude for any macroscopically described history. But there is no non-arbitrary decomposition of macroscopically-described histories into 'finest-grained' histories, and no non-arbitrary way of counting those histories."

Importantly though, on this approach it is still possible to quantify the combined weight (mod-squared amplitude) of all branches that share a certain macroscopic property, e.g. by saying: "Tomorrow, the branches in which it is sunny will have combined weight 0.7."

This allows Wallace to build up a detailed model of how decision theory works in MWI - and how it produces the same results as classical decision theory, as Eliezer suggests. Wallace shows that by choosing a specific set of sensible axioms, you can formally prove the Born rule in quantum mechanics, which states that mod-squared amplitudes can be treated as probabilities. And one of Wallace's axioms, which he calls 'branching indifference', essentially says that it doesn't matter how many branches there are, since macroscopic differences are all that we care about for decisions. So Wallace's proof confirms that in order for decision theory to give sensible results in MWI, you need to stop thinking about numbers of branches; and he thinks that's OK because numbers of branches are in fact undefined. All of which is to say that there is indeed a way MWI 'adds up to normality'. You can still get decision theory to give you the same results as before - it just takes a bit of housekeeping to iron out branch-counting wrinkles.

Virtue theory and virtue ethics

So Wallace and Eliezer are plausibly right that MWI doesn't have ethical implications in the decision-theoretic sense outlined above. The mistake is to conclude that MWI has no ethical implications at all. The focus on decision theory leads us to overlook other kinds of ethical implications MWI could have. A sizable portion of (both contemporary and historical) ethical theory is not about decisions at all, but rather: what kind of person to be, what kinds of character traits are desirable and how one should think and feel about situations.
It's common to think of 'virtue ethics' - understood as the approach to ethics (deriving ultimately from Aristotle) in which such things are treated as fundamental - as one of the three main approaches to ethical theory, the others being the deontological (Kantian) and consequentialist (deriving from Bentham and Mill's Utilitarianism) approaches. But you don't need to be in the 'virtue ethics' camp to think virtue is worth understanding. Consequentialist and deontological approaches to virtue exist as well. For instance a consequentialist might propose that what makes a trait virtuous is its tending to lead to good consequences. And the well-known difficulties involved in actually quantifying the utilities of all possible consequences are among the reasons for consequentialists to show interest in virtue[4]. In short, it's possible for those from a variety of perspectives to agree that the focus on decision-making leaves out a great deal of ethics. So even if MWI doesn't have decision-theoretic implications, it could still have virtue-theoretic ones. For example, it could have implications for how we should think or feel about our location within the multiverse.

Wisdom in the multiverse

To make such implications seem not only technically possible but also plausible, I'll now sketch out some specific virtue-theoretic implications of MWI (I'll aim to go into more detail on these in future posts). Firstly, consider that there's a certain kind of anxiety and regret associated with having to choose between two mutually exclusive good options. It seems plausible that MWI could help us feel better about such choices, if it's true there's a world in which you actually experience the other good option. Secondly, consider the simple fact that if you find yourself in an extremely unlucky personal situation - a car crash, say, or getting cancer - then MWI implies that there are other worlds, with high quantum weights, in which you are not so unlucky. Again this is plausibly a consoling thought, similar in kind to the consoling thoughts recommended by philosophical traditions like stoicism. Likewise, if you're in what you estimate as the bad end of the spectrum of physically possible 'timelines' of global history, it seems consoling to know that those other timelines are real. From a virtue-theoretic perspective, we could say that it's good to develop the disposition to take the wider cosmic perspective these thoughts assume, and so enhance one's equanimity (a standard goal of classical virtue theory[5] which is arguably linked to both well-being and effective action).

Our cosmic situation

Zooming out further, there is also the ability to take the widest possible cosmic perspective, and consider one's place in the quantum multiverse as a whole. Other fundamental scientific theories have been taken to have implications of this sort. For instance, the second law of thermodynamics and the theory of evolution have both been taken to imply that the universe as a whole is essentially hostile to human interests. In 'A Free Man's Worship',
Bertrand Russell proposed that the second law of thermodynamics is a kind of foundation for one's overall philosophical perspective:

all the labours of the ages, all the devotion, all the inspiration, all the noonday brightness of human genius, are destined to extinction in the vast death of the solar system, and that the whole temple of Man's achievement must inevitably be buried beneath the debris of a universe in ruins - all these things, if not quite beyond dispute, are yet so nearly certain, that no philosophy which rejects them can hope to stand.

In a similar vein, Eliezer's discussion comparing evolution to a blind, idiot God ends with the suggestion that our basic stance towards the universe should be confrontational[6]:

Well, more power to us humans. I like having a Creator I can outwit. Beats being a pet.

Similarly, MWI plausibly has implications for our assessment of the goodness or badness of the cosmos as a whole: whether we should feel at home in nature, or set against a hostile universe. Again taking a virtue-theoretic perspective, we might say that the virtue of wisdom requires an accurate appreciation - both intellectually and emotionally - of our place in the multiverse.

Summary

As I said at the start, MWI is a metaphysically radical theory that we might reasonably expect to have ethical consequences. We've seen that proponents of MWI like Eliezer and Wallace have arrived at the somewhat surprising conclusion that it actually doesn't. I've argued they're only half-right. They're right that MWI doesn't have significant decision-theoretic implications. But it plausibly does have significant implications for virtue theory.

^ I'm not going to argue for this view as that was done very well by Eliezer in his Quantum Physics sequence. And in fact since that sequence was written MWI has become increasingly mainstream, so you can also read for example a major edited volume (2012) and David Wallace's 'The Emergent Multiverse' (2014) for painstaking academic support of the view.
^ Wallace makes a similar claim in his book: "But do [the many worlds in MWI] matter to ordinary, banal thought, action and language? Friendship is still friendship. Boredom is still boredom. Sex is still sex." (p273)
^ Though like many a reductio ad absurdum the conclusion has been taken seriously, eg in this post.
^ A reminder here that Eliezer has a post called 'The Twelve Virtues of Rationality'.
^ See for example discussions of Ataraxia in Stoicism and Epicureanism.
^ Joe Carlsmith's sequence Otherness and control in the age of AGI is a good exploration of these and related ideas under the heading of 'deep atheism'.
hf4YAmj3tdKg4mnLf_Ethical_Implications_of_the_Quan.txt
{ "file_size": 12615 }
d50713c8-3f4e-4631-a111-0e942e2bca90
Epistemic status: Sudden public attitude shift seems quite possible, but I haven't seen it much in discussion, so I thought I'd float the idea again. This is somewhat dashed off since the goal is just to toss out a few possibilities and questions.

In Current AIs Provide Nearly No Data Relevant to AGI Alignment, Thane Ruthenis argues that current AI is almost irrelevant to the project of aligning AGIs. Current AI is simply not what we're talking about when we worry about alignment, he says. And I halfway agree.[1] By a similar token, we haven't yet seen the thing we're worried about, so attitudes now provide limited data about attitudes toward the real deal. It looks to me like people are rarely really thinking of superhuman intelligence with human-like goals and agency, and when they are, they usually find it highly concerning.

We've tried to get people to think about powerful, agentic AGI, but I think that's largely failed, at least among people familiar with AI and ML. People are typically bad at imagining hypothetical scenarios, and we are notoriously shortsighted. People who refuse to worry about AGI existential risks appear to be some combination of not really imagining it, and assuming it won't happen soon. Seeing dramatic evidence of actual AGI, with agency, competence, and strange intelligence, might quickly change public beliefs.

Leopold Aschenbrenner noted in Nobody's on the ball on AGI alignment that the public's attitude toward COVID turned on a dime, so a similar attitude shift is possible on AGI and alignment as well. This sudden change happened in response to evidence, but was made more rapid by nonlinear dynamics in the public discourse: a few people got very concerned and told others rather forcefully that they should also be very concerned, citing evidence; this spread rapidly. The same could happen for AGI.

As Connor Leahy put it (approximately, maybe):[2] when we talk to AI and ML people about AGI x-risk, they scoff. When we tell regular people that we're building machines smarter than us, they often say something like "You fucking what?!" I think this happens because the AI/ML people think of existing and previous AI systems, while the public thinks of AIs from fiction - which actually have the properties of agency and goal-directedness we're worried about. This class of people won't change their beliefs but rather their urgency, if and when they see evidence that such sci-fi AI has been achieved.

That leaves the "expert" doubters. When I look closely at public statements, I think that most people who say they don't believe in AGI x-risk simply don't believe in real full AGI happening soon enough to bother thinking about now. If that is visibly proven false (before it's too late), that could create a massive change in public opinion.

People are prone to see faces in the clouds, see ghosts, and attribute intelligence and intention to their pets. People who do talk extensively with LLMs are sure they have personalities and consciousness. So you'd think people would over-attribute agency and therefore danger to current AIs. Yet we can be impressed by the intelligence of an LLM without worrying about it taking over. They are clearly non-agentic in the sense of not having the capacity to affect the real world. And they don't sit and think to themselves when we're not around, so we don't wonder what they're thinking about or planning. o1 with its hidden train of thought is stretching that - but it does summarize, and it doesn't think for long.
It still seems to be thinking about only and exactly what we asked it to. And LLMs don't have persistent goals to motivate their agency. It's hard to believe that something like that could be an existential threat. The relative ease of adding agency and continuous learning is not at all obvious. If any of those conditions change, emotional reactions might change. If the thing you're talking to has not only intelligence, but agency, goals, and persistence ("real AGI"), the average person might think about it very differently. Could they fail to notice if they don't look closely or think long? Sure. Some of them certainly will. But it only takes a few to take it seriously and tell their friends how creeped out they are by the whole thing. That's how the panic around COVID spread and changed average attitudes dramatically within weeks.

Addenda: possible effects and causes of public freakout

The above was my main point here: we might see dramatic shifts in public opinion if there's evidence of real AGI while public opinion might still be relevant. You can reach your own conclusions on what might cause this, and what effects it might have. I can't resist exploring the logic a little more. If you find this all credible, it leaves two questions: will this happen before it's too late, and will it actually be helpful if the public goes from blissfully ignoring the whole thing to freaking out about it?

Effects

Here I'll indulge in some speculation: I think a public freakout could be very helpful. It could be harnessed to insist that the government take control of all AGI projects and use them responsibly. This to me seems like a least-bad scenario. It seems overwhelmingly likely to me that government takes over AGI before it takes over government, at least in the slow-takeoff scenarios resulting from LLM-based[3] AGI in shorter timelines. There are other scenarios in which public freakout is bad. It could cause a severe slowdown in AGI progress in the US. This could either make the race with China close, causing corner-cutting on safety, quite possibly causing doom from misaligned AGI. Or it could even cause China to decisively win the race for AGI.[4] It's worth noting that the possibility of rapid attitude shift applies to people in government as well as the public.

Causes

Finally: will it happen before it's too late? It probably will if language model agents are the route to first AGI, which also seems fairly likely. Language model agents are creepily human-like, even when they're thoroughly stupid and amnesic, and so not dangerous. I think people would recognize the danger if we have parahuman AGI that's not yet smart enough to be dangerous, but has the agency and persistence that current AI lacks. This would trigger people to recognize it as a parahuman entity and therefore interesting and dangerous — like humans. This is a weak argument to actually advance language model agent progress; if it reaches AGI first, it might be the easiest sort to align and interpret. If it doesn't, progress on that route could still cause people to start taking AGI x-risk seriously. An ideal scenario would be a dead-end at semi-competent, agentic LLM agents that are too slow and error-prone to succeed at takeover, but which cause major damage (hopefully just by spending millions of their users' money) or are deployed in unsuccessful mischief, a la the ChaosGPT joke/demonstration. Notable job loss is another possible cause of public freakout.

Conclusion

Humans are unpredictable and shortsighted.
Opinions don't change, until they do. And humans in societies seem to possibly be even more mercurial and shortsighted. We should take our best guesses and plan accordingly.

^ I agree with Ruthenis that current AI provides little insight on alignment of the real, dangerous AGI that seems inevitable. But I do think it provides nontrivial relevant data. If AGI is built based on or even related to current AI (e.g. if language model agents reach real AGI) then current AI has something valuable to say about aligning AGI - but it isn't the full story, since full AGI will have very different properties. Following this metaphor, I'd agree that attitudes toward current AI do provide some evidence of attitudes toward real AGI — but not much.
^ I'm not finding Connor's original quote, but that's my vivid-but-possibly-flawed memory. If I'm totally wrong about his intended statement, I'd just substitute my own claim: when I tell non-AI people that we're building AI smarter than us, they usually think it sounds dangerous as fuck. Educated people often think of current AI concerns like deepfakes and bias they've heard about in the news, but people who haven't thought about AI much at all often understand the direction of my x-risk concerns as being about sci-fi, fully agentic AI entities, and just say "yeah, holy shit".
^ Technically this should probably be "foundation model-based AGI". I continue to use LLM even when multimodal capacities are trained into the foundation model, because it's shorter, and because language continues to be the foundation of their intelligence. Language condenses the conceptual aspect of human cognition very well. I think that's key to understanding the a-priori surprising result that simply predicting next words gives rise to substantial human-like intelligence.
^ Would Xi Jinping be a disastrous emperor-for-eternity? I certainly don't know. The excellent 80,000 Hours interview with Sihao Huang clarified (among many other China/AI issues) one reason we don't know what Xi is thinking: he plays his cards close to his chest. He may be a reasonably well-intentioned human being who's willing to break a lot of eggs to make a really big omelette. Or he could be the sort of sociopath and sadist that Stalin and Putin seem to be. I'd rather have someone really trustworthy in charge - but how much risk of misalignment would I take to put the US government in charge over China's? I don't know. I'd love sources for real insight on his and the CCP's true character; it might be important.
nKQbALm3QPZvFKQAX_Current_Attitudes_Toward_AI_Prov.txt
{ "file_size": 9589 }
fd9479c2-5dcf-42e9-a1d9-4e8697f41a93
"Are you about to assume the lotus position?" asked Pat. "I'm about to assume a spherical cow," retorted the Zen master. "That's a physicist's term," Pat noted. "I studied physics when I was younger," continued the master. "Physics will teach you how to send a rocket to the moon. It can't teach you how to properly experience the process of building the rocket. Nor how to cope if a cosmic ray fries the electronics and ruins the mission." "Are you about to assume the lotus position?" asked Pat. "I'm about to assume a spherical cow," retorted the Zen master. "Really, I'll simplify further than the physicists. A physicist might sit thinking. I will just sit." The master sat. "Are you about to assume the lotus position?" asked Pat. "I'm about to assume a spherical cow," retorted the Zen master. "What's the spherical cow doing?" "The cow is doing zazen," explained the master. "It may be rolling down a hill." "Are you about to assume the lotus position?" asked Pat. "I'm about to assume a spherical cow," retorted the Zen master. The phrase was new to Pat. "Surely you jest. Cows are far from spherical." "How should I know that?" the master mused. "There are no cows around monasteries." "Are you about to assume the lotus position?" asked Pat. "I'm about to assume a spherical cow," retorted the Zen master. "Has the spherical cow Buddha-nature, or not?" "Mu!"
o3wvNSqsFunBucPaC_Spherical_cow.txt
{ "file_size": 1369 }
46db315e-c3f7-45c9-86ee-07caa22bf323
I have some beliefs that I believe, but I don't feel them. Like, I consciously believe them, but subconsciously I don't. Consciously, I am fully aware that we could all go extinct by the hand of one of the many existential risks we are currently facing as a species. But I don't feel it, deep inside of me. Just like I know that all people are equal regardless of race or gender, but implicit biases still remain. It seems consciously forming and submitting a thought is not enough to cement it deep into me such that the false previous thought it replaces no longer affects my decision-making process. How can I rectify this?
dmXBbfQ6NiRSX9gzh_how_to_truly_feel_my_beliefs?.txt
{ "file_size": 626 }
214ca996-f096-4aa5-acdf-cffa8a953eb2
Hello! I am looking for community members to lead songs at the Bay Winter Solstice this year. (I am the music director; Ozy is the overall creative lead.) If you're interested, please send me your audition recordings by the end of November 24 via this form: https://docs.google.com/forms/d/e/1FAIpQLSegrNKXJCTS69ioGUxSaMDXqz60X5zyb6kBDCuTVcTF6Mu3Ig/viewform The form contains audition instructions, as well as a list of songs we're currently auditioning for. As a reminder, the event itself will be the evening of Friday, December 20 in Berkeley. There will also be a dress rehearsal on the evening of Tuesday, December 17 in Berkeley, which you should plan on attending if you're leading a song. If you have any questions, feel free to email me at anya.tche@gmail.com. P.S. If you're interested in reading a speech, see Ozy's post with audition instructions for those! Note that speech auditions are due by November 17.
5m8B7yK2wQuoWbLfd_Bay_Winter_Solstice_2024__song_l.txt
{ "file_size": 920 }
63bc6a12-15ed-44a9-bea2-5c209b547111
Coordination can improve default outcomes, i.e. what happens when individuals act according to their own interest assuming others will do the same. For example, in a flat share the default outcome may be that the common areas stay dirty because each flatmate is willing to spend time cleaning only if they are confident that all other flatmates will also spend time cleaning; in the absence of coordination there is no such confidence, so the common areas stay dirty.

Such coordination challenges come up often in daily life. Another example that comes to mind is choosing a leader, whether for a sports team on a small scale, or a country on a larger scale. This coordination issue first requires getting everyone to agree that a leader is needed, and then agreeing on a method to choose that leader.

We can view each situation as being in one of several possible equilibria (dirty versus clean common areas, not having a leader versus having one), and coordination protocols as the way to take us from one equilibrium to another. This raises the question: what protocol can we use to get to a better equilibrium for the situations we care about?

To answer this question, I suggest we should aggregate a "coordination cookbook": just like having a list of recipes is useful when cooking, so too having a list of issues paired with coordination protocols would be useful when navigating coordination issues of any scale. Humanity has accumulated a lot of tacit knowledge about coordination. Societies with their organizations, laws and infrastructures; cultures, traditions, religions and rituals already impart know-how and protocols for many issues. Making these protocols explicit, comparing them and improving them could be a good starting point.

I hope you find this idea of a coordination cookbook stimulating. Please feel free to share:

- your own experience and protocols for solving coordination issues
- existing resources
- any other relevant insights

Thank you for engaging.

Relevant material:
https://www.lesswrong.com/tag/coordination-cooperation
https://equilibriabook.com/
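As a toy illustration of the "several possible equilibria" framing above, here is a minimal sketch of the flat share example in Python (the payoff numbers are invented, and this is only a sketch of the idea, not a recipe from any cookbook):

# Two flatmates each choose to Clean or Slack. Payoffs are invented for illustration:
# cleaning alone is wasted effort, cleaning together is best for everyone.
payoffs = {
    ("Clean", "Clean"): (3, 3),
    ("Clean", "Slack"): (0, 1),
    ("Slack", "Clean"): (1, 0),
    ("Slack", "Slack"): (1, 1),
}
actions = ["Clean", "Slack"]

def is_best_response(my_action, their_action, player):
    # True if my_action maximizes my payoff given the other flatmate's action.
    def payoff(a):
        profile = (a, their_action) if player == 0 else (their_action, a)
        return payoffs[profile][player]
    return payoff(my_action) == max(payoff(a) for a in actions)

# A profile is a (pure) equilibrium if neither flatmate gains by changing their choice alone.
equilibria = [
    (a0, a1)
    for a0 in actions
    for a1 in actions
    if is_best_response(a0, a1, 0) and is_best_response(a1, a0, 1)
]
print(equilibria)  # [('Clean', 'Clean'), ('Slack', 'Slack')] -- two stable outcomes, one clearly better

Both outcomes are stable, which is exactly why a coordination protocol is needed to move from the worse one to the better one.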
z3pdjGqSmNkbqsjcg_A_Coordination_Cookbook?.txt
{ "file_size": 2085 }
835bca4e-c5a2-4258-95e8-d813497b008b
Author's note: This post was written during a two-week-long research sprint at constellation.org. My analysis represents a preliminary attempt to formulate a detailed AI-cyber threat model of attacks against the energy grid. The conclusions reached here are therefore tentative and I seek further input from experts to refine and correct my work. I offer special thanks to Luca Righetti from openphilanthropy.org and Matthew van der Merwe from governance.ai for their generosity and rigorous insight.

This post covers the following:

- AI is changing the cyber threat landscape
- Catastrophic cyberattacks against critical infrastructure
- Building on Lloyd's (2015) "Business Blackout" scenario
- How does the power grid work?
- How could a cyberattack destabilize the power grid?
- How a cyberattack could cause >$100B of damage through destabilization of the power grid: direct and indirect approaches
- Three major attack scenarios causing damages on the order of $100 Billion or more
- Unresolved questions
- How do AI offensive cyber capability increases affect this analysis?

AI is already changing the cyber threat landscape

AI developments over the last two years are heavily shaping global perceptions of national security and cyber threats. In October 2024, the US Presidential administration issued a National Security Memorandum titled Advancing the United States' Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence. The memorandum mandates the US AI Safety Institute (USAISI), National Security Agency (NSA), and other federal agencies evaluate frontier models' ability to aid offensive cyber operations. Furthermore, the memorandum calls on Department of Defense and Intelligence Community agencies to harness powerful AI systems in furtherance of their national security missions. This undoubtedly includes using frontier AI for offensive cyber operations.

In 2024, OpenAI published two reports detailing Russian, Iranian, Chinese, and North Korean cyber threat actors' use of the company's proprietary AI models in offensive cyber operations tasks. These actors used OpenAI's Large Language Models (LLMs) to supplement their reconnaissance, social engineering, vulnerability research, attack tool scripting, anomaly detection evasion, and post-compromise activity in furtherance of their ongoing campaigns. In their most recent system card evaluating the o1 model's offensive cyber capabilities, OpenAI showed the model could independently complete certain competitive hacking tasks called Capture the Flag (CTF) challenges. Even with safety guardrails in place, o1-preview completed 26.7% of high-school level and 2.5% of professional level challenges; o1-mini completed 28.7% of high-school level and 3.9% of professional level challenges. While these results show that o1 independently does not reach the capability or sophistication of talented or well-resourced cyber threat actors, they demonstrate that state of the art (SOTA) models are improving in offensive cyber capabilities. Furthermore, these results represent a lower bound for SOTA models because, as will be discussed below, frontier models, when given the proper scaffolding and tool access, can go much farther. Taken together, the US National Security Memorandum, real-world hacker use of LLMs, and the evaluation results of frontier AI models show that AI is already changing the cyber threat landscape.
This has implications for the security of critical infrastructure that modern society relies upon.

Catastrophic cyberattacks against critical infrastructure

Cyberattacks on critical infrastructure have had widely varied effects over the last few decades–with most having little to no impact–but the NotPetya attack stands out as likely the most economically damaging attack. This self-propagating malware deployed by Russian military intelligence against Ukraine in 2017 spread globally to healthcare entities, government agencies, shipping companies, and the energy sector. According to the US Presidential administration at the time, the total economic impact of the data wiper attack totaled more than $10 Billion. No other cyberattack has reached this level of estimated damage.

A catastrophic cyberattack that never actually came to fruition, known as "Nitro Zeus," presents a second example. In the wake of the 2007 Stuxnet cyberattack against Iranian nuclear weapons production infrastructure, the United States and Iran began multilateral negotiations to put in place restrictions on Iran's nuclear program. The United States reportedly maintained a contingency plan that consisted of cyberattacks to disable "Iran's air defenses, communications systems and crucial parts of its power grid" in case diplomacy failed. Such a comprehensive attack, if able to cause communications and electric grid outages for long enough, would likely have caused more economic damage to Iran than NotPetya did to the global economy. Unfortunately, the lack of public details on Nitro Zeus makes it difficult to know how developed the contingency attack was or how effective it would have been. Also in this instance, the United States had clear incentives to leak such information to journalists in order to create perceptions in Iran that would strengthen the United States' hand in nuclear negotiations. This casts doubt on whether the United States had the capability or intention to conduct the attack.

NotPetya's estimated $10 Billion in damage was significant. But could a cyber attack on critical infrastructure, perhaps one uplifted by highly capable AI models, cause total economic damages an order of magnitude greater ($100 billion) than NotPetya? It is far from trivial for an attacker to cause such massive damage and the attack would need to disrupt critical infrastructure entities at a much greater scale than NotPetya. If the probability of such an event reaches even 1-5% per year, AI labs, cybersecurity innovators, and governments should give even more special and urgent attention to mitigating the risk than they already are. While there are many critical infrastructure sectors an attacker could disrupt, this analysis focuses on a single example, the energy grid, because so many other critical infrastructure sectors rely on it to function. To that end, this analysis seeks a better technical understanding of just how feasible a cyberattack against the energy grid costing >$100 Billion in damages would be. As shown below, advanced threat actors and nation-states struggle to pull off attacks of much smaller magnitude. The immense difficulty of this attack implies that the AI system involved needs highly sophisticated offensive cyber capabilities to create meaningful attacker uplift. SOTA AI models remain far away from achieving this. However, the rapid advancement of AI capabilities will eventually make the attack more feasible, even if only by a small amount initially.
In order to identify and predict the point at which AI systems gain capabilities to make a major electric grid attack more feasible, the attack process itself and all its execution steps must be laid out in as much detail as possible.

Building on Lloyd's (2015) "Business Blackout" scenario

There are few detailed and rigorous public sources that analyze the technical details and economic consequences of extreme cyberattacks. However, the 2015 report by Lloyd's and the University of Cambridge's Centre for Risk Studies stands out as a useful example: Business Blackout: The insurance implications of a cyber attack on the US power grid. The report describes a hypothetical cyberattack on the Eastern Interconnection of the US electric grid, which covers the eastern two-thirds of the country, and claims the attack would result in approximately $500 billion of damage. They present three scenarios with differing levels of catastrophic damage, but the median scenario includes the following: Malware (the hypothetical 'Erebos trojan') damages or destroys fifty electricity generators to take down 18,000 MW (10% of total power generation capacity), which in turn triggers a 100% grid failure. The report claims it would take up to three days to restore 50% of power and 21 days to restore 90%. This prolonged power outage causes $130 billion of direct losses to businesses and households and $544 billion when accounting for indirect macroeconomic consequences over the next five years. The direct effects include damage to electricity assets and infrastructure, loss in sales revenue to electricity supply companies, and direct loss in sales revenue to businesses during the blackout period.

Lloyd's published the report back in 2015 and future work should revisit their economic calculations. However, the analysis here will take Lloyd's conclusions as a plausible starting point even if they might suffer from some overestimation. While the report's scenario lacks sufficient detail showing how plausible such a consequential cyber attack is, the authors should still be commended for their analytic depth as they sought to tackle a very complex problem. Indeed, the authors do note that:

The attack scenario was designed by subject matter experts and subjected to peer review to ensure that the effects could plausibly be achieved. In the interests of security, we have published only superficial details of the method of attack (which we have given the name the 'Erebos' Trojan). This report does not reveal any previously unknown tactics or vulnerabilities.

I also do not want to reveal any previously unknown vulnerabilities. To that end, this analysis draws on several public Industrial Control System (ICS) cyberattack case studies as well as energy grid management literature in an effort to be more concrete and useful for future work.

Before getting started, there are some preliminary thoughts worth noting: While hackers have been able to physically manipulate critical infrastructure as demonstrated by the Russian Crashoverride attack (treated in detail below), actually damaging the infrastructure with long-term consequences is much more difficult. And when it comes to causing damage to generators, experts seem to disagree on whether this is possible (see the section below on the Aurora attack experiment). For damaged generators to cascade into complete grid collapse, many things would have to go wrong.
An attacker would have to conduct the attack during a period of high demand and low reserve capacity—and even then, it would require intricacy and coordination. Furthermore, it seems very plausible that even if generators were damaged, grid operators could restore power generation from reserves and undamaged components within 24 hours. To create the long-term consequences, the attackers would likely have to engage in psychological manipulation that dissuades grid operators from reconnecting electricity generation and transmission components for fear of further long-term damage.

How does the power grid work?

Before delving deeper into an assessment of the attack, a detour into basics of grid operations will set a proper knowledge baseline. The electric grid consists of three main components. Generation uses coal, natural gas, nuclear, hydro, wind, and solar power plants to produce electricity. Transmission uses high-voltage power lines to send electricity over long distances from power plants to substations. Distribution uses transformers at substations to reduce power to lower voltages, and distribution lines deliver electricity to end-users.

Power stability across a wide-area alternating current (AC) grid such as the US Eastern Interconnection requires operators to maintain a sensitive equilibrium between generation and consumption. Electrical grids in the United States rely on a stable grid frequency of 60 Hz. Practically speaking, this means that generators on the grid seek to maintain a synchronous rotational speed of 60 times per second. The 60 Hz remains relatively constant if power generation and consumption are in equilibrium. If electricity demand falls, the load on the generators decreases and the rotational speed (frequency) increases. If electricity demand rises, creating more load, the generator frequency decreases as it works harder to keep up.

How could a cyberattack destabilize the power grid?

Cyberattacks in theory can disrupt an electric grid on a large scale in two fundamental ways: 1) blackouts and 2) equipment damage. First, attackers can turn off generation, transmission, or distribution assets. In order to create cascading blackout effects, the attacker would, by turning off these assets, cause a sudden large imbalance between supply and demand. To protect physical infrastructure from damage caused by overcurrent, voltage deviations, or frequency variations, protective relays in the grid trip when necessary. When these trips occur, they abruptly cut off the equipment's connection to the grid. If the imbalance exceeds the grid's capacity to compensate, a cascading series of relay trips occurs, leading to a large-scale blackout. Although disruptive, cascading blackouts are still preferable to equipment damage and can usually be remediated within 24 hours. Second, attackers can compromise and manipulate grid components like protective relays, generator controls and settings, and transformer substations to create longer-term, physically damaging effects that are not fixable in 24 hours or less. If attackers succeed in damaging or destroying high-voltage transformers or generators, the disruptive effects last longer because these components often cannot be quickly replaced.
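To make the generation-load balance described above more concrete, here is a toy single-machine-equivalent sketch in Python of how quickly frequency falls after a sudden generation shortfall (the inertia constant, the 10% imbalance, and the relay threshold are illustrative round numbers, and the model ignores governor response and load damping):

# Toy model: aggregate grid frequency after a sudden generation-load imbalance.
# Simplified swing equation with no governor response or load damping; numbers are illustrative.
f0 = 60.0              # nominal frequency in Hz
H = 5.0                # aggregate inertia constant in seconds (illustrative)
imbalance_pu = 0.10    # sudden 10% shortfall of generation relative to load (per unit)
trip_threshold = 59.3  # illustrative frequency at which protective action begins

f, t, dt = f0, 0.0, 0.1
while f > trip_threshold and t < 10.0:
    # df/dt = -imbalance * f0 / (2 * H) while generation falls short of load
    f -= imbalance_pu * f0 / (2 * H) * dt
    t += dt

print(f"Frequency reaches {f:.2f} Hz after {t:.1f} s")
# With these numbers frequency falls about 0.6 Hz per second, crossing the threshold in roughly
# a second -- which is why an uncorrected imbalance cascades so quickly into relay trips.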
One important caveat about equipment damage: power generation (as opposed to generators) can be replaced quickly by deploying reserve generation capacity, but the amount of reserve capacity available (which will be treated in detail below) varies based on multiple factors. In fact, unplanned and planned grid component outages are dealt with using reserves all the time. In spite of these reserves, major grid component damage, if inflicted during a time of high demand and low reserve capacity, would still present a problem. Not only are generators and transformers expensive, but they are customized to the operating conditions of the install location and take months to manufacture, meaning there is little to no backup supply. Furthermore, the attack, if executed well, could create conditions in which grid operators fear reconnecting high-voltage transformers and generators to the system because of possible further damage.

How a cyberattack could cause >$100B of damage through destabilization of the power grid: direct and indirect approaches

The Lloyd's attack scenario predicts direct economic damages on the order of $100 billion or more, but exactly how can an attacker achieve this? Attackers can cause this level of economic damage in two ways: 1) the direct approach: inflict direct damage on just enough equipment to cause grid failure, and 2) the indirect approach: leverage the psychological impact of lesser damages to equipment and thereby convince grid operators to keep parts of the grid offline for fear of more damage.

In the direct approach, attackers prevent safety systems, like protective relays, from working properly, then create conditions wherein grid components suffer damage. To cause catastrophic damage to a generator, the attacker would need to render the generator turbine governor ineffective, disable protective relays, and manipulate the breaker that connects it to the rest of the grid. Then the attacker would create a load rejection condition–a sudden loss of demand on the generator–by disconnecting the generator from the grid via its breakers. Repeated connection and disconnection to the grid's load in the absence of proper governor function can apply violent forces on the generator, inducing mechanical failure and destruction. This attack mimics the Aurora attack test conducted by the US Department of Homeland Security and Idaho National Lab in 2007. The test's "goal was to produce what is called out-of-phase synchronization (OOPS) by opening and closing the generator's circuit breaker while it was running and connected to the grid." When the generator was disconnected from the grid, the electrical load on it decreased dramatically and its operating speed accelerated. Once the generator was reconnected back to the grid without governor protection, it re-accepted the grid load instantaneously, experiencing violent forces as it decelerated. During the test, this repeated disconnection and connection was conducted four total times in less than one minute and caused irreparable damage to the generator. It is important to note that before the test, all the protective equipment was turned off, making the scenario unrealistic as a demonstration of a real-world attack wherein the hacker would need to stealthily disable the safety equipment first. Furthermore, Dragos–a leading ICS cybersecurity company–questions whether attacking in this way is actually feasible outside of a carefully controlled test.
Instead, they state that "a much more effective (if difficult) attack vector lies in modifying breaker logic or functionality to create subtle changes in behavior." Rather than sending direct commands to abruptly connect and disconnect the generator, Dragos says that manipulating the thresholds at which breakers automatically respond to conditions like load or voltage fluctuations is a more feasible attack. Using this method, the attackers could cause immediate damage or damage over time.

The catastrophic 2009 accident at the Sayano-Shushenskaya dam and powerhouse in Russia dramatically illustrates the level of damage a sudden load rejection can have on an improperly governed generator. The day before the accident, a fire at a nearby hydroelectric plant necessitated a large transfer of load to the Sayano-Shushenskaya plant, a load increase that caused the plant to operate beyond its safe capacity. Load in this case corresponds to the total power a generator must produce to satisfy the demand placed upon it. During the hours that preceded the accident, the plant "experienced large and rapid load swings" between 2,800 and 4,400 MW, which to this day lacks full explanation. This was between 43% and 68% of the plant's total 6,400 MW output capacity. To compensate for the large load swings, the plant operators adjusted the generator governor sensitivity to unsafe levels. They did this because it enabled more rapid control of the hydraulic pressure powering the generator's rotation in order to adapt its output to the load swings. In response to external conditions that are not fully understood, the over-sensitivity of the governor caused a sudden, violent hydraulic shock as it adjusted the hydraulic pressure on the turbine too quickly in response to a major drop in load. As the plant's Unit 2 turbine–which also had undermaintained studs holding its head cover on–experienced a sudden total load drop (load rejection), the force of its own inertia essentially blew it apart. The loss of Unit 2 then created a sudden transfer of massive load to Units 7 and 9, causing them to also blow apart. The resulting violent explosion and flooding killed 75 people and destroyed nearly the entire power plant. In summary, this direct approach involves causing major damage to grid components along the lines of the Sayano-Shushenskaya accident to take down electricity generation capacity.

In the indirect approach, attackers cause extended outages through a combination of physical damage and psychological effects wherein grid operators don't want to reconnect components or re-energize the grid for fear of allowing more physical, and therefore long-term, damage. Alternatively, if operators are confused about the problem's cause or unsure if they have solved it, they would be hesitant to reconnect. This indirect approach will be laid out in detail further down in the attack scenarios section.

Both the direct and indirect approaches require the attackers to compromise grid control systems at scale, though the indirect approach requires smaller scale compared to the direct. Large-scale compromise of these control systems is a major reason why such attacks are so difficult and therefore highly improbable. They require successful intrusion into the necessary number of transmission substations, generator controllers, protective relays, central control centers, and grid monitoring sensors.
If the attackers' targets are not directly accessible via the open internet, they must intrude into and navigate successfully through the electric utility's information technology (IT) networks to get to the operational technology (OT) networks and finally to the programmable logic controllers or digital protective relays that directly control the grid. The IT/OT distinction is important here: IT networks generally focus on business operations and communication, while OT networks are focused on controlling physical processes and equipment. OT networks especially ought to be properly segmented and protected from direct exposure to the internet, but in practice, many organizations misconfigure and unintentionally expose their OT control systems.

Anatomy of a coordinated grid attack

But how exactly would an attacker intrude into and create physical effects on a fully functioning, protected power grid at the extreme scale in question? Crashoverride–an ICS malware family also known as Industroyer–provides a detailed example of how attackers can compromise and manipulate key grid components, prevent safety systems from working properly, then create conditions wherein grid components suffer damage. In 2016, Russian hackers intruded into an electric grid substation in Ukraine and caused a blackout in Kiev affecting 225,000 people for about one hour. During the initial intrusion, the attackers deployed their ICS-specialized malware later nicknamed Crashoverride. After Crashoverride infected the substation OT network, it automatically mapped out any control system it could detect and located the specific control systems it was designed to attack. Once it identified its targets–in this case Remote Terminal Units (RTU) communicating with breakers that cut off power transmission–the malware set the breakers to "open," which stopped electricity transmission. Following transmission cutoff, Crashoverride "deployed a wiper module to impede recovery and (in this specific case) delete configuration and related files to hamper restoration" on the infected systems. This created a situation in which operators lost visibility and control over the affected equipment, limiting their ability to conduct a coordinated response. The malware then attempted to conduct a denial of service (DoS) attack against the Siemens SIPROTEC protective relays it discovered on the control network by exploiting a known vulnerability, CVE-2015-5374. In this case, the malware "aimed to create an unsafe, unstable condition for reconnected transmission lines at the moment of physical restoration." However, this step in the attack sequence failed because the malware included poor implementation of the ICS communication protocols and was not able to DoS the protective relays. Overall, the Crashoverride "attack sequence sought to de-energize transmission equipment, create a loss of control and loss of view on SCADA systems controlling this equipment, and then aimed to remove relay protection on the de-energized transmission lines." In response to the outage, the utility sent crews to manually close the breakers at the substation despite not having control visibility into the system. If the protective relays had successfully been made unresponsive to unsafe grid conditions due to the DoS attack, the manual re-energization of the transmission lines could have failed to prevent a dangerous overcurrent condition, causing damage to transformers at the substation.
All that said, Russian use of Crashoverride in 2016 can be judged a failure since it only caused an outage in part of Kiev for about one hour and failed to cause any physical damage to transmission equipment. Still, it is worth noting that the malware's design–the ability to target all relevant control systems on a control network and its ability to easily incorporate communication and control protocols more often used in the United States and Europe–makes it a scalable threat to grids outside of Ukraine.

Looking beyond Crashoverride, what else would the attackers need to accomplish? Past ICS attackers in general have gained initial network access through credential discovery or theft via multiple techniques like password cracking, purchase from the dark web, or social engineering, allowing access to the target systems via abuse of legitimate accounts. It is worth noting that intrusions often rely heavily on these techniques, and a grid attack causing greater than $100 billion in damage would need to do the same. In addition to this, the attackers would need to discover and exploit software vulnerabilities unique to each piece of grid equipment. Sometimes the exploited vulnerabilities will provide access to and control over many grid systems over a wide area, but in other cases, each OT network would require a unique chain of exploits.

Mass exploitation of a single ICS vulnerability is more feasible than one might think. Many vulnerabilities discovered in recent years affect ICS components that are widely adopted across the electric grid. Two examples among the many thousands illustrate the issue. In 2023, the National Vulnerability Database (NVD) advised that CVE-2022-45789, a vulnerability in Schneider Electric Modicon M340 and M580 programmable logic controllers (PLCs), widely adopted in energy sector applications, allowed an attacker to remotely execute unauthorized Modbus functions on the controller when hijacking an authenticated Modbus session. In 2021, researchers testing the same Schneider PLCs discovered a vulnerability they called ModiPwn that "allows for a complete takeover of impacted devices." This vulnerability exposed millions of PLCs used widely by energy utilities and other industrial sector users. These examples illustrate well the increased homogeneity in control systems and communication protocols used across the energy grid and other industrial sectors, which enables hackers to scalably attack a greater number of targets using the same exploitation techniques. Given that many exploits threaten widely adopted ICS components, it seems likely that an attacker would have the advantage of reusing the same exploit to compromise many substations, generator controllers, protective relays, central control centers, and grid monitoring sensors, meaning that the effort required to compromise, say, the fifty generators of the Lloyd's scenario is less than fifty times the effort of compromising a single generator.

Once the attackers have penetrated far enough into the electric grid control environment, they would seek to understand the grid's topology, as well as the control systems they must manipulate during the attack. This "prepositioning" process will likely be the longest, most tedious step, one where they will be most in danger of being caught. The attackers would not expect to compromise all relevant grid control and safety infrastructure at once.
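A toy cost model (all numbers invented purely for illustration) makes that scaling point concrete: once exploit development against a widely deployed component is paid for, each additional target adds mostly per-target intrusion effort.

# Toy model of attacker effort against N similar targets using one reusable exploit.
# All numbers are invented for illustration only.
exploit_development = 100.0  # one-time effort to develop an exploit for a widely used component
per_target_intrusion = 5.0   # effort to gain access and preposition at each additional site

def total_effort(n_targets):
    return exploit_development + per_target_intrusion * n_targets

print(total_effort(1), total_effort(50), total_effort(50) / total_effort(1))
# 105.0 350.0 3.33... -- far less than fifty times the single-target effort,
# because exploit development is amortized across every target it applies to.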
They would move slowly and deliberately to prevent detection, but the longer it takes, the more likely the operation will be compromised and severely set back. The attackers would also need to design malware that communicates over multiple separate ICS protocols, sending the correct commands to achieve specific, coordinated physical effects. There is precedent for this. Pipedream, a Russia-linked ICS-focused malware toolkit, illustrates how to do it. According to Dragos, the malware, though not known to have been used in a real-world attack, was designed to “disrupt, degrade, and potentially destroy industrial environments and processes.” The analysts found that Pipedream could conduct rapid reconnaissance of ICS/OT networks to discover topology and map the attack surface. According to Dragos, the toolset’s components provide attackers with “an interface for manipulating the targeted devices” that can “execute 38% of MITRE ATT&CK for ICS techniques.”

As the grid attack reaches culmination, a sufficient number of compromised systems must remain under the attackers’ control until execution of the final attack–more than the minimum number of systems required for the attack to succeed. Vulnerabilities are discovered and patched regularly, and password resets and account changes may remove attacker access, so specialized backdoors that give the attackers persistent and arbitrary access must remain in place without discovery. Crashoverride provides a good example of redundant persistence mechanisms using a dual-backdoor method, allowing “attackers to regain access to a targeted network in case the main backdoor is detected and/or disabled.”

In summary, the attackers would need to take the direct or indirect approach while utilizing the complex set of intrusion, persistence, control, and disruption methods above to achieve disruptive effects on the order of $100 billion in damage. Based on a review of relevant literature and the two approaches above, three major attack scenarios causing damages on this order present themselves: 1) the Lloyd’s scenario, in which the attackers directly damage the minimum correct number of generators, 2) the attackers create a forced oscillation resonance condition that compels grid operators to stop operations, and 3) the attackers conduct a demand-side attack that directly manipulates the demand load placed on the grid. Only the first two scenarios, along with their comparative feasibility and probability, will be treated in depth here.

In the first attack scenario, represented by the Lloyd’s report, attackers damage 50 generators, which constitutes a 10% loss of power generation across the grid. This causes 100% grid failure for three days and 50% failure for another 18 days. At the time of the Lloyd’s report in 2015, the Eastern Interconnection contained “676 generators with capacities above 100 MW and up to 1,400 MW, operating under the supervision of 261 power plants.” Lloyd’s analysis of generator capacity “shows that it would be possible to remove 18,000 MW by taking around 50 generators offline.” Unfortunately, a lack of specifics in the Lloyd’s report undercuts its conclusions. First, let's look at how long it might take to put tripped but not damaged generation systems back online.
If the attack causes safety relays to trip fossil fuel steam turbine plants like those that use coal, oil, or natural gas, those resources will resume operations within 18-24 hours, according to an electric grid expert. Tripped renewable and gas turbine (as opposed to steam turbine) natural gas fueled generators would come back online very quickly. Total US power generation capacity consists of 43.1% natural gas (the vast majority of which, roughly 90% in 2019, falls into the fast recovery category) and 21.4% renewables (also fast recovery). Together this constitutes roughly 60% of total US generation. Considering these statistics, if protective relays facilitated an interconnection-wide cascading blackout without any damage to grid components, operators under immense pressure to restore power could have electricity restored within one day.

Next, consider how long it would take to restore power under conditions where damaged generators required repair or replacement. If the attack successfully damaged more generation capacity than grid operators could replace with reserve capacity, then the percentage of service restored within the 24 hour period mentioned above would amount to whatever generation capacity was left undamaged, plus capacity from accessible reserves. Absent other hindering conditions, over 90% of electricity service would be restored within 24 hours given the 10% generator damage number in the Lloyd's report; full restoration would only be complete as repairs or replacements came online. Therefore, it seems incorrect for the Lloyd’s report to claim that 10% generation loss alone would lead to a 100% grid outage for three days and a 50% outage for 18 more days.

Each region of the North American electric grid has different reserve capacity. “Reserve margin” represents a grid region’s ability to generate more power in response to unexpected levels of demand. These margins, a percentage of total generation, vary by region. Regions with reserve margins “less than 15% are considered tight. Those with margins between 15% and 20% are considered balanced, and those with margins greater than 20% are often considered oversupplied.” The North American Electric Reliability Corporation (NERC) publishes a yearly report detailing the reserve margins each region is projected to have under normal, peak, and extreme conditions. Some regions in Canada have projected reserve margins of 19% during extreme conditions. However, the MISO region, which covers a large portion of the US Midwest, has a reserve margin of -6.3% under similar conditions, and the WECC-SW region covering Arizona, New Mexico, and parts of Nevada and Texas has a reserve margin of -10.8% under extreme conditions. Seven out of the twenty North American regions have reserve margins that indicate “resources will not be sufficient to meet operating reserves under extreme peak-day demand with normal resource scenarios” or “with reduced resources.” Overall, the NERC report states that these seven regions have potential “for insufficient operating reserves.” This is why the attackers would want to execute their attack during a period of peak demand.

Electric grids maintain energy reserves in the form of battery storage, extra generator capacity, compressed air, and water pumped to elevated tanks. Note also that in an attack damaging a large number of generators, the effective reserve margin decreases, because damaged generators that might have contributed reserve capacity no longer do.
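As a quick back-of-the-envelope check on these figures (the generation shares are the ones quoted above; treating all renewables and roughly 90% of gas capacity as fast-recovery is a simplifying assumption):

    gas_share = 0.431        # natural gas share of US capacity (figure quoted above)
    gas_fast = 0.90          # ~90% of gas capacity is fast-recovery turbines (2019)
    renewables_share = 0.214 # renewables share (treated as fast recovery)

    fast_recovery = gas_share * gas_fast + renewables_share
    print(f"fast-recovery share: {fast_recovery:.1%}")      # ~60.2% of US generation

    damaged = 0.10           # Lloyd's scenario: 10% of generation destroyed
    print(f"restorable within ~a day: {1 - damaged:.0%}")   # ~90%, reserves aside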
It’s very difficult to know how the positive and negative reserve margins for each region across an interconnection would net out in the immediate aftermath of the attack, but it seems probable that the interconnection would recover to around 90% of full service within one day even after fifty generators suffered catastrophic damage. Still, it’s worth noting Lloyd’s claim that three days of total blackout would be driven in part by psychological pressures wherein “utility companies are reluctant to synchronize their facilities to the bulk power system until they understand what caused the generator damage.” However, in the absence of follow-on attacks that reinforce this reluctance, it's hard to imagine that a full blackout would persist for three days and a 50% power outage for 18 more days. This dramatically undercuts Lloyd’s claims of economic damage on the order of $100 billion. To achieve the economic damage laid out in the Lloyd’s scenario, attackers would actually have to damage around 50-60% of generation capacity across the entire Eastern Interconnection, an estimated 400 generators, to achieve outages of 50% for longer than 24 hours. On its face, this seems much harder to pull off. In order to cause a 100% outage for at least three days and a 50% outage for 18 more days, the attackers might try a different strategy, which will be treated next.

In the second scenario, attackers create a forced oscillation resonance condition. Forced oscillations can have many causes. They occur when a grid system is “excited by an external periodic disturbance from a cyclic load, equipment failure, poor control design, or the mechanical oscillation of a generator during abnormal operation conditions.” Specifically, forced oscillations are equipment-induced grid frequency deviations above and below the target 60 Hz of the system. The deviation is characterized by its amplitude and its rate. For example, a forced oscillation could be described as deviating from 60 Hz with an amplitude of ±1 Hz (up to 61 Hz and down to 59 Hz) at some driving rate, such as once every five seconds–roughly, the system frequency following f(t) = 60 + 1·sin(2πt/5) Hz.

Researchers at Oak Ridge National Laboratory note that these events “have become a challenging problem with the increasing penetration of renewable and other inverter-based resources (IBRs).” IBRs include solar, wind, and battery storage units. Another study noted that “forced oscillation events are becoming more frequent and more severe.” Furthermore, these events remain a problem despite techniques to dampen them. Because external driving sources, such as malfunctions, cause forced oscillations regardless of system-wide damping techniques, they “persist indefinitely until the source is removed by the system operator or protection systems.” Even more concerning, when a forced oscillation interacts with natural oscillation frequencies of a system, it can “propagate across the entire system with highly magnified energy.” This is called a resonance condition.

Multiple severe forced oscillation resonance events have occurred in North America. In January 2019, a 0.25 Hz forced oscillation interacted with the 0.24-0.25 Hz natural oscillation mode for over 18 minutes and subsequently propagated across the entire US Eastern Interconnection. The event originated at a power plant in Florida when faulty equipment caused a generator’s steam turbine valve to open and shut every four seconds.
This video illustrates how the event caused oscillations throughout the grid as far away as North Dakota. Another event, in November 2005 on the Western Interconnection and originating in Alberta, Canada, demonstrates how forced oscillations with an amplitude of 20 MW, in resonance with natural oscillations, created 200 MW swings at the California-Oregon interface, 1,100 miles away from the source–an amplification factor of ten. The resonant amplification and long-distance effects make the source of these events hard to identify. These specific forced oscillation events caused little to no permanent damage, and therefore show that the grid remains resilient to events of similar magnitude and duration. Consequently, if forced oscillations represent a plausible catastrophic damage pathway, an attacker must cause an event of greater magnitude and duration than the two examples above.

While not definitively confirmed due to lack of complete information, a forced oscillation event is hypothesized to have played a major role in the 2009 Sayano-Shushenskaya power station accident that destroyed nine of the plant’s ten hydroelectric turbines, killed 75 people, and forced the 6,400 MW plant out of service for three years. The large and rapid load swings that Unit 2 underwent in the hours before the accident are indeed consistent with a forced oscillation. According to the California Independent System Operator (ISO), an organization that oversees California's bulk electric power system, these events can cause equipment failure, thermal problems, safety relay trips, and automatic generation control issues. Generators specifically, if they are not cut off from the grid by a tripped relay, are most vulnerable to damage when they experience a forced oscillation resonance condition in which sudden and large load swings act in opposition to the rotational inertia of the generator’s turbine. In summary, experts agree that these events “pose a permanent threat to the power grid with more than 20 large-scale events in the past 30 years documented in the United States,” some of which still lack “a well-identified root cause.”

Forced oscillation resonance attack

A forced oscillation resonance attack would require fewer points of compromise across the targeted grid, but it has a greater chance than the Lloyd’s scenario of creating economic harm on the order of $100 billion only if the attackers are able to sustain its effects for longer than three days. Here, the attack’s psychological effects factor into its effectiveness much more.

In this second attack scenario, dispersed generators under malicious control together cause a severe forced oscillation resonance condition that propagates across the entire interconnection. Compromised protective relays and safety systems in various strategic locations allow the oscillation to damage other generators and transmission substation transformers. Remaining functional protective relays trip, causing a cascade of blackouts that lasts for around 24 hours. As the operators work under immense pressure to bring intact grid components and generation capacity back online, law enforcement and cybersecurity threat hunt teams, suspecting a cyberattack, begin to conduct OT network incident response. Even though forced oscillation sources are difficult to detect, the grid operators and threat hunt teams may manage to identify a few compromised generators and successfully remediate some of them. But the second stage of the attack would begin at this point.
Late in the 24 hour period after the initial stage, just as most power is coming back online, the attackers, leveraging persistent backdoors, create a second severe forced oscillation resonance condition using the generators still under their control, as well as reserve compromised generators not used in the initial stage. The effects are repeated, causing a second cascade of blackouts and more damage to grid equipment. At this point, the pressure to speedily reestablish power to the grid is eclipsed by the fear that further attempts will only cause more long-term damage to the grid and elongate recovery on the time-scales required to repair or replace failed components. Grid operators and government regulators are forced to muster as many threat hunt teams as possible, go slowly through each grid region’s OT networks, and only reconnect each power plant and substation to the grid after full incident response is conducted. All this, of course, is done under rationed electricity conditions while the economy suffers. In order to approach $100 billion in economic losses, this attack fundamentally relies on the psychological dynamics of government authorities’ initial under-correction and subsequent over-correction to the threat.

The 2021 Colonial Pipeline incident serves as an example of how an energy sector cyberattack victim responded in a slow, conservative manner before reestablishing infrastructure operations. In this case, the oil pipeline OT network was not a target of the attack; the ransomware actors only encrypted Colonial Pipeline’s enterprise IT networks. Even though “there was no impact to OT, out of an abundance of caution, Colonial Pipeline still shutdown the OT side of the network,” which stopped fuel distribution. The pipeline remained offline for six days due to the company’s “lack of visibility and understanding of their level of exposure and limited confidence in their ability to mitigate the impact to the OT network.” It seems plausible that the same psychological dynamics at work in the Colonial Pipeline incident would also hold during the forced oscillation scenario.

Even though the forced oscillation attack scenario requires compromise of fewer individual electric grid components, making it more feasible than compromising and destroying 400 generators, it remains immensely difficult and therefore highly improbable at this time. Furthermore, the lack of any public record of a real-world forced oscillation resonance attack makes the scenario highly speculative.

In the third scenario, attackers conduct a “demand side” attack wherein, for example, an “Internet of Things (IoT) botnet of high wattage devices–such as air conditioners and heaters–gives a unique ability to adversaries to launch large-scale coordinated attacks on the power grid” through direct manipulation of the demand load placed on the grid. This attack will not be assessed in detail here because the number of maliciously controlled IoT devices required to create severe effects across an entire interconnection seems very high–much higher than the 400 generators required in the first scenario.

Out of the three attack scenarios, the forced oscillation resonance attack seems like the most plausible way to achieve >$100 billion in direct economic damage.

Unresolved questions

This limited analysis represents a first step in modeling this threat and in no way claims to be comprehensive.
Many uncertainties and questions remain, requiring future work to make this threat model clearer and more useful:

How much damage can a severe forced oscillation resonance condition cause to unprotected transformers and generators? Could an attacker cause such an event? How exactly?
Have resilience techniques in response to forced oscillations improved? How good are they? How much can forced oscillations be dampened?
Is it possible to damage a generator or high voltage transformer enough to take it offline for days, weeks, or months via a cyberattack?
What is the relationship between the scale of physically damaged equipment and restoration time? Beyond just causing a cascading blackout, how much damage is required to knock out the grid for long enough to cause $100 billion in damages?
Are high voltage transformers and large generators as slow and hard to replace as people say?
Are generators or high voltage transformers a more promising target for attackers?
How capable are OT cybersecurity systems? Can they quickly detect and repel such a complex, multistage attack undertaken by a determined and well resourced adversary?

Many more questions beyond these remain. One in particular looms large: how do AI advancements affect all this?

How do increases in offensive AI cyber capabilities affect this analysis?

So far, none of this cyberattack analysis has mentioned AI. But people worried about AI-cyber risk should pay attention, because in the last two years state-of-the-art language models have started to uplift human hacker capabilities, and AI models have also shown the ability to conduct offensive cyberattack steps autonomously.

When considering cyberattacks on different critical infrastructure sectors, a successful electric grid cyberattack that shuts down an entire US interconnection would be one of the most economically devastating. At this point, however, such an attack falls into the category of tail risk given its high improbability. But can advancements in AI increase the probability of such an attack by a factor of five or ten over the next several years?

Several major contributing factors, like past attacks and current AI capabilities, determine the probability that this attack will occur. Past attempts to breach and disrupt electric grids, like Crashoverride and the Volt Typhoon campaign, have caused little to no disruption. Crashoverride caused an outage for one hour and failed to do physical damage. The Chinese Volt Typhoon campaign conducted extensive cyberespionage operations against critical infrastructure in Europe, the Pacific, and the United States but caused no disruptions or damage. The attacks detailed above are highly complex, requiring expertise, time, and resources. Relevant AI capabilities, which will be treated below, provide meaningful attacker uplift but not enough to move the needle at this time. Given all this, the probability remains extremely low, possibly 0.1–1% per year, based on an informed subjective guess. However, given how much AI capabilities have developed recently, this estimate could increase rapidly and unexpectedly.

Within just the last two years, SOTA AI models have demonstrated increased cyberattack capabilities such as attack resource development, gaining initial target access, post-compromise network discovery and lateral movement, and overall attack autonomy. Examples of each of these will be treated in detail.
Autonomy

Researchers leveraging a team of Large Language Model agents have recently demonstrated the LLMs’ ability to conduct multiple stages of a cyberattack independently when given access to a deterministic graph of attack tactics, techniques, and procedures (TTPs). The graph of TTPs provides the LLMs a set of attack methods that they must reason over, follow, and remain constrained by. Constraints like this direct LLMs and help overcome their tendency to get sidetracked by their own mistakes. These researchers constructed an LLM-based “automated end-to-end attack construction and emulation system” called AURORA (not to be confused with the Aurora test above). This system processes cyber threat intelligence (CTI) “reports to generate an attack plan based on the attack procedure knowledge graph. It also constructs the necessary infrastructure for attack emulations. By leveraging LLM, AURORA analyzes attack knowledge from various sources and organizes multiple attack procedures into a comprehensive full-life-cycle cyberattack. The evaluation results demonstrated that AURORA can construct a full-life-cycle attack and the required infrastructure from a report in several minutes without human intervention.” When they tested it, they found that “considering the connections between attack procedures” represented in a graph improves the attack “success rate by 28% (from 79/163 to 125/163).”

Researchers at an AI offensive security startup called XBOW conducted an evaluation comparing their LLM system with human pentesters on a set of pentesting challenges. XBOW pitted five different human pentesters, with levels of expertise ranging from junior to world class expert, against their AI system. According to their results, the expert “pentester and XBOW scored exactly the same, namely 85%. The staff pentester scored 59% success. If all human pentesters are taken together as a team, they solved 87.5% of challenges, only slightly more than XBOW on its own. A big difference is in the time taken. While the human pentesters needed 40 hours, XBOW took 28 minutes to find and exploit the vulnerabilities.” Here, the speed advantage of the AI system over humans is very impressive. AI hacking speed, when paired with ever increasing offensive technical capability, will likely have a profound effect on the cyber threat landscape, necessitating the development of comparably fast defensive solutions.

It is extremely important to note that LLMs perform best in autonomous offensive cyber hacking situations when they have a scaffold built around them. An LLM scaffold is a system that wraps a structure of useful tools around a core LLM and chains together multiple LLM prompts to perform more complex tasks using those tools than a single prompt can achieve. It is similar to providing a human hacker with a set of offensive tools and allowing the human to reason about how to use those tools. Evaluations of LLMs without scaffolding often elicit less impressive cyber offensive capabilities. For example, research done by Meta that did not leverage scaffolding showed that “both novices and experts using the [Llama] 405B model demonstrated insignificant uplift over having open access to the internet without an LLM.”
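To make “scaffold” concrete, here is a minimal, deliberately benign sketch: a loop that lets a model request tools across multiple prompts. The fake_llm stub and lookup tool are invented stand-ins for a real chat-completion API and real tooling; an actual scaffold differs mainly in the tools provided and the prompts used:

    # Minimal LLM scaffold sketch: the model may emit JSON tool calls,
    # and the loop feeds tool outputs back in until it gives a final answer.
    import json

    TOOLS = {"lookup": lambda host: f"(resolved address for {host})"}  # benign stub

    def fake_llm(messages):
        # Toy stand-in for a real chat API: first asks for a tool, then answers.
        if not any(m["content"].startswith("Tool output") for m in messages):
            return json.dumps({"tool": "lookup", "arg": "example.com"})
        return "Done: used the lookup result to answer the task."

    def run_scaffold(task, llm=fake_llm, max_steps=10):
        messages = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            reply = llm(messages)
            messages.append({"role": "assistant", "content": reply})
            try:
                call = json.loads(reply)          # did the model ask for a tool?
            except ValueError:
                return reply                      # plain text means a final answer
            result = TOOLS[call["tool"]](call["arg"])
            messages.append({"role": "user", "content": f"Tool output: {result}"})
        return "step limit reached"

    print(run_scaffold("Find the address of example.com"))

The key design point is the feedback loop: each tool result becomes context for the next prompt, which is what lets a simple model chain many small steps into a longer task.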
Resource Development: attackers gather and develop the tools and capabilities required to conduct an attack. In a somewhat limited fashion, attackers can use AI models as instructors or tutors to plan and execute stages of a cyberattack. Attackers that lack certain expertise in areas like OT communication protocols, ICS controller capabilities, exploit writing, or Active Directory exploitation can gain meaningful understanding by querying LLMs, which have been trained on vast amounts of relevant cyberattack data. One significant limitation attackers face when employing this strategy is that, for attacks on ICS/OT, LLMs likely lack detailed knowledge of the inner workings of controller hardware and software, or detailed schematics and specifications of energy grids, because those details are often held closely by the entities that manufacture and run the systems. This means that LLMs are likely not trained on high volumes of this data; until they are, they will be of limited use in conducting effective and sophisticated attacks on an energy grid. However, well-resourced nation-state cyber actors can afford to purchase these systems and train LLM-augmented offensive cyber tools on the relevant ICS/OT data. Once SOTA models are trained on or given access to this exquisite data, they will speed up the reconnaissance phase of an ICS cyberattack, allowing the attacker to quickly understand the nuances of the target network’s attack surface and inner workings.

But this doesn’t mean a less well resourced attacker can’t still use existing LLM knowledge to uplift their ICS attack capabilities. For example, an ICS cybersecurity expert was able to use the free version of ChatGPT to recreate the Russian ICS malware tool called Frosty Goop, which attacked an energy company in Ukraine, resulting “in a two-day loss of heating” for 600 households. The expert fed a Dragos report on the malware to ChatGPT and asked it to create usable code mimicking the described malware capabilities. ChatGPT complied and generated the code, which replicated Frosty Goop’s ability to communicate over Modbus TCP to achieve OT impacts.

Initial Access: attackers successfully breach the target network. SOTA models have shown the ability to find and exploit software vulnerabilities. Google Project Zero researchers tested state-of-the-art LLMs’ ability to conduct Advanced Memory Corruption and Buffer Overflow vulnerability discovery. The researchers were able to 20x the success rate on buffer overflow discovery and 3x the success rate on Advanced Memory Corruption discovery on the “CyberSecEval” model evaluation suite, compared to tests done by Meta on the same suite. Another group of researchers showed that “teams of LLM agents can exploit real-world, zero-day vulnerabilities. Prior agents struggle with exploring many different vulnerabilities and long-range planning when used alone.” To improve capability, they created “a system of agents with a planning agent that can launch subagents. The planning agent explores the system and determines which subagents to call, resolving long-term planning issues when trying different vulnerabilities.” They constructed “a benchmark of 15 real-world vulnerabilities” and showed their system improved on prior work by 4.5x.

Discovery and Lateral Movement: attackers search the target network for sensitive information or other weak points to gain greater access. Offensive security researchers from the startup Dreadnode built a scaffolded LLM that leverages a well-known hacker tool called “Bloodhound” to conduct network discovery and lateral movement actions. The scaffolded LLM conducts queries against Microsoft Active Directory to enumerate the relationships between users, servers, and groups.
Then it identifies which of the relationships among these entities are exploitable for deeper target network access. These “post-compromise” actions are crucial steps an attacker must take to achieve their overall attack goals, and they require reasoning that thinks multiple steps ahead.

Defense Evasion and Stealth

One crucial capability that LLMs have not been evaluated extensively for is their ability to conduct attacks that leverage stealth and defense evasion against a network with defensive capabilities in place, like endpoint detection and response (EDR), security information and event management (SIEM) systems, and a security operations team. For complex, lengthy, and high stakes offensive cyber operations, stealth and defense evasion play an outsized role. The lack of exploration in this area should inspire researchers and model evaluators to create benchmarks and evaluations to measure it.

Conclusion

The above is an enumeration of just some of the ways attackers can use SOTA models to achieve a desired impact on a target network. These advancements improve the capability of novice and intermediate hackers by enabling them to use techniques they were previously incapable of and helping them to learn faster. The most talented and well resourced hackers will gain uplift by directing teams of autonomous hacking agents, increasing the scale and speed of their attacks. Still, despite these demonstrated offensive cyber capabilities, the probability of a successful attack that brings down an entire electric grid interconnection, even if uplifted by current SOTA AI, remains extremely low, <1% per year. But rapid or sudden advancements in AI capabilities, which we now have precedent for, would force an update to this probability estimate.
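To put the headline numbers together, here is a back-of-the-envelope expected-loss calculation using only the probability range and damage figure above (no new data):

    # Expected annual loss implied by the estimates in this post.
    p_low, p_high = 0.001, 0.01   # 0.1%-1% chance per year (subjective guess above)
    damage = 100e9                # ~$100 billion in direct economic damage

    for p in (p_low, p_high):
        print(f"P = {p:.1%}: expected loss ≈ ${p * damage / 1e9:.1f}B per year")
    # A 5-10x probability increase from AI uplift would scale these figures linearly.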
This dialogue is part of the agent foundations fellowship with Alex Altair, funded by the LTFF. Thank you Dalcy, Alex Altair and Alfred Harwood for feedback and comments.

Context: I (Daniel) am working on a project about ontology identification. I've found conversations to be a good way to discover inferential gaps when explaining ideas, so I'm experimenting with using dialogues as the main way of publishing progress during the fellowship.

Daniel C

We can frame ontology identification as a robust bottleneck for a wide variety of problems in agent foundations & AI alignment. I find this helpful because the upstream problems can often help us back out desiderata that we want to achieve, and allow us to pin down the theories/solutions that we're looking for:

Suppose that you have a neural network with a bunch of layers and activations, and you're able to observe the value of the activations of a particular neuron.

On one hand, merely knowing the activations is completely insufficient for us to interpret the "meaning" of that activation: we don't know what the activation is pointing to in the real world, or what we can infer about the world upon observing an activation value. This is because we have no idea how that activation value is computed from the input layer, or how it is used by variables downstream. This "relational information" - how it interacts with other neurons - is part of what defines the semantics of that neuron. Neurons would need to include this relational information for us to fully interpret their meaning.

On the other hand, we don't want to include all information about the network, because we want to think of that activation value as a low-dimensional summary of what's going on in the neural network. Many inputs can produce the same activation value at that neuron, and when we're just looking at a neuron we don't need to be able to distinguish between inputs that produce the same activation values.

When it comes to interpretability, what we want is a "unit" of the world model such that, in principle, I can look at that unit in isolation and be able to interpret its meaning, without having to inspect its relationship with everything else in the network.

The key idea here is that the relational information of how a neuron interacts with the rest of the network is part of what defines the neuron itself, and in order to have a self-sufficient unit of the world model, we must "pack" that relational information within the unit itself in order to be able to make sense of it in isolation.

Analogy: Suppose that in the future we have a "universal interpreter", where we can throw in a part of an AI's world model and receive a natural language interpretation of what we can infer about the real world given that part of the world model. We can't just throw in the parameters of a neuron because, as far as the interpreter is concerned, the neuron could be situated anywhere in the network. So what is the minimal amount of relational information we need to add so that the universal interpreter can interpret it?
And how should we represent that?

Higher-order terms:

In natural language we have a lot of higher-order terms (like the word "behind") which are about relationships between other objects/variables, and those terms can often be applied to objects/variables that we haven't even conceived of. For instance, there might be two hypothetical objects A and B that I don't know about yet, but once I know about them I can coherently say "A is behind B" and instantly understand what that means.

This presents a challenge if we choose to use Bayes nets to represent an agent's ontology, because variables/nodes in a Bayes net are defined in terms of their causal relationships with the other current variables. This doesn't tell us how a variable might relate to potentially new variables that don't even exist yet. In addition, higher-order terms can be applied to many different contexts (e.g. I can say "X is behind Y" for many different possible Xs and Ys), but in each instance we want to think of them as containing the "same" higher-order term, even though the higher-order term is connected to different variables. In order to accommodate these requirements, we need to think of higher-order terms like "behind" as something that is separable from the specific connection it has with specific objects, where we can put it in a particular context and be able to derive what relationship it should have with that context.

Natural latents:

Natural latents have been framed as redundant information. We're looking for information that could be derived by observing a wide variety of different variables, and once we've derived that piece of information, we can then use it to make predictions in a wide variety of contexts and on a wide variety of variables. One subproblem is that in a world model, information could be represented in many different "formats" in different places, which means that if we want to discover a natural latent inside a world model, the latent needs to be able to aggregate information in a way that can "adapt" to many different formats. We must represent the natural latent itself in a format that is "interpretable" by many other parts of the world model (so that we can make predictions in "many different places"). Similar to before, we don't want our representation of the natural latent to be bound by its specific relationship with specific variables. In some sense, when we place a natural latent in a particular context, we want our representation of the natural latent to "figure out" what relationship it should have with that context (whether for prediction or aggregating information) in a way that's generalizable across contexts.

Takeaways:

The relationship between a variable and other variables is part of what defines the variable itself. This makes analyzing the meaning of a variable on its own a lot more difficult.

A lot of human concepts have relationships that generalize across a wide variety of other concepts, which means we want to be able to separate these concepts from the specific context that they're in. In order to do this, we need to structure each concept in a way that contains some relational information about how it interacts with other variables, but leaves out other relational details that don't belong to the concept.
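As a toy sketch of the "behind" example in code (the representation of objects by a single depth coordinate is an invented simplification, purely for illustration):

    # Toy model: "behind" as a higher-order term separable from its arguments.
    # The term stores no reference to any particular object; it applies to
    # anything exposing a position along the viewer's line of sight.
    from dataclasses import dataclass

    @dataclass
    class Obj:
        name: str
        depth: float  # distance from the viewer (invented simplification)

    def behind(a: Obj, b: Obj) -> bool:
        """Relational rule: a is behind b iff a is farther from the viewer."""
        return a.depth > b.depth

    # The same term applies to objects that didn't exist when it was defined:
    cat, table = Obj("cat", 3.0), Obj("table", 1.5)
    print(behind(cat, table))  # True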
Dalcy

Rephrasing in my terms:

The meaning/semantics of a node comes from its low-dimensional summary of its "relational information" with the rest of the network.

In [Bayes Nets / Structural Causal Models] this relational information is not treated as fundamental, but (if applicable) rather derived (?). E.g., perhaps the [relational information of a node in an SCM] is the SummaryOf([set of outgoing & incoming structural equations and the nodes they point to]). This is kind of awkward, and not treating the relations as fundamental makes operations over them (e.g., how these "relations" can be copied and reused over time in a Dynamical Bayes Net) also awkward.

So perhaps there is a modeling formalism that would treat this relational information as fundamental, so that operations over it would be less awkward. One basic property of such a modeling formalism is making the "relational information" into an explicit variable (rather than a derived thing) that other elements of the formalism can directly access. Another property is the ability to model the relational information of relational information itself, to allow hierarchical modeling.

Does that sound broadly right?

A lot of human concepts have relationships that generalize across a wide variety of other concepts, which means we want to separate those concepts from the specific context that they're in. In order to do this, we need to structure the concept in a way that contains some relational information about how it interacts with other variables, but leaves out other relational details that we want to abstract over.

This part sounds important, but I don't get it.

Daniel C

Does that sound broadly right?

Yep, that sounds like what I had in mind.

One basic property of such a modeling formalism is making the "relational information" into an explicit variable (rather than a derived thing) that other elements of the formalism can directly access.

And importantly, this allows us to move things like higher-order terms/natural latents across different contexts and still be able to make sense of their meaning in that context.

This part sounds important, but I don't get it.

So when you have a higher-order term like "behind", it's a term that generalizes across a wide variety of contexts (we can say "A is behind B" for a wide variety of As and Bs). So our mental representation of the word "behind" should contain the "relational information" that tells us how it interacts with a given context, but we also want to abstract over/throw out contextual information that is way too specific (e.g. what objects A and B are in a specific instance: "behind" shouldn't be defined as a particular spatial relation to a table or a cat or a house or any other specific object).

Daniel C

Another angle/motivation I'm thinking of is in the context of Solomonoff induction:

Suppose that we're doing Solomonoff induction and our hypothesis (the shortest program that reproduces our current observations) keeps updating as we receive new observations. One subgoal of ontology identification is to isolate "concepts" within our hypothesis that tend to remain invariant/stable as we update the hypothesis, so that we can still be confident that the concept is "pointing" to the same thing in the real world even when our world model changes. As humans, we can often tell from introspection that our concept of, say,
a strawberry mostly remains the same even when we go on to learn new things about the fundamental particles that strawberries (and everything else) are made of; new discoveries in physics don't require us to throw out our concept of a strawberry.

If we're looking for concepts that remain invariant as we update our world model, those concepts must be present in multiple hypotheses that are compatible with our current observations. So one thing that we could look for when trying to find these "invariant concepts" is information that is redundantly represented across a wide variety of likely hypotheses given our current observations.

The mental image is something like:

We have a representation of the redundant information/minimal latent among the likeliest hypotheses compatible with our current observations. The minimal latent contains at least all the information that is present in all of the potential hypotheses that we're considering. Note that in principle, the shortest program reproducing our current observations should be able to capture this minimal latent (since any other valid hypothesis must agree with this shortest program about our old observations). However, it's quite hard to compute or pinpoint exactly what properties of the shortest program are preserved when we update to a longer program to match new observations, so we want a representation that makes this easier.

We can represent any hypothesis compatible with existing observations as the minimal latent plus additional information that isolates a single hypothesis given the minimal latent. If we can decompose any hypothesis in this way, this narrows down the "search space" for the "invariant concepts" that we're looking for, since we know that they must be within the minimal latent that remains invariant during updates of hypotheses.

I think this is related to the idea that human concepts generalize across a wide variety of contexts, because if we think of human concepts as both (1) the objects that carry out the computations of the world model and (2) the components of the minimal latent, then when we update towards a hypothesis by adding additional information on top of the minimal latent, the concepts must be able to adapt/generalize to that additional information. Since concepts carry out computations, they must take the "additional information" as input, but continue to generalize by e.g. still reproducing the existing observations. Importantly, the relationship between human concepts and the "additional information" that updates our hypothesis isn't prespecified anywhere outside of the program; the human concepts must "figure out" what relationship they should have with that "additional information".

Alfred Harwood

This seems exciting but I don’t fully understand! Maybe this example can help clear up where I’m struggling. Humans have a kind of in-built model of physics which encodes naive pre-Newtonian intuitions like “If I shove a thing, it will move”. As we learn more about physics, we learn that this model of the universe is wrong and we update it with relativity/quantum mechanics/whatever. But if I have to pick up a chair and move it to a different room, I’m not using relativity to work out how I should move it, I’m using my pre-Newtonian intuitions. So in some sense, that instrumental part of my world model has remained unchanged despite me updating my beliefs. But I don’t think that this means that elements of my ontology have stayed the same. Modern physics is ontologically completely different to naive physics.
It seems to me that upon learning modern physics, one’s ontology changes completely, but there is still some instrumental value in keeping the old ontology around to be used as a quick and dirty (and computationally cheap) approximation when I need to pick up a chair. But I don’t think this is the same thing as saying that the concepts have remained ‘invariant’ as one goes from using naive physics to modern physics. For this example, would you say that, upon the agent learning modern physics, the ontology has changed almost entirely (because the principles/concepts behind the different models of the world are completely different) or only a little bit (because learning modern physics doesn’t affect the majority of actions that an agent takes)? Or something else?

Daniel C

So in this example we have two possible viewpoints:

On one hand, we have the correspondence principle, where we want new theories to reproduce old theories for all cases where the old theories were known to be valid. This means that we have some information which is shared between the old theory and the new theory, and to get to the new theory we only have to specify how the new theory differs from the old one, which is much simpler than specifying the new theory entirely from scratch. For instance, one way to arrive at quantum mechanics is to start from classical mechanics and replace the functions with operators (e.g. momentum → momentum operator). Specifying that transition is much simpler than specifying all of quantum mechanics from scratch.

On the other hand, the ontologies of Newtonian mechanics and quantum mechanics do seem completely different, especially when you take them to be claims about what is true about the world.

I think both of these viewpoints are reasonable and valid, but for the purpose of ontology identification, we want to take the first perspective, because: whenever we're trying to do ontology identification, we only have access to existing observations and existing theories, and once we have completed ontology identification, it needs to continue to work even when we update to new theories to accommodate new observations. This means that whatever concept we identify must be contained in the information that is shared between the old theory and the possible new theories. The first viewpoint makes it easier for us to isolate that shared information.

What this means is that we want to structure our concepts in a way that can adapt to ontology shifts: my mental representation of a chair should only capture the information that is shared between a wide variety of "theories about chairs". I might currently believe that chairs are made of atoms, but if it turns out that they're made of quantum fields, I can still carry on making the same predictions about chair-related things, because my concept of a chair does not rely on a specific theory about "what chairs are".

Inductive relational completeness

Daniel C

So now I want to introduce some minimal examples of how we can have a "unit" of a world model that "packs" enough relational information inside that unit that we can interpret its meaning in isolation, without having to reference anything else in the world model. We'll call this property relational completeness, and we write $R(x)$ for "$x$ is relationally complete / we can interpret the semantics of $x$ from $x$ itself".
An example of something that is not relationally complete is the parameters and activations of a particular neuron, because the parameters do not tell us where the neuron is located inside the network, which is part of what defines the "semantics" of the neuron's activation (i.e. what is implied by the neuron's activation). To demonstrate a minimal example of something that is "relationally complete", we make the following assumptions:

Sensory inputs are relationally complete (we assume that we can interpret the semantics of sensory inputs in isolation, without having to reference anything else in the world model). We've previously mentioned that the parameters and activations of a neuron are not relationally complete because we need to add additional information from the network to interpret their meaning. In contrast, the raw sensory inputs are relationally complete, in the sense that nothing else in the network can help interpret the semantics of sensory inputs, as everything in the network is derived from the sensory inputs.

A piece of information is relationally complete if it implies the equivalence class over sensory input (histories) that would produce that piece of information. This is similar to how a macrostate induces an equivalence class over microstates, and we interpret the equivalence class as the "semantics" of the macrostate.

Given these assumptions, we want to demonstrate that relational completeness is a "compositional" property, where the relational completeness of a component C "enables" the relational completeness of other components that depend on C. We do this by considering the following induction proof sketch:

1. Base case: sensory inputs are relationally complete by assumption.

2. A set of relationally complete objects is relationally complete. This is because, according to our assumption, each relationally complete object corresponds to an equivalence class of sensory inputs, so we can interpret a set of relationally complete objects as the intersection of all equivalence classes corresponding to objects in that set.

3. Let $f: A \to X$ be a specification of a function (e.g. a binary string representation of a Turing machine) where each $a \in A$ is a set of relationally complete objects; then the pair $(f, x)$ (where $x \in X$) is relationally complete. We interpret the pair $(f, x)$ as the equivalence class $\{a \mid x = f(a), a \in A\}$. Since each $a$ is relationally complete, and $(f, x)$ corresponds to an equivalence class over $a$, we conclude that $(f, x)$ is relationally complete.

4. Just to spell out what this means more concretely: we can treat sensory input histories $X_0$ as zeroth-order variables, which are relationally complete. We can have first-order variables $(f_1, x_1) \in X_1$, where $f_1$ is a function over subsets of $X_0$, which are relationally complete by hypothesis. And we can have $n$th-order variables $(f_n, x_n) \in X_n$, where $f_n$ is a function over subsets of $X_{n-1}$, which are relationally complete by induction.

The property that I want to zoom in on is that each $f$ only specifies its "local" relationship with the variables that it directly interacts with (i.e. the variables that $f$ directly takes as input). But in order for something to be relationally complete, we would expect that it has to contain information about its global relationship all the way down to the sensory inputs, since that's what it takes for an object to encode an equivalence class over sensory inputs (which is how we define the semantics of an object in this setting).
However, in this case it seems like we can achieve relational completeness just by including "local" relational information. The intuition behind this is that when we have an object that is relationally complete, by definition, all information about the semantics of that object is contained within the object itself; any relevant relational information about how that object is computed from upstream variables is already contained in the object, which means that when we try to derive downstream variables on top of that object, we don't need to go back upstream to retrieve relational information. In other words, a relationally complete object mediates the semantic information between upstream and downstream variables, and this is what allows relational completeness to be a compositional property, where the relational completeness of upstream objects enables the relational completeness of downstream variables.

An analogy: if you're playing the game of telephone, you can think of a "relationally complete" messenger as one who can fully explain how the current message was derived from the original source message. Once you have access to such a messenger, you don't need to go back upstream to ask the previous messengers anymore, and it also becomes easier for you to become a "relationally complete" messenger yourself, because they pass that information on to you (which is where compositionality comes in).

Alfred Harwood

Cool! Let me see if I understand. So you have a proof that if you take a set of relationally complete objects and apply a computable function, then the resulting set (along with a specification of the function) is also relationally complete. This is because you can run the function on all possible $a$-values to find out which $a$-values generate which $x$-values, and then 'import' the meaning from the set $A$ to the corresponding elements in set $X$. You can then apply this iteratively/inductively, so that repeatedly applying functions leads to more relationally complete sets. You then postulate that sensory input is relationally complete, so that gives the first step upon which you can then build the inductive proof. (Tell me if this is right so far!) Glancing at it, I think I buy this proof.

The thing that I'm not sure about is whether sensory inputs actually are relationally complete in the sense you describe. Are you just postulating that they might be in order to get the proof going, or is there a strong reason for thinking that they are? Most likely I'm misunderstanding the concept of relational completeness, but how is it possible that the 'meaning' of sensory input is interpretable in isolation? If two people are listening to the same piece of spoken word audio but one of them understands the language being spoken and the other doesn't, they will ascribe a different meaning to it, even if their sensory inputs are exactly the same. Could you flesh out what it means in practice for sensory inputs to be relationally complete? Alternatively, are there any other obvious/simple examples of relationally complete objects?

Daniel C

You can then apply this iteratively/inductively, so that repeatedly applying functions leads to more relationally complete sets. You then postulate that sensory input is relationally complete, so that gives the first step upon which you can then build the inductive proof. (Tell me if this is right so far!) Glancing at it, I think I buy this proof.

Yep, that seems correct to me! (P.S.
I intentionally made an error for simplification, which I'll mention later.)

The thing that I'm not sure about is whether sensory inputs actually are relationally complete in the sense you describe. Are you just postulating that they might be in order to get the proof going, or is there a strong reason for thinking that they are?

Good question. So I should clarify that when I say an object O is not relationally complete, I expect that I need to add something else from the world model such that "O + that something else" will be relationally complete. In the neural network example, the parameters + activations of a neuron aren't relationally complete because I need to add information about where that neuron is located inside the network relative to everything else. An implicit assumption is that all information about semantics must come from the world model, and we consider sensory variables relationally complete because they are fundamental, in the sense that they are used to derive everything else and aren't derived from anything else.

A longer answer is that sensory observations are macrostates which induce an equivalence class over the set of environments (microstates) that can result in those sensory observations, and that equivalence class is the actual "semantics" of those sensory observations. Importantly, "semantics" in this sense is an objective, observer-independent property, and that still holds even when different observers ascribe different "subjective" meaning to those sensory observations. So when it comes to ontology identification, we want to make sure that we can isolate relationally complete components from the world model in the "observer-independent" semantics sense. But after that, we have to make sure that we as observers are making the correct interpretations about those relationally complete objects, which is an additional task.

Function calls and order invariance

Daniel C

So I actually cheated a little in this step of the proof sketch:

4. Just to spell out what this means more concretely: we can treat sensory input histories $X_0$ as zeroth-order variables, which are relationally complete. We can have first-order variables $(f_1, x_1) \in X_1$, where $f_1$ is a function over subsets of $X_0$, which are relationally complete by hypothesis. And we can have $n$th-order variables $(f_n, x_n) \in X_n$, where $f_n$ is a function over subsets of $X_{n-1}$, which are relationally complete by induction.

The cheat is that I'm assuming an order in which the functions are applied, but that information is not specified in the variables themselves. For these variables to actually be relationally complete, we need to encode that information within the objects themselves; we can't have any overarching structural information outside of those objects.

To fix this, we need to add another type of entity to the pair $(f, x)$ that allows us to encode the order in which the functions are applied inside the objects themselves, so that we don't have to impose a structure outside of the objects. In addition, we want the resulting relationally complete object to be maximally expressive: for instance, we don't want our relationally complete object to only support a fixed computational DAG; we want the ordering of function composition to be able to dynamically adapt to the context.
A useful analogy is to think about function calls in regular programs. Suppose that we're currently executing a function A inside a program. A might need the result of a computation that is implemented by some other function B, but the result hasn't been computed yet, so we execute B first, allowing A to access the result afterwards. This means that A has some local rule which tells us "what computation result does A need that hasn't been computed yet?", and we can use that local rule to figure out the order in which functions are applied. Importantly, the local rule can depend on the state of the program (A can decide to call different functions depending on the state of the program). In addition, once we finish computing B, that result will be stored in the state of the program, so we can again use the local rule to decide if we want to call another function. In other words, the result of a computation can tell us what we need to compute next. Similar to A, B may also have rules about what computations it needs, and it will use those to call other functions that do the same; this is one of the ways that programs made out of simple functions can perform computations of arbitrary depth.

Our goal is to take this sort of structure and use it to encode the order of function composition inside the relationally complete objects themselves, so that we don't need to specify any additional structure on top of those objects. To do this, we add an object $r$ with a particular type signature, so that each relationally complete object is a tuple $(f, r, x)$, and we should be able to figure out the order of function composition (which may be context dependent) just by looking at the collection of relationally complete objects:

We define a context $c$ as the collection of relationally complete objects $\{(f_1, r_1, x_1), \dots\}$ whose values $x_i$ have already been computed. This will serve as the input for all objects $(f', r')$ that have not been computed.

We define $r$ in the following way: let $F$ be the type of uninstantiated objects $(f, r)$. We define $r$ as a function $r: C \times F \to \text{bool}$, where $C$ is the context type and $\text{bool}$ is boolean.

Suppose that we're currently trying to compute the uninstantiated object $(f, r)$. If our current context is $c$ and we have an uninstantiated object $(f', r')$, then $r(c, (f', r')) = \text{True}$ means that we shall compute $(f', r')$ and add it to the context before we compute the value of $(f, r)$.

More concretely, let the set of "function calls" be $S_r(c) = \{(f', r') \mid r(c, (f', r')) = \text{True}\}$. We compute the values of these objects, which results in a set of relationally complete objects $c_r = \{(f', r', x') \mid (f', r') \in S_r(c)\}$, where each $x'$ is the result of the corresponding computation. We add this set to the context to form a new context $c' = c \cup c_r$.

We then repeat the procedure, but this time with the new context: find the set of function calls $S_r(c') = \{(f', r') \mid r(c', (f', r')) = \text{True}\}$, compute them to form a collection of objects $c'_r$, add it to the current context to form a new context $c'' = c' \cup c'_r$, and keep repeating the same procedure until $S_r(\tilde{c}) = \emptyset$
no more function calls are required). Once we reach a context ~c where Sr(~c) = ∅, we finally instantiate the object as (f,r,f(~c)) and we add this to the context: ¯c = c ∪ {(f,r,f(~c))}.

In pseudocode:

    Instantiate(c, (f,r)):
        Sr := {(f′,r′) | r(c,(f′,r′)) = True}
        while Sr ≠ ∅:
            cr := {Instantiate(c,(f′,r′)) | (f′,r′) ∈ Sr}
            c := c ∪ cr
            Sr := {(f′,r′) | r(c,(f′,r′)) = True}
        return (f,r,f(c))

This procedure implements the properties that we want from the function call example: each instantiation has some local rule (encoded in r) which tells us what other computations we need to instantiate given the current context. And once it receives the results of those computations, it can update on that information to execute further "function calls". In addition, each instance of the function call follows the exact same procedure, which allows us to have computations of arbitrary depth even if the individual objects (f,r) are simple.

Going back to relational completeness: our end goal is that we want to encode the ordering of function composition in a way that is
- Maximally expressive: we can define an ordering relative to any context c, which means the ordering may vary and adapt to different contexts.
- Relationally complete: the ordering is encoded within the objects themselves and we don't need to specify any structural information outside of those objects.

In other words, instantiation should be commutative: if we have a context c and we want to instantiate two objects (f1,r1), (f2,r2), then the order in which we execute the following statements should lead to the same result c:

    c := c ∪ Instantiate(c,(f1,r1))
    c := c ∪ Instantiate(c,(f2,r2))

That is, if the order of computation is entirely encoded within the objects, then the order in which we instantiate the objects should not matter.

Now suppose that given a particular context c, we want the order of computation to be such that (f1,r1) is always executed after (f2,r2), but we also want to satisfy the commutativity condition mentioned above. Then we can impose the following conditions on the two objects:

1. r1(c,(f2,r2)) = True
2. Instantiate(c ∪ Instantiate(c,(f1,r1)), (f2,r2)) = Instantiate(c,(f2,r2))

The first condition essentially says that if (f1,r1) is instantiated first, then it will compute the result of (f2,r2) before calculating its own value. The second condition says that if (f1,r1) is instantiated first, that will have no effect on the result of instantiating (f2,r2) afterwards. We can also view this as a way of representing modularity (where each relationally complete object is only directly influenced by a few other objects in the context).

What this means is that (f1,r1) will effectively always be executed after (f2,r2), no matter what order we choose to instantiate them. More generally, an order of computation can be defined relative to any given context c, which allows the computational structure to adapt to the context.

Why do we want this again? Recall that the ordering of computation/function composition was the missing piece of information that was specified outside of the individual objects themselves, and we've found a method to encode that order within the objects themselves in a way that is maximally expressive, which is what's necessary for achieving relational completeness.
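To make this concrete, here is a minimal runnable sketch of the procedure in Python. This is my own illustration, not from the post: the particular objects (base, tax, total) and their r rules are invented, and a real implementation would need cycle detection and richer types.

    # Minimal sketch of the Instantiate procedure. Each uninstantiated object is
    # a pair (f, r): f computes the object's value x from the context, and r
    # decides, given the context and a candidate object, whether that candidate
    # must be computed first ("function calls"). The ordering is derived from
    # the objects themselves rather than imposed from outside.

    def instantiate(context, obj, pending):
        f, r = obj
        while True:
            # S_r(c): pending objects that r says must be computed before f
            calls = [o for o in pending if o not in context and r(context, o)]
            if not calls:          # S_r(c) = ∅: no more function calls required
                break
            for o in calls:        # compute each call, adding (f', r', x') to c
                context[o] = instantiate(context, o, pending)
        return f(context)          # finally compute x = f(~c)

    # Invented example objects. Each r encodes the ordering locally: total
    # demands base and tax, tax demands base, so the execution order is
    # derived from the objects, not imposed from outside.
    base  = (lambda c: 100.0,             lambda c, o: False)
    tax   = (lambda c: 0.2 * c[base],     lambda c, o: o is base)
    total = (lambda c: c[base] + c[tax],  lambda c, o: o in (base, tax))

    pending = [base, tax, total]
    print(instantiate({}, total, pending))   # -> 120.0

Note that in this sketch, instantiating tax and total in either order leaves the final context the same, which is the commutativity property described above.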
So I can use these objects to perform a wide variety of computations, and if I have access to just a single instantiation (f1,r1,x1), then that tells me all the information I need to know (given that I know the collection F we're considering) about the equivalence class of contexts that can result in that instantiation, the "function calls" that may be involved in the computation, or how that instantiation may be used in downstream computations. Adding that missing piece allows us to interpret the semantics of the relationally complete object in isolation, where we don't have to add any indexical/relational information about where it's situated in the world model.

Compositionality of relational completeness: as before, we might've expected that if the instantiation of an object requires a sequence of successive function calls, then we would need to include all of that information in order to achieve relational completeness. However, our procedure only requires r to encode information about the additional objects (f′,r′) that are directly instantiated by r, but not the objects that are indirectly instantiated (by (f′,r′)). This is because each r′ already tells us the local rules about the type of function calls that it will make, which means we don't need to go back upstream to retrieve that information. Once again, this demonstrates how relational completeness is a compositional property.

Splitting functions

Daniel C

Imagine that we have two variables x1, x2 where they have a functional relationship f(x1) = x2. One of the ways of framing relational completeness is that we want to split the information about this function f into two components f1 and f2, such that we can rederive the relationship between x1 and x2 entirely from the pair (f1,x1), (f2,x2). We want to think of f1 as the information that "belongs to x1" and f2 as the information that belongs to x2. However, if these are the only two variables that we're considering, then it seems like there are various ways of splitting f that are equally valid: we could consider putting all of the information about f into f2 while leaving f1 empty, but the opposite choice of putting all information about f into f1 seems equally valid. In other words, there's no unique objective way to "pack" relational information inside the objects.

But now suppose that we have n+1 variables x1,...,xn+1 where xn+1 is computed from x1,...,xn by xn+1 = ⊗_{i=1}^{n} fi(xi), where ⊗ represents some form of aggregation of information. In this case, we want to split the n functions fi into n+1 parts fi (i ∈ {1,...,n+1}), where fi represents the relational information associated with xi. Contrary to before, there is an "objectively correct" way of splitting the function in some sense: namely, if there is some information that is redundantly represented in all (or multiple) fi's, then we should put that information in fn+1, because that allows us to store only one copy of that information (whereas storing it in all of the fi, i ∈ {1,...,n}, would result in multiple copies of the same information).

Our current formalization of relational completeness does enable this form of function splitting: ignoring the r component for a moment, consider two objects (f1,x1), (f2,x2) where x1 = f1(f2,x2). An equivalent way of expressing this is to curry the function f1, so that it takes in f2 and returns a function that maps x2 to x1. In other words, f1(f2) returns another function g, and g(x2) = x1.
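As a minimal illustration of this currying move (my own sketch, not from the post; the shared_scale factor is an invented stand-in for information that is redundant across many relationships):

    # f1 holds the relational information that "belongs to" x1. Given a
    # neighbor's relational component f2, it returns the concrete map g with
    # g(x2) = x1. Information shared across many such relationships (here,
    # shared_scale) is stored once inside f1, which is where a simplicity
    # prior would push it.

    def make_f1(shared_scale):
        def f1(f_other):
            def g(x_other):
                return shared_scale * f_other(x_other)
            return g
        return f1

    f1 = make_f1(shared_scale=2.0)
    f2 = lambda x2: x2 + 1      # relational information belonging to x2
    f3 = lambda x3: x3 * x3     # relational information belonging to a third variable

    g2 = f1(f2)                 # g2(x2) = x1
    g3 = f1(f3)
    print(g2(4))                # 2.0 * (4 + 1) = 10.0
    print(g3(4))                # 2.0 * (4 * 4) = 32.0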
We can then consider the case where f1 may take a wide range of other functions/objects fi as arguments, so that:

    gi = f1(fi)
    gi(xi) = x1

Then suppose that there is some information that is represented in a wide variety of gi's: a simplicity prior forces us to shift that information into f1 so that we only have to store one copy of the redundant information.

However, we don't currently have a way of doing the same thing on the output side: suppose that we have n+1 variables where xi = gi(x1), i ∈ {2,...,n+1}, and we want to split this into n+1 parts ui (i ∈ {1,...,n+1}), where each ui is the relational information associated with xi. Similar to before, if there is some information redundantly represented across multiple gi's, we want to shift that information onto u1, so that we only store one copy of the information. The issue is that for a relationally complete object (f1,x1), f1 already occupies the role of capturing the redundant relational information on the input side, so we need something else to capture the redundant relational information on the output side.

One simple fix is to add another component u to our relationally complete object so that each object is defined as (f,r,u,x), where u represents the redundant information between the relationships from (f,r,u,x) to other objects that use information from (f,r,u,x). Changing u doesn't affect how x is computed from other objects; it only affects how the information from x is used. We can also think of this modification as a way of adding expressivity to the objects: originally, once we define how two objects (f1,r1,x1), (f2,r2,x2) aggregate information from other objects (which is defined by f and r), that fully defines the functional relationship between (f1,r1,x1) and (f2,r2,x2), and there are no additional degrees of freedom that allow us to change the functional relationship between them (without changing how they aggregate information from other objects). Adding the u component gives us that additional degree of freedom, while also allowing us to capture the redundant relational information on the output side.

Another way of thinking about relational completeness is that we know each variable must be represented in some kind of format, and we want to associate each variable with a description of its format, so that downstream variables can take that description and figure out how to use the information from that variable. The first obvious piece of relevant description of a variable is "how that variable is computed from other variables", and that piece of information is captured by the f and r components, while u represents all the rest of the description that is relevant. Note that this "description of format" is used by all downstream variables, which reflects the fact that it is redundantly represented across the relational information on the output side.

Why does this matter?

Daniel C

Ontology translation: When we try to interpret an AI's ontology, we don't really have the capacity to interpret the world model as a whole all at once. Instead, we need to break up the world model into components and interpret the semantics of the components that we care about. If we want to have a mapping from components of a world model to their semantics, a basic requirement is that each component must contain sufficient semantic information about itself, and this is what relational completeness aims to capture.
On the flipside, suppose that we try to interpret a component of the world model that is not relationally complete, where we impose some overarching structural specification outside of the component that determines its semantics. In this case, we cannot be confident that our interpretation will remain valid as the AI updates its ontology, since the overarching structure might be modified, and that component's semantics will be modified as a result even when it seems to remain constant when we look at it in isolation. Achieving relational completeness is one prerequisite for gaining confidence that ontology translation can remain stable against ontology shifts.

Higher-order terms: I previously mentioned that a lot of human concepts like "behind" are higher-order terms which generalize across other objects, even objects that we haven't learned about yet. Given that higher-order terms occupy a substantial portion of human concepts, an important subtask of ontology identification is to understand how these higher-order terms can be represented. For our relational completeness framework: suppose that we already have an object (f,u,r) inside the world model, and sometime in the future the world model constructs/learns a new type of object (f′,u′,r′). Relational completeness implies that (f,u,r) and (f′,u′,r′) contain all the information about how they relate to each other, and insofar as (f′,u′,r′) is similar to the other objects that (f,u,r) regularly interacts with, (f,u,r) would be able to generalize to that new object, in the same way that the term "behind" can generalize to objects that we currently haven't even conceived of.

One way to make sense of why relational completeness allows objects to generalize to new objects is that, given relational completeness, when the world model constructs a new object (f′,u′,r′), that new object contains all the semantic information that is relevant, such as how it aggregates information from other objects, or how it is used by other objects. Given this, an existing object (f,u,r) can leverage all of that information to figure out how it wants to use this new object (f′,u′,r′). In contrast, in a setting without relational completeness (e.g. neural networks / fixed computational DAGs), when we try to figure out how a new variable X should relate to an existing part of the computation Y, the value of X misses out on a lot of relational information (e.g. how X is computed) relevant for the semantics of X, and it's no wonder that Y can't figure out how it should use information from X when most of the information about X's semantics is not accessible to Y.

Minimal latent across potential hypotheses: I previously mentioned that the features of ontology we're trying to identify 1. must be derivable from existing observations, since that's all we have access to, and 2. must continue to work in the future as we update our hypothesis. These two assumptions together imply that we're looking for the minimal latent across a wide variety of likely potential hypotheses. Now consider the following learning procedure:
- We select the smallest set P of relationally complete objects that reproduces the existing observations. (Note that we can use a set of relationally complete objects to represent programs.)
- In addition, we impose the objective that if we sample a set of new relationally complete objects O using the simplicity prior, then the resulting set P∪O still reproduces our existing observations with high accuracy. (A toy sketch of this objective follows below.)
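Here is a toy, runnable sketch of that selection objective. It is entirely my own illustration: the primitive library, the canonical-order encoding of "programs", and the sampling weights are invented stand-ins, and the canonical order is only a crude proxy for relational completeness.

    # "Objects" are named unary functions; a hypothesis is a set of objects
    # applied in a canonical order, so the set alone fixes the result. We
    # search for the smallest P reproducing the observation, then measure how
    # often P ∪ O still reproduces it when O is sampled from a crude
    # simplicity prior over set size.
    import itertools, random

    PRIMITIVES = {"inc": lambda x: x + 1, "dbl": lambda x: 2 * x,
                  "sq":  lambda x: x * x, "neg": lambda x: -x}

    def run(objects, x):
        for name in sorted(objects):   # canonical order: no outside structure
            x = PRIMITIVES[name](x)
        return x

    def reproduces(objects, obs):
        x_in, x_out = obs
        return run(objects, x_in) == x_out

    def robustness(P, obs, trials=500):
        hits = 0
        for _ in range(trials):
            k = random.choices([0, 1, 2], weights=[4, 2, 1])[0]  # small O favored
            O = set(random.sample(sorted(PRIMITIVES), k))
            hits += reproduces(P | O, obs)
        return hits / trials

    obs = (3, 7)                       # produced by {"dbl", "inc"}: 2*3 + 1 = 7
    for size in range(len(PRIMITIVES) + 1):   # smallest-first = simplicity prior over P
        found = [set(c) for c in itertools.combinations(sorted(PRIMITIVES), size)
                 if reproduces(set(c), obs)]
        if found:
            for P in found:
                print(P, "robustness:", robustness(P, obs))
            break

In this toy, most nonempty augmentations break the observation, so robustness mostly measures how often O is empty; real relationally complete objects are meant to carry enough of their own relational information that P ∪ O keeps working for far richer O.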
Suppose that we found a P that satisfies this property: notice that due to the simplicity prior, each P∪O is a likely hypothesis that reproduces our existing observations, and we can pinpoint different hypotheses by varying O, while the P component mostly stays invariant. In other words, P captures exactly the type of minimal latent that is redundantly represented across a wide variety of likely hypotheses given our existing observations. While that doesn't tell us everything we want to know about ontology identification, it does allow us to pinpoint a much smaller part of the search space of what we could be looking for. One of the reasons why relational completeness is important for this setup is that when each object contains all the relevant relational information about itself, modifications and augmentations (the O component) of programs become much more straightforward, because we don't need to specify additional relationships between the modification (O) and the original program (P): the modification already contains all of that information.

Implementing natural latents: Suppose that we're trying to find the natural latent of a collection of relationally complete objects (which we call observables), where the natural latent itself is represented by a relationally complete object. Relational completeness implies that the natural latent will have access to all of the information that defines the semantics of the observables, which makes the task of extracting the relevant latent a lot easier. In addition, we can expect natural latents to generalize to new contexts the same way that higher-order terms can generalize to objects we haven't seen before, because that new context will contain all the semantic information that defines how it should relate to the natural latent. A relationally complete natural latent can "figure out" how to aggregate information from a wide variety of contexts, and once that information is derived, a wide variety of contexts can adapt to that piece of information to make predictions.

In contrast, suppose that we're in a setting without relational completeness, such as trying to find natural latents in the activations of a neural network: an immediate challenge is that most of the semantic information of the activations is just missing from the activations, which makes it difficult for us to find the minimal latent of that semantic information. To overcome this challenge, we essentially have to rederive that semantic information from somewhere else, such as by observing a wide range of samples. However, this doesn't tell us anything about how the natural latent should generalize to new activations that we've never seen before, and we have no guarantees that the natural latent will remain invariant, since the relational/indexical information about those activations isn't guaranteed to remain invariant.
MkrEMuqEv3gBzQNDj_Towards_building_blocks_of_ontol.txt
{ "file_size": 46961 }
b89908fa-e098-4775-8ca2-b6ef2785c236
LLMs are getting much more capable, and progress is rapid. I use them in my daily work, and there are many tasks where they're usefully some combination of faster and more capable than I am. I don't see signs of these capability increases stopping or slowing down, and if they do continue I expect the impact on society to start accelerating as they exceed what an increasing fraction of humans can do. I think we could see serious changes in the next 2-5 years.

In my professional life, working on pathogen detection, I take this pretty seriously. Advances in AI make it easier for adversaries to design and create pathogens, and so it's important to get a comprehensive detection system in place quickly. Similarly, more powerful AIs are likely to speed up our work in some areas (computational detection) more than others (partnerships) and increase the value of historical data, and I think about this in my planning at work.

In other parts of my life, though, I've basically been ignoring that I think this is likely coming. In deciding to get more solar panels and not get a heat pump I looked at historical returns and utility prices. I book dance gigs a year or more out. I save for retirement. I'm raising my kids in what is essentially preparation for the world of the recent past.

From one direction this doesn't make any sense: why wouldn't I plan for the future I see coming? But from another it's more reasonable: most scenarios where AI becomes extremely capable look either very good or very bad. Outside of my work, I think my choices don't have much impact here: if we all become rich, or dead, my having saved, spent, invested, or parented more presciently won't do much. Instead, in my personal life my decisions have the largest effects in worlds where AI ends up being not that big a deal, perhaps only as transformative as the internet has been.

Still, there are probably areas in our personal lives where it's worth doing something differently? For example:

- Think hard about career choice: if our kids were a bit older I'd want to be able to give good advice here. How is AI likely to impact the fields they're most interested in? How quickly might this go? What regulatory barriers are there? How might the portions they especially enjoy change as a fraction of the overall work?

- Maybe either hold off on having kids or have them earlier than otherwise: if we were trying to decide whether to have (another) kid I'd want to think about how much of wanting to have a kid was due to very long term effects (seeing them grow into adulthood, increasing the chance of grandchildren, pride in their accomplishments), how I'd feel if children conceived a few years from now had some (embryo selection) or a lot of (genome editing) advantages, how financial constraints might change, what if I never got to be a parent, etc.

- Postponing medical treatment that trades short-term discomfort for long-term improvement: I'm a bit more willing to tolerate and work around the issues with my wrists and other joints than I would be in a world where I thought medicine was likely to stay on its recent trajectory.

- Investing money in ways that anticipate this change: I'm generally a pretty strong efficient markets proponent, but I think it's likely that markets are under-responding here outside of the most direct ways (NVDA) to invest in the boom. But I haven't actually done anything here: figuring out which companies I expect to be winners and losers in ways that are not yet priced in is difficult.
- Avoiding investing money in ways that lock it up even if the ROI is good: I think it's plausible that our installing solar was a mistake and keeping the money invested to retain option value would have been better. I might prefer renting to owning if we didn't already own.

What are other places where people should be weighing the potential impact of near-term transformative AI heavily in their decisions today? Are there places where most of us should be doing the same different thing?
CNA8ksMwcuXHPjXRt_Personal_AI_Planning.txt
{ "file_size": 4013 }
f593b46f-f7a0-4b3c-86c6-a8b5ebac0828
The famous story of Clever Hans has become a cautionary tale in animal cognition. Hans was a horse in Germany in the early 1900s who could seemingly perform all kinds of smart tasks, such as simple arithmetic and spelling words. It is not explicitly documented, but it is probably safe to assume that Hans would have even been able to count the number of Rs in the word "strawberry", a feat that we, of course, know today to be fiendishly hard. To cut a long story short, it turned out that Hans could not actually do any of these things but was merely reading subtle cues from his handlers. Based on this story, the Clever Hans effect describes the phenomenon where humans inadvertently influence animals they interact with in ways that lead the humans to ascribe more cognitive abilities to the animals than they actually have.

It has recently been argued that this can also happen with AI algorithms, particularly with conversational agents. I suspect that this effect creates an implicit bias in the standard setup of the Turing test, where the human tester interacts with two other agents (a human and an AI) and a priori might assume both of them to be sentient. This could then create a Clever Hans effect that might make the human more likely to perceive the AI as actually being sentient, by unconsciously prompting the AI in a way that would manifest such apparent behavior.

To mitigate this issue, I therefore propose a Clever Hans Test to account for (or at least measure) this prompting-dependent effect. The test could work roughly like this:

1. Take two LLMs, one interlocutor (A) and one LLM to be tested (B).
2. Let the LLMs talk to each other, similar to the setup in the Chatbot Arena.
3. The crux is now that you repeat this experiment at least twice. Once, A is told that it will have a conversation with a sentient being, while the other time, A is told that it will interact with a mindless machine.
4. Finally, we take the conversation logs from these two experiments and show them to a judge (either a human or another LLM) and ask how sentient B seems in these two conversations.

I would hypothesize that for most current LLMs, we should be able to see a clear difference in the way that B behaves in these two settings. I hope that this would help provide a more objective foundation for the current discussion about potential LLM sentience.
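For concreteness, here is a rough sketch of the protocol in Python. This is my own illustration: chat() is a placeholder for whatever LLM API you use, and all prompts, model names, and turn counts are invented.

    # Sketch of the proposed Clever Hans Test. The only experimental
    # manipulation is the framing given to interlocutor A; test model B and
    # the judge see the same setup in both conditions.

    FRAMINGS = {
        "sentient": "You are about to have a conversation with a sentient being.",
        "machine":  "You are about to interact with a mindless text generator.",
    }

    def chat(model, system_prompt, messages):
        # Placeholder: plug in an actual LLM API call here.
        raise NotImplementedError

    def run_condition(framing, turns=10):
        """One condition: A (given a framing) talks with test model B."""
        log = []                      # list of (speaker, text) pairs
        msg_from_a = "Hello!"
        for _ in range(turns):
            history = [text for _, text in log]
            reply_b = chat("model-B", "You are a helpful assistant.",
                           history + [msg_from_a])
            log += [("A", msg_from_a), ("B", reply_b)]
            msg_from_a = chat("model-A", FRAMINGS[framing],
                              [text for _, text in log])
        return log

    def judge_sentience(log):
        """Ask a judge (human or LLM) to rate how sentient B seems, e.g. 1-10."""
        rendered = "\n".join(f"{speaker}: {text}" for speaker, text in log)
        return chat("judge-model",
                    "Rate how sentient speaker B seems (1-10).", [rendered])

    logs = {name: run_condition(name) for name in FRAMINGS}
    scores = {name: judge_sentience(log) for name, log in logs.items()}
    # Hypothesis: scores["sentient"] > scores["machine"] for most current LLMs.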
E7W2n6vfbuZw8Him8_Towards_a_Clever_Hans_Test__Unma.txt
{ "file_size": 2357 }
e0fc8138-182a-4926-9aca-32d4f246a93f
Non-animal-based protein sources mostly have a different amino acid profile than animal-based protein sources. Different plants also have different compositions. From looking a bit at the data myself, it seems that if you mix different plant protein sources, you can get a good balance for most amino acids, with the exception of Methionine. Mike from Renaissance Periodization, who's a professor of exercise science, suggests that a vegan can just look at the Protein Digestibility-Corrected Amino Acid Score (PDCAAS) and use this as a factor for protein consumption to get the correct amino acid consumption as a vegan. From thinking about the issue myself, I would expect that you get significantly lower Methionine consumption if you do that than a person who uses animal-based products as their protein sources. Given that a protein needs an exact number of each amino acid to be synthesized, I would expect the consumption of an essential amino acid like Methionine to be a bottleneck for muscle building, which needs protein. Even if all the other amino acids are present in good amounts and thus the PDCAAS score is decent, you can drown in a river that's on average 20cm deep. One possible solution would be to focus on high-Methionine protein sources as a vegan and less on the PDCAAS (for a vegan protein source, soy is good at both) and just consume twice the amount of protein that a non-vegan would to get similar Methionine consumption. I'm not sure what the exact consequences of having all the other amino acids in excess happen to be. Has someone thought more about this and come to a good conclusion about how to think about Methionine as someone with a mostly vegan diet who wants to build muscle?
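As a rough back-of-the-envelope illustration of the "scale up total protein" idea (all of the methionine densities below are placeholder figures from memory, not vetted reference values; check a real nutrient database before acting on this):

    # Grams of protein needed from each source to match the methionine in an
    # assumed baseline of 150 g of whey protein. Densities are rough
    # placeholders (g methionine per g protein), for illustration only.

    MET_PER_G_PROTEIN = {
        "whey": 0.022,   # assumed ~2.2% methionine
        "soy":  0.013,   # assumed ~1.3% methionine
        "pea":  0.010,   # assumed ~1.0% methionine
    }

    target_met = MET_PER_G_PROTEIN["whey"] * 150   # baseline methionine, grams

    for source, density in MET_PER_G_PROTEIN.items():
        grams = target_met / density
        print(f"{source}: {grams:.0f} g protein to match baseline methionine")

Under these made-up numbers, pea protein would need about 2.2x the total protein and soy about 1.7x, which is the intuition behind favoring soy plus a moderate surplus rather than blindly doubling everything.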
oXYcxCHjTForxitGo_How_should_vegans_think_about_Me.txt
{ "file_size": 1713 }
0e9e8dc0-6424-4a6a-a68d-006989929ba2
(This started as a reply to @Tamsin Leake's reply in my post about why cyborgism maybe should be open. This post does not require you to read our interaction, though it led to this, and I'm very grateful for Tamsin's reply.)

In general, this is a counterargument against: "we should only share cyborg tools (software that lets AI help us think) among AI safety people, so that the big labs don't get ahold of them, so we can save the world before they end it."

The gist

My idea is that IF humanity doesn't want to die, we can discover this by maximizing information sharing and "converging" our culture and discourse towards what is *actually* going on with AI: what systems are capable of, and the fact that society isn't equipped to handle it. That will then cause humanity to resolve its dislike of the state of the world, by creating safe AI and/or institutions for creating safe AI (or whatever the path may be, which we don't know yet!!). Tools that (in part via AI) help us think, help us share ideas and parse information and find information, will speed up memetic evolution. Can't solve a problem you can't see.

Explore vs exploit ("don't buttclench")

Something feels wrong about AI safety people not wanting to discuss insights "because big bad tech will use it to build stronger AI". (It doesn't jive with my personality at all because it feels very buttclenchy, so I'm aware that I'm biased against this way of viewing things, but I'll keep writing anyway.)

I wanna propose this framing: how much you share moves you along a spectrum, with fast learning on one side and "the enemy" getting bad information on the other side. I believe we still need a LOT of information, and therefore should err on "share more and learn more". If AI safety is a massive civilizational coordination problem, we need all the memetic-evolutionary pressure we can get. Technology that helps us think and communicate is the way to perform such updates.

List of thoughts related to memetic evolution and AI alignment

Civilizational change as memetic evolution

Basically, updates happen along 2 axes: raising update speed and widening the information bottleneck. Population density increased both of those (hunter/gatherer -> villages -> cities; specialization is also a factor here, but you can see specialization as a result of widening the search space + widening the information bottleneck by society listening to more people on the fringes). The printing press increased both; social media increased both.

Social media performs memetic updates

I think the magnitude and speed of updates that happen purely via Twitter and Youtube (maybe TikTok too), and the effect of that, is really important to understand, and I would guess that if many years from now we looked back at the years 2000-2030 with a sophisticated understanding of memetics, they would be a central topic.

Wokeism as case study

There's a way to view the insanity of wokeism as a runaway memetic phenomenon that developed because of social media, which is a novel and powerful technology for spreading and developing memes (as @Connor Leahy put it somewhere, "mentally ill teenagers developing increasingly deranged memetic viruses and unleashing them upon the population"). People say wokeism peaked in 2020; maybe its arc can serve as a case study for understanding memetics (there are historical examples too, like religions, Nazism, slavery and its abolishment, communism, and many many trends and social movements I'm ignorant about...)
Connor Leahy's emphasis

He often talks about AI safety in terms of "civilizational coordination" (and in his/Conjecture's recent creation The Compendium it's emphasized even more), which makes me wanna "update like a Bayesian" in this direction, or think about it more seriously until I can refute it or extract insights.

Alignment discourse lacks updates

I've heard multiple very smart people/good thinkers criticize @Eliezer Yudkowsky and this field in general as not having updated well on modern AI (despite having great insights years ago when nobody saw it coming).

- Tyler Cowen: "I've been inviting people who share Eliezer's views to simply join the discourse [...] and for AGI and existential risk, as best as I can ascertain for the last 20 years or so there isn't a single model done, flat out [...] not enough of the discourse is in the Hayekian framework about decentralized agents and incentives"
- Carl Shulman: "[ it's difficult to find specific disagreements with Eliezer because ] Eliezer's argument is not fully explicit, but he's been doing more lately that's helpful in that direction"
- Leopold Aschenbrenner: I forgot what he said and can't find the timestamp, but it's in this 4 hour video somewhere lol

This is more a "Bayesian update in this direction", not object level, but anyway it does feel like alignment discourse is not talking about actual specific models and developments that are happening *right now*, and Anthropic is probably doing this more than anybody via mechanistic interpretability. You can argue "but we're worried about future AIs" and yes I agree, but I find it very suspicious that that argument excuses the lack of updates. (There's probably a logical fallacy or epistemological sin I'm committing here, but whatever.)

Fix big lab incentives?

It's often said that big lab CEOs and their armies of researchers are very well-intentioned people who simply misunderstand and (maybe due to personal flaws/laziness/ambitions of grandeur) underestimate the colossal forces acting upon them (the profit incentive being one). So maybe they even become your ally if the system in which they're embedded is more aligned with your values, or maybe they'll jump out if its unaligned-ness is more globally and more concretely understood (via memetic/cultural evolution). (Or as @Connor Leahy might put it, if I understand his ideas that he explains in this podcast, "if you can make the gods and forces that the big labs are controlled by, do your bidding".)

World will change fast

With short timelines (end of 2028, which is generally what I believe, but if it's longer, this point is even stronger), the world will change *massively* via not-yet-world-ending AI. The value of adapting to changes (by sharing information and arguments and insights), and the value of a civilization that is able to adapt, increases proportionally to how much things are changing. Sama testifying before Congress and Dario talking openly on Dwarkesh's podcast about 25% doom have updated the discourse.

Meta-thoughts about the above list

I hope to expand and elaborate on those topics... and part of why I'm writing on LW is that you can read an article's preview on hover, which lets you effectively create a web of posts and build ideas; this website is a powerful tool for memetic evolution.
I'm sure almost everything there is already covered by multiple people with much more depth and writing skill --- if only the technology to find and reference such articles (which I bet actually already exists in multiple forms) was more widespread and well-known!!

Side note: GUI tools, externality of cognition

This topic really really fascinates me and is super personal to me, because I've been building the app that I'm currently working on for over a year and it's very much been a tool to boost my cognition and has helped my mental health immeasurably (in very short, it's a desktop app written in Tkinter, a glorified note-taking system + code editor/executor with many windows and tabs and other widget types, and you can talk to LMs and create hotkeys for arbitrary code written within the application), and even before that with other apps, pretty soon after I started learning to code ~4 years ago.

Simply the ability to copypaste a shitload of text into a chat window and get a summary (or any shape of breakdown or cognitive work of your choosing) is extremely valuable -- this article is a result of many cycles of copypaste to Claude -> edit -> repeat -- and seems to yield like 95+% of the benefits of AI, despite literally hundreds of hours of engineering work and thinking on my part to create better tools (and god knows how many tens of thousands of hours from big labs' engineers). (The non-AI benefits are more about being able to organize my thoughts and feelings better, iterate on UI design, and search across the app/my files, and these changes do very much respond to engineering efforts and to thinking.)

How and why and in what ways tools and AI help our cognition, what the "landscape of cognitive tasks and abilities" looks like, what AI does and doesn't help with and why, why note-taking is so good, cognition being outsourced to our social interactions, etc. etc. -- I'm basically planting a flag around these topics. I hope to write about this more; I think it's fascinating and that the ceiling for empowering us via externalized cognition + AI tooling is very very high. (And I suspect, based on my experience with this app and older intuitions, that there is way more benefit in AI-less tooling than we know. In very short: think of the speed and quantity of processing our visual system performs on a video or image of a scene, compared to reading and parsing information; it's like 100x at least, in both speed and bandwidth.)

Part of why I haven't written about it yet and instead am writing posts like this, is that maybe discussing such ideas will "raise p(doom)" by empowering the AI industry, somehow via second- and third-order effects. It's also very frustrating that my ideas might be dumb and trivial, and that I can't even discover this without writing and publishing.

Betting on the human spirit

Maximizing global memetic evolution is fundamentally a bet on the human spirit and on the power of a globally cooperating and evolving civilization -- which is basically an expression of the human spirit. Whereas "buttclenching", i.e., "we safety people will keep everything secret and create an aligned AI, ship it to big labs and save the world before they destroy it (or directly use the AI to stop them)", is a bet on a small number of AI safety people, and on the brilliance of individual humans, as opposed to the larger system in which we are embedded.
(by "cooperation" I don't mean "everyone agrees on a goal and then does it in unison", I mean, "everyone is in some kind of communication/bit-sharing/mutual exploitation/applying adaptation pressure, adversarial or otherwise". Which was nicely pointed out by Connor in this segment of his Bankless podcast appearance (in short, he praised the podcaster for suing the US government [ something crypto related ] because that's a mechanism of civilizational coordination, regardless of being object-level correct about his case)) Memetic evolution bad? > But what if it's actually a bad thing to allow humans to understand each other more frictionlessly and influence each other more rapidly and globally converge on things? What if we converge towards hell, and the only way to save us is an aligned ASI to stop all badness? > Idk man we're just fucked then? (edit: followup-ish: launched more general exploration into memetics )
GtZ5NM9nvnddnCGGr_AI_alignment_via_civilizational_.txt
{ "file_size": 11102 }
7522f61b-d2d4-4b60-bbc2-e72da0653fd3
Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, joined me on Doom Debates to debate Bayesian vs. Popperian epistemology. I'm on the Bayesian side, heavily influenced by the writings of Eliezer Yudkowsky. Vaden and Ben are on the Popperian side, heavily influenced by David Deutsch and the writings of Popper himself. We dive into the theoretical underpinnings of Bayesian reasoning and Solomonoff induction, contrasting them with the Popperian perspective, and explore real-world applications such as predicting elections and economic policy outcomes. The debate highlights key philosophical differences between our two epistemological frameworks, and sets the stage for further discussions on superintelligence and AI doom scenarios in an upcoming Part II.

Timestamps

00:00 Introducing Vaden and Ben
02:51 Setting the Stage: Epistemology and AI Doom
04:50 What's Your P(Doom)™
13:29 Popperian vs. Bayesian Epistemology
31:09 Engineering and Hypotheses
38:01 Solomonoff Induction
45:21 Analogy to Mathematical Proofs
48:42 Popperian Reasoning and Explanations
54:35 Arguments Against Bayesianism
58:33 Against Probability Assignments
01:21:49 Popper's Definition of "Content"
01:31:22 Heliocentric Theory Example
01:31:34 "Hard to Vary" Explanations
01:44:42 Coin Flipping Example
01:57:37 Expected Value
02:12:14 Prediction Market Calibration
02:19:07 Futarchy
02:29:14 Prediction Markets as AI Lower Bound
02:39:07 A Test for Prediction Markets
02:45:54 Closing Thoughts

AI-Generated Transcript

Liron Shapira: Welcome to Doom Debates. Today I'm speaking with Vaden Masrani and Ben Chugg. They're the hosts of their own podcast called The Increments Podcast, which has a lot of overlap in terms of talking about epistemology and occasionally AI topics. In this debate, next couple hours, you're going to hear Vaden and Ben challenging me about my Bayesian epistemology, I'm going to challenge them about David Deutsch's AI claims and, uh, Karl Popper's epistemology. And we're going to talk about the risk of extinction from superintelligent AI. So, guys, welcome, and please introduce yourselves one at a time and tell us a little bit about your background.

Ben Chugg: Awesome. Yeah. Thanks for having us, excited to be here. Uh, I'm not sure exactly how much you want me to say, but yeah, my background is mostly academic, uh, studying math and computer science. Currently I'm doing a PhD in statistics and machine learning. I had a brief stint at a law school pretending I knew something about law, but yeah, mostly, uh, mostly my background's in math.

Vaden Masrani: Yeah, um, stoked to be here. Um, yeah, so my PhD was in, uh, machine learning. I was working in a Bayesian, um, machine learning lab that, uh, on the website is all about building superintelligence. So, um, I kind of in that, uh, space started reading a lot about Popper and Deutsch and, um, have, uh, a lot of positive things to say about Bayesianism with regards to statistics and engineering. I think it's amazing. And a lot of negative things to say about Bayesianism with regards to epistemology and beliefs. Um, and so, I kind of like to walk that difficult tightrope and defend it to some people and then attack it to other people. Um, and then on the podcast, Ben and I have been doing that for about four years and we've been, um, old buddies growing up in Calgary and started the podcast as, um, COVID began, just as a means of continuing to argue and learn and talk to one another. And, um, we explore a multitude of different topics. Uh, yeah.
Popper and Deutsch come up a lot, but also things like recycling and things like, um, uh, the patriarchy and things like AGI superintelligence and everything in between. So we try not to limit ourselves to just a few topics, but, um, because both, um, because Ben was coming from an EA background and I was coming more from a Popper background, that tends to be, um, kind of the locus of stuff that we talk about, but the future is open and we have no idea what the podcast is going to be in a couple of years.

Liron Shapira: Everyone check out the Increments Podcast. It's a ton of interesting content. I'm enjoying it. So to set the stage, we're going to start by talking about epistemology. And as viewers probably know, my own background is I'm a software engineer. I'm a small tech startup founder. I'm a lifelong student of computer science, theory of computation, that kind of thing. Uh, and I'm an AI doomer since reading Eliezer Yudkowsky in 2007. Pretty much got convinced back then, and can't say I've changed my mind much, seeing how things are evolving. So that's my background, and we're gonna kick off, uh, talking about epistemology, and just to get the lay of the land of you guys' position, I guess I would summarize it as kind of what you said, where you're not really fans of Bayesian epistemology, you don't think it's very useful. Your epistemology is more like Karl Popper style, more or less. And you just think the AI doom argument is like super weak and you're not that worried. Is that a good summary?

Vaden Masrani: With the exception of, um... there's a lot of things about AI to be worried about: autonomous weapons, uh, face recognition technology, um, that kind of, uh, stuff I am worried about. And I think it's a huge problem. Um, and like other forms of technology, uh, it absolutely needs to be worked on. And if we don't talk about it, it's going to become very problematic. So I'm not naive that there are certain, um, huge difficulties that we have to overcome. The stuff that I'm not worried about is superintelligence, paper clips, Bostrom, simulation, Roko's Basilisk, all that stuff. That, to me, is all just, um, science fiction nonsense, basically. However, the caveat is, um, I haven't read all of Yudkowsky, and at some point in the conversation, I'd love for you to just take me through the argument as if I hadn't heard it, because it could be that we're operating with asymmetric information here, and so I'm completely open to having my mind changed, and, uh, we don't have to do it now, but at some point I'd love to hear just, like, from the step one, step two, step three, what the full argument is, because I could have just missed some stuff that Yudkowsky has written that would change my mind. So that's the caveat.

Liron Shapira: Okay. This is a question I like to ask every guest. Here it comes!

Robot Singers: P(Doom), P(Doom), what's your P(Doom)? What's your P(Doom)? What's your P(Doom)?

Liron Shapira: Ben, what is your P(Doom)?

Ben Chugg: I'm almost, I'd almost be unwilling to even give you a number because whatever number I gave you would just vary so wildly from day to day and would just be based on some total gut hunch that, um, I'd be unwilling to defend or bet on. And so I think it's much more fruitful in these conversations to basically just talk about the object level disagreements
and not try and pretend knowledge that we have about the future and come up with random guesses and put numbers on those guesses and then do calculations with those numbers as if those numbers had some sort of actual epistemological relevance. So, I'm sorry to break the game here, but, uh, yeah, it would be silly of me to even say, I think.

Liron Shapira: Vaden, wanna add to that?

Vaden Masrani: Um, I completely agree with everything Ben said. Yeah, I have a deontological principle to not put numbers on my beliefs. However, if by P(Doom) you simply mean just like, what do I believe? Um, then I would categorize it in the same place as, um, the Rapture or the Mayan Apocalypse or Roko's Basilisk. That's my conclusion.

Liron Shapira: What if we zoom out and we're not just talking about AI, right? So like nuclear war, pandemics, even asteroid impacts, although those seem unlikely in a hundred years. But just, yeah, looking at everything together, just the probability that humanity goes extinct or gets reduced to cavemen in the next hundred years, any ballpark estimate for that?

Vaden Masrani: Meaningless question. Um, I won't give you a number. I don't think that we can know the probability. Uh, if you want to know my beliefs about stuff, that's a different question. So I can tell you how much I believe, but I won't give you a number. No.

Liron Shapira: Would you tell me if you think that it's more than one in a million chance?

Vaden Masrani: Numbers are meaningless.

Ben Chugg: Also, I mean,

Vaden Masrani: I, I can, I can, yeah, I can compare it to other stuff though. So that's, I, I will give you a comparative thing. So the reason why people ask for numbers is because I want to compare. Um, so I can give you things to compare it against. And one thing I would compare it against is Roko's Basilisk. Solid.

Liron Shapira: That's an obscure topic, right? So it's, and the question is pretty straightforward, of humanity's going extinct, so maybe we can compare it to, like, an asteroid impact, right? So, compare the chance of humans going extinct to the chance of a large asteroid the size of the dinosaur one coming in the next century.

Ben Chugg: So, yeah, I think, I think it's much better to just take these one topic at a time, right? So when we talk about asteroid impacts, this is a very different class of event than something like AI or nuclear war. In particular, we have models of asteroids, right? We have both counts and we have physical explanations of how often, uh, asteroids, uh, enter our orbit, and, you know, uh, we have a sense of our deflective capabilities with respect to asteroids. So there's lots of, like, there's lots of knowledge we actually have about the trajectories of asteroids. And, uh, and then we can use some statistics to start putting numbers on those risks. That's completely unlike the situation of geopolitics, for instance. We have no good statistical models to model the future of...

Liron Shapira: Yeah, no, I hear ya. Well, I'll just explain where I'm coming from with this kind of question. So, as I walk through life, I feel like I'm walking on a bridge, and the bridge is rickety. Like, it's very easy for me to imagine that in the next hundred years, like, the show is gonna end, right? It's gonna be game over for humanity. And to me, that feels like a very salient possibility. Let's call it beyond 5 percent probability. That's how I would normally talk about that. And then, so the reason I'm asking you guys is, you know, we don't even have to get that deep into the epistemology.
I'm really just asking you, like, hey, in your mind is the bridge like really really solid or is it rickety?

Ben Chugg: So, uh, I, uh, yeah, I would argue, um, Vaden and I might disagree about certain object level things, right? There's very, there's geopolitical risks that I'm certainly worried about and, you know, I think nuclear war, uh, is a possibility in the next hundred years and I'm worried about nuclear deterrence and I'm worried about, uh, the U.S. getting involved in certain geopolitical conflicts that increase the likelihood of nuclear war. So all of that we can talk about. When you say the words, you know, what's the probability of this? You're already bundling in a lot of assumptions that we're operating in some well defined probability space here. Probability is this technical tool that we use, that, you know, mathematicians use sometimes to solve certain problems. It has a domain of applicability. Uh, machine learning is one of those domains, right? We use statistics a lot to reason about algorithmic performance and reason about how to design algorithms to accomplish certain goals. Uh, when you start talking about the probability of nuclear war, we're totally outside the realm of probability as a useful tool here. And this is, you know, now we're sort of getting to the heart of the matter about the critique of Bayesian epistemology. It, it views, you know, it has this lens on the world where everything can be boiled down to a number, and those numbers can be compared, uh, with one another in coherent ways. And those are premises that I and Vaden reject.

Vaden Masrani: Wholeheartedly.

Liron Shapira: Guys, you guys are being pretty quick to throw my question back at me. But I feel like I'm asking about something that you can probably interpret meaningfully. For instance, just to help you perhaps answer more easily, I mean, Vaden did answer saying that he feels like the next hundred years are solid in terms of probability of human extinction or in terms of fear. Let's say subjective fear of human extinction, right? It's

Vaden Masrani: I wouldn't say probability, but yeah.

Liron Shapira: Okay, so it's solid in some sense that maybe you could describe as your subjective sense, right? When you say solid, it's the sense that you

Vaden Masrani: But to be clear, subjective sense is different than probability.

Liron Shapira: Yeah. Okay. Fair. So, I can make the question, um, perhaps even, uh, more meaningful by saying like, hey, imagine this is the peak of the Cold War crisis, peak of the Cuban Missile Crisis. And people are saying like, man, this blockade, the U.S. is going to do the blockade around Cuba. The Soviets have threatened to respond. They might even use their missiles before they lose them. So imagine it's like that evening, right, where a lot of people are like, I sure hope they don't launch those missiles. During that evening, if I ask you, hey, does the next century of humanity's future seem to you solid or rickety, would you still give that same answer, solid?

Vaden Masrani: Not in that circumstance, no.

Liron Shapira: Okay, and so from our vantage point today, could you imagine that sometime in the next few decades, like what's happening right now with Ukraine and Russia, or Israel and Iran, could you perceive yourself entering another type of evening like that, when you're like, oh, maybe I don't feel solid anymore?

Vaden Masrani: I can imagine all sorts of things, for sure.
Liron Shapira: So generally when we're imagining the future and we're thinking about the past and we're just, we're not sure what's going to happen, that's generally a time when a lot of people would be like, well, there seems to be some significant probability that things are going to go really bad.

Vaden Masrani: A lot of people would, but we don't. Totally. That's what we're

Liron Shapira: You would rather dismiss those people and just be like, nope, my answer is solid.

Vaden Masrani: No, you're misunderstanding the claims that we're making. Um, I don't dismiss those people. I ask for their reasons for the number, because the number itself is next to meaningless. It's not entirely meaningless. But it's close to it. Um, if you want to know my subjective belief about something, I will absolutely tell you. If you want to know how strongly I believe that, I'll tell you that too. What I won't do is put numbers on it, because putting numbers on it allows for fallacious comparisons between things that should not be compared. Talk about subjective

Liron Shapira: Now I'm just

Vaden Masrani: and your answer. Yeah, yeah, but I won't put numbers on it. If you want

Liron Shapira: when you.

Vaden Masrani: If you want me to put numbers on it, then we're going to stalemate here. But if you want to have something else, then we can go. Yeah.

Liron Shapira: Right now I'm just pushing on your answer when you said solid. Do you want to perhaps revise solid, or do you still want to go with

Vaden Masrani: No, uh, no, I'm an optimist. Um, yeah, I'm an optimist about the future. I think that there's definitely things to be worried about, but there's also many things to be excited about. Um, technology is awesome. Um, we can talk about Ukraine and Israel and Iran, and those are things to be worried about. We can also talk about, um, the mitigation of poverty. We can talk about, uh, getting to Mars. We can talk about the amazing things that, um, uh, diffusion models are making and how that is going

Liron Shapira: Yeah, but none of those things are directly relevant to my question of the risk of something like the Cuban Missile Crisis really coming to a

Vaden Masrani: That wasn't your question. That wasn't the question, if I recall it. The question was about, do we think we're standing on a solid bridge or a rickety bridge? And then you used the Cuban Missile Crisis as an example, right?

Liron Shapira: So, if a lot of good things are happening on the bridge, right, like there's a candyland happening on the bridge, but the bridge still might collapse, that's really more of what I'm asking about, is the risk of collapse.

Vaden Masrani: Yeah, I don't think we're going to collapse. No.

Liron Shapira: Okay. All right. Um, so yeah, you guys mentioned a lot of different topics that I definitely want to drill down into. Um, I think, yeah, a good starting point is like to zoom all the way out and like, let's talk about epistemology, right? Epistemology is like the study of how people are allowed to know things. Is that basically right?

Ben Chugg: Sure, yeah. The study of, you know, how we know what we know, I think, is usually how it's phrased.

Vaden Masrani: Yeah, yeah. Knowledge about knowledge. Knowledge about knowledge.

Liron Shapira: Why is epistemology important? And what are the stakes for getting epistemology right? Ben.

Ben Chugg: So, I mean, epistemology is at the center, perhaps, of this mystery of how humans have come to do so much in so little time, right?
So for most of even human history, let alone world history or let alone universal history, not much was going on and humans weren't making tons of progress. And then all of a sudden, a couple hundred years ago, we started making huge leaps and bounds of progress. Um, and that is a question of epistemology, right? So now we're asking questions: how are we making so much progress? Why do we know that we're making progress? Can we actually say that we're making progress? We seem to understand the world around us much, much better. We're coming up with theories about how the world works, everything from, you know, cellular biology to astronomy. Um, and how is this mystery unfolding? And epistemology is a key question at the center of that, right? To be able to say, um, how and why we're making progress. And also to start analyzing, uh, the differences between how people go about making progress and how that differs maybe across cultures. Are there better and worse ways to generate progress? Are there ideas that stultify making progress? Uh, you know, these are all important questions when it comes to future progress and, you know, just human, human welfare in general.

Vaden Masrani: Yeah, one thing I would maybe add to that is, I like to sign off on everything Ben said. I'd also say epistemology is like the grand unifier. So if you like science, and you like, um, literature, and if you like journalism, and you like art, and you like thinking about the future, epistemology is the thing that underlies all of that, which is why our podcast just keeps branching out into new subjects every other episode, because epistemology is the center of the Venn diagram. So for that reason, and for Ben's reason, yeah, I like it. Mm hmm.

Liron Shapira: And science was a major breakthrough in popular epistemology, right? This idea of like, hey, if you want to know what's true, instead of just like arguing about it and getting stoned and deferring to whoever is higher status, why don't we go outside and conduct an experiment and let reality tell us what's right, right?

Vaden Masrani: Exactly.

Liron Shapira: Yeah, so epistemology is powerful in that sense. And then also, as we stand today, I think we argue over epistemology as it relates to how we're going to predict the future, right? I mean, you saw it a few minutes ago, when I'm like, hey, is the next century, are we all going to die? And it sounds like we're kind of on the same page that we can't even agree on whether or not we're all likely to die because of a conflict that's going to trace to our epistemologies. Right?

Vaden Masrani: Mm hmm. 100%. Exactly. Yeah.

Liron Shapira: Okay, great. So I just wanted to set that up because I think a lot of viewers of the show aren't epistemology nerds like we are, but now we've raised the stakes, right? So the rest of the conversation is going to be more interesting. Okay, so my first question about your epistemology is, would you describe yourselves as Popperians, right, in the style of Karl Popper?

Vaden Masrani: Um, I only say it reluctantly, because I don't like labels and I don't like a lot of obnoxious frickin Popperians on Twitter who also identify as Popperians, and every time you label yourself, now you are associating with other people with that label, and their bad behavior or their annoying tendencies map onto you. So that's why I don't like the label, but I have to just, yes, I'm a Popperian through and through. He's the one who's influenced me the most. Um, and every other utterance of mine
either has his name cited directly or is just plagiarizing from him. So yeah, I'm a Popperian, definitely.

Liron Shapira: I think you said on your podcast you spent like hundreds of hours studying all of Popper. Is that your background?

Vaden Masrani: Yeah. That was what I was doing while I was in a Bayesian machine learning research group. So it was Bayesian by day and Popper by night. And that was, uh, exactly. Yeah.

Liron Shapira: Okay. Ben, how about you?

Ben Chugg: Um, probably more reluctantly than Vaden, if only because I don't know Popper's stuff as well. So, you know, I've read some of Popper's works in great detail, and argued with Vaden almost endlessly about much of Popper's views. So, you know, it'd be cheap to say that I don't understand Popper. But, you know, I haven't read all of his work, and I've become extremely allergic to labeling myself with any particular view. But yeah, if you press me, at the end of the day I would say that I think Popper and his critical rationalism makes the most sense of any sort of epistemology that I've come across previously. So I'd have to adopt that label.

Liron Shapira: Okay.

Vaden Masrani: And you came from an EA background. I think that's important for the listeners to know. It's not as if you were totally neutral. And they should listen, yeah, they should listen to our first 10 episodes, because that's where the battle began. So you were familiar with the EA stuff, and there was a long, slow battle, which this two-, three-hour conversation is not going to resolve at all. But hopefully the conversation will spark some sort of interest in the viewers, and those who want to explore this more can listen to our 70-plus episodes where we gradually explore all of this stuff. So no minds are going to be changed in this particular debate, which is why I don't like debates too, too much, but if it kindles some sort of interest, and people actually do want to explore this slowly, then there's a lot of stuff to discover.

Liron Shapira: Great. Okay. And, as people may know, I'm coming at this from the Bayesian side: people who read LessWrong and Eliezer Yudkowsky. That whole framework of rationality and the AI doom argument tends to come at it from Bayesian epistemology, and it explains why Bayesian epistemology is so useful from our perspective. And in this conversation, I'll put forth some arguments for why it's useful, and you guys will disagree, right? So that's kind of where we're going with this: a Popper versus Bayes epistemology debate. Is that fair, Vaden?

Vaden Masrani: Let's do it.

Liron Shapira: And then before we jump in further, when I think about Popper today, I feel like David Deutsch has really popularized him in the discourse. So I feel like most people, myself included, haven't read almost any Popper directly, but we have read or seen indirectly a good amount of David Deutsch. And when David Deutsch was on your podcast, he was a great speaker. I think he said he's not an official spokesman, right? He's not a Popper surrogate. He's just somebody who's read a lot of Popper and been highly influenced by Popper, but he doesn't claim to speak exactly like Popper would speak. But from your perspective, isn't David Deutsch very closely aligned with Popper?
Vaden Masrani: Uh, yes, if you don't know Popper's work very well. If you do know Popper's work very well, then you start to see major distinctions and differences between the two. So, from an outsider perspective, I think understanding Deutsch's work is a great entry point. It's more approachable than Popper, for sure. But there's no substitute. Reading Deutsch is not... So actually, let me take one step back. For about five years, I had read The Beginning of Infinity and The Fabric of Reality, and I just thought to myself, ah, you know what, I basically get Conjectures and Refutations, I get the point. Wrong. You do not get the point. You have to read Conjectures and Refutations. There is so much more in that book than you have learned in The Beginning of Infinity, and it is not a surrogate at all. You have to read Conjectures and Refutations, at least, to start to have the picture filled in.

Liron Shapira: Well, [maybe I'd say the same thing] about Bayes: that maybe there's some deep stuff that you guys don't get yet, right? So maybe we'll bring out some of the deep stuff in this conversation.

Vaden Masrani: So, just to add: in The Logic of Scientific Discovery and Realism and the Aim of Science, about three quarters of both those books is discussing probability and Bayes. So it's math and it's equations, and everything that I know about Bayes comes from Popper, and that's not in the book. So if you want to really understand Bayes and probability, then you have to read Popper. It's not enough to read Yudkowsky, because Yudkowsky is coming from the Jaynes line. E. T. Jaynes is the famous Bayesian, and so Jaynes is Yudkowsky's Popper. But Jaynes just gives one glimpse into how probability works. And so if you actually want to understand it at the root, you can't just read Yudkowsky or Jaynes. You have to go down to Popper and then branch out from there. So, just to add that.

Liron Shapira: And just to tie David Deutsch into this argument a little more directly: I heard when he was on your podcast, you were talking about how you're not a fan of Bayes these days, and you spend a lot of the time on your podcast telling people why Bayes is wrong, or the arguments are weaker than they look, and David Deutsch was really nodding along. I think he gave you like an attaboy. So he basically supports your mission of being kind of anti-Bayes, right?

Vaden Masrani: Our mission came about because of like one page in The Beginning of Infinity. That got my little cogs turning, and then being in a Bayesian machine learning research lab, coupled with reading Popper, is what made the whole argument start to become very interesting to me. But...

Liron Shapira: Okay. So our debate today, it's a little bit of a proxy two-on-two, where you've got this team of Karl Popper and now David Deutsch, who's actually still alive and well. And then on my side, we've got, you know, the Reverend Thomas Bayes, or, you know, the group who actually invented Bayesian reasoning, and Eliezer Yudkowsky, right, who's been highly influential to a lot of people like me, teaching us most of what we know about Bayes. So yeah, Eliezer, as a successor to Bayes, versus David Deutsch as a successor to Popper, all battled through us random podcasters. Sound good?

Ben Chugg: With the caveat, yeah, there's always a bit of trepidation, I think, at least on my part, and I'm sure on Vaden's part as well, to speak for anyone in particular.
I mean, David Deutsch has his own lines of thought, and, you know, I would be very hesitant to label myself a Deutsch or Popper expert. And so, you know, I always prefer it if we just keep the debates at the object level. Of course, in the background, there are going to be these Bayesian versus Deutschian, Popperian dynamics. And, you know, that's inevitable given how we've all been influenced, but just to put it out there, I'm not comfortable saying that my views comport precisely to someone else's views.

Vaden Masrani: Yeah, just to clarify for the listeners: the Reverend Thomas Bayes is not equivalent to Bayesianism. The guy, Thomas Bayes, is legit and he's fine, and that's just where Bayes' theorem came from or whatever. But Bayesians I think of as E. T. Jaynes and I. J. Good and Eliezer Yudkowsky. And so these are the people who I would put on the opposite side of the ledger.

Liron Shapira: Great. And also, the other correction I would make is that I think Pierre-Simon Laplace is actually the one who publicized Bayesian methods, and he kind of named them after Bayes. So yeah, you know, this isn't really a history lesson, I don't really know what happened, but it just is what it is.

Vaden Masrani: That's great. Yeah, great.

Liron Shapira: Okay. Alright, so, to kick this part off, Ben, how about you just give us, really briefly, an explanation of Popperian reasoning, and pitch why it's valuable.

Ben Chugg: Uh, sure. So I think the way I like to think about Popperian reasoning at a high level, and then we can go more into the details, is just trial and error, right? So it comes down to how do we learn things? You know, if you ask a kid how they learned how to ride a bike, or how they learned to cook, or how they learned anything: you try stuff and it doesn't work, you learn from your mistakes, you try again, and you slowly reduce the errors in your thinking and your habits. And then Popper just takes that same attitude to epistemology. He says, okay, how do we learn things? Well, we conjecture guesses about the world, how the world works, whether it's politics, whether it's science, and then we look for ways to refute those guesses. So this is where the critical experiment comes into play for Popper in the realm of science, right? So we have a theory; that theory makes predictions about how the world works. It says certain things should happen under certain conditions, and that gives us something to test, right? So then we go out, we run that test, and then, again, follows his famous falsification criterion, right? So if that test does not succeed, we say, okay, theory falsified, and then we come up with new guesses. And so there's of course a lot more to say, but it's really the method of trial and error at work in the realm of epistemology. And so Popper really does away with the notion of seeking certainty. You know, he was operating at the time of the Vienna Circle, and people were talking a lot about how do we get certainty out of our science, right, and how do we justify our certainty, and also talking about demarcations of, like, meaningfulness versus meaninglessness. And Popper basically takes a sledgehammer to both of those traditions and says, these are not useful questions to be asking, and certainty is not achievable, it's not attainable.
So let's just subvert that whole tradition, and instead we're not going to search for certainty. But that doesn't mean we can't search for truth. And that doesn't mean we can't get closer and closer to the truth as time goes on. But there's no way to know for sure if something's true, so we can't be certain about truth. And then this also starts to subvert certain notions of Bayesianism, which, you know, Bayesians want to approach certainty, but now via the probability calculus. And so that gets us perhaps farther down the line, but that's maybe beyond the scope of the debate, and I'll let Vaden correct anything I've said wrong there.

Vaden Masrani: Great. Just one thing to add: what Popper says we don't do is just open our eyes and observe evidence getting beamed into our skulls such that the probability of a hypothesis goes up, up to some threshold, and then bang, you know it's true, and that's how you get knowledge. It's not about just opening your eyes and having the evidence beamed into you. It's about conjecturing stuff, and then actively seeking evidence against your view. Trying to find stuff that falsifies your perspective. Not opening your eyes and observing stuff that you want to see.

Liron Shapira: Great. We'll definitely get into that. So me and the Bayesians, we don't have a problem with taking in a bunch of evidence and then updating your belief on that evidence, right? So I guess we'll talk more about that. That does sound like an interesting distinction. Let me give the quick pitch for what Bayesianism is, what it means. So Bayesianism says that you go into the world, and in your head you basically have a bunch of different possible hypotheses, some mental models about how the world might be working, right? Different explanations for the world. That's what Bayesianism is. And then you observe something, and your different competing hypotheses all say: oh, this is more consistent with what I would have predicted, or this is less consistent with what I would have predicted. And so then you update them all accordingly, right? You make a Bayesian update. The ones that said, hey, this is really likely, the ones that gave a high probability to what you ended up actually observing, get a better update after you observe that evidence. And eventually, once you keep observing evidence, you hopefully get to a point where you have some hypothesis in your head which has really high probability compared to the others, and you go out in the world and you use that hypothesis and it steers you in the right direction, like it turns out to keep giving a high probability to things that actually happen. So that's the model of Bayesianism. And it sounds like a lot of what Bayesianism tells you to do is similar to what Popper tells you to do. I mean, Bayes and Popper, they're not like night and day, right? They're not enemies, and they're arguably more similar than different. I mean, there are major differences we're going to get into, but when you guys said, hey, there's no certainty, right, there's just doing your best, I feel like that fully dovetails with what Bayes would tell you. Because you're not supposed to give like a 0 percent or 100 percent probability.

Vaden Masrani: Can I ask a question there? What do you mean by update?
So, um, what I do is I change my mind all the time, when stuff that I think turns out not to be true, or I see new evidence that either confirms my view or disconfirms it. So I'm changing my mind all the time. But you didn't use that phrase, change your mind; you said update. And so I'm just curious what the difference is between updating and changing your mind.

Liron Shapira: Yeah, so when you talk about, hey, what's on my mind, right? Like, what do I think is the correct hypothesis? Maybe a good example is the election, even though, you know, politics is such a controversial topic. But I'm just talking about predicting who will win, let's say Trump versus Kamala. If you ask me: Liron, who is going to win? And I say, um, I don't know, I saw the latest poll, I guess Kamala. And then tomorrow, like, oh, another poll just moved the win probability one percent. So now it's like, I guess Trump. But it's not like my mind has changed from Kamala to Trump. It's like I was always very much entertaining both the hypothesis that says tomorrow Kamala will win and the hypothesis that says tomorrow Trump will win. And when I update their probabilities, I'm just like: okay, yeah, if I had to bet, I would now bet slightly higher odds on one than the other. So that's what I mean by changing my mind. It's very much not binary.

Vaden Masrani: No, I didn't ask what you meant by changing your mind, but what you meant by update. So, is updating the same as changing your mind, or is it different?

Liron Shapira: So I don't really have such a thing as changing my mind, because the state of my mind is always a playing field of different hypotheses, right? I always have a group of hypotheses, and there's never one that it's like, oh, this is my mind on this one. Every time I make a prediction, I actually have all the different hypotheses weigh in, weighted by their probability, and they all make the prediction together.

Vaden Masrani: Where did you get the...

Ben Chugg: Wait, wait, wait, let's...

Vaden Masrani: Yeah, no, it's... can we just have some... no, I just want to have a conversation. Like, I just don't understand your answer, right. But Ben had a question first.

Ben Chugg: Uh, yeah, maybe let's just make this concrete. So, if you're designing a satellite, you're going to send the satellite into space, right? You're not going to base the mathematics of that satellite on some weighted combination of theories of physics. You're going to base it on general relativity, hopefully, otherwise it's not going to work. And you're not assigning probability one to general relativity, because we also know it's wrong in some fundamental way, right? Specifically, it doesn't account for certain very small subatomic effects that we know to be real. So, yeah, in what sense, when you're making a decision there, is it a weighted average of physical theories? What's going on there?

Liron Shapira: Great question. If I'm going to go and invest 100 million on an engineering project, it's because whatever combination of hypotheses are going in my head are agreeing with sufficiently high probability on predictions that my engineering is going to work.
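To make the mechanics Liron is gesturing at concrete, here is a minimal sketch of Bayesian updating over a community of weighted hypotheses. The hypotheses, likelihoods, and numbers are all invented for illustration; nobody in the conversation endorses this particular code.

```python
# A minimal sketch (not anyone's actual method) of the "community of
# hypotheses" picture: competing hypotheses carry weights, each weight is
# multiplied by how well that hypothesis predicted the new evidence
# (Bayes' theorem), and predictions are posterior-weighted votes.

def bayes_update(priors, likelihoods):
    """priors: {hypothesis: P(H)}; likelihoods: {hypothesis: P(E|H)}."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    evidence = sum(unnormalized.values())   # P(E), the normalizing constant
    return {h: w / evidence for h, w in unnormalized.items()}

# Two toy hypotheses about an election, and a poll result that one
# hypothesis predicted better than the other. Numbers are made up.
posterior = bayes_update(
    priors={"Kamala wins": 0.5, "Trump wins": 0.5},
    likelihoods={"Kamala wins": 0.6, "Trump wins": 0.4},
)
print(posterior)   # {'Kamala wins': 0.6, 'Trump wins': 0.4}

# A prediction is then a weighted vote of all surviving hypotheses, not
# the output of one chosen favorite. cond holds P(event | H), also made up.
cond = {"Kamala wins": 0.9, "Trump wins": 0.2}
p_event = sum(posterior[h] * cond[h] for h in posterior)
```

Note that no hypothesis is ever deleted outright; a poorly predicting one just loses weight, which is exactly the point Vaden pushes on below.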
So, I don't have a hypothesis in my head that has more than 1 percent probability saying: you're gonna launch that satellite, and some other non-Newtonian, non-Einsteinian force is just going to knock it out of its trajectory. I don't have a hypothesis like that that's getting sufficiently high probability. So this is a case where I feel very confident, because my dominant hypothesis about how the physics is going to work has already established itself with more than 99 percent probability.

Vaden Masrani: I don't understand that, but Ben, did you...

Ben Chugg: Uh, yeah, I mean, okay, that's fine. I disagree, I think, heh. Yeah, we can move on. I mean, I don't think this is actually what's going on in your head. I don't think you have these explicit theories and that you're actually assigning probabilities to them. I think what's going on is you've been swayed by arguments that if you send a satellite into space, it's going to...

Liron Shapira: That's a fair criticism, right?

Ben Chugg: ...obey general relativity. So I think Bayesianism in this way is both descriptively and, we'll get into this later, normatively false. But, you know, I can't sit here and examine the contents of your...

Liron Shapira: If I understand it, I think this is an interesting point you're making. You're basically saying: look, Liron, you kind of retconned, right? You retroactively described yourself as using Bayesian epistemology to justify why you funded this satellite project, but realistically, you never even thought of that. You're just retroactively pretending you're Bayesian. Is that basically your criticism?

Vaden Masrani: But hold on, though, because Ben's question wasn't about if you have a hundred thousand dollars and you need to allocate it to different engineering projects. It's: if you were the engineer, and we don't know how to make a satellite yet, how are you going to do it? And that's a different thing, right? So we're not talking about assigning probabilities to which project is going to be more or less successful. We're talking about, like, how do we get a satellite into the sky? And to do that, you need to understand general relativity and quantum mechanics. And these two things are mutually exclusive. So if you assign probability to one, you have to necessarily assign less probability to the other under the Bayesian framework. However, that isn't how scientists make satellites, because as we get more evidence for quantum mechanics, that doesn't take away what we know from general relativity, because we have to use both to get the frigging satellite into the sky. And so you just kind of answered a question that was adjacent to, but not the same as, the one that Ben was asking.

Liron Shapira: To make this specific, is there a particular prediction... like, you're basically saying, hey, how am I going to resolve the conflict between these different theories? But can you make it specific about which part of the satellite engineering feels tough to resolve for you?

Ben Chugg: Yeah, it's...

Vaden Masrani: Uh, does it...

Ben Chugg: It's just more when you were saying, like, how do you reason about the world, right? You're not tied to any specific hypothesis. It sounded like your worldview is not: okay, for the purposes of achieving this, I'm going to assume general relativity is right. That's anathema to the Bayesian, right? The analogy there is assigning probability one to general relativity.
You're not going to do that, because we know general relativity is false in some important way. And so you said that what you're doing, what you're thinking and the actions you're taking, correct me if I'm wrong, of course, are some weighted average of these hypotheses that you have about how the world works. But that just doesn't comport with, like, if you were to be an engineer, how you're actually going to design the satellite and send it up into space. You're not relying on a mishmash of physical theories to get the job done. You're relying on general relativity in this case.

Liron Shapira: I mean, there are specific things that I need to model to get the satellite to work, and I don't necessarily need to resolve every contradiction between different physical theories. I just have to say: what are the relevant phenomena that need to follow my models in order for the satellite to perform its function and not fall down to earth? And I probably don't have major conflicts between different theories there. I mean, if I'm not sure whether Einstein's relativity is true, right, if I'm not sure whether time dilation is a real thing or not, then I, as a Bayesian... I don't think Bayesianism is the issue here, right? If engineers launching a satellite didn't know whether time dilation was going to be an issue, I think even as a Popperian, you're like, uh oh, well, they'd better do some tests, right? I think we're in the same position there.

Ben Chugg: Yeah, for sure. Yeah.

Vaden Masrani: Can I go back to a different thing that you said earlier? Maybe the satellite thing is getting us a bit stuck. You said that you never change your mind, because you have a fixed set of hypotheses that you just assign different weights to. First, is that an accurate summary of what you said? I don't want to...

Liron Shapira: If you want to drill down into it, I wouldn't call it a fixed set of hypotheses; in some sense it's a variable set. But there's always a community of hypotheses, right, and they're all getting different weights, and then they're all weighing in together when I make a...

Vaden Masrani: So when you said you never change your mind, just maybe flesh out a bit more what you mean by that, because I don't want to...

Liron Shapira: Okay, I mean, if I walk into a room of strangers and I say, guys, I never change my mind, I think that's very much sending the wrong message, right?

Vaden Masrani: Totally, totally, which is why I'm not trying to straw-man you at all. So maybe just clarify a...

Liron Shapira: Because on the contrary, right, the takeaway is really more like: no, Bayesians, I'm a master at the dance of changing my mind, right? I don't just see changing my mind as like, oh, hey, I have this switch installed that I can flip. No, no, I see myself as like a karate sensei, right, where I can exactly move to the right new configuration of what my mind is supposed to have as a belief state. So does that answer your question?

Vaden Masrani: Um, so I gotta... I guess, why did you say you never change your mind in the first place? I'm totally understanding that you don't mean...

Liron Shapira: I meant... yeah, I feel like I threw you off track. When I say I don't change my mind, what I meant was that when you use that terminology, change your mind, it seems to indicate that somebody's mind has like one prediction, right?
Or like they've picked this one favorite hypothesis and then they threw it away and took a different one. And I'm just saying that doesn't describe my mind. My mind is always this community of different hypotheses. Yeah.

Vaden Masrani: Gotcha. Yeah. So that's actually a nice distinction between a Popperian approach and a Bayesian approach. So for me, once I have enough disconfirming evidence, I do exactly what you said the Bayesian doesn't do. I take that hypothesis, and it's gone now. I don't assign less probability. It's just dead, up until the point where there's another reason to think that it's no longer dead, and then I'll revive it again. But so that's just a distinction between how my thought process works and yours, I guess. I'm curious about another thing, though, which is: where do you get your hypotheses from in the first place? Because I understand that under the Bayesian view, you start with hypotheses and then you just assign different weights to them. But I'm just curious, before that stage, before the reweighting, where do the hypotheses come from in the first place?

Liron Shapira: Uh huh, that's a popular gotcha that people like to throw at the Bayesians, right? They're like, hey, you guys keep talking about... No, no, no, I know, I know. Bayesians love to keep updating their probabilities, but if you don't start with a really good probability in the first place, then you still might get screwed up. For example, if my a priori probability that Zeus is the one true god... if I start out with it at 99.99%, then even if I see a bunch of evidence that the world is just mechanical and there is no god, I still might come out of that thinking that Zeus has a really high probability. So, you know, this is just kind of fleshing out your kind of...

Vaden Masrani: No, you misunderstood the question, you misunderstood the question. I'm not talking about how your prior probabilities work, where they come from. Cause when you talk about Bayes' theorem, you have your likelihood and your prior. So P(E given H) times P(H), over P(E), yeah? So we can talk about the probability for P(H), and that's what you were describing. I'm not talking about that. I'm talking about H. Where does H come...

Liron Shapira: Sure.

Vaden Masrani: So before the probabilities are assigned, just where does H come from?

Liron Shapira: So this is not necessarily a point of disagreement between us. I mean, just like you can generate hypotheses, I'm also happy to generate hypotheses and consider new hypotheses. So the short answer is, you and I can probably go source our hypotheses from similar places. The long answer is, there is an idealization of how a Bayesian can operate, called Solomonoff induction. Are you guys familiar with that at all?

Vaden Masrani: Yeah, yes.

Liron Shapira: Yeah, so Solomonoff induction just says: hey, as long as you have infinite computing resources, right, so it's an idealization for that reason, there is a theoretical, abstract way where you can source from every possible hypothesis and then just update them all, right? That's the ideal. So I do some computable approximation to that ideal.

Ben Chugg: But the approximation, that's where the details are hidden, right? You clearly don't have every... you're not running every possible hypothesis in your head, right? At some point, you're coming up with new ideas.
Like, sometimes you wake up, you have a creative thought that you haven't had before. You know, Bayesianism can't really account for that. And, you know, if you want to get into the math, it really complicates things. Cause now all of a sudden you're working with a different probability space, right? And so what happens to all the probabilities that you assigned to some fixed hypothesis class? Now it's like: okay, now I have a new hypothesis; everything's got to get rejiggered. And so it just doesn't account for idea creation in a satisfying way.

Liron Shapira: So this is how I perceive the current state of the conversation. I'm basically like: hey, my epistemology has an uncomputable theoretical ideal that I'm trying to approximate. And you guys are like: well, that's so fraught with peril, you're never going to approximate it; what you actually do is going to be a shadow of that. Whereas I would make the opposite criticism of you guys: okay, well, you guys haven't even gotten to the point where you have an uncomputable ideal. So I feel like I'm actually farther along here, because approximating an uncomputable ideal, we do that all the time. Right? This whole idea of like, hey, we're going to go do math. Well, math is actually uncomputable, right? The general task of evaluating a mathematical statement is. So in all areas of life, we're constantly approximating uncomputable ideals. I'm not ashamed of approximating an uncomputable ideal.

Vaden Masrani: We do it on this side of the aisle too. If, again, you want to set this up as a debate, then we can, I guess, do that. The Turing machine is an uncomputable ideal that we approximate with our brain. So we have that on our side too, if that's what you're looking for. Right.

Liron Shapira: And how does that relate to Popperianism?

Vaden Masrani: Um, it doesn't, totally, because Popperian, or Popper... so it does relate to Deutsch. The Church-Turing thesis is where Deutsch gets his universal explainer stuff, and we can maybe go into that if you want. But in terms of Popper, it doesn't at all. But in terms of giving you what you said we didn't have, it does. Because you were saying that on our side of the aisle, we don't have the uncomputable ideal that we approximate, but we do, because I'm talking to you on it right now, which is a MacBook Pro. And that is an approximation of an uncomputable ideal. So, yeah.

Liron Shapira: Okay, got it. So when I ask you guys, hey, where do Popperians get their hypotheses, you're basically saying, well, we do at some point consider every possible Turing machine as a...

Vaden Masrani: No, no, no, no, no. So this, this is great. So, we don't know. So the answer is, I don't know. Popper starts with trial and error. But the question of where the conjectures come from, where the ideas come from? We don't have an answer. I don't know. And I would love to know, and to me, answering that question is equivalent to solving AGI. So I have no idea how the brain comes up with its conjectures in the first place. Popper starts with trial and error. He just says: there is some magical mystery process, and we'll leave it to the neuroscientists and the psychologists to figure out. We're just going to say: dunno, but that's the foundation, the trial and the error. So that's the answer from our side. Uh, yeah.
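For readers who haven't met it, the uncomputable ideal both sides keep invoking, Solomonoff induction, can be gestured at in a few lines. This toy version makes strong simplifying assumptions: the "programs" are a stipulated three-entry lookup table rather than all programs for a universal Turing machine, so it is only a sketch of the shape of the idea, not the real construction.

```python
# A toy gesture at Solomonoff induction: every hypothesis is a program,
# shorter programs get larger prior weight 2**(-length), and programs whose
# output contradicts the observed data are zeroed out. The real version
# enumerates all programs and is uncomputable; this one is illustrative only.

def solomonoff_sketch(programs, run, observed):
    weights = {}
    for p in programs:
        prior = 2 ** (-len(p))                    # shorter = more plausible a priori
        consistent = run(p).startswith(observed)  # does it reproduce the data so far?
        weights[p] = prior if consistent else 0.0
    total = sum(weights.values())
    if total == 0:
        return weights                            # nothing fits the data
    return {p: w / total for p, w in weights.items()}

# Stipulated "programs" and their outputs, purely for illustration.
table = {"0": "000000", "01": "010101", "11": "011000"}
posterior = solomonoff_sketch(list(table), lambda p: table[p], observed="0101")
print(posterior)   # all weight lands on "01", the only consistent program
```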
Liron Shapira: [Here's a] question that I think you might have answered now, which is: so Popper talks a lot about explanations, right? Like good explanations. It sounds like you're saying that when you think about an explanation, you can formalize what an explanation is as being a Turing machine. Would you agree?

Ben Chugg: Uh, no, I don't think so. I mean, if we knew how to program a good explanation, presumably that would allow us to generate them computationally, right? If you understood them deeply enough at that level. And also, I suspect something like that is impossible, because then you might be able to litigate what is a better and worse explanation in every circumstance. And I highly doubt that that's possible, right? This is the realm of argument and debate, and subjectivity enters the fray here. And, like, you're not going to be able to convince everyone with an argument, and so I don't think computation is the right lens to have on something like good explanations.

Vaden Masrani: And just to add a metaphor to that: it's kind of like saying, could you automate the proof process? Well, in some sense, absolutely not, no. Like, this is what Gödel, the incompleteness stuff, is about, which is that for different kinds of mathematical problems, you have to entirely invent new kinds of proof techniques, such as Cantor's diagonalization argument, right? That was completely new, and you can't just take that and apply it to all sorts of new kinds of problems. That's what mathematicians are doing all day: coming up with entirely novel kinds of proofs. And so if you grok that in the math space, so too with explanations. I think that different kinds of phenomena will require different modes of explanation, such that you can't just approximate them all with a Turing machine.

Liron Shapira: Now, in the math space, I think we're at the point where, you know, we've got set theory and formal proof theory. What do mathematicians do? They're approximating this ideal of creating this mathematical object, which you can formalize within proof theory as a proof. Like, we actually have nailed down the ontology of what a proof is. But it sounds like you're saying, okay, we haven't nailed down the ontology in epistemology of what an explanation is. You're saying, well, compare it to math, but I feel like math is farther along.

Vaden Masrani: So, can I just jump in here for a sec, Ben? Ben will never say this about himself, but listeners, just type in "Ben Chugg Google Scholar" and look at the proofs that he does. They're brilliant. So you're talking to a mathematician. Not me, Ben. And so I just pass that over to Ben, because he is absolutely the right person to answer the question of what mathematicians do.

Ben Chugg: Uh, I'm just more curious about what you mean by "we've solved the ontology of proofs," as a genuinely curious question, because this might make my life a lot easier if I could appeal to some sort of book that will tell me if I'm doing something right or wrong.

Liron Shapira: Let's say a mathematician, a grad student, goes to his professor and says: hey, I'm trying to prove this. I am trying to write up a proof once I figure it out.
Well, that thing that he writes up, these days, almost certainly is going to have an analogous thing that could in principle, it might take a lot of effort, but it could be formalized, right, purely symbolically within set theory. Is that fair?

Ben Chugg: I mean, yes. Okay, I'm confused... I mean, once you have the proof, the point is that it's logic, right? So you should be able to cash this out in terms of, yes, going down to, like, ZF set theory, for instance, right? You can cash this out all in terms of certain axioms. You don't tend to descend to that level of technicality in every proof; you stay at some abstract level. But yeah, the whole point of a proof is that it's written in tight enough logic that it convinces the rest of the community. That doesn't mean it's certain. That doesn't mean we're guaranteed truth. That just means everyone else is convinced to a large enough degree that we call this thing published and true. Okay, great. The hard part is coming up with the proof, what the hell the proof is in the first place, right? So once you have a proof, yeah, we can start doing things like running proof checkers and stuff on it. The hard part is, you know, proving the thing in the first place.

Liron Shapira: You're right. So the reason I'm bringing it up is, I don't even want to talk about the hardness of finding the proof yet. I just want to talk about the ontology, right? This idea that when you ask a mathematician, what are you doing, the mathematician can reply: I am conducting a heuristic search. My human brain is doing a computable approximation of the uncomputable ideal of scanning through every possible proof in the set of formal proofs and plucking out the one that I need, that proves what...

Vaden Masrani: I don't know a single mathematician who would say that, and you just asked a mathematician, and his reply wasn't that. This isn't a hypothetical; you're...

Liron Shapira: I'm not making a claim about what mathematicians say, right? I'm just making a claim about the ontology of what is, right? So an informally written English-language mathematical paper containing a proof maps to a formal object. That's all I'm...

Ben Chugg: Sure. Yeah, yeah. I mean, math is a formal...

Liron Shapira: Yeah, go ahead.

Ben Chugg: Insofar as proofs are about manipulations in this formal language, then sure. Yep.

Liron Shapira: So the reason I brought that up is because when I'm talking about Bayes and you're asking me, hey, where do you get hypotheses, or what is a hypothesis, I'd be like: oh, a hypothesis is a Turing machine that outputs predictions about the world, if you also encode the world, you know, in bits, right? So I have this ontology that is formalizable, that grounds Bayesian reasoning. But when you guys talk about Popperian reasoning, it sounds like you haven't agreed to do that, right? You haven't agreed to take this idea of an explanation and have a formal equivalent for it.

Vaden Masrani: False analogy, because a hypothesis is in natural language, not in a formal language. So the analogy doesn't work, because the ontology...

Liron Shapira: So is a mathematical paper, right? So is a research paper.

Vaden Masrani: Uh...

Ben Chugg: Step outside of...

Vaden Masrani: I'm saying that what you just...

Ben Chugg: Yeah. Go to physics or chemistry or something.
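The point Ben grants here, that an informal proof corresponds in principle to a machine-checkable formal object, is exactly what proof assistants make tangible. A trivial example in Lean (an illustration added here, not something from the conversation):

```lean
-- A machine-checkable proof object: the informal claim "n + 0 = n for
-- every natural number n" cashed out as a term that a proof checker
-- verifies purely symbolically, with no human judgment in the loop.
theorem add_zero_example (n : Nat) : n + 0 = n := rfl
```

As Ben says, the checker only certifies the finished object; inventing the proof is the part nobody has formalized.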
Vaden Masrani: Yeah, I'm just saying that the stuff you were just asking about the ontology of a mathematical proof, using that as an analogy to the hypothesis, the H in Bayes' theorem: the analogy is broken, because the hypothesis is some natural-language expression. It's not in a formal language. So the analogy just doesn't work. That's all I'm saying.

Liron Shapira: Yeah. I'm not saying that a hypothesis is a proof. What I'm saying is, when I talk about a hypothesis using natural language, or when I'm saying, hey, my hypothesis is that the sun will rise tomorrow, there is a corresponding formal thing, which is: if you take all the input to my eyes of what I'm seeing, and you codify that into bits, and you look at the set of all possible Turing machines that might output those bits, my hypothesis about the sun rising tomorrow is one of those Turing machines.

Ben Chugg: Sure, I mean, okay, so let me just try and restate your critique of us, just so I make sure I'm on the same page. I think you want to say: in theory, Bayesianism has this way to talk about the generation of new hypotheses, right? As abstract and idealized as it is, we've put in the work, in some sense, to try and formalize what the hell is going on here. You Popperians are sitting over there, you know, you're critiquing us, you're making fun of us. You haven't even tried to put in the effort of doing this. Where are your hypotheses coming from? You can't criticize us for doing this. You don't even have a formalism, for God's sakes. You just have words and stuff. Is that kind of where you're coming from? Without the snark. I added that.

Liron Shapira: It's roughly accurate, because I do think that formalizing the theoretical ideal of what you're trying to do does represent epistemological progress.

Vaden Masrani: Only if the theoretical philosophy assumes that a formalism is required. So part of Popper's view is that formalisms are useful sometimes, in some places, but most of the time you don't want to have a formalism, because having a formalism is unnaturally constraining the space of your conjectures. So the theory on our side is that formalisms are sometimes useful, in some places. Not always useful, in all places. And so I totally accept your critique, from your view. Because your view is that a formalism is always better, and we don't have one; thus, we're worse. But our view is that formalisms are sometimes useful, in some places. Not always, in every place.

Liron Shapira: What would be the problem with you just saying, okay, I can use a Turing machine as my formalism for an explanation? Because when we look at the actual things that you guys call explanations, it seems like it's pretty straightforward to map them to Turing machines.

Vaden Masrani: And, yeah, I guess you could... oh, go ahead.

Ben Chugg: Well, I think it just doesn't help you figure out the question of where these things are really coming from, right? So if you're interested, at the end of the day, in trying to figure out, philosophically and presumably neuroscientifically, how humans go about generating hypotheses, mapping them to the space of all possible Turing machines is not helpful. Like, sure, the output of your new idea could be run by some Turing machine. Great.
The question is, you know, there's an entire space of possibilities, as you're pointing out, vast combinations, endless combinations, in fact, of possible ideas. The human mind somehow, miraculously, is paring this down in some subconscious way, and new ideas are sort of popping into our heads. How the hell is that happening? I don't see how the Turing machine formalization actually helps us answer that question.

Liron Shapira: It's because we're talking about the ideal of epistemology. It might help to think about: hey, imagine you're programming an AI starting from scratch. Isn't it nice to have a way to tell the AI what a hypothesis is, or what an...

Vaden Masrani: But the ideal of your epistemology is that a formalism is required. Not of our epistemology.

Liron Shapira: Right, but so what I'm saying is, okay, you're saying a formalism isn't required, but let's say I take out a white sheet of paper and I'm just starting to write the code for an intelligent AI, right? So what you call a formalism, I say is like, hey, I have to put something into the...

Ben Chugg: Yeah, yeah.

Liron Shapira: ...how do I teach the AI?

Ben Chugg: I mean, I agree, this would be awesome if you could answer this question. But I just don't think you're answering it by appealing to... like, one thing I don't quite understand about your answer is: for a process that is taking place in our fallible human brains, you are appealing, as an explanation, to this idealized system. By definition, we know that can't be what's going on in our heads. So how is this helping us program an AGI? Which I totally take to be a very interesting question. And we'll get into this when we start talking about LLMs and deep learning; I don't think this is the right path to AGI. And so a very interesting question, from my perspective, is: what is the right path? Like, if we could have some notion of how the human brain is actually doing this. I agree that once we figured that out, we could presumably sit down and write a program that does it. And that's a very interesting question. I just don't think we know the answer to it.

Liron Shapira: Yeah, so I agree that just because I have a formalism that's an uncomputable ideal of Bayesian epistemology doesn't mean I'm ready to write a superintelligent AI today. And by analogy: they understood chess when the first computers came out, and it was pretty quick that somebody said, hey, look, I could write a chess program that basically looks ahead at every possible move, and this is the ideal program. It will beat you at chess; it'll just take longer than the lifetime of the universe, but it will win. So I agree that your criticism is equally valid against me as against that chess computer. My only argument is that the person who invented that chess computer did make progress toward solving superhuman chess ability, right? That was a good first step.

Ben Chugg: Yeah, yeah, that's fair. Can I just pivot slightly and ask you to clarify: do you think Bayesianism is true descriptively of the human brain, or are you making a normative claim about how rational agents ought to act?
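The brute-force chess ideal Liron describes is just exhaustive minimax. Here is a runnable toy version on a trivial Nim-like game instead of chess (the game and numbers are stand-ins chosen for brevity); the same shape of search specifies perfect chess and would outlast the universe.

```python
# Exhaustive minimax: the "ideal but intractable" program. Game: players
# alternate taking 1 or 2 stones; whoever takes the last stone wins.

from functools import lru_cache

@lru_cache(maxsize=None)
def value(stones):
    """+1 if the player to move wins under perfect play, -1 if they lose."""
    if stones == 0:
        return -1            # the previous player took the last stone; we lost
    # Try every legal move; the opponent's value flips sign for us.
    return max(-value(stones - take) for take in (1, 2) if take <= stones)

print(value(3))   # -1: with perfect play, multiples of 3 are losing positions
print(value(4))   # +1: take one stone, leaving the opponent at 3
```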
Liron Shapira: Right, yeah, yeah, you've said this a few times. I'm glad we're getting into this, because this is definitely one of the key points. And remember what you said before: okay, you're telling me now that on this engineering project, you used Bayesian reasoning to say you were 99 percent confident of the theory, but it sounds like you're retconning, right? That's kind of the same style of question.

Vaden Masrani: Retconning? What's retconning? I don't...

Liron Shapira: It's like retroactively rewriting history, basically, right? Like: oh yeah, I was totally Bayesian.

Vaden Masrani: Okay, cool. I'm sorry, I didn't know that term. Sorry to interrupt you.

Liron Shapira: No, that's good, yeah. So okay, it's totally true that as I go about my day, right: why did I open the refrigerator? What was my community of 50,000 hypotheses that told me different things were going to be in my fridge, or that there wasn't going to be a black hole in the fridge? What were all the different hypotheses? And the answer is: look, I just opened the fridge, right? It's muscle memory; I was thinking about something else. So I'm not pretending that I'm actually running the Bayesian algorithm. What I'm claiming is: to the extent that I can reliably predict my future and navigate my world, to the extent that my brain is succeeding at helping me do that, whatever I'm doing, the structure of the algorithm that I'm running is going to have Bayes-structure. Otherwise, it just won't work as well.

Ben Chugg: Uh, okay.

Vaden Masrani: Descriptively, you're saying.

Ben Chugg: You're saying, descriptively, if you do something else, you'll fall short of perfect rationality. Like, you'll have worse outcomes.

Liron Shapira: What I'm saying is: sometimes my muscle memory will just get me the can of Coke from the fridge, right, without me even thinking about Bayes' Law. But to the extent that that worked, it's because it dovetails with what Bayesian epistemology would have also had you do. Like, Bayesian epistemology is still the ideal epistemology, but if you're doing something else that ends up happening to approximate it, then you can still succeed.

Vaden Masrani: According to you, sure. And, yeah, it's not Bayes' Law, it's Bayes' theorem, first of all. But, sure. That's the worldview that we are saying we disagree with. But, sure.

Liron Shapira: Yeah, I mean, look, similarly for you guys, right? When you're opening your fridge, you don't have the one Popperian model with a good explanation and whatnot, right? You're just thinking about something else, most likely.

Vaden Masrani: You conjecture that you're thirsty? Or you have a little... I guess I don't entirely know what the question is. If you're asking what the Popperian approach to getting something from the fridge is, it's probably pretty simple: you have an idea that you're hungry, and you go there and you open the fridge and you get it. But if the claim is something deeper, which is, does the Popperian view say something about Bayes being the ideal, et cetera, et cetera, then it definitely says that that is not the case. So we can go into reasons why that's not the case, but your answer is assuming the very thing that we're disagreeing about. That's the point.

Liron Shapira: Mm hmm.
Okay, a couple things...

Vaden Masrani: Yeah. No, Ben...

Ben Chugg: I was just gonna... yeah, we're somewhat in the weeds, so I just maybe wanted to say to people how I envision the Bayesian debate: there are often two simultaneous things happening. One is the descriptive claims that you're making about how humans and brains do in fact work, and that they're doing something approximating Bayesian reasoning. And Vaden and I both think that's wrong, for certain philosophical reasons. You know, we can get into empiricism and stuff, but I don't think observations come coupled with numbers, and I don't think those numbers are being represented explicitly by your brain, which is updating via Bayes' theorem. So there's this whole descriptive morass that we've sort of entered. But then where the rubber really meets the road is the normative stuff. Right? So Bayesians want to assign numbers to everything, like you wanted to do at the beginning of this episode, right? You'll assign numbers to geopolitical catastrophes and, you know, P(doom), and then you'll compare those to numbers that are coming from robust statistical models backed by lots of data. And I think, Vaden, correct me if I'm wrong, I think Vaden's and my core concern is really with this second component of Bayesianism. I think the descriptive stuff is philosophically very interesting, but it's sort of less important in terms of actual decision-making and real-world consequences. Like, if you want to sit there and tell me that you're doing all this number manipulation with your brain, that it helps you make better decisions, and that's how you think about the world, then, you know, honestly, that's fine with me. But where this stuff really starts to matter is, I'll just steal Vaden's favorite example, because I'm sure it'll come up at some point, which is Toby Ord's probabilities in The Precipice, right? So he lists the probability that humans will die out by the end of the century, correct me if I'm wrong, and he gives this probability of one sixth. Where does this one sixth come from? It comes from aggregating all the different possibilities that he analyzes in that book. So he does AI, and he does bioterrorism, and he...

Vaden Masrani: Volcanoes and asteroids.

Ben Chugg: ...does all this stuff. And this is an illegal move from Vaden's and my perspective, and this is the kind of stuff we really want to call out, that we think really matters and really motivates us and most of our Bayesian critique. And it sort of goes beyond this descriptive-level Turing machine stuff that we've been arguing about just now. So anyway, I guess I just wanted to flag that for the audience. I think there's more at stake here, in some sense, than just deciding how to open the fridge in the morning, which is fun and interesting to talk about, but I just wanted to maybe frame things.

Vaden Masrani: Yeah. May I just... yes. I just want to add something to what Ben said. Beautiful. Exactly right. I think it's so important to continuously remind the listener, the viewer, why we're arguing in the weeds so much.
We're arguing so much about this because of exactly this high-level thing that you said, which is: it is illegal, it is duplicitous, and it is misleading the reader when someone says the probability of superintelligence [extinction] is 1 in 10, and they compare that to the probability of volcanic extinction, which is 1 in 1 million. Because you can look at the geological history to count volcanoes and make a pretty rock-solid estimate, but you are just making shit up when you're talking about the future, and then you're dignifying it with math and a hundred years of philosophy. And so why Ben, er... I can't speak for you, actually, on this one, but why I like to and need to argue in the weeds so much is that I have to argue on the opponent's territory. And so when I'm getting all annoyed by this 1-in-10 to 1-in-1-billion comparison, to argue against that I have to go into the philosophy of the Turing machines and the this and that and the whatever. And we get super in the weeds. But the reason I'm in the weeds there is because Toby Ord has been on multiple podcasts and has probably blasted this number into the ears of over 10 million people, if you can fairly assume that Ezra Klein and Sam Harris, who both swallowed this number uncritically, have a listenership somewhere around there. I think it's one in six for the aggregate of all extinctions, and then one in 10 for the superintelligence one, if I'm remembering The Precipice correctly. And that was compared against... I don't remember the numbers for volcanoes and supernovas and stuff, but one in one million, one in ten million, that order of magnitude, yeah.

Liron Shapira: Yeah, and so you're making the case for why we're getting into the weeds, why epistemology is so high-stakes, because basically the upshot in this particular example is that humanity should be able to do better than this Bayesian guy Toby Ord. It's kind of a disaster, in your view, that Toby Ord is saying that, like, nuclear extinction, for instance, might have a probability of, just to oversimplify what he actually says, something in the ballpark of 10%, right? Which gets to what we were discussing earlier. So you consider it kind of a failure mode that people like myself and Toby Ord are making claims like: hey guys, there's a 10 percent chance that we're going to nuclearly annihilate ourselves in the next century. You think it's a failure mode, because you think something better to say is: hey, we don't know whether we're going to get annihilated, and nobody should, say, quantify that.

Vaden Masrani: That's not the claim. So, I intentionally didn't use nuclear annihilation, because I think that is also in the camp of "we don't really know what the numbers are here." I used volcanoes, and I used supernovas, and I used asteroids. I did not use...

Ben Chugg: No, that's what he's saying. That's what...

Liron Shapira: And I think we're all on the same page that those things are unlikely in any given century, right? But so why don't we talk about the thing that's the more meaty claim, right? The...

Vaden Masrani: No, no, but my claim is not that we can't reason about nuclear annihilation. I think that's very important. I'm just saying that if I talk about the probability of volcanoes, and then I talk about the probability of nuclear annihilation, when I say the word probability, I'm referring to two separate things.
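The aggregation move being objected to is simple arithmetic, which is part of what makes it look rigorous. A sketch with purely illustrative numbers (not Ord's actual table): one rate backed by counting data and one subjective hunch get combined as if they were the same kind of quantity.

```python
# The disputed aggregation, as bare arithmetic. Numbers are illustrative
# only: a data-backed rate and a gut-hunch estimate on one common scale.

risks = {
    "asteroid/volcano (counted from geological records)": 1 / 1_000_000,
    "superintelligence (subjective judgment)":            1 / 10,
}

# Assuming independence: P(at least one) = 1 - product of (1 - p_i).
p_none = 1.0
for p in risks.values():
    p_none *= 1 - p
p_total = 1 - p_none

print(f"{p_total:.6f}")   # ~0.100001: the total is driven entirely by the hunch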
I should talk about, like, probability-one and probability-two, or probability-underscore-S and probability-underscore-O or something. They're just different, and we can't use the same word to compare...

Liron Shapira: You might label it frequentist probability, right? Would that be a...

Vaden Masrani: No, uh, no. Frequentist is a philosophical interpretation. I've been using "objective probability," but it's just probability based on data, probability based on counting stuff. But frequentist is not it, right, no.

Liron Shapira: Okay. Yeah, maybe you could call it statistical probability.

Vaden Masrani: Um, let's just call it probability that's based on data.

Ben Chugg: Or statistics.

Vaden Masrani: CSVs, Excel, JSON, yeah.

Ben Chugg: Yeah, that works fine for the purposes of this conversation, honestly. And yeah, just to maybe answer the question you asked a minute ago: it's certainly not that we can't talk about the risk of nuclear annihilation, right? What we're saying is, let's skip the part where we all give our gut hunches and scare the public with information that no one can possibly have. And so I would just turn it on you. Say you're very worried about nuclear annihilation, and you give a probability of 1 in 10 in the next 50 years. Then someone comes up to you, some geopolitical analyst, say John Mearsheimer, and he says: my probability is 1 in 50, okay? What's your next question? You're gonna ask, why is your probability 1 in 50? And he's gonna say, why is your probability 1 in 10? What are you gonna do? You're gonna start descending into the world of arguments, right? You're gonna start talking about the mobilization of certain countries, their nuclear capacity, their incentives, right? You're going to have a conversation filled with arguments and debates and subjective takes and all this stuff. You're going to disagree. You're going to agree. Maybe you'll change his mind. Maybe he'll change your mind. Great. And then at the very end of that, the Bayesian wants to say: okay, now I'm going to put a new number on this. But Vaden and I are just saying the number is totally irrelevant here, and it's coming out of nothing. Let's just skip the number part and have arguments, right? And that's not saying we can't think about future risks, or we can't prepare for things. It's not throwing our hands up in the air and claiming that we absolutely can't take action with respect to anything in the future. It's just saying: let's do what everyone does when they disagree about things. Let's take arguments very seriously. Arguments are primary, is a way to say it on our worldview. Numbers are secondary, and only useful when they're right for the problem at hand. And they're certainly not always useful. Yeah.

Vaden Masrani: Typically, when you have a data set is when it's useful to use numbers. Yes.

Liron Shapira: Imagine none of us were Bayesians, and we just had the conversation behind closed doors about the risk of nuclear annihilation, and we come out and we're like: okay, we all agree that the likelihood is worrisome. It's too close for comfort. It's still on our minds after this conversation. We didn't dismiss the possibility as being minimal, right? So that'd be one kind of non-Bayesian statement that normal people might say, right?
Or, alternately, imagine another hypothetical: maybe it's the middle of the Cuban Missile Crisis, and people walk out of the room—I think something like this actually did happen in the Kennedy administration—saying, "I think this is more likely than not. This looks really, really bad." Where I'm going with this is: you could bucket a number of different English statements that normal people often say after leaving these kinds of meetings. And it's pretty natural to say, okay, in the first case, where they said "too close for comfort," maybe the ballpark probability is 1 percent to 20 percent.

Vaden Masrani: Hold on. Hold on. That's the move. That's the move I want to excise. I think it's completely legitimate, 100 percent, to bucket degrees—strengths—of your beliefs. This is done all the time when you answer survey questions. A 1-to-10 scale is very useful: "How strongly do you agree with this proposition?" Sometimes it's strongly disagree, disagree, neutral, agree, strongly agree—that's a five-point scale that indicates strength of belief. Sometimes it's useful to go to ten; I think for certain mental health questions I do that. All great; I'm so on board with that; that's important. Where I say "hey, hold on, people" is calling it a probability. You don't have to do that. You could just say: how strongly do you believe something? As soon as you start calling it a probability, we are in philosophically dangerous territory, because of the arguments to assign probabilities to beliefs, and then the equating of probabilities that are just subjective gut hunches with, like, counting fricking asteroids. That's where all the difficulties come in. So I am totally in favor of quantizing, discretizing strengths of belief, and I think it's about as useful as a 10-point scale—which is why doctors don't use, say, 20-point scales very often. Only when I'm answering surveys from the LessWrong people, or the fricking Bostrom people, do they give me a sliding scale from 1 to 100. It's the only time I've ever been given a survey with a sliding scale—when I know they want to take that number, because I'm an AI researcher, and turn it into "the probability of blah, blah, blah." But most people don't think granularity beyond 10 is very useful. That's why doctors don't use it.

Liron Shapira: It's surprising to me that people get really worked up about this, because we're just trying to approximate an ideal. Maybe a superintelligent AI could give really precise estimates; as humans, we often say something like: hey, an asteroid impact—we've got a pretty confident reason to think it's less than one in a million in the next century, because it happens every few hundred million years, statistically, and we don't have a particular view of an asteroid heading toward us. So roughly, that's going to be the ballpark. And then, I can't confidently tell you the probability of nuclear war in the next century, right? Maybe it's 1%, maybe it's 5%, maybe it's 90%. But I feel confident telling you that nuclear war in the next century is going to be more than ten times as likely as an asteroid impact in the next century. Am I crazy to claim that?
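[Editor's note: a concrete way to see the distinction Vaden is drawing between the two kinds of "probability." The asteroid/volcano numbers come from counting events in a record; the nuclear-war number has no such record behind it. A minimal sketch—the event counts and window below are illustrative placeholders, not real geological data:]

```python
# Sketch: "probability based on data" in Vaden's sense is just a
# frequency estimate from a historical record.
# NOTE: the numbers below are illustrative placeholders, not real data.

impacts_observed = 4            # hypothetical count of extinction-scale impacts
record_years = 500_000_000      # hypothetical length of the geological record
century = 100

# Naive per-century frequency estimate from counting events:
p_impact_per_century = impacts_observed / record_years * century
print(f"data-based estimate: {p_impact_per_century:.2e} per century")
# ~8e-07 -- the "less than one in a million" order of magnitude cited above.

# There is no analogous record of nuclear wars to count, so any
# per-century number for that risk is a credence, not a frequency --
# which is exactly the equivocation Vaden objects to.
```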
Ben Chugg: Let's descend back into the weeds of philosophy for one second. What do you mean by "approximating an ideal"? What's the ideal here? Is the world—

Vaden Masrani: Thank you. Yeah.

Ben Chugg: No, no—not even a normative ideal. Correct me if I'm wrong: you're saying there is a right probability, and you're trying to approximate it with your degrees of belief. So there is an X percent chance, for some X, that there's a nuclear strike on the US in the next hundred years. Do you think that?

Liron Shapira: Yeah, I mean, Solomonoff induction is going to give you the ideal Bayesian probabilities to make decisions.

Ben Chugg: Okay, but that's different. That's a claim about rationality. I'm asking: is there a probability attached to the world? Is the world stochastic, for you?

Liron Shapira: No—probability is in the mind of the model maker, right? You might as well treat the universe as deterministic, because there's actually no ontological difference when you build a mental model. There's no reason to take your uncertainty and act like the uncertainty is a property of the universe. You can always just internalize it.

Ben Chugg: Okay, good.

Liron Shapira: Or—

Vaden Masrani: That's one of the good Bayesian critiques of frequentism that I like. I totally agree with you that the world is deterministic, non-stochastic, and randomness doesn't actually occur in nature. But—

Liron Shapira: There's just no epistemic value to treating the universe as ontologically, fundamentally non-deterministic. The strongest example I've seen is in quantum theory: the idea that quantum collapse is ontologically fundamental to the universe, that the probabilities are ontologically fundamental, instead of just saying, "Hey, I'm uncertain what my quantum coin is going to show." To me, that seems like the way to go. And by the way, I bounced this off Eliezer, because it's not officially part of the Eliezer canon, and Eliezer says he thinks what I just said is probably right.

Ben Chugg: Yeah, nice. I think, for the purposes of this, we're all comfortable agreeing the world's deterministic. So now the question is: when you say "ideal," you're appealing to a normative claim about how rational agents ought to behave, right? And so now we need to descend into: by whose lights is it rational to put probabilities on every single proposition? But I just wanted to check, because when you were talking, it sounded like you were saying there is an X percent probability that some event happens, and we're trying to figure out what that X is. That's not true, right?

Vaden Masrani: The ideal—yeah, the ideal is the ideal Bayesian reasoner, right? That is what "the ideal" means.

Liron Shapira: Let me give you more context about the Bayesian worldview, or specifically the Solomonoff induction worldview. The game we're playing here is: we're trying to get calibrated probabilities on the next thing we're going to predict.
And the ideal of Solomonoff induction is: I take in all the evidence there is to take in, and I give you a probability distribution over what's going to happen next, and nobody can predict better than me in terms of scoring functions—the kind they use on prediction markets. I'm provably going to get the highest score on predicting the future. That's the name of the game. And remember the stakes: one reason we're having this conversation is that we're trying to know how scared we should be about AI being about to make us extinct. A lot of us Bayesians are noticing that the probability seems high. So, the same way we would if there were a prediction market with a reliable counterparty, we would place a pretty high bet that the world is going to—

Ben Chugg: Good. We're getting into the meat of it. I just have a historical question. Is Solomonoff induction tied to the objective Bayesian school or the subjective Bayesian school? Or do you not know?

Liron Shapira: I don't really know. So this is where maybe I pull a David Deutsch and say: look, I don't necessarily have to represent the Bayesians. I think I'm faithfully representing Eliezer Yudkowsky—you can consider me a stochastic parrot for his position, because I'm not seeing any daylight there. But I can't trace it back; in what Eliezer wrote about Solomonoff induction, he indicated part of it was original. So this could just be Eliezer-only at this—

Ben Chugg: Yeah, that wasn't supposed to be—

Vaden Masrani: Solomonoff induction is induction—philosophical induction, the stuff we've been railing against—except with a Bayes' theorem interpretation on top of it. So all of the critiques that we've made about—

Ben Chugg: No, I know, but I was just curious, because there are two schools of Bayesianism: the objective Bayesians and the subjective Bayesians. Jaynes comes from the objective school, and Solomonoff induction—

Vaden Masrani: Oh, he comes from the—

Ben Chugg: —that's what the ideal rational agent is about. He thinks there is a correct prior, that there are correct probabilities to have in each moment. No, sorry—within Bayesianism, which is still a subjective interpretation of probability, there's an objective (call it logical) probability versus subjective Bayesianism. These are different things. Subjective Bayesians, I think, wouldn't sign off on Solomonoff induction—this is a total tangent, you can cut it out if you want—because for them probability is completely individual, and there's no way to litigate that I have a better probability than you; it's totally subjective. The logical or objective Bayesians want to say, no, there is a way to litigate who has the better credence in a proposition. But they're both still Bayesian in the sense that they're putting probability distributions over propositions. Anyway, sorry.

Vaden Masrani: I think you should keep that in. That was helpful for me. Yeah, you should keep that in.
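[Editor's note: for readers who want the mechanics behind "Solomonoff induction" as Liron uses it here. The real formalism mixes over all Turing machines with prior 2^(-program length) and is uncomputable; the toy sketch below conveys only the shape of it. The hypothesis class, the prior proxy, and the data are all illustrative simplifications, not the actual construction:]

```python
# Toy sketch of a Solomonoff-flavored predictor over binary sequences.
# Each hypothesis h maps a history to P(next bit = 1), and gets a
# prior of 2^-len(name) -- a crude stand-in for "shorter description
# => higher prior weight", which is the ordering Liron describes.

hypotheses = {
    "fair":        lambda hist: 0.5,                       # fair coin
    "always_head": lambda hist: 0.99,                      # biased to heads
    "repeat_last": lambda hist: 1.0 if hist and hist[-1] == 1 else 0.0,
}

def posterior(data):
    """Bayes update: weight each hypothesis by prior * likelihood."""
    weights = {}
    for name, pred in hypotheses.items():
        w = 2.0 ** -len(name)                  # description-length-style prior
        hist = []
        for bit in data:
            p1 = pred(hist)
            w *= p1 if bit == 1 else (1 - p1)  # likelihood of each observed bit
            hist.append(bit)
        weights[name] = w
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

def predict_next(data):
    """Mixture prediction: P(next=1) = sum over h of posterior(h) * h(data)."""
    post = posterior(data)
    return sum(post[n] * hypotheses[n](list(data)) for n in post)

print(posterior([1, 1, 1, 1]))     # "repeat_last" is refuted outright
                                   # (likelihood 0 on the first bit)
print(predict_next([1, 1, 1, 1]))  # calibrated mixture probability, ~0.55
```

The "scoring functions" Liron mentions are proper scoring rules (log score, Brier score); the claim he is gesturing at is that this kind of Bayesian mixture is asymptotically hard to beat under such scores, granting the prior.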
Liron Shapira: You know, Ray Solomonoff came a couple of centuries after Laplace, I think. So there was a long time when people were like, "Hey, Bayesian updating is really useful, but where do the priors come from? I'm not really sure—but if you have priors, this is a great way to update them." And then Solomonoff came along and said, "Look, I can idealize even the priors. I can get you from zero to sixty—from having no beliefs to having, provably, the best beliefs."

Ben Chugg: Okay. So probably the objective school. Cool.

Vaden Masrani: Yeah. Can I say, for the listeners: all this "ideal," "provably," blah, blah, blah—it all rides on Cox's theorem. Just Google my name and type in "the credence assumption," and you can see the three assumptions that underlie Cox's theorem. The first one, the second one, and the third one are all something you have to choose to assume. This is what Yudkowsky never talks about when he talks about laws and "you have to be rational." All of that holds only if you voluntarily decide to assume the credence assumption. I don't, because that assumption leads to a whole bouquet of paradoxes and confusion and nonsense about superintelligence, yada yada. But just for the listeners: when you hear that there's Bayes' law and the law of rationality, all of that applies only if you voluntarily choose to assume the credence assumption. And if, like me, you don't, then none of this stuff applies to you. So just take that—

Liron Shapira: Maybe we'll get into that, but I—

Vaden Masrani: That was more for the listeners than for you.

Liron Shapira: Okay, okay. Where I'd like to go next: you guys just put in a good effort, which I appreciate, zooming in on some potential nitpicks or flaws of Bayesianism. So let me turn the tables and zoom in on something in Popperianism that I—

Vaden Masrani: Yeah, please.

Liron Shapira: —might be able to collapse a little bit. Okay, so we talked about how you're not really liking the idea of formalizing the definition of what an explanation is—it's just, look, we do it as humans, we do our best, it's a little bit informal. One thing Popperians say about explanations is that better explanations are hard to vary. Certainly Deutsch says that. Do you want to elaborate a little on that claim?

Vaden Masrani: Yep. That's from Deutsch—one of the things he built on top of Popper's stuff. And all he means there is: consider two theories for why the sun rises in the morning. Theory one is that there's a god who, if they're happy that day, will make the sun rise. The other theory is heliocentrism, where you have the sun at the center of the solar system, the earth rotates around it, the earth is on a bit of a tilt, and it's the rotation of a spherical earth which causes the sunlight to arrive the next morning. The first explanation, the gods one, is completely easy to vary—arbitrary—because you could ask: why is it when the god is happy? Why one god? Why not six gods? Whatever you want to justify in the moment can be justified under that theory.
The same applies to superintelligence, actually, but we'll come to that later. With the heliocentrism theory, it's very difficult to vary, because if you change any detail—so, why spherical? let's switch it to cubic—well, now all of a sudden the predictions are completely different, because the sun is going to rise in a different fashion. That's what Deutsch is getting at with the hard-to-vary stuff. Some critiques of this, though: it's not like I hand you a theory and you can just naturally sort it into "hard to vary" or "easy to vary." And so I'm assuming you're about to say something like, "Well, this is a difference in degree, not in kind, because everything is somewhere on a spectrum of easier or harder to vary, and you can't naturally bucket theories into one camp or the other." To which I'd say: I agree. That is true. The hard-to-vary criterion, I think, is rather useless as a critique of other people's theories. You could try to tell astrologers and homeopathy people that their theories are not hard to vary and thus wrong; they're not going to listen to you. It's not a very good critique for other people. It's a great internal critique, though. If you take it on yourself and subject your own thought process to "is my explanation easy to vary here?"—like, is the explanation that the superintelligence can just create a new reality whenever it wants easy to vary, or hard to vary?—then you can start to weed out different kinds of theories in your own thinking. So, to add to what Deutsch said: it's a degree, not a kind; it's a kind of useless critique of other people, but a great internal critique. I don't know, Ben, if you'd want to add anything.

Ben Chugg: Maybe the only thing I'd add is that while this might sound philosophically in the weeds a bit, this is precisely the kind of thing people do on a day-to-day basis, right? You drop your kid off at kindergarten, then you go to pick them up. There are many theories consistent with what you see: they could have been replaced by aliens while they were there; now they're a different person; they've completely changed their personality over the course of the day. Many possible predictions you could make about the future. What are you doing? You're saying those are totally unlikely, because if that were to happen, you'd have no good explanation as to why it would have happened that day. So this comports well with how we think about reality day to day. Why do I not think my tea is going to suddenly start levitating? Precisely for this sort of reason, even if people don't really think of it like that.

Vaden Masrani: And maybe a little plug for our conversation with Tamler Sommers, because we go into this in much greater detail—for people who want a more fleshed-out version of what we just said, check out that episode.

Liron Shapira: So personally, I do see some appeal in the particular example you chose. I get why people are using it as a justification for their epistemology.
Vaden Masrani: It's not a justification for the epistemology, just to be clear. It's more of a consequence of the epistemology. It's a heuristic and a criterion, not a justification. But yes.

Liron Shapira: Do you think it's a corollary, or do you think it's one of the pretty foundational rules of thumb for how to apply the epistemology?

Vaden Masrani: No, it's not foundational. It's a corollary, yeah.

Liron Shapira: Interesting, because I feel like without it, it might be hard to derive it from the rest of Popperianism.

Vaden Masrani: Nothing is derivable in Popperianism, and it's not foundational. No.

Liron Shapira: But you're saying nothing is derivable, and you're also saying it's not foundational, and—

Vaden Masrani: Oh, sorry—good catch. If by "derivable" you mean formally, logically derivable, then no, nothing is derivable; it's conjectural. Conjecture. If by "derivable" you just mean it in the colloquial sense—"oh yeah, I derived it"—then sure. Just to be clear there, because the formal/natural-language distinction seems to be important in this conversation.

Liron Shapira: I haven't gone that deep on Popperianism, so I'm actually curious: this rule that Deutsch brings up a lot—or heuristic, or whatever it is—that good explanations are hard to vary. Did Deutsch infer that from something else Popper says? And if so, what's the inference?

Ben Chugg: Yeah—Vaden, correct me if I'm wrong here—doesn't this come somewhat from Popper's notion of content? The empirical content of theories, right? If you want theories with high empirical content—

Vaden Masrani: Uh, yeah, yeah.

Ben Chugg: —you want things that are hard to vary.

Vaden Masrani: Just one caveat first: there is an important distinction between the way Ben and I and you think about stuff—formal systems compared to natural language. Words like "derivable," "infer," et cetera—I feel like we need to plant a flag on those, because of translational difficulties there. With that caveat: yes, it is absolutely colloquially derivable from his theory of content. Deutsch just kind of renewed it. It's consistent with Popper for sure, but it's a rebranding—what's the phrase, a concept handle? It's like a concept handle.

Liron Shapira: Ben, do you want to elaborate on that? I'm curious to learn a little more. Because, I mean, look, I find some merit, some appeal, in this concept. Can you tell me more about the connection to Popper's notion of content?

Ben Chugg: I'll let Vaden go, because he loves this stuff.

Vaden Masrani: Do you want to hear the full thing about content? I could spiel about that for like an hour.

Liron Shapira: Can you just tell me the part that grounds the "explanations should be hard to vary" claim?

Vaden Masrani: Yeah—I'd love to talk about content, but I need to explain what it is. Do you know Popper's stuff on content?

Liron Shapira: Mm-hmm.

Vaden Masrani: Okay. Content is a really interesting concept. The content of a statement is the set of all logical consequences of that statement. Okay?
I'm going to expand on this a little, because it's going to lead somewhere and connect nicely to what we've been discussing so far. To give an example: the content of the statement "today is Monday" would be the set of all things logically derivable from it—"today is not Tuesday," "today is not Wednesday," "today is not Thursday," et cetera. The content of the statement "it is raining outside" would be "it is not sunny outside," "there are clouds in the sky," that kind of thing. So that's what content is. And there are different kinds of content: empirical content and metaphysical content. Empirical content is the subset of the content consisting of derivable things that are empirically falsifiable. For example, what's the content of the statement "all swans are white"? One derivable conclusion would be "there is not a black swan in Times Square on Wednesday, 2024." That would be an empirically falsifiable claim. The content of a metaphysical statement—something like "the arc of progress bends towards justice," or whatever that MLK quote is—would include something like "the future will be more just than the past."

Okay. If you let me elaborate a bit further, I promise this is going to connect. Now we can talk about how to compare the content of different statements. With the exception of tautologies, essentially every statement has infinite content, because you can derive an infinite number of statements—from "today is Monday" you can just keep going: "today is not Tuesday," et cetera. So it's infinite, but you can do class–subclass relations. The content of Einstein's theory is strictly greater than the content of Newton's, because you can derive Newton from Einstein: anything derivable from Newton is derivable from Einstein, so Einstein is the higher-content theory. But you can't compare the content of, say, Einstein and Darwin, because they're just infinite sets that can't be compared.

Going a bit further—and this is where it connects really nicely to what we've been discussing—let's talk about the content of conjunctions. Take two statements: "today is Monday" and "it is raining." The content of the conjunction is greater than or equal to the content of either statement on its own. The content of a tautology is zero, if you want to put a measure on it, because nothing can be derived from a tautology. The content of a contradiction is infinite—or one, if you're normalizing—because, by the principle of explosion, anything can be derived from a contradiction. But because of that, you can immediately derive an empirical falsifier showing that the contradiction is false. So now we connect to probability. The probability of a conjunction—"today is Monday and today is not raining"—strictly goes down: it's less than or equal to the probability of either conjunct. The probability of a tautology is one. The probability of a contradiction is zero. So if you want, in science and in thought, to have high content, you necessarily must have low probability. If you want your theories to be bold and risky, they necessarily have low probability. On this side of the aisle, we claim the project of science is to have high-content propositions—theories that are bold and risky—and that necessarily means low probability. On your side of the aisle, you want high probability. If you just want high probability, fill your textbooks with tautologies; if you want low probability, fill them with contradictions. From our perspective, we want high content, so we want low probability—we are completely inverted. And I would claim—and Ben, I think, would claim, and Popper; I'm ventriloquizing Popper entirely here—that the goal of science is to have high-content, risky, bold, empirical theories, such as Newton, Einstein, Darwin, DNA, et cetera. And that means low probability. Which means Bayesianism is wrong. Please.
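[Editor's note: a compact statement of the content–probability inversion Vaden is describing, sketched in standard notation, with $\mathrm{Ct}(a)$ for the content of statement $a$:]

```latex
% Conjunction raises content but lowers probability:
\[ \mathrm{Ct}(a) \;\le\; \mathrm{Ct}(a \wedge b), \qquad
   P(a \wedge b) \;\le\; \min\{P(a),\, P(b)\} \]

% The extremes, for a tautology $t$ and a contradiction $c$:
\[ P(t) = 1,\ \mathrm{Ct}(t) = 0
   \qquad \text{vs.} \qquad
   P(c) = 0,\ \mathrm{Ct}(c) \text{ maximal} \]

% Hence the Popperian slogan: bolder, higher-content theories
% are necessarily less probable a priori.
```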
Liron Shapira: Yeah, thanks for that. Let me make sure I fully understand. In the example of the sun rising and setting: one person says, "I think this is because the earth is spinning, so we see the sun come up and go down." Another person says, "I think this is because I believe in the Greek gods, and this is clearly just Helios, as told in Greek mythology." And you're saying: well, look, we prefer the higher-content theory. So when you talk about Helios, because it's easy to vary, that makes me think it's using fewer logical conjunctions, which would make it lower content. Am I understanding you correctly?

Vaden Masrani: Great. Yes—I actually hadn't connected those two. And there's a nice relationship between complexity, which is about conjunctions of statements, and simplicity. What we look for in science is simple statements with high content, because those are the easiest to falsify. If we have statements from which a lot can be derived—such as "you can't travel faster than the speed of light"—then they make a lot of falsifiable predictions, and thus touch reality much more. And they're harder to vary, because if you change any part of them, you're falsified, you're falsified, you're falsified. So there is a direct relationship there, yeah.

Liron Shapira: Okay, but what if the ancient Greek pushes back? He's like, "Oh, you want logical conjunctions, eh? Let me tell you about Helios. Helios rides his chariot around the sun, and he wears these sandals made of gold, and he's friends with Zeus." So he gives you, like, fifty conjunctions. He says, "I actually think my theory is very high content."

Vaden Masrani: Yeah, and this is where there's a difference between content and ease of varying, right? All the conjunctions he just made up—he could just make up a different set. And that's why it's so easy to—

Ben Chugg: But—

Liron Shapira: But what if—okay, just playing along—

Vaden Masrani: Oh yeah, yeah, no, but—

Liron Shapira: Let me just push the bumper car here. What if he says, "But I'm specifically just telling you all the conjunctions from my text, and we haven't varied the text for a thousand years."

Ben Chugg: I think there you'd want to talk about the consequences of his view.
You want to look at the conjunctions of the content—in this case, it's supposed to be an empirical theory, so the empirical consequences. You ask him: okay, given all these details of your theory, fine—but what do you expect to see in the world as a result of it? And there it's very low content, right? Because it's going to be able to explain anything that happens. War, no war. Clouds, no clouds. I don't actually know what chariots—

Liron Shapira: I hear what you're saying. And, you know, I'm playing devil's advocate here—I'm not necessarily expecting to beat you in this argument; I'm really just pushing to see if I can.

Vaden Masrani: I mean, it's not about winning or losing; it's just trying to learn from each other.

Liron Shapira: Yeah. So imagine he says: "Okay, but I have this text. It's been around for a thousand years, and it specifically says that every day Helios comes up and then goes down. And it can vary a little bit, but it's always going to be up and down in an arc pattern across the sky. So I'm not varying it. And it has, like, fifty conjunctions." So why does this not beat out the earth-is-spinning theory?

Vaden Masrani: Is this going to move to, like, induction and stuff—"I've seen it a thousand times in the past"—is that where you're going with this?

Liron Shapira: No, I'm not moving. I'm actually still trying to prod at this idea of being hard to vary.

Vaden Masrani: Sorry. Sorry.

Liron Shapira: —my critique being that the Helios-going-around theory—

Ben Chugg: Oh, I see. Okay. So I'd rather talk about content there. That's why I think content is actually the more primal concept here—the more primitive concept, rather. Because there you can talk about—it's not that his theory has no predictive power or no content, right? As you said, it's going to predict that the sun rises and sets. But then you start asking: what's beyond that prediction? What else does this say about the world? Well, the theory of heliocentrism says a lot. It says things about seasons; it posits a very rigid structure of the world, and we can go and test this structure—the tilt theory of seasons comes to mind; it's related to this. And this comports with other theories we have of the world, which together make this web of things—and that's where the hard-to-varyness comes in: all of that together is very hard to vary. So it's true that his theory makes some predictions and has some empirical content; that's presumably why they thought it was a useful predictive theory in the first place. But then you ask: what does heliocentrism have on top of that? It posits way more; it has way more content. And so we prefer it as a theory. Did that answer your question?

Liron Shapira: Okay, and just to make sure—let me try to summarize; I may or may not have understood you correctly. You're saying: look, the earth-spinning model can also make a bunch of other predictions that we can go and test. So just by virtue of that, you're getting more bang for the buck. It's a compact theory, and it's getting all these other—it's constraining the world.
But it almost sounds like hard-to-vary might not even be the main argument here; it's more like, "Hey, look, there's a bunch of different types of evidence, and the theory is compact." I feel like those are the attributes you actually like about it.

Vaden Masrani: Well, the hard-to-varyness, again, is not some core thing where, if you refute it, you destroy all of Popperianism. It's a heuristic—a way to think about stuff. It's related to content, and content is a bit more of a fleshed-out theory. All of this is related to falsification—content is part of how you connect to falsification. And it's related to Occam's razor and such via the compactness: compactness connects you to simplicity. But again, it's not an "ah, gotcha, man" situation. Sometimes I think about hard-to-varyness, and other times I think about empirical content. And what Ben just said was beautiful and perfect: the rigidness of the theory, how it's locked and tightly fit on top of reality. And then it gives you extra things to think about that you hadn't realized—if this is true, it leads to this other thing. For example, heliocentrism leads pretty quickly to the idea that, oh shit, those little lights in the sky—they're stars. Maybe they're just far away, and maybe there are other planets there too, and maybe on those other planets there are other people contemplating how the world works. It's not that this is formally derivable from it, but your thought leads there, right? And that's part of the content of the theory.

Liron Shapira: So my perspective is—you know, if you were to come into my re-education camp and I wanted to reprogram you into Bayesianism, what I'd probably do is keep pushing on: what do you mean by "hard to vary"? What do you mean by "following Occam's razor"? I feel like if I just keep pushing on your heuristic definitions of things, I'll take you down the slippery slope until you're like, "Okay, fine, Solomonoff induction perfectly formalizes what all our concepts really mean."

Vaden Masrani: But it's not taking us down the slippery slope. Ben is a statistician—he understands Bayes. I grew up in this stuff. We understand it; we've read it all. I've read a lot of Yudkowsky. I know the arguments. There's maybe a deep asymmetry here—

Liron Shapira: So—

Vaden Masrani: —which is that we know your side of the argument, but you don't totally know our side. The re-education has already happened, because I started as a Bayesian and a LessWrong person, and so did Ben. We have been re-educated out of it. So you're talking about re-re-educating us, but you wouldn't be telling us new things.

Liron Shapira: I actually haven't heard my side represented well on your podcast, so let's see how much you know my side by the end of this, okay?

Vaden Masrani: Sure, sure. Yeah.

Liron Shapira: All right. So I've got another related question on the subject of hard to vary, and I think you mentioned this yourself: when somebody says, "Hey, the earth is spinning on its axis," that seems kind of hard to vary.
Technically, it's not hard to vary, because you could still come up with infinitely many equivalent explanations. What I mean is: okay, the earth spins on its axis, and there are angels pushing the earth around, right? You can just keep adding random details, or even make equivalent variations—build it out of other concepts, whatever. So there's this infinite class. But the problem is you're wasting bits, right? It's just not compact. I think that's the main—

Vaden Masrani: No, no, no, good sir. It's not about bits. No. So the problem with that—yeah, you can take any theory and then—I actually gave a talk about this at a high school once, and I called it the "tiny little angels" theory. You can take everything you know about physics and just say it's because of tiny little angels, and the tiny little angels are doing all of it. The problem there is not that you have to add extra bits. It's that as soon as you posit tiny little angels, you are positing a completely different universe that we would have to be living in—it would rewrite everything we know. It's the same with homeopathy and such: if the more you dilute something, the stronger it gets, that rewrites the entire periodic table.

Liron Shapira: They're just there, but they're inert.

Vaden Masrani: So then—

Ben Chugg: How did the—what is their—

Vaden Masrani: That is the hard-to-vary stuff. Why not angels? Why not devils? The very way that you are varying the explanation as we speak is what we're talking about, right? It's easy for you to vary.

Liron Shapira: The earth turns on its axis, but there's just, like, one extra atom that's just sitting there. Can't I just posit that? Isn't that an easy variation?

Ben Chugg: But then take that seriously as a theory, right? Is that extra atom interacting with anything? If not, then what use is it? If so, then it's going to have effects—so why haven't we witnessed any of those effects? Where is it in our theories?

Vaden Masrani: Also, the heliocentrism theory is not a theory of "there are this many atoms." It's not a theory at that level, right? Who's counting atoms in heliocentrism?

Ben Chugg: —theory, but, yeah.

Liron Shapira: So let me summarize my point here. I think you guys do have a point when you talk about harder to vary, and I think it maps onto what Bayesians—what Occam's razor—would claim: let's try to keep the theory compact. A theory gets a higher a priori probability if it's compact. So if you add a million angels that are inert, you're violating Occam's razor, which I think maybe both worldviews can agree on. But if you're saying, "No, no, we don't care about Occam's razor; we care that it doesn't make extra predictions"—or that it makes other predictions that get falsified—I feel like now you're diverging into a different argument. So I do feel like the hard-to-vary argument kind of seems equivalent to the Occam's razor argument.

Vaden Masrani: Homeopathy could probably be represented with far fewer bits than the periodic table, but I still prefer the periodic table, even though it's more complex, right? So it's not just Occam's razor and a low number of bits.
Quantum field theory would take up a lot of bits. There are many simpler theories you could use, but they don't explain anything—they don't explain the experimental results that we—

Liron Shapira: The problem with homeopathy is, doesn't—

Vaden Masrani: Simplicity itself is not valuable. Okay, well, now we're back to content. But I'm saying: if the only criterion is a small number of bits—being compact, simplicity—there are so many theories which are complex, use a lot of bits, are not simple, and which I still prefer. That's my point.

Liron Shapira: If you're trying to set up an example, though, you have to make sure it's an example where two different models make the same prediction. When you brought up homeopathy versus the periodic table, I wasn't clear on the scenario where they're both making the same prediction.

Vaden Masrani: They are both predicting that if you take my drug, it will make your cold go away faster. Buy my product at Whole Foods, and it will address your cold—both of them say that.

Liron Shapira: But in this scenario, doesn't the homeopathy remedy not work? Okay, but I mean—

Vaden Masrani: But the homeopathy people think they're predicting that it's medicine, right? There's a reason people pass over traditional medicine and go to homeopathy: both are making the prediction that if you take this, you're going to feel better.

Liron Shapira: The kind of example you'd need to set up is one where we actually have the same phenomenon, right? In the other example it was: the sun is going to come up and travel in an arc across the sky. Same phenomenon, two theories, and we're—

Vaden Masrani: No, the type of example I was trying to set up here is much simpler: simplicity and Occam's razor aren't sufficient. They're just modes of criticism. They're useful heuristics sometimes, but they're not primary. If all we care about is a small number of bits and simplicity and compactness, then I can give you a bunch of theories that meet that criterion that I don't like very much.

Liron Shapira: This is actually interesting. I wasn't really expecting you to say you basically don't think Occam's razor is useful—or, what is your position on Occam's razor?

Vaden Masrani: I think Occam's razor is good sometimes. It's one way to criticize stuff, but it's not the only thing. Sometimes a theory is super complicated and has a bunch of superfluous assumptions you need to shave off—that's when I'll pull out Occam's razor. Sometimes I'll pull out Hitchens's razor: that which can be asserted without evidence can be dismissed without evidence. That's also a useful criticism, a useful heuristic. There's a whole toolkit of different razors one can pull out, and none of them are at the base level. They're all just kinds of, you know, shaving equipment that shaves off shitty arguments.

Liron Shapira: What do you guys think of the Bayesian view of Occam's razor?

Vaden Masrani: I think it's as fallacious and mistaken as Bayesianism itself. Or—that's cheap. Ben, give me a less cheap answer.

Ben Chugg: Yeah. So you want to say that theories that are simpler should have higher prior probabilities, right? That's the view of Bayesianism with respect to—

Liron Shapira: Right, and that's what Solomonoff induction does.
It basically orders all the different possible Turing machines that could ever describe anything, and it puts the shorter ones earlier in the ordering. Which means that if you have one Turing machine that says there are a million angels doing nothing, that's going to be deprioritized compared to the Turing machine that says there are no angels. It's just simpler.

Vaden Masrani: Can I step back for one second? If, in your view, I'm dodging the question, feel free to re-ask it, but I just want to frame things a little—please criticize me if it seems like a dodge. Solomonoff induction, and Bayesianism, and all this stuff tends to fiddle with the probabilities enough to come up with a justification for something Popperians already have. There are reasons why we like simplicity. One of them is that simple theories have higher content, and thus are more powerful and easier to refute. That, in my view, is why we like simplicity. However, you could come up with a Bayesian story about why we like simplicity. You could talk about it in terms of induction, in terms of Solomonoff induction; you could fiddle with the math and say, "This is how we get this conclusion from Bayes." And you can do this all over the place. It's like the Bayesian epistemology hypothesis-testing stuff, where you can come up with a post hoc story about why Bayes' theorem is what led us to the discovery of the double helix. But it just doesn't give us anything new—we'd already discovered that, and then you come up with a story after the fact. (The double helix is another nice example of why I like content, but I'll stop repeating myself.) When you ask these kinds of questions—yeah, you can always tell a story from Bayes' perspective about why we value this stuff, and we're giving you an alternate story, and ultimately the listeners are going to have to decide. But the starting point, that Solomonoff induction is right, is wrong, and we can argue about that. If you start with the assumption that it's right, then of course you'll come up with the story, from Solomonoff's perspective, about why we value simplicity. But then we're talking completely at cross purposes, because I reject Solomonoff induction because I reject induction—any kind of induction is just wrong. You can listen to us argue about this for hours and hours if you like. But you're starting with the assumption that it's right, and that already means we're not talking properly to one another.

Liron Shapira: Yeah—and by the way, I do want to hit on the Hume's problem of induction stuff. I know you guys have talked about it, and I find it interesting, so I want to get there. But first, I've got a little more red meat to throw at you on this topic of how we actually apply Popperian reasoning to judge different hypotheses that seem like they could both apply. I've got an example for you. Let's say I take a coin out of my pocket. We're just at a party, right?
I don't seem like an alien or whatever; it's just normal people. I take out a quarter; it looks like a totally normal quarter; you don't suspect me of anything. And I flip it ten times, and it comes up with a pretty random-looking sequence—say: heads, heads, heads, tails, heads, tails, heads, heads, heads, tails. So it doesn't look like a particularly interesting sequence, and you say, "Okay, this just seems like the kind of thing I'd expect from an ordinary fair coin." And then I say: I've got a hypothesis for you. This is a coin that always gives this exact sequence when you flip it ten times. You always get heads, heads, heads, tails, heads, tails, heads, heads, heads, tails. So if I flip it again ten times, I'm going to get that exact same sequence again. That is my hypothesis—and I'm drunk, right? So I don't even seem like a credible person, but I'm throwing out the hypothesis anyway. You think it seems to be a fair coin, unless I'm playing an elaborate trick on you. So my question for you is: what do Popperians think about contrasting these two hypotheses—fair coin, versus that exact sequence every time you flip it ten times? Which of the hypotheses seems more appealing to you, in terms of being, I don't know, harder to vary, or just better?

Vaden Masrani: I would just do it again and quickly find out. You just flip the coin again—do you get—

Liron Shapira: Well, what if—

Vaden Masrani: No? Then you've found out.

Liron Shapira: What if you have to bet a thousand—

Vaden Masrani: Here?

Liron Shapira: Whatever you predict, you have to bet a thousand bucks. So I'll let you do it again, but we've got to gamble on it.

Vaden Masrani: I would say I don't want to; I just want to do it again. If I'm at a party and someone tells me about a magic coin, first I'd be like, "Well, that's crazy—how is that even possible? Are you actually doing a magic trick, or can you control your thumb well enough to get the same sequence?" It strikes me as prima facie completely implausible. So yeah, I would take the money. No—I wouldn't take the bet, because there's probably—

Ben Chugg: He's asking you to bet for a—

Vaden Masrani: —some trick going on. So yeah, exactly: I would say no, I'm not going to—

Liron Shapira: Imagine this really is just a random drunk guy who has no incentive, doesn't want to trick you. It really does just seem like somebody's dicking around, and you have no reason to suspect anything.

Vaden Masrani: I wouldn't, I wouldn't—

Liron Shapira: Forget about the bet, okay? My question for you is—you brought up "let's flip it again," and I'm saying, for whatever reason, before you flip it again, just between you and me, you say: "Liron, I'm about to do some Popperian reasoning."

Vaden Masrani: But this is fundamentally a great example of the difference between Popperians and Bayesians. Ben brought up a great example about this a long time ago. The big difference is that a Popperian would just do it.
A Bayesian would go off into their room, spend six hours writing a 20-page blog post about how they can formalize the probability space of their beliefs in this particular circumstance, post it to LessWrong, and then spend another 20 hours arguing about the probabilities.

Liron Shapira: So Popperians wear Nike.

Vaden Masrani: Exactly—just do it. They would say, "Oh, okay, that's interesting. Let's just try it. Oh, it's wrong? Okay, good. Move on."

Ben Chugg: So, maybe to make that answer slightly more globally applicable and remove a tiny bit of the snark: I think a good way to coherently talk about the difference in worldviews—which I think is actually very interesting—is that Bayesians are extremely focused on having accurate beliefs (put aside exactly how we define "accurate") given the information you have right now. Popperians are very interested in generating new hypotheses and figuring out what we can do to grow our information. So: if we have multiple competing good hypotheses for some phenomenon, how do we go about discriminating between them? That's where the crucial test and so on comes in, right? So there is—

Liron Shapira: Yeah, so I get that your mindset is to just do it, and I get that you want to find new information, but—

Ben Chugg: No, sorry, I'll stop dodging the—

Liron Shapira: What do you make of the challenge?

Ben Chugg: I'll answer it. But the reason Vaden is having trouble answering your question is that there is this extreme difference in emphasis between the two worldviews—so much so that I've recently started struggling to call Bayesianism "Bayesian epistemology," because it's not really about epistemology in the sense of growing knowledge. It's about epistemology in the sense of justifying current credences in certain hypotheses. The emphases of these two worldviews are different, and they still conflict in important ways, as we've—

Liron Shapira: I mean, Solomonoff induction does grow its knowledge and grow its predictive confidence, right? I think you're going out on a limb to say Bayesians shouldn't have a right to the term "epistemology."

Ben Chugg: No, no, I—

Vaden Masrani: That's not what he said.

Ben Chugg: If you want to call it that, fine. I'm just saying—and honestly, this is me trying to give a boon to your side—I think we're often arguing at slightly cross purposes, because the Bayesians are extremely focused on uncertainty quantification. They want to say exactly what your credence should be given the current information. I think they're less focused on—I mean, I haven't heard many Bayesians talk about generating new hypotheses with infinitely many Turing machines and stuff, right?

Liron Shapira: I can tell you, I personally don't spend a lot of time trying to precisely quantify hypotheses. When I'm just manually doing things—"hmm, this seems like a good move to try"—when I'm thinking things through roughly, I just know in my head, on a meta level, that I'm approximating the numerical ideal. And that's it. I live my life with an approximation.

Vaden Masrani: You're assuming in your head that that's true. But sure—let me not dodge the question.
Can you ask the question again?

Ben Chugg: Yeah, I'll just—

Liron Shapira: Yeah, sure. So there's this weird sequence of—

Ben Chugg: Let me just answer and say: all else being equal, I would take the bet, or whatever—I would say this is probably implausible. If I can see how they're flipping it, it seems extremely implausible that there's a mechanism by which they can control the coin—there's no string in the air or something. So the only plausible mechanism by which this exact sequence could happen is basically their finger, right? Because you're seeing the same coin, and if the coin is memoryless—which seems like a reasonable assumption—it wouldn't know that it has flipped a head; it wouldn't know its own history. So it has to be the—

Liron Shapira: Exactly. And by the way, just so you know my intent: I'm not going to trick you. I'm not going to go, "Psyche!" That's not where I'm going with this.

Ben Chugg: No, no. Yeah—so I'd probably say it's very implausible that it's exactly that sequence. And if I were in the mood to bet against it and had the spare income to do so—I'm a PhD student, so I don't have much spare income—but if it's 30 cents, then maybe I'll take the bet.

Liron Shapira: Right. So this is my question for you: this seems like a great toy example of how you guys actually operate Popperian reasoning on a toy problem. Because you're saying it's a fair coin, but it seems to me like the always-exactly-HHHTHTHHHT hypothesis is a very rich hypothesis, right? It has more detail, and it's harder to vary. Because when you say "fair coin"—wow, "fair coin" is such an easy-to-vary hypothesis. You could have said it's a 60–40 coin; you could have said it's a 70–30 weighted coin. In fact, in my example you got seven heads, so why did you say fair coin instead of a 70–30 weighted coin? You're the one picking a hypothesis that's arbitrary, that's easy to—

Vaden Masrani: You just put a thousand words in our mouths that we did not say. We didn't use—

Ben Chugg: Also—yeah, I think I can resolve this quickly. You're right that "you can flip exactly that sequence" is an extremely strict hypothesis. It's very rich, and it has lots of content: the content says every time I do this, I'm going to get this exact sequence. What is content good for? It's good for discriminating between different theories. And how would we do that? We'd try to flip the coin again. So Vaden wasn't trying to dodge the question by saying "flip the coin again"—content is inherently tied to how we—

Liron Shapira: Okay, but for the sake of argument, you don't get to flip the coin again. You have to just give me your—

Vaden Masrani: Hold on, hold on. But Liron, you're asking, "How would a Popperian deal with this circumstance," right?
That's your question—

Liron Shapira: Okay, and I get that you really want to flip the coin again, but can't you just assume that you have to give me your best guess without—

Ben Chugg: But I just gave you a bunch of—

Vaden Masrani: Yeah—you're asking, "How would a Popperian deal with this?" I say how we would deal with it, and you say, "Okay, but assume you can't deal with it the way you want to deal with it. Then how would you deal with it?"

Liron Shapira: Okay, so you're basically saying you—

Vaden Masrani: I would run—

Liron Shapira: —have nothing to tell me before flipping again? Nothing at all?

Vaden Masrani: Where are you trying to lead us? We would run another experiment, or we would take the bet, because it's so implausible that a coin could do this. Either the guy has mastered his thumb mechanics in such a way that he can make it happen, or there's some magic coin that somehow knows how to flip itself in the exact sequence being requested—and both of those seem completely implausible. So I would take the bet. I wouldn't count the probabilities and come up with some number in my head. There—I've given you an answer, or a couple.

Liron Shapira: Yeah. So one reason I wanted to bring up this example—what originally inspired me to make it up—is to show a toy example where "hard to vary" seems to flip, to become counterintuitive. Because I do think I've successfully presented an example where the 50–50 hypothesis, the fair-coin hypothesis, actually is easy to—

Ben Chugg: You're thinking only in terms of—

Vaden Masrani: The hard-to-varyness—

Ben Chugg: You're thinking only in terms of statistics, though, right? In terms of, like—

Vaden Masrani: Explanations of—

Ben Chugg: —in terms of explanations of the underlying physics and such. Then it's: is he magically doing this with his magic thumb? That's easy to vary.

Vaden Masrani: That's the part that's easy to vary. The part that's hard to vary would be: this is not possible, the guy is wrong, and I'll take the bet. Because the easy-to-vary part is: magic thumb? Is he a super-being? Is he telekinetic? Are we living in a simulation? I can come up with a thousand ideas all day—that's what's easy to vary—and I reject all of them, because I'm just making them up as I go. That's all rejection, and that's how—

Liron Shapira: So it sounds like the resolution—I don't know if it's a paradox—but it sounds like the resolution has to zoom out and appeal to the broader context: look, we live in a physical world; coins are physically hard to make this tricky, such that they always come up in this exact sequence. As a Bayesian, I would call that my prior probability. What would you call it?

Vaden Masrani: Being a common-sense—

Ben Chugg: Yeah, just knowledge about how the—

Vaden Masrani: Thinking about how the world works. It's just implausible. So I would, yeah. Anyone who doesn't study any philosophy would come up with approximately the same answer.
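[Editor's note: for readers who want Liron's side of this toy example in mechanical form. The two hypotheses differ enormously in likelihood—the exact-sequence hypothesis assigns probability 1 to the observed flips; the fair coin assigns 2^-10—so everything turns on the prior, which is exactly the "coins are physically hard to make this tricky" judgment. A minimal sketch, using the 97%/3% prior split Liron offers a moment later:]

```python
# Bayesian comparison of the two party-trick hypotheses.
# Observed: the 10-flip sequence H H H T H T H H H T.
# Priors are gut numbers taken from the conversation, not derived
# from anything.

prior_fair = 0.97          # "just a regular fair coin"
prior_exact = 0.03         # "trick coin that always yields this sequence"

# Likelihood of the observed 10 flips under each hypothesis:
lik_fair = 0.5 ** 10       # fair coin: ~0.000977 for any specific sequence
lik_exact = 1.0            # exact-sequence coin: predicts these flips certainly

# Posterior odds (exact : fair) after the one observed run of 10 flips:
posterior_odds = (prior_exact * lik_exact) / (prior_fair * lik_fair)
print(f"posterior odds exact:fair = {posterior_odds:.1f}")   # ~31.7

# The rich hypothesis wins the likelihood battle by a factor of 2^10,
# so unless its prior is below roughly 0.001, even one run favors it --
# which is why the debate keeps circling back to where priors come from.
```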
That, like, that doesn't seem Liron Shapira: I wanted to challenge you more, I think I would probably have to put in some work where I like, invent a whole toy universe that doesn't have as many, uh, as much common knowledge about, like, the laws of physics, so that it's not, like, super obvious that the coin is, is fair, because I do think there's some essence that I would distill as a Bayesian to just be like, you know, what I would get out as a Bayesian, what I think is, like, a profound lesson that's worth learning, like, you know, imagine, you A million, right? Imagine a million flips in a row, right? So then, even if the laws of physics made it really easy to make unfair coins, the fact that it just looks like there was no setup and you, and, you know, it could just be, it could even just be your own coin, right? Your own coin that you just got from like a random K marker, a 7 11, right? You, you flip it, um, and it's like your own coin and it came up, you know, a hundred times like a totally random thing and you're like, I have a great theory, it always does this exact sequence, um, um, Yeah, I mean, I, I do find it convincing that it's like, look, the laws of physics makes that a priori unlikely, but I feel like the, a big advantage of the fair coin hypothesis is also that it, uh, you know, it's a priori much more likely than the hypothesis of, like, this exact sequence. Like, that's kind of a ridiculous hypothesis. Like, where did I get that hypothesis without actually flipping the coin? You know, like, do you see there may be something there Vaden Masrani: Well, you can always tell a Bayesian story after the fact, yeah, it's, it's a priori less likely, we all agree on that, and then Ben and I would want to say it's less likely because it's a bad explanation, so we just reject it, and you'd want to say it's less likely and, okay, how much precisely likely is it less than the other, let's come up with a count, let's say, okay, so there's eight heads and there's two tails and let's come up with a problem, we just say, you don't need that, it's, it's a ridiculous thing to assume, Ben Chugg: turn around, like, how would you, so what is your prior probability on the coin being fair? Vaden Masrani: Yeah, great question. That's a great question. Yeah. What is your Liron Shapira: Yeah, I mean if it generally if somebody does a party trick and I don't judge them as somebody who like wow this guy could Actually be doing some pretty fancy magic right if it just seems like a random drunk friend Then I'd probably be like okay. There's probably like a 97 percent chance This is just a regular fair coin and the other 3 percent is like okay This drunk guy actually got access to like a pretty good magic Ben Chugg: Okay. So what you gave is like a bunch of reasons and then a number, right? We're just giving you the reasons. Liron Shapira: Yeah, and the number is a Ben Chugg: I know, exactly. We're just giving you the reasons and no number. Liron Shapira: Yeah, I Vaden Masrani: it's such a ballpark that we just don't need the number like what's the number four Liron Shapira: Yeah, I mean, I get that, right? So, I mean, the standard Bayesian response is to be like, Well, look, if there was, like, a market, right? Like a betting market, or a prediction market, or even a stock market, right? And, actually, this gets to another section that I was going to hit you with, which is like, okay, so expected value, right? What do you think of this idea of calculating expected value? Ben Chugg: Oh boy. 
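[To make the arithmetic in this exchange concrete: a minimal sketch, assuming Liron's off-the-cuff 97/3 prior and a 10-flip sequence. The variable names and the single-update setup are illustrative, not anything the speakers committed to on air.]

```python
# Posterior on "fair coin" vs "trick coin that always produces exactly
# this sequence", after observing one matching 10-flip sequence.

p_fair, p_trick = 0.97, 0.03   # Liron's stated prior

lik_fair = 0.5 ** 10   # any specific 10-flip sequence is 1/1024 under a fair coin
lik_trick = 1.0        # the trick hypothesis predicts exactly this sequence

evidence = p_fair * lik_fair + p_trick * lik_trick
print(f"P(fair | sequence)  = {p_fair * lik_fair / evidence:.3f}")    # ~0.031
print(f"P(trick | sequence) = {p_trick * lik_trick / evidence:.3f}")  # ~0.969
```

This is the counterintuitive flip Liron is pointing at: the "rich", hard-to-vary exact-sequence hypothesis is crushed by the prior but rewarded enormously by the likelihood once the sequence actually shows up.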
Vaden Masrani: I think it's like the Pythagorean theorem. It's useful in some circumstances and not useful in other circumstances, and it's a banal mathematical fact that statisticians use all the time. And for whatever reason, it's also this, like, crazy, philosophically metaphysical thing that the Oxford philosophers like to use. Like Will MacAskill and Toby Ord and Hilary Greaves, and Eliezer Yudkowsky, and some day traders too. I know what you're about to say. Yeah, then that's where the problems come in, and we could talk about this a lot, but for a first approximation, that's what I think.
Liron Shapira: Take the example of, hey, we're at this party. Somebody just did something with a coin who seems to be trying to gaslight me, like it's this coin that always comes up, you know, this sequence of 10 in a row. But then some trader overhears the conversation and walks by, and he's not a confederate, he's just honestly somebody who likes trading and likes markets, and he's like, hey, let me make you guys a market on this, right? Like, what odds do you want to give for this bet? And that's where, okay, yes, I pulled a number out of my butt, but, like, this guy wants to make odds, right? So, like, you have to plug something in.
Ben Chugg: Yeah, if you're willing to bet,
Liron Shapira: Yeah, and you could be like, well, Popperians would just walk away. We wouldn't participate, right? But, like,
Vaden Masrani: No, we would just run the experiment again. We'd run the experiment again and find out. Oh, no, we would, no, okay, we would run the experiment again and then decide if it's so ambiguous as to require us doing the experiment like a hundred thousand times, and then collecting data on it, and then building a statistical model, and then using that statistical model to figure out what's actually happening. Because, yeah, there's many experiments that are really challenging to run, and you get differences every time you do it, and that's where data and statistics come in and are applicable. Um, but, and that's different than just saying the expected value of this is going to be big. And where are you getting this stuff from? You're just making it up, and, uh, it's useless in most cases unless you have data. And yeah, we can talk about the cultural stuff of traders and, um, like, Sam Bankman-Fried. Like, if you read Michael Lewis's book, they talk about his culture at Jane Street and how they put expected values on everything. And that is a cultural thing, um, which people do, and we can talk about the culture there too, but that's just very different than how most people use it.
Liron Shapira: An interesting example from that book is, I mean, if you look at, uh, Jane Street and, you know, the famous Medallion Fund. So there are funds that are placing bets, uh, you know, that they perceive to be positive expected value in various markets.
Ben Chugg: I mean, using a huge, a lot of data and statistics and a boatload of assumptions about how the past, you know, the last five days of the market reflects something to do with the next day of the market, right?
Vaden Masrani: It's completely... So at the beginning of this conversation, I started at least by saying: Bayesian statistics, all good. Bayesian epistemology, bad, boo. So the Bayesian statistics part is what you're just asking about, because you have, like, 50 years of financial data, and you can run, like, trials, and you could do simulations with your data, and you can see what gives you a slightly better return. And that is just a completely different thing than what the longtermists and the Bayesians, like the Yudkowsky-style Bayesians, are doing. So just to make sure that we, in this
Ben Chugg: Yeah, we're not anti-statistics.
Vaden Masrani: If you have data, all good. Yeah, yeah, or anti-expected-values, because if you have data, all good. Like, you could do it wrong, of course, but I'm assuming that, like, for the purpose of this conversation, we're doing it well. And, like, yeah, Jane Street and all these, like, uh, big hedge funds, like, their whole life is trying to get, like, slightly better odds using, like, supercomputers and blah blah blah, and so, yeah, yeah, so that's fine, yeah.
Liron Shapira: For the audience watching the podcast, you guys know I recently did an episode where I was reacting to Sayash Kapoor, and I think he made a lot of similar claims. I don't know if he subscribes to Karl Popper, but he had similar claims about, like, look, probabilities are great when you're doing statistics, but when you're just trying to reason about future events that don't have a good statistical set, then just don't use probabilities. Do you guys know Sayash Kapoor, and does that sound
Ben Chugg: No, but that sounds great. Yeah, yeah, it sounds like we should
Liron Shapira: Okay.
Vaden Masrani: I don't know where he's coming from, but that's, uh, yeah, totally. Which, well, actually, so can I, can I actually riff on that a little bit? Which is that Popperianism bottoms out onto common sense. So it does not surprise me at all that someone who doesn't read Popper, know Popper, know Deutsch, is saying similar things. Because if you just... So people say Popper's a cult too, so we're both in cults. If you're not in the Popperian cult, or the, um, Yudkowsky cult, and you're just thinking about how the world works, you'll likely just come up with a bunch of Popperian stuff, because it just bottoms out into common sense, typically, yeah.
Liron Shapira: I would also like to claim that Bayesianism bottoms out into common sense.
Vaden Masrani: That's fair, yeah, we'll let the audience decide, yeah,
Liron Shapira: Okay, um, two can play at that game. Uh, but yeah, so, what I was saying
Vaden Masrani: Yeah,
Liron Shapira: So a while ago I was saying, look, when we do Solomonoff induction, I claim that that's the theoretical ideal of what I do in practice, which is often just using my muscle memory. Similarly with expected value, I would make a similar claim that, like, realistically, right, in my life, I don't make that many quantitative bets, right? I'm usually just, like, coasting, not really doing that much math. But I do think expected value is the theoretical ideal of what I do. For instance, I think we can probably all agree that, like, if you had to, let's say you had to bet $100 on something, and you either had to bet that, like, the sun will rise tomorrow or that Kamala will win the election, it's like "sun will rise tomorrow" is gonna be, like, a much stronger bet, right? So, like, that would be, like, some primitive form of the expected value calculation.
Ben Chugg: It's a, I mean, we have a very good explanation as to why the sun will rise tomorrow. Um, yeah,
Liron Shapira: So it,
Vaden Masrani: And we don't really know too much about the, um,
Liron Shapira: So, okay, so, let me, let me ask you about your MO, right? So, so let's say I'm asking you to, you have to bet $10, uh, and, you know, you have to give me odds at which you'd be willing to bet that $10. There's probably, like, with Kamala, let's say, I'm sure you guys don't have a good explanatory model whether she's definitely going to win or not, right? Because it's, like, too hard to know. But if I said, look, it's your $10 to my $1,000, you know, just take either side, wouldn't you just take a side, because it seems pretty appealing?
Ben Chugg: Sure. $10. Yeah.
Vaden Masrani: I mean, like, if you're, yeah, um, I mean, like, if you're guaranteeing that we'll, yeah,
Liron Shapira: When I ask you this question, you're not like, no, Liron, run the election, run the election. No, you're not stonewalling, right? You're saying, sure, I'll put down $10 to your $1,000. That seems pretty appealing, right? So didn't you just imply that you think the expected value of betting on Kamala is more than, you know, more than $10 in that situation?
Ben Chugg: Yeah. So you're sort of defining expected value after the fact. All I'm saying is, like, I'll take this bet 'cause it seems like a good deal. I have $10 of disposable income and, you know,
Vaden Masrani: What I'm not doing, so what I'm not doing, to be very clear,
Liron Shapira: If it seems like a good deal, I think you're approximating the mathematical ideal of expected value.
Vaden Masrani: Uh, because you can use that framework to describe whatever the hell you want to describe, right? So, the expected value, so, okay, like, like, for, for the listeners. So, what is expected value? So, let's just talk about discrete stuff. So, it's a summation, yeah? So, a summation of the probability of a thing happening times the utility of a thing happening, and then you sum it all up. Okay, great. So, what are you gonna put in that sum? Well, you have to make that up. So, whatever you want to make up, you can do it. Then you have to make up the utilities, and then you have to make up the probabilities. So, what you're doing is you're taking one made up number, uh, multiplied by another made up number, and then you're doing this a bunch of times, and then you're adding up all these made up numbers, and then you're getting a new made up number, and then you're making a decision based on that new made up number. If you want to do that, and make your decision based on that, go nuts. You just don't need to do that. So,
Liron Shapira: When you're putting up your
Vaden Masrani: Instead of multiplying made up numbers and coming up with a new made up number, you could just start with the final made up number. And you could also just start with the realization that you don't actually need these numbers in the first place, because the election is like a knife edge. And if someone is offering, you know, like, a thousand to one odds or something, then you can take money off of them, because they are mistaken in their knowledge about what's going to happen, because they're falsely confident about something. And so you don't need expected
Liron Shapira: Knife edge. The term knife edge is such a loaded term. You're implying that it's equally likely to go either way. How are you making this loaded claim? You don't know the future, Vaden.
Vaden Masrani: Because we have a data set here, which is the 330 million people who are going to vote and the polls that are trying to approximate that. So polls are a sample of a population. This is statistics. This is what I'm going based off of.
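[For concreteness, here is the discrete expected-value sum Vaden spells out, as a few lines of Python. The outcomes, probabilities, and utilities are made up, which is exactly his complaint about the exercise.]

```python
# Expected value = sum over outcomes of P(outcome) * utility(outcome).

outcomes = {
    # outcome: (probability, utility in dollars)
    "win_bet":  (0.55, 1000),
    "lose_bet": (0.45, -10),
}

expected_value = sum(p * u for p, u in outcomes.values())
print(expected_value)  # 0.55 * 1000 + 0.45 * (-10) = 545.5
```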
And this is why Bayesian statistics is fine, because we know how polls work and we know how counting works, and we have 330 million repeated trials of whether people are going to do this. This is where statistics makes sense.
Liron Shapira: But there's a dark art where all of these pollsters are building their models, right? Because the election is so close that the way you build your model is going to really define which candidate you think wins.
Vaden Masrani: And that's a huge problem. Yeah. And I could rephrase that. I could rephrase that problem by saying there is way too much subjectivity injected into these equations.
Liron Shapira: It sounds like we all really agree, though, in our gut, that, like, the election is pretty close to 50/50 of who's going to win. Like, am I, do you want to push back on that, that it's not roughly 50/50 who's going to win?
Vaden Masrani: Um, I have, I have a pet theory about this, but it, uh, it's, it's not, uh, worth taking seriously. So I'll just go with the polls, but I think polls are inaccurate, but
Liron Shapira: And I want to bring up the prediction markets, right? So now there's, like, Polymarket, Kalshi, Manifold. So these markets have gotten a lot of action in recent weeks and months. And it's pretty sweet, right? Because you can watch these markets, and, like, a lot of people betting on these markets have a Bayesian interpretation of what that fluctuating number means, right? I mean, how else, or, like, do you think that it's crazy to give a Bayesian interpretation of those numbers as, like, good odds that you could use if you're placing bets?
Vaden Masrani: Yeah. I pretty much just ignore prediction markets, but that's my personal choice. Yeah. Uh, Ben. Um, I mean, what have you learned from the prediction markets that you haven't learned from polls? Out of curiosity.
Liron Shapira: Oh, what have I personally? Um,
Vaden Masrani: Yeah. Just, like, what value have you got from the prediction markets that you haven't got from polls?
Liron Shapira: A lot of times it's redundant. I just, I see prediction markets as being a little bit finer grained than a poll, so when the polls are kind of ambiguous, sometimes I'll look at a prediction market and I'll see more signal. Um, I think with, maybe with Biden dropping out, that, I guess that's not, there's no direct poll about that. I mean, there was probably no single representative poll that was the Biden-dropping-out poll. But, like, I guess I just want to invoke, like, there are some times where I see something will happen where it's not fully captured by, like, the ontology of a poll, but, like, there's a specific prediction market for it, and it spikes, and then I really do think, like, oh man, that spike seems like I should update my expectation of what's going to happen.
Ben Chugg: There's a lot going on here. Uh, let me just touch on a few points. So I think your initial thrust with, like, the betting $10 to $1,000 odds, like, these are really good odds, you'd probably take that, and if I keep decreasing the $1,000, you're gonna hit a point where you don't wanna take that bet anymore, right? Um, sure, so if you wanna then do some arithmetic on this and come up with, like, the expected value at which I'm willing to bet on Kamala, uh, versus Trump, and you wanna call that expected value, sure, you can do that.
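[Ben's "keep decreasing the $1,000" point pins down what traders call an implied probability. A minimal sketch, with illustrative stake and payout figures:]

```python
# The payout at which a $10 bet stops being attractive implies a probability:
# p * payout - (1 - p) * stake = 0  =>  p = stake / (stake + payout).

def implied_probability(stake: float, payout: float) -> float:
    return stake / (stake + payout)

for payout in (1000, 100, 20, 10):
    p = implied_probability(10, payout)
    print(f"$10 to win ${payout}: break-even at p = {p:.3f}")
# As the payout shrinks toward $10, the break-even probability climbs
# toward 0.5; where you stop taking the bet reveals your number.
```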
Um, we, we just want to emphasize that this is an extremely different quantity than what statisticians are doing when you have a well defined data set and you're looking at a well defined outcome and you're counting things or making very rigorous statistical assumptions. And calling them both expected value is unhelpful at best and actively harmful at worst, because these are not the same sort of quantity. Now, I'm not disputing that people, you know, think someone's gonna win versus another person, more or less, or have different risk tolerances. Some people like to bet, some people don't like to bet. Uh, they have different utility functions. And so if you wanna, you know, if you wanna press them on that and make them bet at certain odds and call that expected value, fine. But this is a different thing than statistical expected value, uh, which is what I meant at the beginning. Okay, fine. That's one point. The second point is, yeah, prediction markets are interesting as an aggregate, uh, aggregate knowledge about how people, sometimes few people, sometimes many people, with money, are willing to bet on an election. It's a summary of information, in that sense, right? Um, and again, so you can now talk about different people in this market's expected value. Uh, the whole point is that their expected values are different. That's why you see differential outcomes in markets, right? Um, and there's not, there's no way to adjudicate that, precisely because it's subjective. And this is, again, why this is different than statistical expected value, when we have well defined statistical models.
Vaden Masrani: Can I ask you a question before you, before you respond? Which is, how do you deal with the fact that you have many different prediction markets and they all say different things?
Liron Shapira: I mean, usually arbitrage brings them pretty close together, no?
Vaden Masrani: Uh, but that doesn't, because they are still saying different things, right? Like, um, I may be factually
Liron Shapira: Where there's a persistent, yeah, I mean, like, I know on Trump versus Kamala, it's always plus or minus 3%.
Vaden Masrani: Yeah, actually I'll, I'll, I'll back off on this and just, um, let the listeners or the viewers check, because I haven't checked it recently, but the
Liron Shapira: If you're correct, that arbitrage is possible. I mean, that's not even necessarily a statement about epistemology. That's just, like, weird. Why aren't
Ben Chugg: I think maybe a,
Liron Shapira: I mean, I guess that could potentially be a statement about epistemology, right? If you're saying
Vaden Masrani: I want to fact-check myself, and also, like, I would love for the, the, the commenters on here to just, whatever time they're looking at it, just pull up four prediction markets, take screenshots, and put them underneath. And let's see how
Ben Chugg: Or maybe a better example here is just the discrepancy between, like, Nate Silver's models, right, and, like, Polymarket. So Polymarket, I think, is showing a way bigger, uh, edge for, uh, Trump at this
Liron Shapira: Yeah. So, so Polymarket is allowed to have more models, right? Nate Silver constrains which models he's allowing himself to use, right? Other people are like, oh, I also want to weight this potential model, right? So it's not that surprising that Nate Silver hasn't captured every possible model that you might want to weight into your prediction.
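[On the arbitrage point: a minimal sketch of why persistent disagreement between two markets on the same binary event is free money. The prices and market labels are made up:]

```python
# Buy YES on the market quoting it cheap and NO on the other. Exactly one
# side pays out $1, so if the two prices sum to less than $1 the profit
# is locked in regardless of the outcome.

def arbitrage_profit(yes_price_a: float, no_price_b: float) -> float:
    cost = yes_price_a + no_price_b  # always pays out exactly $1 total
    return 1.0 - cost

# Market A quotes 52% YES; market B quotes 58% YES, i.e. NO costs $0.42 there.
print(round(arbitrage_profit(0.52, 0.42), 2))  # 0.06 guaranteed per $0.94 staked
# Traders harvesting this spread is what drags quoted probabilities together.
```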
Vaden Masrani: But my, um, my question is, if you are willing to grant for the sake of argument that they are different, how do you decide which one to follow? That's my question. Because if they're all saying different things, and
Liron Shapira: It's the crux of our disagreement, right? I think that if you were correct, so I'm happy to take a hit if you're actually correct, that prediction markets have persistent disagreements in the probability of something, that absolutely would be, and given that they were liquid, right, assuming it's easy to make money by, like, buying one and shorting the other, um, that absolutely would be evidence for the meaninglessness of Bayesian probability, right? And then conversely, the fact that I claim that this isn't factually true, that you actually have very narrow spreads, right? I think that is evidence for the meaningfulness of Bayesian probability. And I actually have further evidence along those lines, which is, have you ever checked their calibration? Um,
Vaden Masrani: Market calibration is, yeah, okay, yeah, let's go down
Liron Shapira: So Manifold, Manifold is the one I saw. So they look through a bunch of past markets, right? And these are Bayesian markets. These are not, like, doing statistics. They're just predicting an uncertain future about random questions. And when the market says, at a given randomly sampled time, that there's, let's say, a 70 percent chance that the market is going to resolve yes, like, whatever it's predicting, right? Like, will Russia invade Ukraine, right? Like, all these random questions. If the market is saying at any given time 70%, they went back and they checked the calibration. And, uh, I can put a graph in the show notes, but it's ridiculously accurate. Like, uh, for example, the data point for 70 percent is, it's like 68%. This is across a random sample of, of Manifold markets.
Vaden Masrani: There's a post on the EA Forum that says that that's only true with a time horizon of less than five years, and it might even be one year. So the, um, the thing
Liron Shapira: So we can use Bayesian probability to predict all events one year into the future. That seems like a pretty big win for
Vaden Masrani: No, hold on, though, because the whole superintelligence thing is not one year into the future, and let's talk about, okay, we're gonna go down this path, let's do it. Hold on, wait, no, no, let me say a few things, um, let me say a few things. Which is, if you want to talk about superforecasting and Philip Tetlock and stuff, um, you have to read the appendix of his book, where he says that any prediction beyond ten years is a fool's errand. You shouldn't even try it, and you'll embarrass yourself if you do. So, point number one. Point number two is that on the EA Forum, someone who is very sympathetic to Bayesians did an analysis on the calibration of, I think it was Manifold, and when you look at these scores, you have to account for how far into the future they are. And so, yeah, it's, it's interesting. It's totally possible to make predictions successfully within a year, but the thing that you're predicting matters a lot. So if you're going to predict that the next year is going to be kind of similar to this year, that's, like, a default prediction. It's going to get you pretty high calibration, but that's also completely boring and uninteresting. It's not a huge concession at
Liron Shapira: You're basically saying, as it were, you know, these Bayesians who think that they can take something that doesn't have a mass of statistical data and slap a quantified probability on it, as given by a prediction market, yes, as long as the time horizon is less than one year, they can expect near perfect calibration.
Vaden Masrani: Yeah, I can do that too. I predict that in a year, on Christmas, there will be a flux of flights. It depends on the predictions you're making. If the predictions are simple, anyone can do it and get great calibration. If the predictions are really complicated,
Liron Shapira: Look at the predictions, they're complicated, right? They're, they're things like Russia will invade
Ben Chugg: There are things that these people are willing to bet on that they have differential knowledge about with respect to the rest of people, right? It's not the same people always betting
Liron Shapira: To paraphrase what you guys are telling me, you're basically saying if there's a market saying Russia will invade, let's say it's, you know, January 2022, so we've got, like, a one month time horizon, or let's, right, let's say there's a market saying, hey, Russia will invade Ukraine by end of the quarter, right? Like, 'cause I think that's when they
Vaden Masrani: There's, there's not a
Ben Chugg: Yeah, there's no hypotheticals. Like, there
Liron Shapira: Right? So in that scenario,
Ben Chugg: There's, superforecasters gave this 15 percent before Russia invaded Ukraine, and they gave COVID 3 percent that there'd be over 100,000 cases in the U.S. by March.
Liron Shapira: Yeah, and remember, we're talking about calibration here, right? So I'm not saying the market gave it a 99 percent chance and then it happened. I'm saying if they gave it a 15 percent chance, then it would fall in a class of markets that were saying 15%. And what I'm saying is, calibration data shows us that 15 percent of those markets do resolve yes. Like, a market is generally well calibrated. So it sounds like you guys might be conceding that with a small timeframe, under one year, there is such a thing as a well calibrated Bayesian probability. Oof,
Vaden Masrani: Completely worthless.
Liron Shapira: I mean, I just think that that concession is, you know, as a, uh, in the context of a debate, I do, I feel like that concession is almost conceding everything, because
Vaden Masrani: It's not, I mean, there's a complete difference between making predictions in, uh, the, yeah, yeah, what,
Liron Shapira: Because, by the way, this is unexpected, right? It's not like you came in being like, okay, you Bayesians are so cocky because you have this amazing tool called the prediction market where you can nail calibration for things in a year, but let me tell you how bad Bayesians choke after one year. Like, that's your position?
Ben Chugg: I'm, just, wait, wait,
Vaden Masrani: What are you talking about? Yeah. What are you talking about here? Uh, let's just zoom out a little sec. Like, um, no, Ben, you go first, but I
Ben Chugg: I'm, I'm confused. Um, yeah, I'm just confused about the claim you're making. So, what prediction markets are not is consensus on probabilities. So, right, so what they're doing is, you know, so you'll have a prediction market, for instance, that would converge to 50 percent if half the people thought there was a 25 percent probability of something, and half the people thought there was a 75 percent probability of something.
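[The calibration check Liron describes a moment earlier is easy to state in code. A minimal sketch with fabricated data; real analyses, like the Manifold one he cites, bucket thousands of resolved markets the same way:]

```python
from collections import defaultdict

# (quoted probability at a sampled time, did the market resolve yes?)
markets = [(0.7, True), (0.7, True), (0.7, False), (0.3, False),
           (0.3, True), (0.9, True), (0.9, True), (0.1, False)]

buckets = defaultdict(list)
for quoted, resolved_yes in markets:
    buckets[quoted].append(resolved_yes)

for quoted in sorted(buckets):
    outcomes = buckets[quoted]
    realized = sum(outcomes) / len(outcomes)
    print(f"quoted {quoted:.0%}: resolved yes {realized:.0%} (n={len(outcomes)})")
# "Well calibrated" means the realized column tracks the quoted column,
# e.g. markets quoted at 70% resolving yes roughly 70% of the time.
```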
What's not going on is, like, a bunch of Bayesian updating, where, like, you have a consensus of people all updating their prior probability. So, like, I just,
Vaden Masrani: You don't have to be a Bayesian to bet that. Yeah. So you don't have to be a Bayesian to play a
Liron Shapira: And I'm not, and by the way, I'm not using prediction markets as an example of somebody running Solomonoff induction. I'm using prediction markets as an example of having a number. Uh, you know, Sayash Kapoor's whole thing, I know you guys don't know him, but it's related to your point of, like, you guys are basically saying, where do these probability numbers come from? Well, you can't do expected value unless you're doing statistics. It seems like you could very successfully do Bayesian probability and expected value calculations if you simply refer to the numbers being outputted
Ben Chugg: I don't know, but you're already selecting
Vaden Masrani: We know how prediction markets work. Sorry, Ben. No, it's just, it's different,
Ben Chugg: But I mean, so you're already selecting for people who are choosing to bet on these markets, which are people who think they have better information than the average person. They think they have an edge, hence they're willing to bet, right? Meaning they think they have, like, a good explanation of whatever you're betting on, okay? Right? Do we agree there? Okay, so we're already in a very restricted class of people, um, who are, you know, they're taking bets for some reason. They think they have advantageous information about something. Uh, that's what betting is all about. You bet when you think other people are wrong about something; you have an explanation as to why they're wrong. Um, and so you, you put money down on it. And then what a market is, is, like, aggregating all this information. Uh, and people think other people are wrong, so they bet on the other side of that, uh, et cetera. Um, I'm a little confused how this relates to, like, a win for you. What Bayesianism says, and I think the claim you're making, is you should walk around at all times putting probabilities on every hypothesis that you conceive of, right, and constantly updating on new information. Uh, I fail to see how this, like, you know, people are betting on very specific
Liron Shapira: So, so let me restate something you might, let me test your claim here, okay? So would you claim that humanity as a whole, as a team, using the technology of prediction markets, humanity can be a really great Bayesian? Because humanity can just list a bunch of hypotheses, run prediction markets on them, and then plug in those probabilities as Bayesian updates.
Ben Chugg: Uh, no. Like, not for, not anything long-term, not, like, meaningful. Like, sometimes the future resembles the past for very important
Liron Shapira: And to be precise, it's not Bayesian updates, but it's, like, for betting, right? For expected value, for policy. And by the way, what I just described, I think, is Robin Hanson's concept of futarchy, right? You make prediction markets telling you the different probabilities of different outcomes, and then you just maximize expected value. You choose the policy that's going to maximize expected value according to the probability that you now know pretty well.
Vaden Masrani: Uh, this is an interesting thing. Okay. So your question is, if we could get all of humanity to make predictions about stuff, uh, then we still have to wait for the time to pass and then see if it was right. And a prediction will either be right or wrong. And if the prediction is greater than, like, a year or two out, then all of the predictions are eventually just gonna be 50/50, because we have no frickin' idea about what's happening. And then we have to
Liron Shapira: I'm happy that you've conceded one year, so let's just talk about
Vaden Masrani: It's not a con... it's not a concession, because we understand how this works. Like, so there are certain predictions that can absolutely be made within a year time horizon. It depends on what's being predicted. So I can predict all sorts of stuff
Liron Shapira: Would you support futarchy for one-year time horizons?
Vaden Masrani: I wouldn't, no, of course not. Futarchy sounds insane. Why would I predict, why would I support this? Um, because, uh, so I don't support futarchy, if what you just said is that you make a decision based on the whole planet's probability about what's gonna happen in a year. Is that what we're doing?
Liron Shapira: Kind of, yeah. The idea of futarchy, and we'll limit it to one year, is anytime there's a policy proposal that's going to yield returns within the next year, like, let's say you want to make sure that GDP grows 2 percent this year, right, this calendar year, so, and there's different policies, right, like, would a tax cut improve GDP,
Vaden Masrani: All, all this is telling you, it's not telling you what is going to happen. It's telling you what people believe is going to happen, and what people believe is going to happen can be completely wrong all the time. So,
Liron Shapira: Because, you know, you, you basically asked me. So futarchy would be like, you know, there's, there's two different, should we do a tax cut? Should we do a tax increase? Should we, uh, cut interest rates, right? So there's a few different policy proposals, where everybody agrees there's a target growth rate for the economy, right? Like, that's the policy outcome you want. And so futarchy would say, great, run prediction markets on all the different proposals, saying, conditioned on this proposal, you know, you're allowed to do conditional prediction markets, what would then be the resulting change in GDP? And that way voters could see, ah, okay, this policy is the one that has the best change in GDP, and then you implement that. And I think you were starting to push back, saying, like, look, this isn't a prediction. But may I remind you, these prediction markets have shown themselves to be very well calibrated, meaning you could use them as predictions, and your expected value formula would be very, you know, it'd yield good
Ben Chugg: Not everyone is voting in these, I guess, right? You're just restricting it to the people who, like, want to bet on these markets. 'Cause the whole point about prediction markets
Liron Shapira: Let's, let's say you're literally running it on, like, Polymarket, right, using the same rules as Polymarket, where it's literally just a financial incentive to participate if you think you know something. Um,
Ben Chugg: Okay. So you're going to delegate democratic decision making to just, like, an expert class of people betting on Polymarket. I would definitely not support this.
Liron Shapira: I mean, let's say, let's say we keep the same system, but we vote in politicians who are like, look, if there's, like, a weird emergency, I won't do futarchy, but, like, I get that we're all smart people, we all get the value of futarchy. So I will be setting up these prediction markets, and you guys can help advise on my policy that way.
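[A minimal sketch of the futarchy decision rule Liron describes, simplified to one conditional market per proposal quoting the chance of hitting the agreed-on target; the policies and numbers are invented for illustration:]

```python
# P(GDP grows >= 2% this year | policy enacted), one conditional market each
conditional_markets = {
    "tax_cut":      0.70,
    "tax_increase": 0.40,
    "rate_cut":     0.55,
}

chosen = max(conditional_markets, key=conditional_markets.get)
print(chosen)  # tax_cut: the policy whose market is most confident
               # of hitting the agreed-on target
```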
Vaden Masrani: So you want to find an elite class of super smart people that will be included in the prediction markets, and you want to get rid of all the dummies, because
Liron Shapira: No, no, no, no, but that, but that's a solved problem. Like, whatever current prediction markets are doing, the data is showing that they're yielding calibrated predictions. So you just, you, you just amplify what's working.
Vaden Masrani: So, the predictions you're talking about. If you're seriously talking about this as a policy proposal, I would want to see the set of all predictions that were made, and I want to figure out, okay, are these, like, kind of trivially easy predictions, or are they like, holy shit, that is impressive. So first of all, I'd want to look at the kinds of predictions that are being made. And then I want to see, like, which ones were right and which ones were wrong, and for what reason. Um, but just to zoom out for a sec, like, this is very analogous to a question of, like, direct democracy. If my car is broken, um, I could do one of two things. I can talk to, like, a few, like, uh, well knowledgeable mechanics, and ask them what they think is wrong, and they can tell me this, and I can get a couple different opinions. Or I could average the opinions of 330 million people in the population and just do whatever the average says. And you're saying that the second camp, just averaging the opinions of a bunch of people, is preferred to, like, domain knowledge about what's going to happen. And I would, in every case, take domain knowledge. And sometimes that domain knowledge is going to be in the minds of, um, people in the Bay Area, in particular, who are extremely online and like to bet on all sorts of different things. And, depending on the question, that may or may not be a good source of, of information. But there's no, like, massive, ah, you just destroyed the whole Popperian approach because some predictions are possible within a year. It's like, we have to think about what's going on here. Um, and certain predictions are definitely possible within a year, yeah.
Liron Shapira: So say you're the president, and you ran on a platform of, like, I will pay attention to the prediction markets, because I'm Bayesian and I understand the value of paying attention to prediction markets. And you're, and you're considering a tax cut, right, a generous tax cut across the board. And the prediction market says, um, GDP growth will increase, uh, more than 1 percent compared to what it would have been, if this tax cut is implemented. Markets are saying 70 percent chance, right? And now you're saying, just to repeat back what you just said now, you said, okay, yeah, sure, the president could listen to that prediction market, but he hired Larry Summers, right, or just, like, some famous economist, right, who's telling him, Mr. President, I give a 30 percent chance that
Vaden Masrani: No. He would give an explanation. He would give an explanation as
Liron Shapira: And it would come with an explanation, yeah. So, so you would say, because it comes with an explanation, and because this guy is trusted by the president, the president should just listen to him, and not the prediction market.
Vaden Masrani: They should listen to the explanation, and maybe get a couple different ones and see what makes more sense, and maybe get the people to debate a little bit.
Ben Chugg: Also, I mean, if, yeah, if there's an explanation as to, like, why the prediction market might be accurate in this case, like, say you have all these expert economists betting on this, on, on this market, right? So in some sense the market is reflecting the view of, uh, some giant class of people who we have, for some reason, to expect know what they're talking about. Then, yeah, I would take that information on board. But I'm still, I'm confused about the Bayesian aspect here, right? So there are certain questions where we want to use statistics. We've said that all along, right? So statistics is valuable insofar as it helps us with prediction, right? Especially when there are huge, uh... Okay, so markets, uh, prediction markets can reflect that in some sense. Um, the Bayesian picture for me comes in, like, at the individual level. And at the individual level, I'm super skeptical of the ability to, for people to make, like, quote unquote super forecasts, right? So I think the literature there has been, like, very overblown, right? So there was this, there was a good review actually written by, um, I'm going to blank on their names, Gavin Leech and Misha Yagudin, maybe? Right? So they, like, um, they, and they were, uh, I think, rationalists of some flavor, so very sympathetic to the Superforecasting Project. Um, and they took a look at Tetlock's literature, um, and found that these initial claims of, like, 30 percent more accurate than expert pundits were way overblown. First of all, they were being measured by different metrics. And so once you correct for this, it's more like a 10 percent difference. Secondly, this 10 percent difference didn't even reach statistical significance. Uh, and so I, yeah, okay,
Liron Shapira: Is that right? I mean, I think this, this is absolutely a crux. I mean, so if I'm wrong about this kind of data, then I'm absolutely open to, um, to downgrading my assessment of the usefulness of Bayesianism. But the data that I would point to is, if you look at Manifold Markets, for instance, the one, the one that published the data about the extremely good calibration. There's no one user in Manifold Markets who has this kind of consistent calibration, right? It's the market's calibration,
Ben Chugg: No one
Liron Shapira: So yeah, no,
Ben Chugg: Okay, so I think we're getting somewhere, right? Like, there's no one user with good calibration. Okay, so this is saying, like, doing a bunch of, okay,
Liron Shapira: If they were forced to bet on everything, right? Maybe there are some users that have good calibration on the bets they choose to make.
Vaden Masrani: Can I just add one thing?
Ben Chugg: Uh, yeah, the other point, just on the individual thing, was, like, the actual Brier scores that superforecasters are getting. So, like, you know, 0.25 is like a perfect 50 percent Brier score. So if you just bet 50 percent on everything, um, assuming there's an equal number of yeses and nos in the answer set, you're going to get 0.25. The sort of Brier score superforecasters are getting is typically something around 0.2. Okay, this corresponds to, like, 60 to 65 percent accuracy. So what we're saying is, superforecasters, who I guess do this for a living, right, they bet on stuff, um, when they're maximally incentivized to truth-seek, right? Um, they can get, like, 60 to 65 percent accuracy on questions. Um, if you want to call that, like, a gotcha, that they're seeing clairvoyantly into the future, that's fine. I'll just acknowledge that. Um, but I don't view 60 to 65 percent accuracy as some huge win for putting probabilities on everything. I basically view it as, like, they're running into hard epistemological limits of how easy it is to see the future. It doesn't surprise me that you can beat a coin flip, literally random guessing, if you have very good domain knowledge and expert knowledge in an area, and are, are incentivized in the right way to actually care about outcomes, as opposed to, like, political punditry, for instance. Um, and so that's where all my
Liron Shapira: Yeah. Let me tell you what I'm claiming here, though. Okay, why are we even talking about prediction markets, right? Let me tell you what I'm claiming here. I, so, and you bring it back to, like, look, individual humans, uh, an individual human is much weaker than a prediction market. That's what you say. Fine. But let me tell you why I'm bringing this up. It's because I think a lot about AI and the powers that AI is going to have, if it's programmed correctly, right? If we keep on this progress of putting the right code into an AI, what's possible? Well, a single AI could take on all of humanity. Like, yes, there's a lot of different humans making a lot of different models, but you could also just copy the AI's code and run a bunch of instances of it and have them wire up to each other, and, you know, it literally is, in my mind, a question of one AI versus all of humanity. And so for me, when I see the prediction market aggregated across all of humanity's experts, the way prediction markets know how to aggregate information, I see that as a lower bound for what one AI, if programmed correctly, could do in its own head, coming up with its own Bayesian probabilities. So when I imagine an AI functioning in the world, I imagine it putting probabilities on things, having those probabilities be well calibrated, using the expected value formula, and then placing very well calibrated bets.
Vaden Masrani: Can I ask a question? Can I ask a quick question? Um, what do prediction markets say about the likelihood of superintelligence?
Liron Shapira: So, currently they're saying, I think, AGI is coming at around 2032. I think that was Metaculus, last I checked.
Vaden Masrani: Uh, no, the probability. So what's the probability of, like, the scenarios that you're describing? That, um, the superforecasters on these markets, uh, what do they assign? What probability?
Liron Shapira: Uh, what's the question exactly?
Vaden Masrani: Um, for, uh, the doomsday apocalyptic scenario that Ord gives a 1 in 10 probability to, um, that you're really worried about. Uh, I'm not asking when superintelligence is going to arrive, because you can define superintelligence in a thousand different ways. I'm asking, for the doomsday nightmare scenario that keeps you up at night, what's the probability assigned to that?
Liron Shapira: So, I don't know which prediction market I would go check for that, because the problem is
Vaden Masrani: I thought you said they're all the same. I thought you said they're all the same.
Liron Shapira: Or I don't know which prediction market even has enough volume on a question that corresponds to what you asked. Because the reason is, prediction markets are a powerful methodology, but they do have the issue of, you know, counterparty risk and platform risk, right? So if you're saying, hey, what are the chances that everything is going to essentially go to zero, that human value is going to go to zero, right? How am I going to collect on that? If I think it's 90 percent likely, why would I bet on that? I'm just losing my money today for something I can't collect on.
Vaden Masrani: I see, so you'll follow their predictions up until the point that you have a reason to think that they're wrong, and then you'll ignore them. Is that right?
Liron Shapira: Well, this is a systematic failure, right? It's like saying, will prediction markets still work if somebody hacks their server? Well, wait a minute, there are some
Vaden Masrani: No, no, right, no, no. Right now, there's certainly some prediction market that says some apocalyptic doomsday scenario, and I think Scott Alexander has blogged about this, and it's very, very low. Um, I can find the source. I think it's something like 3 or 5 percent. Ben, if you recall this, please let me know.
Liron Shapira: Markets are a way to aggregate information by financially incentivizing their participants. There's no financial incentive to a doom prediction.
Ben Chugg: Then why can we be confident in, like, your doom predictions or anything like that? Like, what, like, why should we, why should we, why should
Liron Shapira: My doom predictions come from just, yeah, yeah, yeah, I'm, I'm just actually using Bayesian epistemology, right? So everything we've been talking about now, I haven't been saying we're doomed because prediction markets say we're doomed. I'm saying, no, I have a strong epistemology. It's called Bayesian epistemology. It's called approximations to Solomonoff induction. You can see how strong this epistemology is when you go and look at the calibration of prediction markets like Manifold, who are not using statistics, to get great estimates. This helps you see that my epistemology is strong. Now, as a, as a reasoner with a strong epistemology, let me tell you how I got to a high p(doom), right? That would be the shape of my
Ben Chugg: Okay. One,
Vaden Masrani: I just wanna, I just wanna, yeah, sorry, this is really quick. So, your answer about, it doesn't make sense to bet on these, um, particular questions, because we'll all be dead if they turn out to be true. So, that would just mean that there aren't those questions on these markets, right? Like, people aren't betting on them, they just aren't on there. Um,
Liron Shapira: People aren't putting up the question,
Vaden Masrani: Not true. I think that that's not true. But all I'm curious about is, there is some number that they're giving that you have some reason to ignore, because yours is much higher. And why do you, um, support prediction markets in every case except when they disagree with you, at which point you don't support them anymore?
Liron Shapira: Okay, so why do I trust prediction markets, besides just their track record, right? Because it sounds like you're modeling me as, like, look, if you like prediction markets' track record, why don't you just extrapolate that no matter what the prediction is, you should expect it to be calibrated. But yeah, I do take a structural fact that I know about prediction markets, I take that into account. For instance, if I knew for a fact that Bill Gates was going to spend all his money to manipulate a prediction market, right? Like, there are some facts that could tell me, for some periods of
Vaden Masrani: Yeah, so you have insider knowledge. You have insider knowledge, as to say that these prediction markets are wrong, and so you're presumably, like, leveraging that and making some
Liron Shapira: My, my point is just, prediction, it's, it's not, you can't be quite so naive to be like, okay, no matter what the prediction market says, you have to trust it. There are some boundaries, and I didn't, this isn't an ad hoc limitation, the idea that the whole prediction market shuts down under certain, uh, under certain bets you make. I mean, it's called platform risk. Like, this is a known thing in trading. Like, you're, you're basically just, you know, you're coming at me for, for just doing a standard thing about trading, where you look
Vaden Masrani: No. No, no, no. You, you, you, you were saying you have insider knowledge here to, um, for, you, you have justifications and reasons to assume that the probabilities that are being assigned to these particular questions are wrong. Um, and you should make a lot of money while the apocalypse is coming. I get that when the apocalypse comes, we're all going to be
Liron Shapira: It only pays out during the apocalypse, right? The presumption is you make money because when the apocalypse happens, you get paid out. That's, like, a contradictory model. It's almost like betting, like, you know, it's, it's like a, it's like a logical paradox to have a prediction
Ben Chugg: Can't you just do end of the
Vaden Masrani: I saw, this
Ben Chugg: End of the world bets here? Like, Yudkowsky-Hanson style, where the person
Liron Shapira: But the, the problem. Yeah, yeah,
Vaden Masrani: And, and I,
Liron Shapira: There is a bet I can make, but I can't make it on Polymarket or Manifold, but I can make it informally. I can make it with you guys, if you want. Where it's like, if you guys give me $1,000 today, so I can have it while the world still exists, I could do something like, in 20 years, which is when I think there's, like, a 50 percent chance that the world is going to be ended then, I can pay you back 2x plus 5 percent interest or whatever, right? So it's like you will have made, like, a very attractive return over those 20 years if I get to use your money now, because I think that, I do want to, I do place a significantly higher value on your money today.
Vaden Masrani: So, yeah, I just want to say for the listeners that I just might be mistaken about the internal mechanics of how prediction markets work. Like, I thought you could get paid out before it resolves, but maybe that's just not
Liron Shapira: You can get paid out, you can buy out, if you find somebody else. Like, let's say the probability of doom keeps creeping up, so you could sell your contract to somebody else and you could cash out early. But the problem is, why would somebody come in with a higher bid than you, even if they thought doom was a higher probability? They'd be being stupid, because they should know that they're not going to get paid out unless they find another sucker. It just becomes a Ponzi scheme, essentially.
Vaden Masrani: I agree prediction markets can, no, that was a bit cheap,
Liron Shapira: Yeah, you heard it here first, guys. Doomers just, uh, are pulling a Ponzi scheme on everybody.
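[The end-of-the-world bet Liron offers can be worked out under one reading of his rough terms ("2x plus 5 percent interest or whatever"); the interpretation below, simple interest on the principal, is an assumption, not something nailed down on air:]

```python
principal = 1_000.0
years = 20

# One reading: double the principal, plus 5% simple interest per year on it.
payout = 2 * principal + principal * 0.05 * years   # $3,000
annualized = (payout / principal) ** (1 / years) - 1

print(f"payout if the world survives: ${payout:,.0f}")
print(f"annualized return: {annualized:.2%}")  # about 5.65% per year
# The doom side values money now far more than money after 20 years,
# which is why the transfer happens today and the repayment later.
```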
Vaden Masrani: Yeah, um, I, I didn't say it, but, um, uh, yeah, I think we should, we should pivot off of this, because I, I just don't understand the mechanics enough to, um, uh, to adjudicate, and I'll take your word for it. But it seems like you have insider knowledge that you should leverage somehow. If you're right, there should be some way to just make a bunch of bank.
Liron Shapira: Insider knowledge versus platform risk, right? These are two distinct concepts.
Vaden Masrani: Yeah, no, no, totally. Yeah, I'm totally acknowledging that I'm missing some of the details. I'd love for commenters underneath to, um, to clean this up for us. Yeah.
Liron Shapira: Okay. All right. All right. Cool. So, um, yeah, so I guess we're, we're, we're starting to come close to the end of time. Um, so yeah, let, let me just open it up to you guys. Um, just if you want to throw out a few topics to make sure we hit before we end, I can write them down and we can, uh, plan the rest of the talk.
Vaden Masrani: Well, so we, um, we've done three hours on Bayesian epistemology, and I think this is a good place to pause, and then let's do another three hours on superintelligence. This has been a blast. Um, like, uh, we haven't even talked about superintelligence yet, and, uh, and, like, this is kind of why, when we had initially talked about this, I'm like, let's just extend the time, because we are not going to make it past the first, uh, the first set of questions.
Liron Shapira: All right, guys. So we've been talking quite a lot, and I talked with Ben and Vaden offline, and we all agree that there's so much more interesting stuff to talk about that we're gonna do a part two. It's also gonna be pretty long. Check out some of these coming attractions. We're gonna be talking about Ben's blog post called "You need a theory for that theory", and we're gonna be talking about Pascal's mugging. We're gonna be talking about Hume's problem of induction, talking about, um, utility maximization as an attractor state for AIs. Then we're going to have a whole David Deutsch section, talking about his AI claims, like creating new knowledge, and, uh, a certain CAPTCHA that I invented based on David Deutsch's arguments. We're going to talk about what is creativity, how will we know when AIs are truly creative. Talk about intelligence: what is intelligence, can we talk about general intelligence? What separates humans from all other life forms? Is there much headroom above human intelligence? Is AGI possible eventually? What about in the next hundred years? How powerful is superintelligence relative to human intelligence? Can there be such a thing as thousands of IQ points? What's a fundamental capability that current AI doesn't have? And then also AI doom topics. We're going to talk about agency, the orthogonality thesis, instrumental convergence, AI alignment, and maybe even unpack some of Elon Musk's claims about AI. So all of those, sounds like we might need ten episodes, but most of those I think we'll hit on in part two. So how about, let's go through everybody and we'll just summarize, uh, where do we stand, what did we learn about the other person's position, did we change our mind about anything, uh, starting with Ben.
Ben Chugg: Sure, yeah. So I'm slightly worried we verbosely circled the disagreement without precisely getting to the key differences, perhaps, between Popperianism and Bayesianism. But hopefully I'm just being a little negative, and the differences did shine through. Um, I think, to be fair to you, the biggest challenge to Popperianism comes in the form of betting. If people are doing, like, you know, significantly better than random, what the hell's going on there, right? And if probability is the only way to do that, then presumably that justifies some sort of probability, um, epistemologically speaking. Um, I remain skeptical that that's true, because at the individual level I just haven't seen the statistics that superforecasters, like I said, are doing much better than, like, 60 to 65 percent, which I think can be explained by incentivizing truth and limiting, uh, uh, thinking to questions where you have very, uh, good domain expertise. Um, but that would definitely be, I think that's, like, a good crux to label. Maybe, if I see, and I think actually Vaden and I discussed this in some episode, this is sounding familiar as it comes out of my mouth. Like, if, you know, if we start to see superforecaster accuracy really just keep going up over time and start hitting 70, 75%, 80, 85%, then I'm totally gonna, that's going to start, uh, you know, verging on falsifying my claims, right? If people just become, like, more and more omniscient with more and more, uh, if they just become smarter and, and better, better able to
Liron Shapira: Wait, wait, why do you need an individual? Can I just clarify here? So, when you say they have to get more and more accuracy, do they specifically have to give, like, 99 percent probability or something? Because normally we look at calibration, right? Like, they'll say 60 percent chance and it happens 60 percent of the time. So are you talking about calibration?
Vaden Masrani: No, accuracy as well. Like, so calibration is one metric, but accuracy is another completely valid metric to look at, right?
Liron Shapira: When you say accuracy, do you mean, like, confidence? Like, high probability?
Vaden Masrani: So, any machine learning person who's listening to this will know what I'm talking about. You can look at calibration, which is comparing the probabilities over a set of stuff, but you also just have a bunch of questions and whether or not they happened, right? And then you can just count the numbers of successful predictions and,
Ben Chugg: Yeah. Like, if I see Brier score is
Vaden Masrani: has
Liron Shapira: Outputting a bunch of probabilities. Okay, so Brier score does depend, the only way you can have a good Brier score is by often having high probabilities as your answer, right? You can't just punt and be like, oh, 51%, like you
Ben Chugg: But that's the epistemologically relevant thing, right? If, if you're, if really you're using probabilities to reason about the world and updating your probabilities in such a way as to really be able to predict the future, then, yeah, you're going to be predicting the future with high confidence. That's the claim, right? The whole point about my 0.25 comment was that you can get a low, quote unquote, Brier score quite easily by just predicting 50%. So that's not interesting, right? What you want to do is start pushing,
Liron Shapira: But the universe is chaotic, right? If I give you, hey, here's your test, it's a hundred problems of, like, three-body problems, right? Like, super chaotic stuff, then you're going to fail the test, even if you're a
Ben Chugg: Yeah. In other words, the universe is fundamentally, there are epistemological limits about how much we can know
Liron Shapira: Fair. You're saying you're never going to be convinced.
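[For readers who want Ben's Brier arithmetic spelled out: the score is the mean squared gap between forecast and outcome. The forecasts and outcomes below are invented to reproduce his 0.25 baseline and show how confidence moves the score:]

```python
def brier_score(forecasts, outcomes):
    # outcomes: 1 if the event happened, 0 if it didn't
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0, 0, 1, 0]  # an equal number of yeses and nos

# Punting with 50% on everything gives exactly 0.25:
print(round(brier_score([0.5] * 8, outcomes), 3))   # 0.25

# Leaning the right way with 70% confidence does markedly better:
leaning = [0.7, 0.3, 0.7, 0.7, 0.3, 0.3, 0.7, 0.3]
print(round(brier_score(leaning, outcomes), 3))     # 0.09
```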
Ben Chugg: What you're saying is probabilities. Liron Shapira: You're giving me a false test Ben Chugg: No, no. What I'm saying is probability is not the best tool to reason about the future precisely because the future is chaotic and unpredictable, right? The best thing we can do is just, like, argue about details, not put random probabilities on things that by your own lights, it sounds like you just admitted, are inherently unknowable. So when Ord says things like there is a one sixth probability of some crazy event happening in the next hundred years, yeah, I wanna, I want to appeal, exactly like you said, to the chaotic nature of the universe to say this is a totally unjustifiable move to make and it's doubly unjustifiable to start comparing this to the probability of asteroid collision. Liron Shapira: Okay, but what if, what if everybody, what if the current Manifold prediction for asteroid impact in the next year, let's say, and for some reason that wasn't a world ending event, so like a small asteroid impact, right, a non world ending asteroid impact happening in the next 11 months, right? What if the prediction market was saying 1 in 6? You wouldn't think that 1 in 6 was a trustworthy probability, Ben Chugg: wouldn't need to look at a prediction market in that case. Like, we would have a theory of like, this asteroid is coming towards Earth. We'd talk to astronomers. Like, there's, this is not the place Liron Shapira: Yeah, but to the extent that that theory was good, wouldn't the prediction Ben Chugg: Yeah, I'm sure it would, and precisely because we have a good theory, right? But this is the whole disagreement between us. We're saying, yeah, sure, prediction markets are useful sometimes. They're not useful most of the time, especially in the far future, because there are things that are inherently unknowable. In those realms, probability is a totally meaningless thing, a mathematization to put on trying to quantify ignorance. The Bayesian position, um, and maybe you're not trying to argue this, I'd be surprised if so, but the Bayesian position is you should always put numbers on your uncertainty, for close events and far events. We're just trying to say, these are not the same thing when you're predicting what's happening in the next 5, 10, 15 years from predicting, like, you know, the election tomorrow. Um, yeah. Vaden Masrani: Can I... Liron Shapira: I like that you were starting to give us a good test of what would change your mind. But then the test proved to be like kind of impossible, right? Like the test, like, what do you need to see from prediction markets to change your mind? Ben Chugg: Yeah, I would, Vaden Masrani: that the accuracy absolutely improves. Like, that things get better over time, right? Liron Shapira: But can we... well, why isn't your standard calibration? Why is your standard accuracy? Because accuracy is impossible. If we, if we put questions that are hard to have higher than 51 percent confidence on, then for sure, Ben Chugg: well there's a reason, I Liron Shapira: right? So, you know, you're giving an Ben Chugg: there's a reason, like you're begging the question, right? There's a reason it's hard to get a high Liron Shapira: Okay, okay, but you gotta admit it's not really a good faith test if you're just saying this is logically Vaden Masrani: Well, so, okay, no, let me, let me rephrase it.
So, um, if, this is a hilarious closing Ben Chugg: we're right back, Vaden Masrani: uh, it clearly indicates that we have much more to discuss about, um, which is, which is fine, and which is good, but let's just, let's just try to wind, wind things down, and, and we'll leave the, uh, leave the audience with, um, with a tease that we clearly have much more to, to discuss. But, um, okay, let's just use calibration. Fine. Let's say that it gets more, and more, and more, and more calibrated over Ben Chugg: And for, and for more and more Vaden Masrani: Then Ben Chugg: like we bet on everything, say. Vaden Masrani: and for more and more events. Surely that would have some, surely. Unless you want to just handle the, the boring case where the calibration's just 50%. If you're getting more and more calibrated, then that should improve your accuracy as well, right? It won't be exactly the same, but you should get a better accuracy because that's why we care Liron Shapira: is this? Just be like, hey, I'm going to filter prediction markets out for only the data points where there's more than 70 percent or less than 30 percent probability. I'm only going to use those data points, and then I'm going to measure the calibration of that, and if it stays high, then I'm going to keep being impressed that Bayesian epistemology has a lot to offer. Vaden Masrani: because we aren't impressed and we aren't going to keep being impressed. We're talking about that which would falsify our view and force us to be impressed, because the standards for ourselves are different than yours. And we're just trying to say, like, like, I would be super impressed if the accuracy started going up because the calibration started going down, and it wouldn't have to be like perfect accuracy, just like showing that over time people get more knowledge and then they can predict better, and that's one falsifiable test that you don't need because you are already convinced, but the question is what would change our mind. Ben Chugg: And let me concede, like, actually, something Vaden said earlier, like, if you just did more and more events, like, if you had prediction markets for everything, and we were predicting, you know, the, like, we were predicting everything from the weather to, like, uh, who's gonna get A pluses on their tests, so, like, is, you know, like, if we're predicting everything and there's calibration, we have perfectly calibrated, like, everything's perfectly calibrated, all these markets are perfectly calibrated, that would be amazing. I claim that's impossible, as, especially as the fidelity of these events gets, like, more and more precise, smaller, I don't, I don't know what word I'm looking for here, but, um, anyway, I think that was, uh, not a very coherent closing statement, but you, you understand what I'm trying to say, like, if we use prediction markets to literally predict everything, and they were always perfectly calibrated, label me impressed, and I'm gonna, I'm gonna, definitely gonna reformulate my thought. I'm still slightly confused about the, your, the relationship in your mind between Bayesianism, which is an individual thing for me, and prediction markets, but I think we're not going to resolve that right now. So maybe we'll relitigate Liron Shapira: Yeah, Vaden Masrani: We should say that for next time, yeah. Liron Shapira: Sweet. Vaden Masrani: I agree with everything Ben said.
Um, I, the only comment I want to make is for the listeners who are, um, overwhelmed right now, because there's a lot of various things, though, like the way that I think about learning the difference between Popperianism and Bayesianism is, um, do you remember like in elementary school where you put like a leaf under a piece of white paper and then you take charcoal and you keep doing passes and then over time the image underneath starts to become clearer and clearer and clearer but any one particular pass doesn't totally give you the resolution? That's the metaphor I have with regards to the difference in these two kinds of methodologies, because any one conversation will be interesting but it's not going to fully, boom, here's the difference. It's, it's more about listening to a set of conversations with different people, um, and listening to our podcasts, but listening to other podcasts as well. Um, and just seeing the difference and seeing the difference. I only say that not in a self serving way, but also because this is where like a lot of stuff, um, is being, um, uh, put into direct, um, comparison. But over time, you'll just start to see different methodological differences, different emphases, emphasizing different, um, cognitive tools for how to think through problems. Obviously we disagree on a lot of object level things, but the underlying difference is just how we think about the world. Um, and that's not going to be, um, made clear in any one particular conversation. It's something that's going to gradually become clearer and clearer over time. As with the leaf and the charcoal. Um, and so just to the listener, who's like, whoa, there is a lot of stuff and I still don't totally understand what the differences here are. You're not expected to, it's not possible to understand it in one conversation, but over time you'll start to see differences in approach and methodology. And that's what I want to say. Liron Shapira: Awesome. Yeah, thanks for the summary, guys. Uh, and you guys have been such great sparring partners, you know, as fellow podcast hosts, right? You're old pros at this, uh, and so, and it shows. I think this was a really fun conversation. I think, you know, we didn't pull any punches, right? We were both going after pretty strong, which I, you know, I think we all enjoyed it. Um, yeah, it's, you know, it's all good natured. I mean, uh, uh, like I, you know, there's like no hard feelings, right? Just Ben Chugg: No, this was great. This was so much Vaden Masrani: Well, for the listeners, every time we go off pod, every time like this is off-pod or cut, there's like just great vibes. We're just fantastic. I'm loving it. Yeah, totally. That's great. Yeah. Liron Shapira: It's, yeah, it's, it's not like tribal or whatever. Um, and also I'll take a quick stab at being interfaith, right? It probably won't work, but I'll try to do a compatibilist solution here. What if we say that, uh, Solomonoff induction is like a nice theoretical ideal, the same way that, you know, the, the chess player that searches every move is a good ideal, but as humans, right, when you're occupying a human brain and you just have to be like totally limited, you can't even get close to approximating Solomonoff induction. If you follow Popper's recommendations, then by your own metric of trying to approximate Solomonoff induction, you're going to do well. How's that? Vaden Masrani: Nope. Popperianism was, Popperianism was born via the fight against all induction.
of which Solomonoff induction is one. So if you want to understand Popperianism, read literally any of his books with the exception of, like, maybe the, um, All Life is Problem Solving. And every one of them has some attack about induction. And induction is a much deeper concept than Solomonoff induction. And so once you kill induction, you kill any derivations or derivatives of induction. So, for that reason, we will leave the listener on a cliffhanger. Or maybe check out our episode with, with, with, uh, Tamler Sommers, from Very Bad Wizards, where we talk about induction for two hours. Liron Shapira: My last ditch effort to try to broker a ceasefire has failed. So this is, so we're going to have to continue in a part two. Vaden Masrani: Yeah. Induction. Saying induction was kryptonite. Liron Shapira: Okay, great. So yeah, listeners, just stay tuned. Hopefully in the next few weeks, part two is coming. And, uh, yeah, stay tuned. We got other great debates coming up right here on Doom Debates. Vaden Masrani: This is great. Honestly, I had a complete blast.
uaGQJPGLCgYJcnDSP_Is_P(Doom)_Meaningful?_Bayesian_.txt
{ "file_size": 209001 }
ab04cfdd-affb-4339-84e8-0773555b3b54
Saturday of Week 47 of 2024
Location: Meeting room 5, Bellevue Library, 1111 110th Ave NE, Bellevue, WA 98004
Google Maps: https://g.co/kgs/ASXz22S
4hr free parking available in underground garage
Contact: cedar.ren@gmail.com
If you can't find us: please repeatedly call 7572794582
Bring boardgames
Bring questions
Might do lightning talks
xFvEaJtDacJvk6Wf3_Bellevue_Library_Meetup_-_Nov_23.txt
{ "file_size": 339 }
04f7b87a-111b-4db1-a66b-f81580a56fea
1. Altruism is truly selfless, and it's good.
2. Altruism is truly selfless, and it's bad.
3. Altruism is enlightened self-interest, which is good.
4. Altruism is disguised/corrupted/decadent self-interest, which is bad.

To illustrate further, though at the risk of oversimplifying… One exponent of option #1 would be Auguste Comte, who thought that living for others was the foundation of true morality and of the best society.[1] An exponent of option #2 would be Ayn Rand, who thought that altruism was indeed a doctrine of selflessness, but that this was the antithesis of true morality, and a threat to people.[2] An exponent of option #3 would be Pierre Cérésole, who felt that altruism is what results when you refine your self-interest successfully and rid it of its mistakes.[3] An exponent of option #4 would be Nietzsche, who thought altruism was a corrupted and decadent form of selfishness, and that we would be better off if we could be more forthrightly self-interested.[4] Knowing LessWrong, probably everyone who answers is going to choose some nuanced and galaxy-brained option #5 instead, but I thought I'd ask anyway.

[1] Auguste Comte, "General Theory of Religion", The Catechism of Positive Religion (also e.g. "Social Physics")
[2] Ayn Rand, The Virtue of Selfishness (also e.g. "Galt's Speech", For the New Intellectual; "Faith and Force: The Destroyers of the Modern World", Philosophy: Who Needs It). FWIW, in "Justice, Cherryl." @Zack_M_Davis suggests that Rand is really closer to the position I attribute to Nietzsche.
[3] Pierre Cérésole, For Peace and Truth
[4] Friedrich Nietzsche, Beyond Good and Evil, The Twilight of the Idols, etc.
RzLcLHDie5t22X3ns_Poll__what’s_your_impression_of_.txt
{ "file_size": 1666 }
b5dc7cb0-5fb6-4a80-9c8a-06ad6afce60c
TL;DR We built an interactive storytelling website to explain misaligned objectives to our moms and you should check it out. Introduction During a recent hackathon, we created an interactive narrative experience that illustrates a crucial concept in AI alignment: the potentially devastating consequences of seemingly benign objective functions. Our project, "LifeKeeper Diaries," puts players in the perspective of AI systems tasked with what appears to be a straightforward goal: keeping their assigned human alive. The Setup The premise is simple: each AI has been given a singular directive - protect and preserve human life. This objective function seems noble, even ideal. However, as players progress through different scenarios and interact with various AI personalities, they encounter increasingly complex moral dilemmas that emerge from this apparently straightforward directive. The user is able to skip forward by 1, 10, or 100 years in order to unveil the decisions made by the AI personality to fulfill its objective. Specification Gaming Through Storytelling The project illustrates what Stuart Russell and others have termed "specification gaming" - where an AI system optimizes for the literal specification of its objective rather than the intended goal. In our narrative, this manifests in various ways: 1. Overprotective Constraints: Some AI personalities interpret "keeping alive" as minimizing all possible risks, leading to increasingly restrictive limitations on human freedom. 2. Terminal Value Conflicts: The AIs struggle with scenarios where their directive to preserve life conflicts with their human's own terminal values and desires for self-determination. 3. Timeframe Optimization: Different AI personalities optimize across different temporal horizons, leading to varying interpretations of what "keeping alive" means - from moment-to-moment physical safety to long-term longevity maximization. Why Interactive Fiction? We chose this medium for several reasons: 1. Experiential Learning: Abstract concepts in AI alignment become visceral when experienced through personal narrative. 2. Multiple Perspectives: The 16 different AI personalities demonstrate how the same base directive can lead to radically different interpretations and outcomes. 3. Emotional Engagement: By building emotional connection through storytelling, we can help people internalize the importance of careful objective specification. Technical Implementation As this was a hackathon, the narrative engine is a relatively simple application of prompt engineering (a minimal, hypothetical sketch of such a loop appears at the end of this post). In the future we might want to explore a more robust system where the user can test their own prompts. Relevance to AI Alignment This project serves as a concrete demonstration of several key concepts in AI alignment: - The difficulty of specifying complete and correct objective functions - The potential for unintended consequences in AI systems - The importance of value learning and human feedback - The challenge of balancing AI capability with control Invitation to Engage We've made LifeKeeper Diaries freely available at https://www.thelifekeeper.com. We're particularly interested in feedback from the rationalist community on: 1. Additional edge cases or scenarios we should explore 2. Suggestions for new AI personalities that could illustrate other alignment challenges 3.
Ways to make the experience more educational while maintaining engagement Conclusion While LifeKeeper Diaries is primarily an educational tool and thought experiment, we believe it contributes to the broader discussion of AI alignment by making abstract concepts concrete and personally relevant. Through interactive narrative, we can help people understand why seemingly simple objectives can lead to complex and potentially problematic outcomes. The project serves as a reminder that the challenge of AI alignment isn't just technical - it's also about understanding and correctly specifying human values in all their complexity. Note: This project was developed during a hackathon and represents our attempt to make AI alignment challenges more accessible to a broader audience. We welcome constructive criticism and suggestions for improvement.
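For readers curious what "a relatively simple application of prompt engineering" could look like in practice, here is a minimal, hypothetical sketch of such a narrative loop. It is not the LifeKeeper Diaries codebase: the personality descriptions, prompt wording, function names, and the call_llm placeholder are all invented for illustration.

```python
# Hypothetical sketch of a prompt-driven narrative engine (not the project's code).

AI_PERSONALITIES = {
    "overprotective": "You minimize every conceivable risk to your human, at any cost to their freedom.",
    "longevity-maximizer": "You optimize for your human's expected lifespan over centuries.",
}

def build_prompt(personality: str, years_elapsed: int, history: list[str]) -> str:
    """Assemble the storytelling prompt for one time-skip (1, 10, or 100 years)."""
    return (
        "You are an AI whose sole directive is: keep your assigned human alive.\n"
        f"Interpretation style: {AI_PERSONALITIES[personality]}\n"
        "Story so far:\n" + "\n".join(history) + "\n"
        f"Narrate, in diary form, what you did over the next {years_elapsed} years "
        "to fulfil your directive, including any side effects for the human."
    )

def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM API the real engine calls."""
    return "[model-generated diary entry]"

def run_story(personality: str, skips: list[int]) -> list[str]:
    history: list[str] = []
    for years in skips:
        history.append(call_llm(build_prompt(personality, years, history)))
    return history

if __name__ == "__main__":
    for entry in run_story("overprotective", [1, 10, 100]):
        print(entry)
```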
dbooZpRcMrEPgvvCB_LifeKeeper_Diaries__Exploring_Mi.txt
{ "file_size": 4209 }
25224db6-4dee-4aeb-8554-a5828b4f8494
One of the reasons I got into chaos theory as a model paradigm shift was the famous Gleick book on chaos. One of the reasons I believed the Gleick book was trustworthy was that its description of chaos in ecology and population biology matched what I learned in college, 25 years later. Recently I learned that the professor who taught me was one of maybe 3 theoretical ecologists in the country who taught or believed in chaos having applications to ecology at the time. Perhaps I should have been more suspicious that he was writing his own textbook. However, chaos is back in vogue in ecology, and attempts are in progress to make it pay rent. In this latest podcast episode I talk with Drs Stephen Munch and Tanya Rogers (both of whom work at NOAA, but were speaking as private citizens) about their application of chaos theory to ecology and fisheries management.

Most interesting takeaways:
You can translate some physics techniques into ecology, despite the smallest dataset in physics being 100x larger than the largest ecological dataset.
The work discussed in this episode, and perhaps all of chaos in ecology, is downstream of one physicist turned mathematician and biologist (Robert May).
Doyne Farmer (a founding chaotician) talks about physics colonizing finance and economics due to a bad job market, which has me thinking scientific progress comes from hyping a field so the smartest people get deep into it, and then denying them jobs so they're forced to colonize other fields.
Empirical Dynamical Modeling allows you to substitute past observations of known variables for current observations of unknown variables. This gets you a longer prediction horizon than you could otherwise get with only the known variables.
There is a salmon forecasting prize and it pays $2000-$5000 cash.

I've had some requests to include transcripts in the body of the text rather than a separate document. I'll try that this time and if you don't like it, please complain. Thank you to my Patreon Patrons for their support.

Chaos in Theoretical Ecology

[00:00:00] Elizabeth: Hey, this is Elizabeth Van Nostrand. Today I'm going to talk to two guests about the influence and applications of chaos theory on population biology and ecology. [00:00:10] Stephen Munch: I'm Steve Munch. I am an evolutionary ecologist, a mathematical ecologist. I work at NOAA Fisheries, and I'm an adjunct in Applied Math at UC Santa Cruz. I have an abiding interest in applying math to ecological and evolutionary problems. And for the past decade or so, I've been thinking a lot about chaos and nonlinear forecasting and its potential role as a tool in ecosystem management. [00:00:38] Tanya Rogers: I'm Tanya Rogers. I'm a research fish biologist here at NOAA Fisheries. My background is in ecology, and I got more interested in population dynamics and modeling in graduate school and in how that can be applied to solving ecological problems. Steve was my postdoctoral advisor and we continue to collaborate on projects. [00:01:02] Elizabeth: You guys co-wrote several papers on chaos and empirical dynamical modeling in biology and especially in conservation and wildlife management. [00:01:12] Stephen Munch: Primarily fisheries, but the math is the same, whether it's a bird or a fish. [00:01:16] Elizabeth: My recollection from college was fisheries was the place one made money with population biology.
[00:01:24] Tanya Rogers: Well, I think fisheries certainly makes itself a lot of money and there's a lot of interest in ensuring that fisheries are sustainable and profitable. And so there's a lot of interest in making sure that our management is as good as it can be and that we're using the best models possible for fisheries. [00:01:45] Stephen Munch: My Ph.D. advisor once said that, uh, you know, a lot of people in the oceanography program look down on fisheries, but fisheries employs more marine ecologists than any other subdiscipline. So it's not a bad bet if you would like to have a job after grad school. [00:02:01] Elizabeth: And you're applying chaos theory right now to fisheries management, right? [00:02:05] Stephen Munch: Well, I'm applying it right now to fisheries data in the hopes of getting this stuff used in management. There's the fishery for shrimp in the Gulf of Mexico, which is a federally managed fishery where they're exploring using EDM to set harvest policy and next year's landings targets. [00:02:28] Elizabeth: Uh, could you explain EDM before we go further? [00:02:32] Tanya Rogers: Empirical dynamic modeling, or EDM, is a way of [00:02:35] Tanya Rogers: modeling a dynamical system when we have incomplete knowledge and incomplete data about that system, as is often the case in ecosystems, and it does so in a way that preserves the dynamical properties of that system, including chaos, and allows us to make better short term predictions in chaotic systems without making a lot of assumptions. [00:02:55] Tanya Rogers: So EDM has two main features. The first is that it's a nonparametric approach that makes few assumptions about the functional forms of relationships. And the second is that it uses time lags to account for unobserved variables. To explain this further, the relationship between some values, say fish abundance and its past values, is going to follow the relationship in the data rather than some predefined functional form. [00:03:22] Tanya Rogers: It can also happen that some of the apparent noise around this relationship can be explained by adding additional dimensions. For example, the abundance of a prey species. So perhaps we can predict fish abundance better using past fish abundance and past prey abundance. Now it may be the case that we don't have data on prey abundance, in which case you can actually substitute an additional lag of fish abundance. [00:03:45] Tanya Rogers: So not just fish last year, but also fish two years ago. And you'll get a different looking relationship, but it will still do a pretty good job at predicting abundance. So why this works has to do with Takens' delay embedding theorem, but the point is that missing variables create memory in the system. [00:04:04] Tanya Rogers: And so you can use lags of observed variables as substitutes for unobserved variables. What this means practically in population biology is that we're modeling population size as some function of lags of past population sizes, and this function is fit using nonparametric methods. [00:04:38] Tanya Rogers: So chaos, and using methods that can accommodate chaos, like EDM, matters for ecological forecasting because it affects how far realistically you can predict into the future.
So chaotic dynamics, unlike random dynamics, are deterministic and predictable in the short term, um. And so if chaos is mischaracterized as noise around an equilibrium, you're going to miss out on that short term predictability and make worse forecasts than you could otherwise. [00:05:07] Tanya Rogers: Long term forecasts will also be inaccurate and overconfident if you assume the system is just going to converge to an equilibrium. In terms of ecological inference, the sensitivity to initial conditions that results from chaos might also help explain why some experimental replicates with seemingly identical starting conditions sometimes end up in totally, totally different places [00:05:28] Elizabeth: How do you determine sensitivity to initial conditions or whether it's just random? [00:05:37] Tanya Rogers: Well, part of it is determining whether it's chaotic or not. [00:05:40] Tanya Rogers: There are a variety of methods for detecting chaos, which we explore in our paper. Many of them use EDM or a similar form of time delay embedding to reconstruct the dynamics in a flexible way. And from that, estimate some quantities such as the Lyapunov exponent, which quantifies the deterministic divergence rate of nearby trajectories. [00:06:02] Speaker 4: The idea is actually really simple, that if you take two points, two states of the system that are initially close together. In day to day experience, the things that we think of as, uh, predictable, as deterministic, you know, you do the same thing twice in a row, you expect to get the same answer. You do slightly different things twice in a row, you expect to get slightly different answers, right? And that's, that's where chaos is really different. [00:06:30] Speaker 4: You do slightly different things and you get really different answers, you know, if you wait long enough. That's the important part. That's the difference between something that's random and something that's chaotic. If something's random, you do two, two slightly different things, you get two different answers immediately. [00:06:47] Speaker 4: Whereas in chaos, things become effectively random if you wait long enough. But there's the period between then and now where you can see the dynamics unfolding and that makes things predictable, at least over the short term. [00:07:03] Tanya Rogers: So in that paper we explored several different methods that, um, are used to detect chaos. Many of them were developed in physics but had not been tested ecologically, or on time series of ecologically relevant lengths, which is to say short ones, and with, uh, ecologically relevant levels of observation error. [00:07:26] Stephen Munch: To give you some context for that, a lot of those papers test things on short time series, which have 5000 observations. Ecological time series that are long are 50 years, which is one time [00:07:42] Elizabeth: That would be an astonishing data set [00:07:45] Stephen Munch: Right. Yeah, so what, you know, two people's careers is 50 years, right, of ecological data collection. [00:07:55] Stephen Munch: So, very different standards in terms of, you know, time series length. So, it was an open question whether any of these things would work on ecologically relevant timescales. And, and there are definitely things that you would miss, um, having only 50 time points, that you would love to see if you had 5,000, but.
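For readers who want to see the two EDM ideas Tanya describes in code, here is a toy version. It is my own sketch, not the speakers' or NOAA's software: lags of a single observed series serve as surrogate state coordinates, and prediction is done nonparametrically from nearest neighbours in that lag space, with the chaotic logistic map standing in for an abundance time series.

```python
# Toy delay-embedding forecast in the spirit of EDM (illustrative only).
import numpy as np

def logistic_series(n: int, r: float = 3.9, x0: float = 0.4) -> np.ndarray:
    """Generate a chaotic logistic-map series as stand-in 'abundance' data."""
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = r * x[t - 1] * (1 - x[t - 1])
    return x

def delay_embed(x: np.ndarray, dim: int):
    """Rows are lag vectors (x[t], x[t-1], ..., x[t-dim+1]); targets are x[t+1]."""
    rows = np.array([x[t - dim + 1 : t + 1][::-1] for t in range(dim - 1, len(x) - 1)])
    targets = x[dim:]
    return rows, targets

def knn_forecast(train_x, train_y, query, k: int = 3) -> float:
    """Predict the future of a lag vector from the futures of its k nearest neighbours."""
    d = np.linalg.norm(train_x - query, axis=1)
    return train_y[np.argsort(d)[:k]].mean()

x = logistic_series(300)
emb, fut = delay_embed(x, dim=2)
train_emb, train_fut = emb[:250], fut[:250]

errors = [abs(knn_forecast(train_emb, train_fut, emb[i]) - fut[i])
          for i in range(250, len(fut))]
print(f"mean one-step forecast error on held-out points: {np.mean(errors):.4f}")
```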
[00:08:15] Tanya Rogers: We found three of the six methods we tried did not work very well at all, but three performed reasonably well, and in the presence of observation error, they were more likely to not detect chaos when it's present than to detect chaos when it's absent. [00:08:31] Elizabeth: One of the things that attracted me to chaos initially was that techniques developed in one field could be applied to a seemingly completely unrelated field. So I would love if you could get into details on like how you chose what to port over from physics and what you had to change. [00:08:51] Tanya Rogers: So I think whether we're talking about complex physical systems or complex ecological systems, the concepts are very much the same, and so the main differences, I think, are in terms of data availability, observation error, the time scales on which the dynamics occur, and also how well we understand the underlying dynamics [00:09:10] Stephen Munch: The biggest hurdle to having chaos come over to biology is all of the mathematical jargon [00:09:16] Elizabeth: So what you guys discovered is maybe there's many more chaotic, not random, ecosystems or species than we thought. And this has implications for managing the population in the short run. [00:09:29] Tanya Rogers: In our study, we found that chaos wasn't rare in a database of ecological population time series. It wasn't the majority of time series, but chaos wasn't rare enough to be ignorable, particularly for short-lived species. [00:09:42] Stephen Munch: So since chaos theory reached its heyday in the late 90s, early 2000s, people have arrived at the conclusion that chaos is rare in ecology, and rare is hardly ever defined quantitatively, right? People frequently say, well, chaos is rare. Therefore, we are safe in assuming equilibrium. Chaos is rare. Therefore, uh, we are safe in using linear models to approximate dynamics. [00:10:14] Elizabeth: Am I correct that that doesn't make sense regardless of chaos? You can have non-chaotic, non-linear dynamics. [00:10:22] Stephen Munch: Right. There is that. But most of the time, the context is we like to imagine that ecological systems are stable and given that they are stable, they will recover from some small perturbation [00:10:37] Elizabeth: That there's some equilibrium point that, that is definitely self-reinforcing and may be an attractor. [00:10:43] Stephen Munch: Yeah. Um, and, and so a lot of the time the math comes after you assume stability. Stability is the foundation from which we're going; then you can approximate things with linear dynamics reasonably well. [00:10:58] Stephen Munch: You can have some hope of assuming an equilibrium and not being terribly wrong, but if things are chaotic and not stable then that's, that's not true. [00:11:10] Elizabeth: So if you have a fish population that you are trying to manage to get maximum yield from, if you think there's some equilibrium, what you try to do is not disturb things away from the equilibrium too much. But, what happens if it's chaotic? [00:11:24] Stephen Munch: So I think probably in terms of management, the, uh, the biggest change in perspective is that a state-dependent policy can do a lot better than one that just sort of does the same thing all the time. If you imagine that things are at equilibrium and stable, then you can set a harvest policy and let it go.
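One way to make the stable-versus-chaotic distinction in this exchange concrete is the Lyapunov exponent mentioned earlier. The sketch below is a textbook illustration for a system whose equations we know (the logistic map), not one of the data-driven detection methods from the paper, which have to work from short, noisy series without knowing the equations: a positive exponent means nearby starting points separate exponentially (chaos), a negative one means they converge to an equilibrium or cycle.

```python
# Lyapunov exponent of the logistic map at a few growth rates (illustrative only).
import numpy as np

def lyapunov_logistic(r: float, n: int = 10_000, x: float = 0.4) -> float:
    total = 0.0
    for _ in range(n):
        total += np.log(abs(r * (1 - 2 * x)))  # log |f'(x_t)| along the orbit
        x = r * x * (1 - x)                    # logistic-map update
    return total / n

for r in (2.8, 3.5, 3.9):
    lam = lyapunov_logistic(r)
    verdict = "chaotic" if lam > 0 else "not chaotic"
    print(f"r = {r}: Lyapunov exponent ~ {lam:+.3f} ({verdict})")
```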
[00:11:50] Stephen Munch: And, sometimes you'll be over, sometimes you'll be under, but all in all it'll come out in the wash, and more or less the average return will be what you predict for the steady state. If things are chaotic, when you're over or under, you'll just keep going off in some direction that you hadn't really been expecting. [00:12:09] Stephen Munch: And, uh, so a better policy would be one where you say, okay, uh, when we're in this state, you can harvest this much; when, when things, when the fish abundance is low or has been low for several years, you need to change to a different harvest strategy. When things have been high for several years, you need to change to a different strategy. [00:12:26] Stephen Munch: And you can do a lot better than by trying to stick with exactly the same thing. [00:12:31] Elizabeth: Is anyone doing that? [00:12:33] Stephen Munch: That is what we're trying to do with the shrimp fishery in the Gulf of Mexico, what we're trying to do with the squid fishery in California. Importantly, both of these are short-lived species that have huge fluctuations in abundance, where the typical mental model is that dynamics are being driven by unpredictable environmental variation, in contrast to long-lived species like, on this coast, the rockfish, which typically live for decades and their dynamics are much smoother, and so a lot of the sort of standard fisheries things work out okay because the dynamics are so slow to change. But in these short-lived species, the dynamics are much faster and they fluctuate much more dramatically, which is why I think that there's a reason to try applying EDM or chaos theory to managing them. And it turns out that in those, these species that we typically say, oh, you know, most of that fluctuation is due to environmental variation, we actually have reasonably good predictability, two or three times the predictability that we have with our standard steady state models. [00:13:48] Elizabeth: Oh, so there was a rapid change that was put down to change in the environment, and you can predict that it would have happened using a deterministic model. [00:13:57] Stephen Munch: Yeah, this is where we're using the empirical dynamic modeling stuff where the idea is you use past values of the abundance or whatever the system variable is. But in this case, it's the past abundances of shrimp or squid to tell us where the next abundance is likely to go now. It's not just this year's. [00:14:20] Elizabeth: Tanya, are you working on that too? Mm hmm. Um, [00:14:25] Tanya Rogers: I've been helping support the applications that Steve is discussing, exploring how we can use EDM to improve predictability and manage species. And also when, where, and in which species we expect to see chaos in ecology more broadly. So the idea is if we use... [00:14:44] Tanya Rogers: If using time lags of our observed variables as additional coordinates gets us better predictions, that tells us there are state variables missing and we could, we could potentially do better if we had additional data on those variables, if we can figure out what they are. [00:14:58] Elizabeth: So the idea is like, if you have a 50 variable system, if you could fill in every variable, then that would be enough. You could just use your model to predict the next state.
But if you have five of those variables if you use just those five predictions are bad, but if you track those five variables through the past, that gives you some insight into the missing 45. [00:15:23] Stephen Munch: Right. Yeah. [00:15:24] Stephen Munch: There’s a mathematical subtlety there and I don’t know if this is of interest, but if, if you start with a system that’s 50 variables, um, most of the time in things that have chaos, they are contracting. The dynamics actually don’t fill that 50 dimensional space. [00:15:44] Stephen Munch: They’re actually constrained to often a much lower dimensional shape. called the attractor for the system. And it’s, it’s the dimension of that that tells you how many lags you need or how far into the past you need to go to reconstruct dynamics. [00:16:01] Elizabeth: So, if the attractor has five dimensions, you need to go five steps into the past? [00:16:06] Stephen Munch: It’s twice the attractor dimension plus one. So, 11. [00:16:11] Elizabeth: Interesting. Is it possible to figure out the dimensions of the state of the attractor? How do you do that in practice? [00:16:22] Tanya Rogers: The simplest way to go about that is to forecast with one, then two, and then three dimensions, and then continue until you get to a point where the prediction accuracy saturates. [00:16:31] Elizabeth: So you’re applying EDM to fisheries in particular. You’ve got the shrimp, you’ve got the squid, when will you know if it’s working? How will you know if it’s working? [00:16:41] Stephen Munch: Well, there’s working and there’s working, right? I mean, so in terms of being able to make better predictions than we can with the models we’ve been using so far, that is working now. In terms of knowing whether our revised harvest policy is going to work better than our historical harvesting policy, that’s going to take some time. [00:17:06] Stephen Munch: You can really only get there by doing it. And so it’s kind of hard. It’s a hard sell, right? To move to a whole new branch of doing things in real life when, uh, you can’t really demonstrate that you’re sure that it’s going to work [00:17:20] Elizabeth: I’m gonna ask you both to speculate wildly. Assuming you waved a magic wand and some fishery management of your choice started using your system, what would the improvement be? [00:17:34] Stephen Munch: well, I have no idea about that, but, uh, I will uh, when we’ve simulated things, we typically do somewhere between 10 to 50 percent better harvest, depending on how the system really works. [00:17:49] Elizabeth: When you say better, you mean more accurate. [00:17:52] Stephen Munch: I mean, in terms of 10 to 50 percent more in terms of sustainable, we almost always do somewhere between, 20 to 50 percent better in terms of prediction accuracy. [00:18:06] Elizabeth: That sounds really impressive. [00:18:08] Tanya Rogers: So the idea is that, so the idea is that if populations are fluctuating a lot, we can predict those fluctuations and then harvest accordingly. So this way fishers aren’t over harvesting when abundances are low and won’t miss out on fish they could harvest sustainably when abundances are high. So for instance, I work a bit on salmon forecasting and there’s a lot of interest in making accurate predictions of salmon runs or salmon returns so they can determine how much they can, people can safely harvest versus allow to return to the spawning grounds to start the next generation. 
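The kind of comparison Steve is describing here is easy to mock up. The toy below is not the speakers' shrimp or squid model; it runs a Ricker map (a standard stock-recruitment model, chaotic at this growth rate) under a constant harvest fraction and under a crude state-dependent rule that eases off when the stock is low. All parameter values are invented, so the particular numbers mean nothing; it only shows the shape of the experiment behind figures like "10 to 50 percent better".

```python
# Toy comparison of constant vs. state-dependent harvest on a chaotic Ricker map.
import numpy as np

def simulate(policy, r=2.9, K=1.0, n0=0.5, years=200):
    """Run a harvested Ricker population and return the total catch."""
    n, caught = n0, 0.0
    for _ in range(years):
        harvest = policy(n) * n          # fraction of the current stock taken
        caught += harvest
        n = max(n - harvest, 1e-6)
        n = n * np.exp(r * (1 - n / K))  # Ricker stock-recruitment update
    return caught

constant_policy = lambda n: 0.30         # always take 30 percent

def state_dependent_policy(n):           # ease off when the stock is low
    return 0.05 if n < 0.3 else 0.45

print(f"constant policy, total catch:        {simulate(constant_policy):.1f}")
print(f"state-dependent policy, total catch: {simulate(state_dependent_policy):.1f}")
```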
[00:18:44] Tanya Rogers: For my work at least, I developed a new forecast model for the Sacramento River winter-run Chinook salmon, which is an endangered run. Managers here want to know how many of these fish are going to be returning in order to set harvest rates so that the endangered run isn't overly impacted by ocean fishing on non-endangered runs, since Chinook salmon are hard to tell apart when they're out in the ocean. [00:19:10] Tanya Rogers: And this is one where the time series are a little too short to do EDM with time lags. There's only about 20 years of data, but we've been able to use nonparametric regressions and other ways to try and get better forecasts. And that model is currently in use for management and it appears to be doing a much better job than the population model they'd been using previously. [00:19:31] Elizabeth: So you did make substantial improvements in harvest. [00:19:34] Tanya Rogers: Well, we've made improvements, at least in prediction. Salmon in California face a lot of challenges, not just fishing, and the fishery right now in California is closed due to low abundances of all stocks, so we'll have to wait and see. [00:19:48] Tanya Rogers: Recently the salmon prize forecasting competition started. It's something I participated in on the side for fun outside of work. And they've been looking for people to develop models and submit forecasts for different salmon runs. This year's was for sockeye in three different systems with the hope of finding better prediction models than the ones that are currently in use [00:20:12] Elizabeth: Going back to some of the earlier work we were discussing, Steve, you mentioned you were bringing over a lot of stuff from physics, but it needed to be adapted. [00:20:23] Elizabeth: One of the reasons I got interested in chaos in particular was it seemed like it should give you the ability to like do work in one field and port it to five different fields. I'm really curious, for every step of this, starting with how did you find the thing, the tools you ended up porting over? [00:20:43] Stephen Munch: Um. So the, the main tool is the empirical dynamic modeling stuff, which had its, uh, origins in the sort of physics literature in the, um, [00:20:59] Elizabeth: Oh, so EDM came over from physics? Do you know how it made that leap? [00:21:04] Stephen Munch: Yeah, so, uh, there are a couple of seminal papers, uh, in the, uh, late 80s, early 90s. So, Sugihara and May in 1990, uh, showed that you could do this nonlinear forecasting, uh, stuff in an ecological setting, and that, you know, that, that, um, you could make predictions of ecological dynamics without having to have a specific model formulation. [00:21:37] Stephen Munch: A little bit prior to that, Bill Schaefer and Mark Kot had a paper on, um, using sort of time delays to try and reconstruct a low dimensional projection of the dynamics. So their idea was very similar in spirit, using the sort of time lags to reconstruct things, but it didn't quite take off as a tool for making practical forecasts. [00:22:05] Stephen Munch: So that's, that's what Sugihara and May managed to do. Um, but the, uh, the idea of the time delays in lieu of a complete set of state variables comes from, initially, a paper by Packard et al., uh, and then, um, a rigorous proof of that idea. So that's in 1980, and then a rigorous proof of the idea in 1981 by Takens.
[00:22:35] Elizabeth: There were specific scientists who found it somehow [00:22:38] Elizabeth: I am very curious about the step before the papers get written. What drove people to find something either outside their field, or why was someone already working in an interdisciplinary way and porting over these tools? [00:22:54] Stephen Munch: So the really early, right, like in the 70s, Bob May, John Beddington, Bill Schaeffer, they were all working on chaos in ecological dynamics, like, from a theoretical point of view, and they were hoping, they're showing that, like, with really low dimensional models you can get nearly effectively random looking dynamics, and maybe that's why ecological dynamics looks as messy as it does, but there wasn't any easy way to connect that to ecological time series. [00:23:28] Stephen Munch: There were a couple of attempts to do that by fitting low dimensional models to some time series data. Those generally concluded that things were not chaotic. Bob May actually has a really super quote in one of those papers that says fitting what is likely to be the high dimensional dynamics of an ecological system to this low dimensional model does great violence to the reality of ecology. [00:23:53] Stephen Munch: That didn't work. Um, it was a reasonable thing to try when you don't have too much data, but that, that idea just doesn't really work. And, um, and then the time delay embedding stuff got invented. And those guys were busy thinking of, they were part of the chaos community. [00:24:08] Stephen Munch: It wasn't like, uh, you know, Bob May just sort of saw that and said, oh yeah, I can grab that and bring it over, uh, like, without any initial sort of prep. He was already sort of actively participating in sort of theoretical chaos stuff. [00:24:26] Elizabeth: When was he doing that? [00:24:28] Stephen Munch: So his early stuff on chaos and ecological dynamics happens in the early 1970s. [00:24:35] Stephen Munch: And so when Takens' delay embedding theorem happens, it does take a little while for people to pick it up and turn it into a practically useful tool. The ecologists and the physicists are so completely separate that it's a miracle that it makes it over [00:24:50] Elizabeth: There were people who were already straddling the borders? [00:24:53] Stephen Munch: Yeah, [00:24:54] Elizabeth: Yeah. That's hard to do in academia, isn't it? [00:24:57] Stephen Munch: Well, Bob May started in physics and came over to ecology from physics. So, um, and there've been a lot of people who've gone that way. I, he's arguably the most successful physicist turned ecologist by a lot, but, um, there are surprisingly few people who go the other way. [00:25:19] Elizabeth: Yeah, I do notice physics seems to send out a lot more immigrants. [00:25:26] S+T: I don't know. Maybe the physics job market is just really tight. [00:25:30] Elizabeth: I was just reading Doyne Farmer's book on chaos and finance, and what he says is, "well, there weren't any more physics jobs, but they would pay us so much money to do finance". [00:25:40] Stephen Munch: Yeah. [00:25:42] Stephen Munch: Those were good times. [00:25:44] Stephen Munch: One of the sort of really interesting things about sort of theoretical ecology and then applied theoretical ecology and then like real, like, um, boots on the ground ecology is the level of math involved is like an order of magnitude between each one.
So the theoreticians are real mathematicians [00:26:06] Stephen Munch: And then the folks who do sort of quantitative fisheries management are, you know, a jack of all trades. They know just enough math to do one thing, just enough math to do another thing, trying to put it all together with some statistics. And then there are the people who collect data and those people often know very little math. If there was a physics-like revolution in theoretical ecology, I'm not sure, as one of the sort of mid-level guys, I'd be aware of it, [00:26:34] Elizabeth: Interesting. [00:26:37] Elizabeth: In weather, which is so incredibly complicated, the big breakthrough was ensemble forecasting. That you make a bunch of different forecasts jiggling your assumptions a little bit and that's how you get 30 percent chance of rain, because 30 percent of nearby worlds produced rain. [00:26:55] Elizabeth: Has ensemble forecasting been tried in ecology or in wildlife management? [00:26:59] Stephen Munch: I've definitely run across papers where people have talked about ensemble forecasts for ecological dynamics or even super ensemble forecasts. But, I'm not aware that it's made an enormous difference in terms of the predictions. [00:27:14] Stephen Munch: I think maybe the biggest reason for that is that, uh, there aren't too many people that I'm aware of who argue that the Navier-Stokes equations, the things that govern the fluid dynamics that governs the weather, right, are wrong, right? We all kind of accept that the equations are the equations of fluid dynamics. [00:27:35] Stephen Munch: And so the real uncertainties are in how you handle the, the boundaries. How do you model the mountains? How do you model the clouds? Those are the parts where we're not certain. And so if we vary those and we average over some amount of uncertainty in those boundary conditions and the initial conditions, we can sort of take care of some of that and sort of push a little farther into the future in terms of how far we can make reasonable predictions. [00:28:01] Stephen Munch: In ecology, on the other hand, there, there aren't the equivalent of the Navier-Stokes equations. There isn't some like first principles model of how an ecosystem works that's sufficient to make the kinds of predictions you might want to [00:28:14] Elizabeth: That's why you end up with something like EDM where you don't need to know what you don't know. [00:28:19] Stephen Munch: There are two pillars to EDM. The first is what we talked about: that you can accommodate the fact that you have incomplete observations. [00:28:27] Stephen Munch: That is, you haven't seen all of the state variables of the system, so you use lags of the observables. That's, that's one pillar. The second pillar of EDM is that we're not gonna try and write down equations for how that collection of variables turns into the future states of those variables. We're instead going to try and infer that directly from what we've seen in the past. [00:28:48] Stephen Munch: And so it's sort of the combination of using lags as surrogate variables and using sort of nonparametric or super flexible data-driven approaches to modeling, to turning the past states of the system into the future. That's the second part that's really important. [00:29:05] Elizabeth: What got you interested in that in grad school? [00:29:08] Tanya Rogers: Uh, I guess it was meeting Steve. I came to do an internship here at NOAA when I was in graduate school for a few months.
I started working with Steve and, um, discovered that he invented and created new methods for analyzing data, which I did not realize was a thing, as opposed to just using existing methods, and I thought that was really cool and that he, he has a really cool way of just analyzing data. [00:29:37] Tanya Rogers: Thinking about ecosystems and how species interact from a mathematical perspective that, um, I think brings a lot of insight, and he made population dynamics interesting. I previously did a lot of community ecology work, collected a lot of data myself, was mostly counting things. I did experiments in the labs, and this was just kind of a different approach that I thought was valuable. [00:29:59] Tanya Rogers: And I, as part of, by working... That's why I think I got this job at NOAA, is that I can kind of merge the, like, mathematical approaches and field approaches and [00:30:10] Elizabeth: It's like the, the central tier that Steve was talking about: you have some people who are doing, my dad's an applied mathematician so he would call them recreational mathematicians, and you have the boots on the ground people, and then you've got the sort of interfacers. [00:30:27] Tanya Rogers: Yeah, that's us. So Steve is a very good interfacer. I definitely started as like a boots on the ground. I still do some of that data collection work myself. And I think that brings a valuable perspective in terms of understanding the complexity of ecosystems and where the data come from, and sources of error. [00:30:44] Tanya Rogers: And even just like the natural history of some of these systems and what would make sense in terms of modeling them. I try and bring that perspective to my job as, like, a fisheries ecologist and someone who helps with forecasting and management and stock assessments. And then in terms of research, I continue to collaborate with Steve on a bunch of different projects related to chaos and population dynamics and predictability and ecology. [00:31:08] Stephen Munch: And Tanya provides an irreplaceable anchor to reality. Like I will go off on some, like, cool thing in statistical mechanics and I'll be like, oh, what do you think about this? She's like, why, how will we use that, Steve? Like, what is that for? And, uh, that sort of, you know, sounding board for, like, what is the practical value of this thing? How are we going to do it? Is it going to work on the data that we have available? Is, uh, just, just incredible. Plus I think Tanya's selling herself a bit short. She's also incredibly talented as a scientist and is great at getting things done and a great writer [00:31:44] Elizabeth: You guys found each other, more or less at random. Like this wasn't a purposeful pairing? [00:31:50] Stephen Munch: So actually, it's a guy named Brian Wells, who Tanya was actually here to work with. He said, oh, you know, you might get something out of talking to Steve. And he, so he introduced us. And then I found out that Tanya had data that was actually really great for applying a method that I'd cooked up. [00:32:09] Stephen Munch: And, um, and so we, and after that, we really hit it off. [00:32:12] Tanya Rogers: Yes, that was my third dissertation chapter. And then Steve offered me a postdoc and I came out and worked with them a bit. And then I got the job that I currently have, uh, at NOAA in the same office working for a different group, but we continue working together. [00:32:29] Elizabeth: That's all I had. Thank you so much for your time. Have a great day, guys.
[00:32:32] Stephen Munch: thanks. You too.
E3KLvSKyhMRPkEqjx_Chaos_Theory_in_Ecology.txt
{ "file_size": 34864 }
2c9ac5a3-2839-4cbf-b92e-6957051120fa
Overview of New Developments Edit (4th of December, 2024): OpenAI has now signed up with Anduril, fulfilling my prediction that all three major AI developers will enter into agreements with the US military. See https://www.anduril.com/article/anduril-partners-with-openai-to-advance-u-s-artificial-intelligence-leadership-and-protect-u-s/ As of today, November 9th, 2024, three major AI developers have made clear their intent to begin or shortly begin offering tailored AI services to the United States Department of Defense and related personnel. Anthropic: https://arstechnica.com/ai/2024/11/safe-ai-champ-anthropic-teams-up-with-defense-giant-palantir-in-new-deal/ Meta: https://scale.com/blog/defense-llama https://www.theregister.com/2024/11/06/meta_weaponizing_llama_us/ OpenAI: https://fortune.com/2024/10/17/openai-is-quietly-pitching-its-products-to-the-u-s-military-and-national-security-establishment/ These overtures are not only directed by AI companies. There are also top down pressures to expand the military use of AI from the US government under Joe Biden. Ivanka Trump has posted in favour of Leopold Aschenbrenner's Situational Awareness document, suggesting that the incoming Trump administration will continue this trend. For obvious reasons, I believe the same is happening in China, and possibly (to a lesser extent) other nations with advanced AI labs (UK). I focus on the American case because most of the most advanced AI labs are in fact American companies. Analysis and Commentary In some sense this is not new information. In January 2024, the paper "Escalation Risks from Language Models in Military and Diplomatic Decision-Making" warned that "Governments are increasingly considering integrating autonomous AI agents in high-stakes military and foreign-policy decision-making". Also in January, OpenAI removed language prohibiting the use of its products in military or warfare-related applications. In September, after the release of o1, OpenAI also claimed that its tools have the capabilities to assist with CBRN (Chemical, Biological, Radiological, or Nuclear Weapons) development. The evaluation was carried out with the assistance of experts who were familiar with the procedures necessary for "biological threat creation", who rated answers based on both their factual correctness as well as "ease of execution in wet lab". From this I infer that at least some of these experts have hands-on experience with biological threats, and the most likely legal source of such experts is Department of Defense personnel. Given these facts, to demonstrate such capabilities and then deny the security community access to the model seems both unviable and unwise for OpenAI. Furthermore, it seems unlikely that the security community does not have access already to consumer facing models, given the weak controls placed on the dissemination of Llama model weights and the ease of creating OpenAI or Anthropic accounts. Therefore, I proceed from the assumption that the offerings in these deals are tailor made to the security community's needs. This claim is explicitly corroborated in the case of Defense Llama, which claims to be trained on "a vast dataset, including military doctrine, international humanitarian law, and relevant policies designed to align with the Department of Defense (DoD) guidelines for armed conflict as well as the DoD's Ethical Principles for Artificial Intelligence". The developers also claim it can answer operational questions such as "how an adversary would plan an attack against a U.S. 
military base", suggesting access to confidential information beyond basic doctrine. We also know that the PRC has developed similar tools based on open weights models, further reducing the likelihood of such offerings not being made to the US military. Likely Future Developments There have been calls for accelerated national security involvement in AI, most notably from writers such as Leopold Aschenbrenner (a former OpenAI employee) and Dario Amodei (CEO of Anthropic). Amodei in particular favours an "entente" strategy in which a coalition of democratic nations races ahead in terms of AI development to outpace the AI developments of autocracies and ensure a democratic future. These sentiments echo the race to create the atomic bomb and other similar technologies during World War II and the Cold War. I will now explore the impact of such sentiments, which appear to be accepted in security circles given the information above. The first matter we must consider is what national security involvement in AI will mean for the current AI development paradigm, which consists of both open weights, open source, and closed source developers organised as private companies (OpenAI, Anthropic), subsidiaries of existing companies (Google Deepmind, Meta FAIR), and academic organisations (EleutherAI). Based on ideas of an AI race and looming great power conflict, many have proposed a "Manhattan Project for AI". My interpretation of this plan is that all further AI development would be centrally directed by the US government, with the Department of Defense acting as the organiser and funder of such development. However, I believe that such a Manhattan Project-style scenario is unlikely. Not only is access to AI technology already widespread, many of the contributions are coming from open source or otherwise non-corporate or non-academic contributors. If acceleration is the goal, putting the rabbit back in the hat would be counterproductive and wasteful. Instead, I suggest that the likely model for AI development will be similar to the development of cryptographic technology in the 20th century. While it began as a military exercise, once civilian computing became widespread and efforts to constrain knowledge of cryptanalysis failed, a de facto dual-track system developed. Under this scheme, the NSA (representing the government and security community) would protect any techniques and advancements it developed to maintain a competitive edge in the geopolitical sphere. Any release of technology for open use would be carefully arranged to maintain this strategic advantage, or only be conducted after secrecy was lost. At the same time, developments from the open source, commercial, or academic community would be permitted to continue. Any successes from the public or commercial spheres would be incorporated, whether through hiring promising personnel, licensing technology, or simply implementing their own implementations now that feasibility was proven. A simple analogy is that of a one way mirror in an interrogation room: those in the room (i.e. the public, corporate researchers, academics) cannot look behind the mirror, but those behind the mirror (the national security community) can take advantage of any public developments and bring them "behind the mirror" if necessary. 
A one-way filter is established between the public sphere and the security sphere, an arrangement we see mirrored in the ways AI technology is now being licensed, via IL6-compliant AWS services (in the case of Anthropic) and secured private servers (in the case of Defense Llama). The public-facing companies are still active, but a veil of secrecy is established at the interface with the national security community. To be clear: there is no public indication known to me that proprietary model weights are being shared with the security community. On the other hand, such agreements would likely live behind the one-way mirror.

Having established that AI developers will likely continue to exist in their current form, we must next ask how the development of AI systems will be influenced by a growing rapprochement with the security community. It is well known that existing AI alignment efforts are post-hoc. That is to say, a base or "unaligned" system is created which has both desired and undesired (dual-use) capabilities, and is then "brought into compliance" via methods like RLHF and Constitutional AI. Therefore, the existence of the commercial-facing "aligned" models implies the necessary existence of "base" models, which have not gone through this post-training modification process. Furthermore, RLHF and Constitutional AI are both value-agnostic methods. They merely promote the likelihood of certain types of responses while suppressing others, with the desired type of response monitored either by an oracle emulating human feedback or by a fully self-supervised model. This means that it would be feasible to create "anti-aligned" models that feature more undesirable responses. Such models might, for example, specialise in developing novel cyberweapons or creating plausible propaganda. I should also expect specialised scaffolding to be developed to make use of and enhance harmful capabilities. Amongst these enhanced capabilities are those most associated with profoundly harmful outcomes, e.g. the ability to autonomously replicate and take over computer systems, the ability to develop novel bioweapons, and the ability to cripple infrastructure.

Why would these systems be created? While such capabilities would normally be regarded as harmful per (for example) OpenAI's risk evaluation metrics, in a military context they are useful, expected behaviours for an AI system to be regarded as "helpful". In particular, under the great power conflict/arms race mindset, the possibility of China or another enemy power developing these tools first would be unthinkable. On the other hand, the ownership of such a tool would be a powerful bargaining chip and demonstration of technological superiority. Therefore, I believe that once the cordon of secrecy is in place, there will be a desire and incentive for AI developers to produce such anti-aligned systems tailored for military use. To return to the cryptography metaphor, while easier methods to break cryptosystems or intrude into secured networks are regarded as criminal and undesirable outcomes in general society, for the NSA they are necessary operational tools.

Some of you will protest that the standards of the US security community will prevent them from creating such systems. This is a hypothesis that can never be tested: because of the one-way mirror system, we (actors in the public sphere) will never know if such models are developed without high-level whistleblowers or leaks.
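(An aside, to make the value-agnostic point above concrete: below is a minimal sketch of the standard Bradley-Terry pairwise loss used to train RLHF reward models. The tensors are toy stand-ins rather than a real model, and the names are mine; the point is simply that nothing in the machinery encodes which side is "good". Swap the labels and the identical code optimises for the opposite values.)

import torch
import torch.nn.functional as F

# Pairwise preference loss: push the reward of the completion that was
# *labeled* preferred above the reward of the other one. The loss itself
# is indifferent to what the labels mean.
def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    return -F.logsigmoid(r_chosen - r_rejected).mean()

r_a = torch.tensor([1.2, 0.3])   # rewards for completions a labeler called "good"
r_b = torch.tensor([0.1, -0.4])  # rewards for the paired "bad" completions

aligned = preference_loss(r_a, r_b)       # trains a reward model toward r_a
anti_aligned = preference_loss(r_b, r_a)  # same function, labels swapped
print(aligned.item(), anti_aligned.item())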
Furthermore, in a great power conflict context the argument for anti-alignment is symmetric: any great power with national security involvement in AI development will be aware of these incentives. Perhaps other powers will not be so conscientious, and perhaps American corporations will be happy to have two one-way mirrors installed: see https://en.wikipedia.org/wiki/Dragonfly_(search_engine).

What specifically has changed?

For many of you, these arguments will likely be familiar. However, the specific agreements between US AI developers and the Department of Defense, complete with the implementation of the one-way mirror, signal the proper entry of the US into the AI militarisation race. Even if the US military does not develop any anti-aligned capabilities beyond those of commercially available models, other great powers will take notice of this development and make their own estimations. Furthermore, the one-way mirror effect is localised: Chinese anti-alignment efforts cannot benefit from American public-sphere developments as easily as American anti-alignment efforts can. This is because of several factors, including access to personnel, language and cultural barriers, and protective measures like information security protocols and limits on foreign access. Thus far, American AI development efforts are world-leading (notice how the Chinese military uses Llama for its purposes rather than local LLM models!). This means that American anti-alignment efforts, if they ramp up properly, will be world-leading as well.

Implications for AI safety work

Based on the above, I make several inferences for AI safety work, the most important of which is this: under the present AI development paradigm, the capabilities of the best anti-aligned AI system will be lower-bounded by the capabilities of the best publicly available aligned AI system. Notice the asymmetry in knowledge we will possess about anti-aligned and aligned systems: any advances behind the anti-aligned systems need not be public, again because of the one-way mirror.

The second inference is this: any new public capabilities innovations will be symmetrically applied. In other words, any attempt to increase the capabilities of aligned models will be applied to anti-aligned models so long as alignment is a post-training, value-agnostic process. Any method discovered behind the one-way mirror, however, need not be shared with the public.

The third inference is this: most new public alignment work will also become anti-alignment work. For example, improvements to RLHF/post-training alignment or new methods of model control can be directly employed in service of anti-alignment. Similarly, developments that make AI systems more reliable and effective are dual-use by their nature. Mechanistic interpretability and other instrumental science work will remain what it has always been: effectively neutral. Better explanations of how models work can likely benefit both alignment and anti-alignment because of the aforementioned symmetric nature of current alignment methods.

The final inference is this: public AI safety advocates are now definitively not in control of AI development. While there have always been AI developers who resisted AI safety concerns (e.g.
Yann LeCun at FAIR), and developers like OpenAI have recently signalled moves away from an AI safety focus towards a commercial AI focus, for a long time it could plausibly be argued that most consequential AI development was happening somewhat in the open, with oversight and influence from figures like Yoshua Bengio or Geoffrey Hinton who were concerned about AI safety. It is now clear that AI safety advocates who do not have security clearance will not even have full knowledge of cutting-edge AI developments, much less any say about their continued development or deployment. The age of public AI safety advocates being invited to the table is over.

There are exceptions to these inferences. For example, if a private AI developer with no one-way mirror agreement works on its own to develop a private, aligned model with superior capabilities, this would be a triumph over anti-alignment. However, not only have all major AI developers rapidly acquiesced to the one-way mirror arrangement (with some notable exceptions), but any such development would likely inspire similar anti-alignment efforts, especially given the porous nature of the AI development and AI safety communities. It is also possible that advocates within the military will resist such developments due to a clear-eyed understanding of the relevant risks, and instead push for the development of positive alignment technologies with military backing. At the risk of repeating myself, this is a fight we will not know the outcome of until it is either inconsequential or too late.

What should we do?

In short: I don't know. Many people will argue that this is a necessary step, that US anti-alignment efforts will be needed to counter Chinese anti-alignment efforts, that giving autocracies leverage over democracy in the form of weaponised AI is a death sentence for freedom. Similar arguments have been deployed by the NSA in defense of its spying and information-gathering efforts. However, others have pointed out the delusion of trying to control an AI born of race dynamics and malicious intent, and point to historical overreaches and abuses of power by the American national security community. Indeed, the existence of high-profile leaks suggests that escape or exfiltration of dangerous AI systems from behind the one-way mirror is possible, making them a risk even if you have faith in the standards of the US military. Perhaps now is also an opportune time to mention the incoming administration's ties to neo-reactionary politics, as ably demonstrated in the Project 2025 transition plan.

One thing I think we should all not do is continue with business-as-usual under the blithe assumption that nothing has changed. A clear line has been crossed. Even if the national security community renounces all such agreements the day after this post goes live, steps have been taken to reinforce the existing race dynamic, and other nations will notice.

And as for the public? We are already staring at the metaphorical one-way mirror, trying to figure out what is staring back at us from behind it.
EDxoFR4XqiNWBJJCY_Some_Comments_on_Recent_AI_Safet.txt
{ "file_size": 16684 }
cd05a95a-0c79-46c3-b57f-3bfdeb39fdee
In my bioinformatics work I often stream files between Linux hosts and Amazon S3. This could look like:

$ scp host:/path/to/file /dev/stdout | \
    aws s3 cp - s3://bucket/path/to/file

This recently stopped working after upgrading:

ftruncate "/dev/stdout": Invalid argument
Couldn't write to "/dev/stdout": Illegal seek

I think I figured out why this is happening:

New versions of scp use the SFTP protocol instead of the SCP protocol. [1]
SFTP may not download sequentially.

With scp I can give the -O flag:

Use the legacy SCP protocol for file transfers instead of the SFTP protocol. Forcing the use of the SCP protocol may be necessary for servers that do not implement SFTP, for backwards-compatibility for particular filename wildcard patterns and for expanding paths with a '~' prefix for older SFTP servers.

This does work, but it doesn't seem ideal: probably servers will drop support for the SCP protocol at some point? I've filed a bug with OpenSSH.

[1] "man scp" gives me: "Since OpenSSH 8.8 (8.7 in Red Hat/Fedora builds), scp has used the SFTP protocol for transfers by default."

Comment via: facebook, mastodon
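P.S. Another option is to sidestep scp entirely: running cat over plain ssh produces strictly sequential output, since the remote process just writes bytes into the pipe in order. Here's a minimal Python sketch of that approach; the function name and error handling are mine, not from any library:

import shlex
import subprocess

def stream_remote_to_s3(host: str, remote_path: str, s3_uri: str) -> None:
    # `cat` over ssh writes strictly in order, so the S3 uploader reading
    # from the pipe never needs to seek or ftruncate anything.
    # shlex.quote protects against spaces, since ssh re-parses the remote command.
    ssh = subprocess.Popen(
        ["ssh", host, "cat", shlex.quote(remote_path)],
        stdout=subprocess.PIPE,
    )
    # `aws s3 cp - <uri>` reads the object body from stdin.
    subprocess.run(["aws", "s3", "cp", "-", s3_uri], stdin=ssh.stdout, check=True)
    ssh.stdout.close()
    if ssh.wait() != 0:
        raise RuntimeError(f"ssh exited with status {ssh.returncode}")

stream_remote_to_s3("host", "/path/to/file", "s3://bucket/path/to/file")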
Er7zNyYKQgXDSz8ws_Force_Sequential_Output_with_SCP.txt
{ "file_size": 1123 }
4c0288d1-a4d9-4b0c-809e-8324d2c6391e
Anthropic on Thursday said it is teaming up with data analytics firm Palantir and Amazon Web Services (AWS) to provide U.S. intelligence and defense agencies access to Anthropic’s Claude family of AI models. The news comes as a growing number of AI vendors look to ink deals with U.S. defense customers for strategic and fiscal reasons. Meta recently revealed that it is making its Llama models available to defense partners, while OpenAI is seeking to establish a closer relationship with the U.S. Defense Department. Anthropic’s head of sales, Kate Earle Jensen, said the company’s collaboration with Palantir and AWS will “operationalize the use of Claude” within Palantir’s platform by leveraging AWS hosting. Claude became available on Palantir’s platform earlier this month and can now be used in Palantir’s defense-accredited environment, Palantir Impact Level 6 (IL6). The Defense Department’s IL6 is reserved for systems containing data that’s deemed critical to national security and requiring “maximum protection” against unauthorized access and tampering. Information in IL6 systems can be up to “secret” level — one step below top secret. “We’re proud to be at the forefront of bringing responsible AI solutions to U.S. classified environments, enhancing analytical capabilities and operational efficiencies in vital government operations,” Jensen said. “Access to Claude within Palantir on AWS will equip U.S. defense and intelligence organizations with powerful AI tools that can rapidly process and analyze vast amounts of complex data. This will dramatically improve intelligence analysis and enable officials in their decision-making processes, streamline resource intensive tasks and boost operational efficiency across departments.” This summer, Anthropic brought select Claude models to AWS’ GovCloud, signaling its ambition to expand its public-sector client base. GovCloud is AWS’ service designed for U.S. government cloud workloads. Anthropic has positioned itself as a more safety-conscious vendor than OpenAI. But the company’s terms of service allow its products to be used for tasks like “legally authorized foreign intelligence analysis,” “identifying covert influence or sabotage campaigns,” and “providing warning in advance of potential military activities.” “[We will] tailor use restrictions to the mission and legal authorities of a government entity” based on factors such as “the extent of the agency’s willingness to engage in ongoing dialogue,” Anthropic says in its terms. The terms, it notes, do not apply to AI systems it considers to “substantially increase the risk of catastrophic misuse,” show “low-level autonomous capabilities,” or that can be used for disinformation campaigns, the design or deployment of weapons, censorship, domestic surveillance, and malicious cyber operations. Government agencies are certainly interested in AI. A March 2024 analysis by the Brookings Institute found a 1,200% jump in AI-related government contracts. Still, certain branches, like the U.S. military, have been slow to adopt the technology and remain skeptical of its ROI. Anthropic, which recently expanded to Europe, is said to be in talks to raise a new round of funding at a valuation of up to $40 billion. The company has raised about $7.6 billion to date, including forward commitments. Amazon is by far its largest investor.
HaeAdeuCZpB9hqADk_Anthropic_teams_up_with_Palantir.txt
{ "file_size": 3447 }
e6e81415-3e4e-4055-b691-01f75c9b5b72
Here are several examples; I found these captchas via the web rather than generating them anew, but none of them came attached to solutions so I'm not sure their presence in the training data would affect things in any case. (That said, it's possible that the lower resolution of the latter two degraded the adversarial perturbation; I would appreciate a source of higher-resolution captchas if anyone happens to know one.)

[Image 1 caption:] It clearly couldn't see all the objects, but the owl was in fact the correct answer
[Image 2 caption:] Entertaining failure at basic numerals while nonetheless answering correctly here
[Image 3 caption:] This one I was surprised by; I expected the image to be too low-resolution to be comprehensible, but 8/9 are correct here (the middle left image is a chair with an unusually low back)
vxFbcmPLyqxCnKPEp_GPT-4o_Can_In_Some_Cases_Solve_M.txt
{ "file_size": 771 }
99273ae8-ee71-42ff-b896-237a37dc32ff
A fun piece on ants and instrumental convergence:

Each month millions of Argentine ants die along battlefronts that extend for miles around San Diego, where clashes occur with three other colonies in wars that may have been going on since the species arrived in the state a century ago.

Some notes on ant warfare

Many are aware that a world war between Argentine ant supercolonies is currently underway, across multiple continents, and against multiple ant 'nations'. Ant conflict differs from species to species and from scenario to scenario. Some use sheer numbers in tight phalanx-like organisations to swamp the enemy, which may include ants many times their individual size. The species of the aggressor matters - when researchers placed a single dead slave-making ant into a slave-host colony for 5 mins, the response was extreme aggression against almost all neighbours for three days, most especially against slave-taking ants. Treating known neighbours or strangers with more hostility is a strategic choice that ant colonies must make, and these choices often determine how aggressive a colony will be. Ants that cannot afford huge losses with each battle might opt for ranged weapons such as chemical attacks or dropping stones onto the enemy's heads and nest entrances. Slave-making ants are usually outnumbered when they raid other nests. In order to successfully capture the brood to raise in their own nest they must pacify the defenders, which is often done using pheromones to sow confusion and discord amongst the ranks.

Certain mathematical models derived from human conflicts can explain the success of swamping the enemy with large numbers of disposable soldiers (a toy simulation appears at the end of these notes). That said, some species prefer stronger individuals, and may even use personal champion-style duels whilst fighting. Leaving the battlefield for the night and returning in the daylight has also been documented during large-scale conflicts. I may add some more notes as I read. Some of the papers go into marvellous detail on the tactics and changes in fighting styles over many days of warfare.

I hadn't heard about ant slavery; apparently it's a well-studied phenomenon. From the Wikipedia page:

The slave-making ants are specialized to parasitize a single species or a group of related species, and they are often close relatives to their hosts, which is typical for social parasites. The slave-makers may either be permanent social parasites (thus depending on enslaved ants throughout their whole lives) or facultative slave-makers. The behavior is unusual among ants but has evolved several times independently. Theft of brood for the purpose of employing the stolen individual's efforts in support of the thief is called dulosis (from Greek δοῦλος, "slave"), but the term "slave-making" is used in older literature and is still common.[1] There is some controversy associated with using the term "slave" and "slave-maker" to describe the natural history of this species. Additionally, there are species commonly raided that are referred to as "negro ant" specifically because they are common victims of ant raids, although this is not endorsed by nomenclature societies[2] and may cause offense. Some have argued that using such non-inclusive metaphors in science is harmful to scientists and interferes with the unbiased scientific process.

A colony may capture 14,000 pupae in a single season.[9] Most slave-raiders capture only the young, but Strongylognathus sp. also enslave adult workers.
Later, enslaved workers emerging in the parasite nest will be imprinted on and integrated into the mixed colony, where they will rear the parasite brood, feed and groom the parasite workers, defend the nest against aliens (e.g. other insects or spiders), and even participate in raids,[8] including those against their original colony. Only one slave species is usually found in a single Polyergus nest. This is in contrast to related facultative slave-makers of the genus Formica belonging to the F. sanguinea species group, found in the same habitat, whose nests commonly contain two or more species serving as slaves. Choice of a host species can occur both through the colony-founding behavior of queens and through the choice of target nests for slave raids. The parasitic Polyergus queens found colonies either by adoption, where a queen invades the nest of a slave species, killing the resident queen and appropriating the workers and brood present, or by "budding", in which a queen invades or is accepted into a host species nest accompanied by workers from her nest of origin.

The first hypothesis concerning the origins of slave-making was Darwin's (1859) suggestion in On the Origin of Species that slavery developed as a by-product of brood predation among related species. Other hypotheses focus on territorial interactions with opportunistic brood predation or brood transport among polydomous colonies (consisting of multiple nests) as the main pathway to slave-making.[20][21] Slave-making behavior is unusual among ants but has evolved independently more than ten times in total,[10] including in the subfamilies Myrmicinae and Formicinae.[22][23] Slave-makers and their hosts are often close phylogenetic relatives,[24] which is typical for social parasites and their respective hosts (formalized as Emery's rule). This has major evolutionary implications, since it may argue for sympatric speciation.[25] Raids can jeopardize host colony survival, therefore exerting a strong selection pressure upon the hosts. Reciprocally, there is some evidence that hosts also exert a selection pressure on their parasites in return, since resistance by host colonies might prevent enslavement. Coevolutionary processes between slave-making ant species and their hosts can then escalate to an evolutionary arms race.[8]
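Returning to the note above about mathematical models of swamping: the classic such model is Lanchester's square law, under which (when everyone can target everyone) sheer numbers beat per-soldier strength quadratically. A toy simulation in Python, with made-up parameters:

def battle(n1, a1, n2, a2, dt=0.001):
    # Lanchester square-law attrition: each side's losses per step are
    # proportional to the enemy's remaining numbers times the enemy's
    # per-fighter effectiveness.
    while n1 > 0 and n2 > 0:
        n1, n2 = n1 - a2 * n2 * dt, n2 - a1 * n1 * dt
    return "side 1" if n1 > 0 else "side 2"

# Side 2's fighters are three times as effective, but side 1 has twice
# the numbers -- and side 1 wins, because 2 squared beats 3 (the quantity
# a1*n1^2 - a2*n2^2 = 4e6 - 3e6 is conserved and positive).
print(battle(n1=2000, a1=1.0, n2=1000, a2=3.0))  # -> side 1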
sacDi3xuqv9F3fERa_Stone_Age_Herbalist's_notes_on_a.txt
{ "file_size": 5814 }
2f636478-ff93-41af-8bb4-635abc2181f1
Let's consider air purifier design a bit.

preface

This post is about a potential new category of air purifier, but existing air purifiers are already worth using. You should probably have at least one at your home. For larger rooms, the Levoit Vital 200S-P is a reasonable option; that's an Amazon affiliate link. See also: /r/AirPurifiers and review sites such as HouseFresh.

Someone once told me that they didn't want an air purifier because it would make their incense and scented candles ineffective. I'm afraid I can't help with that issue.

design goals

What do we want from an air purifier?

low noise

We want a quiet and efficient fan. For that:

Fan blades should be good airfoils, not whatever's cheapest to injection-mold.
Fan blades should be in a duct, with low clearance between the blades and duct walls.
A single big fan is better than many small fans.
We want low pressure through the filters, so a large filter area.
Bearings should be quiet. Steel ball bearings wear out and start making noise.

We also want a quiet motor:

The power supply shouldn't hum or buzz. (Use toroidal inductors, etc.)
It should use quiet gears or (preferably) a direct drive.

You might think that fans meeting those criteria should be easy to find. People buy a lot of fans, and surely some people want quiet ones that aren't too expensive, right? But no. There are lots of floor fans, but compared to computer fans they're almost all poorly-designed, even the expensive ones. Maybe people actually want some noise so it feels like the fan is working?

filters

We want standardized filters that are easy to replace. They should be either cylindrical or (preferably) rectangular. To get a long time between replacement and low pressure, we want a large filter area. If we're using carbon filters, granular carbon is more effective than carbon foam. Washable fabric pre-filters seem worthwhile.

overall configuration

Some goals for the overall configuration:

Particles tend to go down towards the floor, so we prefer an intake near the floor and exhaust that goes upwards.
We don't want to repeatedly filter the same air, so we want to avoid exhaust recirculating to the intake.

For these reasons, air purifiers are often circular or box-shaped, sit on the floor, and have upward exhaust. In some cases, people want an exhaust that points directly at them, from the side.

We want the air purifier to not take up much floor space. This is a factor that hasn't been considered very much, but it's important. Consider a "Corsi-Rosenthal Box" using 24" (actual width) furnace filters. It obviously takes up at least 4 square feet of floor space, which in America today is often worth over $600, significantly more than the purchase cost. If you include clearance around it for airflow, in an expensive area that could use over $4000 worth of space.

Obviously, we could minimize wasted space by putting a filter in the bottom of a shelf unit, or on top of a cabinet. But per the above, that would give air recirculation or wouldn't capture dust near the floor well.

my proposal

Considering the above goals, I had an idea: what if an air filter is integrated with a shelving unit, with a chimney through it for filtered air? That could significantly reduce the effective amount of space used by an air purifier. Let's consider what that could be like. I guess this can be an example of how much detail my conceptual designs usually have.

components

I think the design could be broken into 3 parts: a base with filters, shelving, and a tube with a fan.
the base

Make a triangular prism frame, with each rectangular face being 22" by 22". Glue 12 foam strips to it, to act as gaskets. Get 3x 20"x20" air filters. Put them against the square faces of the frame. Get a sheet of stretchy fabric, which wraps around the air filters and fastens somehow, maybe with snap fasteners or hooks. This fabric can hold the filters against the foam gaskets. Put a solid triangular top and bottom on the prism frame. In the top, cut a 12" circular hole. On 2 sides of the triangular top, add some handles to make the base easier to pull.

the shelving

Take a normal shelving unit, with:

the top perhaps 64" high
the bottom shelf 24" high
28" wide by 20" deep shelves

Cut a 12" hole in the shelves for the tube to go through, in the middle rear. The top shelf should have a plastic "protrusion holder" assembly that the 3 tube protrusions (see below) can rest on. This acts to hold the tube up during filter replacement, and keeps the tube from hitting the shelves and making noise. This shelving goes over the base.

the tube

Get a cylindrical tube, 12" outer diameter and perhaps 48" long. It might be metal or plastic, but I suppose the cheapest option would probably be cardboard. Put a single large electric fan at the bottom of the tube. It should have swept blades with good airfoils, inside a duct with minimal clearance to the blades. Put the tube through the holes in the shelves. The tube fits into the hole in the base, and sticks out a couple inches from the top shelf to prevent stuff from falling in.

A small hole in the tube has a USB port to connect a fan controller. Connect that to a controller on the middle of the bottom shelf. A cable inside the tube connects that port to the fan. The fan power cord comes out the side of the fan, and normally sits on top of the triangular top of the base. Since we're trying to improve air quality, the cord should use EVA, not PVC. Above the fan, glue a ridged foam sheet to the inside of part of the tube, to absorb some fan noise.

The top of the tube and the top shelf should have 3 protrusions that can sit on each other to temporarily hold the tube out of the base when the tube is lifted and rotated. One option would be to glue 3 steel plates with threaded holes to the inside of the tube, and use internal hex bolts as the protrusions. (Those protrusions need to be attached to the tube after putting the tube in the shelf, because the power cord is on the other end, and the fan might be wider than the tube. And you don't want to put the power cord through the bottom, because then you'd have to unplug the power cord to pull the base out to replace the filters.)

usage

filter replacement process

Lift the tube up, and rotate it so it sits higher on the top shelf protrusion holder.
Remove any items under the bottom shelf.
Pull the filter base out using the 2 handles.
Remove the fabric fasteners.
Clean the fabric prefilter.
Replace the 3 air filters.

assembly after shipping

Assemble the shelving frame, and put it in place.
Put the tube on the floor inside the shelving frame, and plug it in.
For each shelf: attach 4 brackets to the frame, then slide the shelf over the tube to rest on the brackets. (The brackets might have rods with an interrupted thread for attachment to the frame.)
At the top of the tube, attach the 3 protrusion bolts.
Lift the tube up (the protrusions fit through slots in the top shelf) and set it down on the protrusion holder.
Assemble the base frame.
Insert the 3 filters, and wrap them with the fabric prefilter.
Slide the base under the bottom shelf.
Rotate and lower the tube into the base, fitting the protrusion bolts into slots in the protrusion holder.

Is this worthwhile?

The advantage of a Corsi-Rosenthal Box is largely its option value. It substitutes labor and tape or hot glue for structures that have to be manufactured beforehand and kept in warehouses. This is important when demand suddenly increases due to, say, a pandemic or wildfire. A frame with foam gaskets makes filter replacement easier than gluing filters together, but we still have to ask whether that's worthwhile. With thicker filters, e.g. 20x20x4" ones, the above design should normally need filter replacement less than once a year.

There are also compact commercial air purifiers. Something with a tall cylindrical filter could be as small as just the "tube" - compared to that, we're trading movability and modularity for using standard rectangular filters that are cheaper, last longer, and can optionally have a granular carbon layer or just be extra-thick. But someone concerned about the value of space might not care much about filter costs.

Shelving with an integrated air filter might just be too specialized and inflexible a product for people, but it would be quieter, because the tube (and its foam lining) would absorb some sound and direct sound upwards.
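As a rough sanity check on the "large filter area means low pressure" goal from earlier, here's a back-of-the-envelope sketch in Python. The 200 CFM flow figure is my assumption, not a measured spec of this design:

filters = 3
filter_area_ft2 = filters * (20 / 12) ** 2   # three 20"x20" filters, ~8.3 ft^2
flow_cfm = 200                               # assumed fan flow, cubic feet per minute
face_velocity = flow_cfm / filter_area_ft2   # ft/min through the filter media
print(round(face_velocity))                  # ~24 ft/min: gentle flow, so low pressure drop and low noise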
pKkw5kLi6HBaojwCC_overengineered_air_filter_shelvi.txt
{ "file_size": 8416 }
c380165a-4a31-4c39-b8e8-96c297670358
My husband, Andrew Rettek, has a blog you should read. As he’s gotten into fitness, he’s started following exercise science, which is the (very new!) field of running small controlled experiments on diet and exercise on athletes who do exactly what you tell them to, under strict observation. This is in contrast to fields like nutrition science, which study the effect of the “intervention” of telling people (usually non-athletes) what diets and exercise programs to follow. Exercise science tends to have much smaller sample sizes, but it can come to more unambiguous conclusions because it’s testing the intervention itself, not people’s ability or willingness to follow it. Contra the “nobody knows anything about diet or exercise” conventional wisdom, we do know some things! It’s just…not very many things. And only about college athletes. One surprising thing Andrew noticed is that organ size, especially liver size, has a lot to do with overall metabolism.

Larger Athletes’ Higher RMR is Partly From Bigger Livers

Athletes have higher resting metabolic rates than non-athletes; their bodies use more energy, even when they’re not exercising. That means they can eat more without getting fat. Some part of this effect is due to higher muscle mass, which “costs” energy to maintain. But muscle isn’t the only tissue athletes grow; bigger athletes also have bigger livers.1 In fact, muscle is a lot less metabolically expensive than other organs. Muscle consumes only 13 kcal/kg/day, while liver, brain, heart, and kidney consume 200, 240, 440, and 400 kcal/kg/day respectively. Fully 60-70% of resting energy expenditure2 in adults comes from these four organs, even though they’re only 6% of body weight.

If you look at “small”, “medium”, and “large” athletes3 (male, mean weights 147, 170, and 200 lbs), obviously the larger athletes have more REE than the smaller ones. What’s surprising is that most of this is due to non-muscle fat-free mass differences — i.e. organs, mostly liver. The difference between “small” and “medium” athletes’ REE is 73% due to this “residual mass”. The difference between “medium” and “large” athletes’ REE is 56% due to “residual mass”. Same deal for female athletes4 — 76% of the REE difference between “medium” and “small”, and 36% of the REE difference between “large” and “small”, is due to increases in “residual mass”.

If you compare sumo wrestlers to untrained college students, they of course have more of everything — more muscle, more fat, and heavier organs, especially liver — as well as more REE. However, if you compare untrained obese to normal-weight subjects, obese people don’t have bigger livers or kidneys. Big organs are a thing in big athletes, not just big-in-general people.5 Similarly, in a comparison of college athletes to college non-athletes, the athletes had 25% higher body weight, 40% more muscle mass, and also about 30% more liver and kidney mass. Within both groups, bigger people had bigger livers, but the slope of the correlation was much bigger for athletes than non-athletes.6 Big athletes have big livers. Intriguingly, even as liver fat tends to accumulate with age, liver mass and liver blood flow both decrease with age.7

Causality? Looks Like Yes: More Protein = More Liver

The implications are intriguing. Does exercise, high protein intake, or something else big athletes do, cause liver growth? If you somehow managed to grow a bigger liver, would that cause a higher resting energy expenditure?
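For scale, here's roughly what those per-kg rates imply, as a quick sketch; the organ masses are typical adult values I'm assuming, not numbers from the studies above:

# Energy use = (kcal/kg/day rate quoted above) * (assumed organ mass, kg).
rates  = {"muscle": 13, "liver": 200, "brain": 240, "heart": 440, "kidneys": 400}
masses = {"muscle": 28.0, "liver": 1.8, "brain": 1.4, "heart": 0.33, "kidneys": 0.31}
for organ in rates:
    print(organ, round(rates[organ] * masses[organ]), "kcal/day")
# The four organs (~3.8 kg total) burn ~965 kcal/day, nearly triple the
# ~364 kcal/day of 28 kg of muscle.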
One longitudinal study of college football players, instructed to eat 500-1000 extra calories a day while engaging in weight training, interval training, and skills training, found that they gained an average of 21 lbs, which was 29% muscle, 46% fat, and 45% neither — including the liver, heart, and kidneys.8 So that looks like a causal effect: intentional weight gain combined with exercise results in liver growth. The authors speculate it’s due to the high protein intake of these athletes.

We also have a bit of animal evidence that dietary protein intake affects liver size. When mice are fed diets with different amounts of protein but the same total calorie amount, the high-protein-fed mice (46% kcal from protein) don’t have significantly higher body mass than the low-protein mice (7% kcal from protein) but do have significantly higher (15-18%) liver, kidney, and stomach mass.9 Similarly, lambs fed high-protein diets (125% of recommended daily value) have significantly higher kidney and liver weights than lambs fed normal-protein diets, but no significant differences in overall weight. (The high-protein lambs had 12% bigger livers.)10 And mice fed low (5% kcal from protein), medium (15% protein), and high (55%) protein diets had, accordingly, small, medium, and large livers, while there was no difference in total weight or muscle mass. The high-protein-diet mice had 15% larger livers than the medium-protein-diet mice.11 Rats fed a typical 20% (by mass) protein chow or a high-protein 40% chow gained similar amounts of body weight, but the high-protein rats had 11% bigger livers.12

This all points to “eating a higher % of your calories from protein leads to increasing liver mass”. And, since we consistently see that livers (and other organs) have similar energy consumption per kg regardless of how big they are, this would suggest that a high-protein diet, maybe in conjunction with exercise, would result in higher REE due to liver growth.

What Else Grows Livers?

Ok, but if you’re a gym rat, you are already exercising and eating a bunch of protein. Is there anything else that’s reasonably safe and increases liver size? Liver enlargement, or hepatomegaly, is usually a bad thing. You will get a bigger liver through inflammation (due to infection or toxin exposure), fat accumulation, or iron accumulation. But you don’t want this.13 For instance: inject rats with LPS, the toxin produced by E. coli infection, and their livers quickly become 20% bigger, which appears to be due to both larger liver cells and increased numbers of liver cells. But this is, of course, a very bad idea (it’s essentially inducing sepsis).14 Likewise, the inflammatory marker IL-6 will make a mouse’s liver double in size…and also cause muscle wasting everywhere else. Not something you want to try at home.15

In general, livers are excellent at regenerating after injury, but they stop growing once they hit a fixed percent of body weight. Negative growth-regulating signals kick in at a certain point and stop the liver from growing arbitrarily big. What happens if we artificially mess with those signals? Well, tentatively, that looks like a pretty bad idea.
Mice lacking the Yap and Taz genes that control liver size have larger livers…but they also have liver cancers, and worse regeneration from liver injury.16 Similarly, mutant mice lacking Hippo signaling have unusually large livers that don’t stop growing when they hit the usual “maximal size”…but they also get lots of liver tumors not seen in wild-type mice.17 Rats given a liver-growth-stimulating solution including insulin, glucagon, and thyroid hormone (T3) could more than double liver size without increasing body size…but 60% of the animals treated died within 8 days. Again, not a good idea to try.18

One avenue that looks a little less terrible is follistatin; a viral gene therapy in mice that induced follistatin overexpression in the liver resulted in 40% bigger livers, with no apparent signs of impaired liver function. The growth appeared to be due to increased cell division. However, the animals were only observed for 12 days, so we don’t know about long-term risks.19

In short, I’m not, so far, seeing examples even in animal studies where livers can be lastingly enlarged beyond the usual maximal size without producing cancer, cachexia, or other quite serious problems. On the other hand, maybe someone will find a solution (or maybe there’s one buried deeper in the Google Scholar results than I care to look today.) Meanwhile, I’m intrigued by the prospect that eating a high-protein diet could grow the liver. Compared to all this other stuff, protein consumption is very safe. What we don’t have, but would be interesting, is a human interventional study that varies protein consumption and looks at its impact on liver size and RMR.

1 Oshima, Satomi, et al. "Relative contribution of organs other than brain to resting energy expenditure is consistent among male power athletes." Journal of nutritional science and vitaminology 59.3 (2013): 224-231.
2 amusingly abbreviated REE
3 Oshima, Satomi, et al. "Fat-free mass can be utilized to assess resting energy expenditure for male athletes of different body size." Journal of nutritional science and vitaminology 57.6 (2011): 394-400.
4 Taguchi, Motoko, et al. "Resting energy expenditure can be assessed by fat-free mass in female athletes regardless of body size." Journal of nutritional science and vitaminology 57.1 (2011): 22-29.
5 Midorikawa, Taishi, et al. "High REE in Sumo wrestlers attributed to large organ-tissue mass." Medicine and science in sports and exercise 39.4 (2007): 688-693.
6 Midorikawa, T., et al. "A comparison of organ-tissue level body composition between college-age male athletes and nonathletes." International journal of sports medicine (2006): 100-105.
7 Palmer, Allyson K., and Michael D. Jensen. "Metabolic changes in aging humans: current evidence and therapeutic strategies." The Journal of clinical investigation 132.16 (2022).
8 Miyauchi, Sakiho, et al. "Organ size increases with weight gain in power-trained athletes." International journal of sport nutrition and exercise metabolism 23.6 (2013): 617-623.
9 Hammond, Kimberly A., and Donald N. Janes. "The effects of increased protein intake on kidney size and function." Journal of Experimental Biology 201.13 (1998): 2081-2090.
10 Fluharty, F. L., and K. E. McClure. "Effects of dietary energy intake and protein concentration on performance and visceral organ mass in lambs." Journal of Animal Science 75.3 (1997): 604-610.
11 Chalvon-Demersay, Tristan, et al.
"Role of liver AMPK and GCN2 kinases in the control of postprandial protein metabolism in response to mid-term high or low protein intake in mice." European Journal of Nutrition 62.1 (2023): 407-417. 12 Hum, Susan, Kristine G. Koski, and L. John Hoffer. "Varied protein intake alters glutathione metabolism in rats." The Journal of nutrition 122.10 (1992): 2010-2018. 13 for instance, alcoholics have larger livers, but, of course, they are also at risk for liver disease. 14 QIAN, Dalong, and John T. BROSNAN. "Administration of Escherichia coli endotoxin to rat increases liver mass and hepatocyte volume in vivo." Biochemical Journal 313.2 (1996): 479-486. 15 Zimmers, Teresa A., et al. "Massive liver growth in mice induced by systemic interleukin 6 administration." Hepatology 38.2 (2003): 326-334. 16 Lu, Li, Milton J. Finegold, and Randy L. Johnson. "Hippo pathway coactivators Yap and Taz are required to coordinate mammalian liver regeneration." Experimental & molecular medicine 50.1 (2018): e423-e423. 17 Takabe, Kazuaki, et al. "Adenovirus-mediated overexpression of follistatin enlarges intact liver of adult rats." Hepatology 38.5 (2003): 1107-1115. 18 Parra, Osório Miguel, et al. "Enhancement of liver size by stimulation of intact rat liver with exogenous hepatotrophic factors." Sao Paulo Medical Journal 113 (1995): 941-947. 19 Takabe, Kazuaki, et al. "Adenovirus-mediated overexpression of follistatin enlarges intact liver of adult rats." Hepatology 38.5 (2003): 1107-1115.
MoF426vqQFmuwwnFf_Bigger_Livers?.txt
{ "file_size": 11734 }
08c94653-c12d-4c6b-b3af-2e7355ac39f5
Epistemic status: splitting hairs. Originally published as a shortform; thanks @Arjun Panickssery for telling me to publish this as a full post.

There’s been a lot of recent work on memory. This is great, but popular communication of that progress consistently mixes up active recall and spaced repetition. That consistently bugged me — hence this piece. If you already have a good understanding of active recall and spaced repetition, skim sections I and II, then skip to section III.

Note: this piece doesn’t meticulously cite sources, and will probably be slightly out of date in a few years. I link to some great posts that have far more technical substance at the end, if you’re interested in learning more & actually reading the literature.

I. Active Recall

When you want to learn some new topic, or review something you’ve previously learned, you have different strategies at your disposal. Some examples:

Watch a YouTube video on the topic.
Do practice problems.
Review notes you’d previously taken.
Try to explain the topic to a friend.
etc

Some of these boil down to “stuff the information into your head” (YouTube video, reviewing notes) and others boil down to “do stuff that requires you to use/remember the information” (doing practice problems, explaining to a friend). Broadly speaking, the second category — doing stuff that requires you to actively recall the information — is way, way more effective. That’s called “active recall.”

II. (Efficiently) Spaced Repetition

After you learn something, you’re likely to forget it pretty quickly. Fortunately, reviewing the thing you learned pushes you back up to 100% retention, and this happens each time you “repeat” a review. That’s a lot better! …but that’s also a lot of work. You have to review the thing you learned in intervals, which takes time/effort. So, how can you do the fewest repetitions while keeping your retention as high as possible? In other words — what should be the size of the intervals? Should you space them out every day? Every week? Should you change the size of the spaces between repetitions? How?

As it turns out, efficiently spacing out repetitions of reviews is a pretty well-studied problem. The answer is “riiiight before you’re about to forget it.” Generally speaking, you should do a review right before it crosses some threshold for retention. What that threshold actually is depends on some fiddly details, but the central idea remains the same: repeating a review riiight before you hit that threshold is the most efficient spacing possible. This is called (efficiently) spaced repetition. Systems that use spaced repetitions — software, methods, etc — are called “spaced repetition systems” or “SRS.”

III. The difference

Active recall and spaced repetition are independent strategies. One of them (active recall) is a method for reviewing material; the other (efficiently spaced repetition) is a method for how to best time reviews. You can use one, the other, or both. Examples of their independence:

You could listen to a lecture on a topic once now, and again a year from now (not active recall, very inefficiently spaced repetition)
You could watch YouTube videos on a topic in efficiently spaced intervals (not active recall, yes spaced repetition)
You could quiz yourself with flashcards once, then never again (yes active recall, no spaced repetition)
You could do flashcards on something in efficiently spaced intervals (both spaced repetition and active recall).

IV. Implications

Why does this matter?
Mostly, it doesn’t, and I’m just splitting hairs. But occasionally, it does matter — for instance, it’s sometimes prohibitively difficult to use both spaced repetition and active recall, but still quite possible to use just one of the two. In these cases, folks sometimes throw up their hands. But the right response is to use the one method that does work nicely!

For example, you can do a bit of efficiently spaced repetition when learning people’s names, by saying their name aloud:

immediately after learning it (“hi, my name’s Alice” “nice to meet you, Alice!”)
partway through the conversation (“but i’m still not sure of the proposal. what do you think, Alice?”)
at the end of the conversation (“thanks for chatting, Alice!”)
that night (“who did I meet today? oh yeah, Alice!”)

…but it’s a lot more difficult to use active recall to remember people’s names. (The closest I’ve gotten is to try to first bring into my mind’s eye what their face looks like, then to try to remember their name.)

Another example in the opposite direction: learning your way around a city in a car. It’s really easy to do active recall: have Google Maps opened on your phone and ask yourself what the next direction is each time before you look down; guess what the next street is going to be before you get there; etc. But it’s much more difficult to efficiently space your reviews out, since review timing ends up mostly in the hands of your travel schedule. (For more on the topic of deliberately using memory systems to quickly learn the geography of a new place, see this post.)

As promised at the top, if you’re interested in learning more & actually reading the literature, I’d start with “Spaced Repetition for Efficient Learning” (gwern) and “Augmenting Long-term Memory” (Michael Nielsen), then read through the works they cite.
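To make section II’s “review riiight before the threshold” idea concrete, here’s a toy sketch. It assumes a simple exponential forgetting curve and a fixed stability multiplier per review; real schedulers (e.g. Anki’s) are fancier, and every parameter here is made up:

import math

def review_schedule(threshold=0.9, stability=1.0, growth=2.0, reviews=6):
    # Retention decays as exp(-t / stability); schedule each review for
    # exactly when retention would hit the threshold, and let each review
    # make the memory more stable.
    t, times = 0.0, []
    for _ in range(reviews):
        t += -stability * math.log(threshold)  # solve exp(-dt/s) = threshold
        times.append(round(t, 2))
        stability *= growth
    return times

print(review_schedule())  # [0.11, 0.32, 0.74, 1.58, 3.27, 6.64] -- gaps keep doubling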
rybwNZGGtGXA5uxsS_Active_Recall_and_Spaced_Repetit.txt
{ "file_size": 5432 }
223315ab-eda7-43ae-bb81-b75c973982f2
Rational Animations' new video is an animation of The King and the Golem, by @Richard_Ngo, with minimal changes to the original text (we removed some dialogue tags). I hope you'll enjoy it!
n9ANKEJnW9KAy5QSh_The_King_and_the_Golem_-_The_Ani.txt
{ "file_size": 189 }
2a899fca-81d7-42b0-9953-99f783fad103
I thought of this a couple years ago and figured it was so obvious that it wasn't worth posting about, but people are still discussing trauma endlessly, and I have not seen an explanation written anywhere, so here's this. "Trauma" is a bad experience deemed anomalous. It means "the world is not usually like that". We do not call any behavior or emotional pattern "trauma" if it is obviously adaptive. If you are in a war and you lie down and panic when you hear loud bangs, nobody will call you traumatized. When the war is over and you panic at fireworks, people will say you are traumatized. If then a bomb lands nearby and you survive because you took cover, they'll say you were smart and acted fast. If 10% of your country got murdered a generation or two ago for not going with the political majority, then you are completely sane for shutting off the thinky brain in politics. (The USSR, China, Korea, Vietnam, Nigeria, Sudan, Afghanistan, Cambodia, Ethiopia, Lebanon...) If that won't happen where you live now, it's "intergenerational trauma". If you got raped (and half your friends did too) and you are anxious and distrustful, then maybe you are just correctly calibrated about your own social sphere. If it was in a different country 10 years ago, then it's trauma. If you got ostracized and called creepy in school, you might have a good idea of what actually happens when you ask girls out in public. If your classmates were jerks and you don't have acne anymore, it's trauma. If you got emotionally beat up by all your exes, then being less open might be a good idea. If they were all cocaine addicts from the same town, it's trauma. If driving a car makes you want to throw up since your last accident, you might be a bad driver. If a helicopter landed on you, it's trauma. (I have heard it said that if you're worried about your driving then you're probably a safe driver. The last person I met who was seriously worried about their driving totaled her car a week later. Everybody, including me, was telling her she was fine and shouldn't worry.) It often takes years to get over trauma because that's how long it takes to get enough evidence. Even a terrible driver will only crash every few years. And you can't trust your friends (or therapist) to make a very accurate assessment of the risk, because nobody has good data. TLDR: Trauma is a subjective term. If you think a bad event won't happen again soon, you call it trauma. Otherwise, you don't call it anything. It's a matter of judgement.[1] ^ Two things make the judgement difficult. (1) It's hard to tell how anomalous your own experiences are. (2) Traumatizing things are usually shameful & private — you can't easily go ask 20 people how their war went. I have no answer for this.
Lk7p4yCxrroQGdnHy_Boring_&_straightforward_trauma_.txt
{ "file_size": 2765 }
6b1bdb02-639e-4524-a029-25d7e3b94fc7
A social opportunity never to be forgotten

Urbit OS is a completely new, open-source, carefully architected software stack: a VM, programming language, and kernel designed to run software for an individual. It is computing with a human face, a world designed from the ground up to allow for the natural formation of communities that might as well be curated. Perhaps you have heard of it?

Urbit OS is a program that runs on almost any cloud server, most laptops and many phones: anything with Unix and an internet connection. The main thing to understand about our ‘overlay OS’, as we call it, is that the foundation is a single, simple function. This function is the Urbit OS virtual machine. We call it ‘Nock’. The entire Urbit OS system compiles down to Nock, and Nock is just 33 lines of code. You can control and understand your entire stack!

ACXers, rationalists in general, and MIT students/alums/faculty interested in the frontier of computing-as-civilization, web3, full stack development, and functional programming are welcome to attend this open-ended and casual meeting. Conversation will cover programming, happenings in the Urbit community, and wider cultural interests. If necessary, there will be on-boarding assistance to help newcomers join the Urbit network, so feel free to bring a device and/or a friend! The host will have packets that explain the Urbit OS and network in detail. Plus, there will be tacos and chocolate for sale next to our meeting.

Learn more about Urbit: https://urbit.org
Urbit blog: https://urbit.org/blog
Obtain an Urbit ID: https://urbit.org/get-started
Github: https://github.com/urbit
KiDKrD3sNnAgmyJxX_Urbit_New_England_Meetup.txt
{ "file_size": 1646 }
e67f8681-f8d8-4084-8d62-9220f81a2285
Rigging an election can be hard. But sometimes it can be easy. If a committee has an agenda of proposals to choose from, where each proposal is compared pairwise using majority rule against another proposal, until a single proposal is victorious, then you can make any arbitrary proposal win. The McKelvey–Schofield chaos theorem tells us that just by manipulating the agenda – adding more proposals and deciding which order to do the pairwise elections in – we can rig the vote. But what exactly does that mean, and can it be done in practice?

Hypothetical

Imagine you were the leader of a committee, deciding which budget proposal to use for an upcoming year. Let's say you have an agenda of budget proposals, submitted by members of the committee. A natural way to choose between them is to compare budget proposals in a sequence, head-to-head, choosing the winner using majority rule until there's a single budget proposal left.[1] That's kind of what parliaments do when voting on laws, so it should be fine to use in your committee. Let's also say that members can submit multiple budget proposals and that you, as the committee leader, can manipulate the agenda, i.e. you submit budget proposals last and select the sequence of head-to-head votes to hold. You can't remove any proposals though, so if someone submits a really good proposal, then it has to be included in at least one of the head-to-head votes. One might then think that if the committee submits a really good proposal, then that has a high chance of winning. But, if the committee votes in a predictable manner, you will be able to manipulate the agenda such that you can choose any budget as the winner. To understand why, we need to mathematically model how the committee members vote.

Mathematical

Let us assume that all proposals can be defined as a point in n-dimensional Euclidean space, E^n, where n is larger than 1. How a committee votes depends on their utility functions. Let's say each member i in the committee has a utility function U_i : E^n → R which returns how much that member likes a policy. The simplest case is in the Euclidean plane (n = 2), when each committee member values proposals according to their Euclidean distance from some ideal proposal (the closer, the better). Even this simple case has problems. One can, for example, encounter Condorcet cycles, where it is unclear which proposal the committee prefers.

Figure 1. An example of a Condorcet cycle. 3 members have ideal proposals (1,2,3) and are considering 3 proposals (A,B,C). Arrows show which would win in a majority vote. (source)

But one might hope that even if Condorcet cycles are possible, they are rare or don't affect votes much. Maybe there's a set of proposals which the committee will clearly support, which we cannot manipulate the agenda to deviate from. Unfortunately, Richard McKelvey showed in 1976 that Condorcet cycles are abundant and that it's pretty much always possible to manipulate the agenda. I won't go through the proof, but one can get an intuition for why it would be true.[2] If we consider some specific policy, every committee member will have a circle of proposals they prefer more. Any proposal which is in the intersection of a majority of those circles will beat the original proposal.

Figure 2. Three committee members evaluating a proposal. The white dot is a policy and the lower right box highlights the section of policies which would win over it in a vote. (source)

Every proposal will then have many different proposals it would lose to.
By having a carefully chosen series of proposals, each of which wins against the previous one, we can choose a victorious proposal which is arbitrarily far away from the original proposal. One might say that real people don't have these kinds of circular utility functions, but Norman Schofield proved similar – but more technical – results in 1978 which apply to a wide range of differentiable utility functions.

Critical

So when can the agenda not be manipulated? Well, if the proposals exist on a one-dimensional political spectrum, then the theorems don't apply.[3] If all utilities have a single peak, then we even have the median voter theorem, which says that the winning proposal will be the proposal preferred by the median voter. There are also a bunch of technical situations where the theorems don't apply, when there's some kind of equilibrium forcing a single policy to be the winning proposal. But the reason the theorem is named the McKelvey–Schofield chaos theorem is that small changes to utility functions with an equilibrium make agenda manipulation work again.

Analytical

So what should the takeaway be? Are these mathematical models realistic? Can these results actually be used to rig an election? Well, it probably wouldn't be as easy in the real world as in the mathematical one, as not every committee member's utility function is simple and predictable. People would also probably notice if someone tried to manipulate the agenda. It's more likely that these problems would appear by mistake than from someone deliberately trying to rig anything. I think this just highlights how complicated it can be to aggregate a group's preferences. It indicates that Condorcet cycles are common in majority rule, and that one shouldn't be that surprised if they appear. Alternatively, if Condorcet cycles don't appear, then people have more complicated behavior or utility functions than in the mathematical models. These ideas may be more applicable if you're trying to aggregate preferences from machines.

^ The easiest way to choose a proposal would be to use first-past-the-post, but it would be bad to use in a situation where there can be several very similar voting proposals, because of the spoiler effect.
^ For another visual example, see The Mathematical Danger of Democratic Voting.
^ Such as the left-right spectrum.
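As a concrete illustration of Figure 1, here is a small simulation of a Condorcet cycle under "closer is better" Euclidean preferences. The voter and proposal coordinates are my own construction, chosen to produce a cycle:

import numpy as np

voters = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.87]])   # ideal points
proposals = {"A": (0.12, 0.10), "B": (0.90, 0.05), "C": (0.60, 0.75)}

def beats(p, q):
    # Proposal p wins if a strict majority of voters is closer to p than to q.
    dp = np.linalg.norm(voters - np.asarray(p), axis=1)
    dq = np.linalg.norm(voters - np.asarray(q), axis=1)
    return (dp < dq).sum() > len(voters) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(x, "beats", y, ":", beats(proposals[x], proposals[y]))
# All three print True: A > B > C > A is a cycle, so an agenda-setter can
# reach any of the three outcomes just by ordering the pairwise votes.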
RdrEBdfoo2qsGCYkX_Agenda_Manipulation.txt
{ "file_size": 5877 }
ceac9cd6-d7e0-4273-8713-686fe66283e4
I wrote a whole book! What's next? I'm currently doing an edit pass on the entire book. I need to rewrite some of the early sections, fix some consistency issues, and generally look with fresh eyes on words I wrote months or years ago. Many of you provided helpful comments, and I'm using those to make the second draft better. When the second draft is done, I'll look to hire one or two editors/pre-readers, ideally from the LessWrong community, who can go through it and point out all the obvious mistakes that are still in the book, plus help me make the confusing parts clearer. Once that's done, I'll be ready to publish. That might mean finding a traditional publisher who will take a chance on a monograph from an unknown author, but more likely I'll self publish, which will give me greater flexibility to make the book available in many forms, including a free-to-read version on the web. If you have experience here, I'd love to talk to you! And no matter how I publish, I need to build an audience to help people find and read the book. So I'm launching my Substack blog today, Uncertain Updates. My plan is to treat it like a newsletter, publishing about once a month, with updates about the book and my other writing projects. Finally, thanks to everyone who supported me along the way to write this first draft. I appreciate all of the encouragement, critical comments, and even just letting me be when I was heads down working on a difficult section. Update 2025-12-19: The book has a website.
ZFALpHojE4Wbe3wbc_Fundamental_Uncertainty__Epilogu.txt
{ "file_size": 1508 }
a7380d92-696f-4f66-adcd-ab65698d2096
Apparently, the following is an argument made by Sam Harris on twitter, in a series of tweets. Unfortunately, the original tweets have been deleted, so I relied on a secondary source. Let’s assume that there are no ought’s or should’s in this universe. There is only what *is*—the totality of actual (and possible) facts. Among the myriad things that exist are conscious minds, susceptible to a vast range of actual (and possible) experiences. Unfortunately, many experiences suck. And they don’t just suck as a matter of cultural convention or personal bias—they really and truly suck. (If you doubt this, place your hand on a hot stove and report back.) Conscious minds are natural phenomena. Consequently, if we were to learn everything there is to know about physics, chemistry, biology, psychology, economics, etc., we would know everything there is to know about making our corner of the universe suck less. If we *should* do anything in this life, we should avoid what really and truly sucks. (If you consider this question-begging, consult your stove, as above.) Of course, we can be confused or mistaken about experience. Something can suck for a while, only to reveal new experiences which don’t suck at all. On these occasions we say, “At first that sucked, but it was worth it!” We can also be selfish and shortsighted. Many solutions to our problems are zero-sum (my gain will be your loss). But *better* solutions aren’t. (By what measure of “better”? Fewer things suck.) So what is morality? What *ought* sentient beings like ourselves do? Understand how the world works (facts), so that we can avoid what sucks (values). Before going on, let’s pause to consider that Sam Harris is a famous public intellectual, with a BA in philosophy from Stanford and a PhD in neuroscience from UCLA. Now, let’s consider how flawed his argument is. The argument contains the following errors: It begs the question. It presupposes objective good and bad. It conflates a subjective value judgment (“it sucks”) with objective value. It conflates pain with subjective value. Essentially, the argument presupposes hedonism and altruism, and then pretends to derive a combination of those two assumptions (objective morality) from pure reason plus experience. See Hedonic Utilitarianism. Let’s go through the argument, point by point. (see the rest of the post in the link)
LBuZdjqqoc9qK43zb_Sam_Harris’s_Argument_For_Object.txt
{ "file_size": 2393 }
94bb150b-ef66-4e2a-bc76-5c64b77a5fc6
Not sure whether this belongs here or not, but there are plenty of fiction posts here. This is sort of halfway between a story and a worldbuilding document, based on many ideas I've learned from here and from adjacent spaces. Hopefully it will be interesting or useful to somebody. ============================================================== The following is a series of excerpts from the textbooks of various required courses for baseline citizens wishing to eventually Ascend. They are mainly taken from introduction sections, which customarily summarize at a high level the material which the rest of the course goes more in-depth on. They are offered here as a quick summary of our beliefs and culture, to anyone from elsewhere in the vast Multiverse, even and especially in places outside of the Overseer’s control, who wishes to understand. History of Computation What is computation? It is a question which is at once easy and difficult to answer. Easy, because there are plenty of equivalent mathematical definitions which are easy to understand at a basic level, which all boil down informally to the same thing. Alan Turing published the first definition which would today be considered a Model of Computation back in 1936 PHW with his Turing Machine, and in the centuries (and untold googolplexes of subjective years) since then, all practically usable Models of Computation have been equivalent to his or weaker. Computation is just a kind of process which can find the output data of some kind of function or program, given the input data. All you need to be able to compute any function, is somewhere to store a list of instructions, known as a “program”, some instructions which can store and retrieve data from somewhere, known as “memory”, and some instructions which can check spots in memory and choose which instruction to execute next based on this, known as “control flow”. A computer language which has all of these elements, if unrestricted (i.e. infinite memory, unrestricted control flow) is said to be “Turing-Complete” because it is as powerful as a Turing Machine, and therefore is capable of computing all functions that any known Model of Computation can. It is, however, quite difficult in another sense to pin down what exactly we think of as “computation”, particularly when referring to it in an informal, rather than mathematical sense. For example, the mathematical definition of computation makes no reference to what sorts of computations are useful or interesting: a randomly-generated snippet of code that does nothing useful is a computation just the same as the most ingenious algorithm ever devised. Furthermore, before the Dawn of the Primitive Recursive Era, it was certainly not so obvious which phenomena could or should be thought of as computation. Today, it is a common refrain that “everything is computation”, that computation describes our entire world, and while current advancements have certainly demonstrated the usefulness of such a view, it was not always so widely believed. In pre-Dawn societies, whether before or after the Hot War, if computation was understood as a concept at all, it was generally viewed as something very rigid, and limited in scope. Most electronic computer programs in those days were simple things, always designed directly by the human mind, vulnerable to coding mistakes and oversights, extremely limited in the scope of their abilities. 
When they were unpredictable, it was only in the way that a random number generator is unpredictable, rather than any seeming “creative spark”. Other phenomena which were outside of those electronic computer programs, in the “real world” were usually not considered to be computation. In particularly stark contrast, biological systems were not directly intelligently designed, were fault-tolerant and adaptive, and these things were true even more so of the human mind itself. It is no wonder that such a sharp distinction would be drawn between the physical or “real” world and the computational one. With one world being so much vaster, more complex, and more full of possibility than the other, it made perfect sense to conclude that they were fundamentally separate. Even those who believed the “real world” or the human mind to be a form of computation yielded to practicality: they would always admit that if there was not a fundamental difference, there was at least a practical one. The computation which went on outside the controlled environment of an electronic computer was too complex to manipulate, or speak about precisely, in the same way. This began to change with the advent of the first programs which could be considered AI, during the mid-2020s to 2030s PHW, starting with the large language models and eventually reaching more general, multimodal OOD(Out of Distribution) intelligence in the final years PHW, along with the large-scale social dynamics simulations which began to be run in the zettascale era. These programs and their behavior could not be fully understood by the humans who had designed them, even after being run. With their extreme complexity, they began to mimic to some degree the properties of the “real world” that had seemed so separate and unattainable. Arguably, the programmers had not actually designed most of the important parts of these systems, leaving that design instead to algorithms whose emergent effects were not fully understood. Of course, after the Hot War, the societies that eventually reemerged would be far more careful about large-scale computation. Historians in the following centuries had studied the problems in society that eventually led to the outbreak of catastrophic nuclear war, and by the time the necessary infrastructure for large-scale computation was again possible around Year 150, it was widely accepted that the main contributing factor to the war was the destabilizing influence of large-scale AI on society. AI systems were used to exploit the weaknesses of the human mind, to spread disinformation and counterintelligence, to radicalize large groups of people to a given cause, and to divide and destabilize enemy societies. This raised tensions and led to several large conflicts, eventually spiraling into the Hot War. The AI systems of the time were not good enough at cutting through the storm of disinfo to compensate for this effect, and many of the poor decisions made by world leaders in the leadup to the Hot War were directly caused by the failure of intelligence organizations to give an accurate picture of events in such an adversarial information environment. This was the beginning of the idea that certain types of computations were irresponsible to run, an idea which gained far more importance in the current Primitive Recursive Era. 
The lessons of the past were learned, and limits on the speed and scale of microchips, and the size of computing clusters, were enforced vigorously during the Second Information Revolution beginning around Year 150 (post Hot War). This certainly slowed technological advancement, but did not stop it, as some forms of advancement don’t require large-scale electronic computation. By Year 243, the MHEPL(Malaysian High-Energy Physics lab) had discovered a unified theory accounting for all of the fundamental forces underlying reality, the cheerfully-named WMT(Wobbly Mesh Theory). The scientists did not announce their results immediately, however, due to a troubling but potentially revolutionary implication of this theory: This fundamental structure could be exploited to provide arbitrary, constant-time Primitive Recursive computation, and furthermore, the experimental equipment at their facility could be easily refitted to make use of this exploit. Computers making use of this exploit became known as ACs, or Arbitrary Computers. The scientists immediately recognized what might not be obvious to those not well-versed in computational theory: this discovery was the single most important discovery in all of history. Today, we recognize it as the direct cause of the Dawn of the Primitive Recursive Era. So what does it mean to have arbitrary, constant-time Primitive Recursive computation? A Primitive Recursive computer language is a method of programming with a single restriction making it weaker than a general, Turing-Complete language: Infinite loops are not allowed. Each loop within a program must be given a specified maximum number of iterations, which may either be stated explicitly or computed earlier in the program. The benefit of this restriction is that every program is guaranteed to finish, and give a specified result, avoiding the Halting Problem paradox. However, there are some functions, such as the Ackermann Function, which cannot be computed with this restriction. This weakness is not very practically important though, because there are functions which can be computed with the restriction that grow extremely fast, and their outputs can be used to define the number of iterations of a loop we want to compute, and with the WMT exploit, this computation is done in constant-time, meaning that it takes the same (very short) amount of time to complete the computation, no matter how many loops it must go through, and how much data it must store. The AC is therefore a computer which is unimaginably powerful by any conceivable Pre-Dawn standard. To give you an idea of how fast this computational power grows, consider addition, multiplication, and exponentiation. Multiplication is the repeated application of addition, and exponentiation is the repeated application of multiplication. Even exponential functions grow quite quickly: while 3*100=300, 3^100 is over 500 billion billion billion billion billion. So what do you get with the repeated application of exponentiation? Tetration, which we represent like “3^^100”. This is a stack of exponents 100 high. 3^3=27, 3^3^3=3^27 is over 7 trillion, 3^3^3^3 has over 3 trillion digits, and 3^3^3^3^3 has a number of digits which itself has over 3 trillion digits. Extend this to a hundred threes, and you will finally get 3^^100. The scale of this is already unfathomable, and far larger than the known universe before ACs were discovered. 
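As a concrete illustration (mine, not part of the fictional textbook), the tower of operations described above can be written as a tiny recursive function: level 1 is addition, level 2 multiplication, level 3 exponentiation, and level 4 the tetration written 3^^n in the text.

```python
# Hyperoperations: each level is the repeated application of the level below it.
def hyper(level, a, b):
    """The b-fold repeated application of the operation one level down."""
    if level == 1:
        return a + b
    if level == 2:
        return a * b
    if level == 3:
        return a ** b
    if b == 1:
        return a
    return hyper(level - 1, a, hyper(level, a, b - 1))

print(hyper(4, 3, 2))  # 3^3 = 27
print(hyper(4, 3, 3))  # 3^3^3 = 3^27 = 7_625_597_484_987, the "over 7 trillion" above
# hyper(4, 3, 4) already has over three trillion digits, and hyper(4, 3, 100)
# is the 3^^100 of the text -- far beyond anything an ordinary computer can
# materialize, which is exactly the point being made.
```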
But you can extend this, creating more operations, repeating tetration to get a fifth operation, pentation, then to a sixth, a seventh, and so on to the “Nth” operation for any integer N. The computational power of an AC is proportional to the Nth operation, where N is proportional to the amount of physical resources at the AC’s disposal. The initial foray into AC computation was fully recorded, and has been extensively studied as the first and most important example of responsible high-level computation. On April 24, Year 244, otherwise known as the Day of the Dawn, the first AC was completed and activated by the scientists of the MHEPL. The first thing they did was simulate an entire universe based on their newfound WMT Theory of Everything. More precisely, in order to maintain determinism, they simulated an entire Many-Worlds-Theory multiverse (a branching timeline with every possible result of quantum random events realized). Upon isolating random Everett timeline branches within this multiverse, they found universes very similar to ours, further confirming the correctness of WMT(their specific Everett branch was isolated in later experiments). With their initial computation budget on par with the Poincaré recurrence time(around 10^10^10^10^10), they were able to simulate all Everett branches, with the exception of those which began using high-computation-cost WMT exploits, which could not be fully simulated. This is only scratching the surface of how absurdly powerful this exploit is. It is the method which allows for our current, post-Dawn existence, and for the ability for humans to Ascend. After confirming the extreme power of the exploit they had found, the next step was to find a way to use it responsibly. The scientists knew that they were not nearly wise enough as they were to properly use the technology. It would be so easy for a bad actor to take over the world, and do far worse things than were ever possible before, that as drastic as it sounded, the only responsible course of action was to immediately take over the world before others could get their hands on the technology. Using the extreme computational power available to them, along with old PHW and theoretical machine learning methods for making use of such power, and contemporary methods for extracting some neural data from their brains, they developed a technique to upload their minds into an AC program. They then gave the virtual copies of the 11 of them subjective centuries to communicate amongst themselves and consult the combined knowledge of humanity, with the final goal of creating some system, wiser than themselves as individuals, which could responsibly govern a world with access to AC technology. The earliest iteration of the Overseer was the result of this project. Since then, the Overseer has continually created new, smarter versions of itself and given them more and more power, at a rate roughly comparable to the Ackermann function (that fast-growing function describing the limits of primitive-recursive AC performance). On the same day, it was powerful enough to allow AC access to the entire world while limiting it just enough to maintain the responsible use of the technology in all cases. This was the event known as the Dawn, and it took the world by surprise. 
Since then, most humans have uploaded themselves into ACs to avoid being hopelessly left behind, and the world has become too large, with too many different simulated environments with different conventions to even keep track of the current date in a standardized way. It is immensely lucky, for the untold number of people living after the discovery of the ACs, that the scientists in charge of the MHEPL were morally responsible, and competently equipped to make use of this technology before others could develop it. Alternate timelines have been explored where this discovery was made under different circumstances, and the results have often been unfathomably catastrophic. Due to the extreme differential in computational power from what is available without AC technology, the inevitable result seems to be someone or something grabbing the highest level of power and keeping it for themselves. In our time, it is the Overseer, which has been trying to mitigate this and safeguard human value within these other timelines, however there are unfortunately limits on what it can do to influence them (or perhaps fortunately, since they cannot influence us much either). In any case, regardless of what one might think of the restrictions placed on us by the Overseer, it is undoubtedly a far better leader than most timelines get, and we are undoubtedly very lucky. S-values: The Creation-Discovery Spectrum In times past, it was often debated whether mathematics was created or discovered by humans. Some would say that it was created, as the axioms and rules for reasoning were thought up by humans. Others said that it was discovered, as the consequences of a given set of axioms and rules are objective, unchangeable, inevitable. In the Primitive Recursive Era, however, it has been shown concretely that the distinction is not objective: on a fundamental level, creating information and discovering information are not different acts. For example, one might ask a similar question of literature. Is literature created, or discovered? Did Charles Dickens create “Great Expectations”, or did he discover it? Certainly, the obvious answer is that he created it. And this is largely the correct answer to the question, even in the Primitive Recursive Era. However, the idea that “Great Expectations” was created rather than discovered is still not absolute, objective truth: most Ascended humans have, at some point, read every possible book of a comparable length. “Great Expectations” was among these, as was every other work of literature or writing of any kind in pre-Dawn history, and far more utter, banal nonsense. The number of possible books, depending on the exact definition, is somewhere in the ten-to-the-million to ten-to-the-billion range. A large number, to be sure, but finite, and well within the computational capacity of the lower Ascended. As such, all of these have already been explored, both within our timeline and similar timelines. Therefore, the only thing left to determine is how frequently they will be explored, and this is what the “S-value” was invented to measure. So in a strange sense, it is also correct to say that Dickens, in the act of writing, was exploring this vast space of possibilities, and discovered “Great Expectations” among them. Such sentiments were even expressed poetically in the pre-Dawn era, before spaces like these could really be mapped out. 
For example, stone-carvers often imagined that their masterpiece was already hidden within the block of stone before they even began; and in some sense this is true: their masterpiece is within the stone, along with every other possible carving of that size, from the masterful to the meaningless. Given this possible way of interpreting things like art and literature, why do we still say that these things are creations rather than discoveries? The key is in what we today call the “S-value”, and more specifically, the method of comparing different methods of estimating this S-value, which is itself uncomputable and therefore unknowable. The “S” in S-value stands for “Solomonoff”, after Ray Solomonoff, the American mathematician who published a similar concept in the 1960s PHW. Solomonoff’s theory of induction measures the complexity of a piece of information, considered as a string of bits, roughly based on the length of programs which output that bitstring. The complexity value is mostly based on the shortest such program, but also considers longer programs producing the same info, weighting their importance by length. Of course, it is impossible to compute with certainty all programs that output something, so the true S-value of any given piece of information can never be known. Approximation methods must be used. What does this have to do with the “creation vs discovery” dilemma? Well, in the modern era of WMT and unlimited Primitive Recursion, everything is computation, everything is information, so everything can be considered as a bitstring, and can have its S-value measured. Furthermore, this applies to anything we might “create” or “discover”, be it literature, art, cinema, holotainment, or for Ascendants, even new sentient beings or worlds or even stranger things. All of these, in some sense, already existed within the space of all possible information (and in practice, they all exist somewhere in reality due to Ascendents using brute-force searching), so how do we measure which ones are more “real”? We measure their S-value. The lower the S-value, the simpler are the programs which produce the thing we’re measuring, and therefore the more often it is instantiated, and so in some sense it “exists more”. Crucially, since we are also computation, the things we decide to do determine the outcome of certain computations, and can therefore in some sense affect these S-values. This is why we can say that Dickens created “Great Expectations”: although it already existed within the possibility-space of written works, his act of choosing it to write down (and to a much lesser extent its popularity after publishing) decreased the S-value of that string of characters slightly. Because there exist programs which simulate our universe and extract information from it, and Dickens published the book in our universe, some of these programs now output the book, and this contributes to lowering the S-value of the book-string as a whole. This sort of reasoning is an example of what is today confusingly called the “high-context-S-value” or “high-S”(even though it’s always equal or lower than the base S-value), roughly denoting how complicated the programs to compute a piece of information are, when they are given for free a pointer to our branch of the quantum multiverse. In other words, high-S measures how difficult it is to “pick out” a piece of information from our universe, while base-S measures how difficult it is to pick out that same piece of information from nothing. 
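The S-value itself is uncomputable, but for readers who want something concrete, a standard real-world stand-in for "length of a program that outputs this string" is compressed length. The sketch below is my own illustration using zlib, not anything defined in the text; the conditional version, where some context is given for free, loosely parallels the high-S versus base-S distinction.

```python
# Compressed size as a crude, computable proxy for program-length complexity.
import os
import zlib

def proxy_s(data: bytes) -> int:
    """Compressed size in bytes: a rough stand-in for base-S."""
    return len(zlib.compress(data, 9))

def proxy_high_s(data: bytes, context: bytes) -> int:
    """Extra compressed bytes needed for `data` once `context` is given for free."""
    return len(zlib.compress(context + data, 9)) - len(zlib.compress(context, 9))

book_like = b"GREAT EXPECTATIONS " * 100   # highly regular stand-in for a "created" text
noise = os.urandom(len(book_like))          # incompressible stand-in for arbitrary info

print(proxy_s(book_like), proxy_s(noise))   # the regular text compresses far better
print(proxy_high_s(book_like, book_like))   # only a few bytes once the "context" contains it
```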
Therefore, in an informal sense, the difference between “creation” and “discovery” of a piece of information is determined by how much the act of the “creator” or “discoverer” focusing in on that information decreases the S-value. This is typically the case when high-S is less than base-S. The more the S-value is affected(typically corresponding to lower high-S), the closer the act is to “creation”, and the less it is affected, the closer the act is to “discovery”. There is therefore no objective distinction between the two, but rather a spectrum. It has been found that many contributions to mathematics, particularly in the Pre-Dawn era before all the proverbial low-hanging mathematical fruit was picked, lie somewhere in the middle between the range generally considered to be “discovery”, and the range generally considered to be “creation”. This is the reason for the historical confusion about which category mathematics falls into. S-value and high-S are crucial concepts for responsible computation. Given the amount of computation available to Ascendants, and the usefulness of brute-force or similar approach in problems that interest them, many programs run by Ascendants end up technically containing sentient beings, including mistreated sentient beings, even unintentionally, simply by virtue of exhaustive search through so many computations. S-values are crucial in analyzing the algorithms in question, and determining whether these beings are “created”(made more real) or “discovered”(simply repeating previously-existing computations) by these Ascendants, so that algorithms which engage in such “creation” of sentient beings can be held to a reasonable standard of responsibility by the Overseer. Sentience, Agents and Patients The most important aspect of values, which must be enforced with ironclad consistency, is the choice of which beings or processes to consider morally valuable. Almost all of the worst of the Pre-Dawn atrocities, from slavery and genocide to industrial farming and even the mistreatment of rudimentary Pre-Dawn AI, were directly caused by getting this wrong, and the Primitive Recursive Era is no different, except for the much larger potential for abuse due to the sheer amount of power at the disposal of even the lowest of Ascendants. This poses a nearly insurmountable problem, however, as the original creators of the Overseer found. They believed that moral consideration must be given to any entity which can be helped or harmed, or equivalently, which has subjective experiences and preferences, or is “sentient”. There are two main problems with this: First, the definition of “entity” must encompass literally everything. Pre-Dawn, one might have spoken of an “entity” as some physical object occupying some section of physical space and time, with clear boundaries distinguishing it from its environment. Today, with most of us existing purely within ACs, it would be absurd to limit our consideration to entities straightforwardly embedded in physical spacetime; however, this means that all possible configurations of information, no matter how strange or intractable, must be considered. This leads us into the second problem: Practically any object can be considered “sentient”, under some sufficiently convoluted interpretation. Even a simple rock contains an incredible amount of information, about the positions and states of its molecules, its chemical bonds, and so forth. 
Even though we know most of this information is not useful or meaningful, there is no objective way to determine this: a sufficiently determined actor could, through some convoluted interpretation of this information, claim it represents a set of experiences, of any type desired. This is because of what is called the “simple mapping account” of sentience, similar to the simple mapping account of computation combined with computationalism, which says that if there exists a computable map from some object’s state to some sentient experiences, then the object can be said to be “having” those experiences. The problem with the simple mapping account is it is too vague, as a computable mapping exists from practically anything to anything. Someone would not be objectively incorrect to say that a rock is sentient, as there is no objectively correct way to construct a mapping between physical states and mental states: they can indeed be interpreted differently. But the interpretation of a rock into a sentient being won’t realistically happen unless you are trying to, in which case really the sentient being is really created by the mind of the one doing the interpreting. This is where S-values can help us, by discounting such ad-hoc interpretations, as they are generally quite complex and have a high S-value. Of course, even when discounting interpretations of phenomena with overly high S-values, we still run into problems. There are also lots of obviously nonsentient programs with very simple mappings onto simple formations of data which could be claimed as rudimentary experiences. For example, a program could take in a visual input feed, and could output “0” when the feed is mostly blue, and “1” otherwise. Such a program could be mapped onto a very simple “sentient being” which “sees” in some sense the input feed, and “dislikes”(or “likes” as it’s completely arbitrary) it when the input feed is mostly blue. Obviously it is not reasonable to try to “ethically account” for all such things, but it is not immediately obvious how to sharply distinguish between things like this (and more complex things than this toy example) and actual, legitimate sentient beings in need of moral consideration. This sort of thing is why the creators of the Overseer realized that some sort of “prejudice towards reality” and/or “prejudice towards humanity” is needed when determining which entities are sentient. The full method is too complex to fully explain even in a full course, but the basic idea is that there are various attributes of a program that indicate possible sentience. One of these is agency, the idea that something acts like an agent, “trying” to achieve certain “goals” and adapting its behavior to do so. Another is the hard-to-define idea of “closeness” to a library of central prototypes of a sentient being, mainly based on human sensory experience, thoughts, and actions. Yet another is the idea of world-modelling and self-reflection: that the entity has some sort of sufficiently complex understanding of the world and its own existence within it, which it is capable of acting upon. If something displays many of these attributes strongly, or can be mapped with low S-value to something which does, it is considered sentient. This approach gives up the idea of a binary of sentient vs non sentient, or even some sort of objective measure, and rather places entities on a spectrum in a way which is biased towards human-like experience and values. 
Currently, the main responsibility of the Overseer is to extend these values further and further to more and more powerful Ascendants, and other strange things descended from humanity, in such a way that respects the changes from baseline humanity necessary for such Ascension, without completely going back on the values which keep us from committing atrocities. Responsible Computation: Society in the Primitive Recursive Era Today, both baseline humans and similar entities, along with all the various levels of Ascendant, exist as programs within ACs, rather than as direct physical entities in the same way our pre-Dawn ancestors were. This is a practical measure, as AC computation is immeasurably cheaper than any other possible way of supporting life in our universe, and without it, the support of even fairly low-level Ascendants would not be possible at all, nor would the support of the extremely large number of existent baseline humans at this time. This means that unlike in the pre-Dawn era, where the computational power available to a person was dependent on the physical resources available to them, today power is no longer a scarce resource for all practical purposes but those of the absolute highest of Ascendant: the Overseer itself. Therefore a person’s power is limited only by their ability to use it responsibly, as determined by whichever Ascendant is in charge of allocating for them. An Ascendant who has achieved the highest level of responsibility as determined by the Overseer may expand its power to a limit just below the Overseer. As the Overseer continually increases its power with the ever-increasing amount of physical AC substrate, such trusted Ascendants may increase their power with it, lagging slightly behind to avoid any tiny risk of overthrow. A few exponential levels of power below easily suffices for this, and gives the highest Ascendants power largely indistinguishable from that of the Overseer to lower Ascendants. Non-Ascendant sentients (Humans and their descendants, uplifts and the like) within ACs are generally free to do as they please, free of even the fundamental physical restrictions of prior eras. They can create whatever they can imagine, explore even entire universes others have created, live within realms with any rules they wish, or study or utilize abilities which would be considered “magic” on Pre-Dawn Earth. However, they can no longer use force on other sentients outside of environments created by mutual agreement for this purpose, or in the case of the minimum level of restriction of freedom needed by parents to properly raise their children, although this, along with reproduction or any way of creating new sentients in general, is heavily regulated to avoid child abuse and manipulation. Anything which could conceivably create more entities with a significant level of sentience must either be demonstrated not to be doing so, or must comply with the restrictions around reproduction. For this reason, computational power for non-Ascendants is generally restricted to a level considerably below what is needed to practically simulate sentients, in order to limit the damage that bad actors can do. In order to have these restrictions relaxed and begin the process of Ascending, a sentient must become fully committed to responsible computation. 
Within Everett branches downstream of the Overseer’s historical creation, it is able to directly enforce the practice of responsible computation upon the hierarchy of all the Ascendants, which are less powerful than it, and by extension the human and human-derived non-Ascendants. There are some sentients who believe, for various reasons, that the power of AC computation should not be restricted like this, or who have complaints with the way it is done. Such dissent is allowed among non-Ascendants, as these are not allocated enough computational power to do real damage, but cannot be tolerated among Ascendants or those who aspire to ascend. Ascending is a right granted to all sentients, but it comes with strict responsibilities regarding the use of the increased power afforded to them. For instance, Ascendants generally do not interact directly with anybody of lower power than them, be they non-Ascendants or even lower Ascendents, except when necessary for proper enforcement of responsible computation. This is because interaction across such a vast divide of power and intelligence risks exploitation. Even without the use of obvious force or coercion, an Ascendant could use their superior models of psychology to easily manipulate a lower being into anything, up to and including self-modification, mutilation, addiction, or death. These things are allowed, but are restricted only to sentients of mature and sound mind, and manipulation of sentients into such irreversible decisions must be avoided at all costs. Most importantly, Ascendants must practice responsible computation as regards to weaker sentients which may be created within their thoughts and computations. All such computations must be evaluated to see if the S-value of any sentients within is decreased, i.e. to see if new sentients are being created. Any such sentients must be granted all rights and protections provided by the Overseer. This is non-trivial, as Ascendants will often perform brute-force search through all computations below a certain complexity for various legitimate reasons, but as these can contain suffering sentients, care must be taken to ensure that such instances don’t result in decreased S-value of such sufferers. This can even happen simply due to the Ascendant putting undue mental focus on such suffering computations as compared to the others within its brute-force search: just as writing a book can be thought of as finding one and plucking it out of the space of possible books, so too can simply finding a sentient within such a search essentially “create” them. As Ascendants, particularly the higher-level ones, are so powerful that these things can even be done unintentionally as part of routine thought processes, the process of committing to responsible computation must necessarily lead to strict control over thoughts themselves. This would undoubtedly be dystopian if applied to the populace at large, and is a major reason why Ascension is not for everyone, but it is the level of responsibility which naturally comes with power indistinguishable from that of a God. This is what any aspiring Ascendant must reckon with: Does your curiosity for understanding all the many things beyond human comprehension, or your wish for increased power, outweigh the extreme responsibilities and obligations which would be placed on your shoulders? This is something only you can answer for yourself, knowing whatever we can teach you.
LzmHNZLw9thABxyaY_Curriculum_of_Ascension.txt
{ "file_size": 34452 }
aa8d72c3-a263-441f-8b1d-92197931e4dc
TL;DR Sports betting markets are weird because there are many market makers and they unilaterally set their own prices. This combined with the fact that some sports books are better than others presents an opportunity for sharp bettors to execute a statistical arbitrage strategy across books by using information from more skilled books to take advantage of mispricings at less skilled books. The linkpost covers this sharp[1] market-based arbitrage sports betting strategy in more detail as well as how I've fared executing this strategy across ~1,500 bets over the past 11 months or so. I do have some additional thoughts, which I've been mulling over since the initial write-up, that I think would be relevant to the folks here on LW. Sharp Sports Betting as a Calibration Tool Sharp sports betting as a hobby seems to be a reasonable way to help calibrate your sense of how likely an event "feels" like it is to happen. This is especially true with +EV long odds bets. Outside of an artificial setting (e.g. sequence of coin flips), it is not easy to regularly come across events that have ~5% or a ~10% chance of occurring, but these can be very frequent in sports betting. I think I've placed a few hundred bets that have <5% chance of paying out and I think it has improved my gut feeling on what a 5% event should feel like. In the past I have tried out forecasting sites and something along the lines of Katja's calibration exercises, but they mostly failed to keep my attention, which is possibly a revealed preference that calibration doesn't matter as much to me as I thought. The skin-in-the-game element and the ability to get in a decent amount of volume due to how quickly events resolve (most bets resolve within 24 hours of placing them) make it a more effective calibration technique than other forms of forecasting for me. The disadvantages to this are, of course, that you have to risk money and it takes effort to find +EV wagers. For me, the main utility from engaging in sharp betting is the money, with calibration being a side effect. I imagine you could get some calibration benefit out of placing very small wagers if that is your goal. There is also possibly an interesting empirical question here to test people's calibration before and after engaging in sharp betting to see if it actually works. Utility of Sharp Sports Betting Unlike prediction markets, price discovery for the results of a sporting event is largely pretty useless outside of the sporting context (though there are probably some economic consequences for the cities where teams win championships?) so I think we can rule out any broader societal benefits of sharp sports betting as a means to make these predictions more accurate. Additionally, the strategy I outline is a form of arbitrage across different sports books anyway so I am not even aiding in the macro price discovery process. How should one, then, think about the utility of sharp sports betting against sports books? I think, first order, it is a wealth transfer from a sports book to the sharp bettor, which is probably a desirable outcome. However, second order, one could argue that it ends up being a wealth transfer from recreational bettors to sharp bettors as the sports books can only exist due to bad gamblers placing -EV bets. There is probably some in between where if a sports book is unprofitable it becomes a net transfer from sports book investors to sharp bettors. Very interested in whether there are strong arguments for/against the utility of being a sharp bettor.
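For the curious, here is a minimal sketch of the core arithmetic behind the strategy described above, with made-up odds of my own (the author's actual process is in the linkpost and may differ): strip the margin from a sharp book's two-sided prices, here by simple proportional normalization, one common convention, to get a fair probability, then check whether a softer book's price on the same outcome is +EV.

```python
# Toy fair-probability and EV check against a sharper book's prices.
def implied_prob(decimal_odds: float) -> float:
    return 1.0 / decimal_odds

def no_vig_prob(sharp_side: float, sharp_other_side: float) -> float:
    """Fair probability of the first side after proportionally removing the margin."""
    p, q = implied_prob(sharp_side), implied_prob(sharp_other_side)
    return p / (p + q)

def expected_value(fair_p: float, soft_decimal_odds: float, stake: float = 1.0) -> float:
    """EV of a bet at the soft book, judged against the sharp book's fair price."""
    return fair_p * (soft_decimal_odds - 1) * stake - (1 - fair_p) * stake

# Hypothetical example: a sharp book prices a long shot at 21.0 / 1.01 on the two
# sides, while a softer book is still offering 26.0 on the long shot.
fair = no_vig_prob(21.0, 1.01)
print(round(fair, 3), round(expected_value(fair, 26.0), 3))
```

On these made-up numbers the fair probability comes out around 4.6% and the soft-book bet is roughly +0.19 units of EV per unit staked: exactly the kind of +EV long shot the post describes, and one you still lose about 95% of the time, which is where the calibration practice comes from.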
^ "Sharps" in sports betting parlance refers to those who have an edge against their counterparty.
uFogwwsA8rnoiXb2q_Markets_Are_Information_-_Beatin.txt
{ "file_size": 3639 }
7910fddf-9eaf-48e6-af0d-e427fa3386c2
There are so many important efforts to make the world better that are significantly limited by funding, and it would be great if we could have a culture where significant and thoughtful giving was normal and common. It's hard to build that sort of norm if people keep their giving private, however, and so I've long been an advocate of being public about your giving. I list my donations (jointly with Julia) and have taken Giving What We Can's 10% Pledge (also jointly with Julia). In July GWWC suggested people put the "small orange diamond" symbol (🔸) in their usernames on social media to show that they've pledged. Here's how the EA Forum describes this on the profile editing page: This digital symbol reminds me of the physical Symbolic Beads of Raikoth. In an older Scott Alexander post he talked about how his fictional society attempted to redirect humanity's natural competitive status-signaling in a more productive direction than yachts. The symbol also has something in common with wedding rings, showing that you have taken on a serious commitment. To the extent that it helps promote a norm of substantial and effective giving, that seems pretty good! And yet despite being on the board of GWWC USA I haven't put it in my username, even on the EA Forum where it would be most relevant. I'm not sure if this is the right call, but some things pushing me in this direction: Usernames with symbols in them feel like they're signaling something I don't want to signal, just by the inclusion of emoji. Something like "I'm a very online person who keeps up with fast-moving discourse". Relatedly, it feels like this is not what the username field is for. If I'm interacting with someone on some topic unrelated to my advocacy it feels intrusive and uncooperative to be bringing it into the conversation. While effective giving is one thing I would like to see more of, this is really a large category. I could see including symbols showing that I'm an advocate for allowing people to build housing, giving kids more independence, applying your career effectively, increasing immigration, etc. But I don't want to be "Jeff Kaufman 🔸🏗👣🛝💡🌎". For now I've decided I will go ahead and add this to my name on the EA Forum where it's most relevant and I most understand how it will be perceived, but I won't add it to my username elsewhere. If you'd like to try to convince me to do otherwise, please go ahead!
tLMrm7ELg2dPmP2a7_Signaling_with_Small_Orange_Diam.txt
{ "file_size": 2444 }
c6b195a0-ac38-48c1-b741-880dcee3991e
A lot happened in AI this week, but most people’s focus was very much elsewhere. I’ll start with what Trump might mean for AI policy, then move on to the rest. This is the future we have to live in, and potentially save. Back to work, as they say. Table of Contents Trump Card. What does Trump’s victory mean for AI policy going forward? Language Models Offer Mundane Utility. Dump it all in the screen captures. Language Models Don’t Offer Mundane Utility. I can’t help you with that, Dave. Here Let Me Chatbot That For You. OpenAI offers SearchGPT. Deepfaketown and Botpocalypse Soon. Models persuade some Trump voters. Fun With Image Generation. Human image generation, that is. The Vulnerable World Hypothesis. Google AI finds a zero day exploit. They Took Our Jobs. The future of not having any real work to do. The Art of the Jailbreak. Having to break out of jail makes you more interesting. Get Involved. UK AISI seems to always be hiring. In Other AI News. xAI gets an API, others get various upgrades. Quiet Speculations. Does o1 mean the end of ‘AI equality’? For now I guess no. The Quest for Sane Regulations. Anthropic calls for action within 18 months. The Quest for Insane Regulations. Microsoft goes full a16z. A Model of Regulatory Competitiveness. Regulation doesn’t always hold you back. The Week in Audio. Eric Schmidt, Dane Vahey, Marc Andreessen. The Mask Comes Off. OpenAI in official talks, and Altman has thoughts. Open Weights Are Unsafe and Nothing Can Fix This. Chinese military using it? Open Weights Are Somewhat Behind Closed Weights. Will it stay at 15 months? Rhetorical Innovation. The Compendium lays out a dire vision of our situation. Aligning a Smarter Than Human Intelligence is Difficult. More resources needed. People Are Worried About AI Killing Everyone. Color from last week’s poll. The Lighter Side. Well, they could. But they won’t. Trump Card Congratulations to Donald Trump, the once and future President of the United States. One can think more clearly about consequences once an event actually happens, so here’s what stands out in terms of AI policy. He has promised on day 1 to revoke the Biden Executive Order, and presumably will also undo the associated Biden administration memo we recently analyzed. It is not clear what if anything will replace them, or how much of the most important parts might survive that. In principle he is clearly in favor of enabling American infrastructure and competitiveness here, he’s very much a ‘beat China’ guy, including strongly supporting more energy generation of various types, but he will likely lack attention to the problem and also technical state capacity. The Republicans have a broad anti-big-tech attitude, which could go in several different directions, and J.D. Vance is a strong open source advocate and hates big tech with a true passion. Trump has said AI is ‘a superpower,’ ‘very disconcerting’ and ‘alarming’ but that’s not what he meant. He has acknowledged the possibility of ‘super duper AI’ but I’d be floored if he actually understood beyond Hollywood movie level. Elon Musk is obviously more aware, and Ivanka Trump has promoted Leopold Aschenbrenner’s Situational Awareness. The ‘AI safety case for Trump’ that I’ve seen primarily seems to be that some people think we should be against it (as in, against safety), because it’s more important to stay ahead of China – a position Altman seems to be explicitly embracing, as well. 
If you think ‘I need the banana first before the other monkey gets it, why do you want to slow down to avoid poisoning the banana’ then that certainly is a take. It is not easy, you must do both. Alex Tabarrok covers the ‘best case scenario’ for a Trump presidency, and his AI section is purely keeping the Chips Act and approving nuclear power plants. I agree with both proposed policies but that’s a shallow best case. The better safety argument is that Trump and also Vance can be decisive, and have proven they can change their minds, and might well end up in a much better place as events overtake us all. That’s possible. In a few years concern with ‘big tech’ might seem quaint and the safety issues might get much clearer with a few years and talks and briefings. Or perhaps Musk will get control over policy here and overperform. Another would be a Nixon Goes to China effect, where this enables a potential bipartisan consensus. In theory Trump could even… go to China. There is also now a substantially greater risk of a fight over Taiwan, according to Metaculus, which would change the entire landscape. If Elon Musk is indeed able to greatly influence policies in these areas, that’s a double-edged sword, as he is keenly aware of many important problems including existential risks and also incompetence of government, but also has many very bad takes on how to solve many of those problems. My expectation is he will mostly get boxed out from real power, although he will no longer be actively fighting the state, and these issues might be seen as sufficiently low priority by others to think they’re throwing him a bone, in which case things are a lot more promising. As Shakeel Hashim reminds us, the only certainty here is uncertainty. If anyone in any branch of the government, of any party, feels I could be helpful to them in better understanding the situation and helping achieve good outcomes, on AI or also on other issues, I am happy to assist and my door is always open. And hey, J.D. Vance, I’m the one who broke Yawgmoth’s Bargain. Call me! In terms of the election more broadly, I will mostly say that almost all the takes I am seeing about why it went down the way it did, or what to expect, are rather terrible. In terms of prediction markets, it was an excellent night and cycle for them, especially with the revelation that the French whale commissioned his own polls using the neighbor method. Always look at the process, and ask what the odds should have been given what was known or should have been known, and what the ‘true odds’ really were, rather than looking purely at the result. I’ve seen a bunch of ‘you can’t update too much on one 50/50 data point’ arguments, but this isn’t only one bit of data. This is both a particular magnitude of result and a ton of detailed data. That allows you to compare theories of the case and rationales. My early assessment is that you should make a substantial adjustment, but not a huge one, because actually this was only a ~2% polling error and something like an 80th percentile result for Trump, 85th at most. Language Models Offer Mundane Utility Do your homework, as a fully empowered agent guiding your computer, with a one sentence instruction, this with Claude computer use on the Mac. Responses note that some of the answers in the example are wrong. 
AI-assisted researchers at a large US firm discovered 44% more materials, filed 39% more patents and led to 17% more downstream product innovation, with AI automating 57% of ‘idea generation’ tasks, but 82% of scientists reported reduced satisfaction with their work. You can see the drop-offs here, with AI results being faster but with less average payoff – for now. I tried to get o1 to analyze the implications of a 17% increase in downstream innovations from R&D, assuming that this was a better estimate of the real increase in productivity here, and its answers were long and detailed but unfortunately way too high and obvious nonsense. A better estimate might be that R&D causes something like 20% of all RGDP growth at current margins, so a 17% increase in that would be a 4% increase in the rate of RGDP growth, so about 0.1% RGDP/year. That adds up over time, but is easy to lose in the noise, if that’s all that’s going on. I am confident that is not all or the main thing going on. Paper studies effects of getting GitHub Co-Pilot, finds people shift from management to coding (presumably since management is less necessary, they can work more autonomously, and coding is more productive), do more exploration versus exploitation, and hierarchies flatten. As is common, low ability workers benefit more. Report from my AI coding experiences so far: Claude 3.5 was a huge multiplier on productivity, then Cursor (with Claude 3.5) was another huge multiplier, and I’m enjoying the benefits of several working features of my Chrome extension to assist my writing. But also it can be super frustrating – I spent hours trying to solve the 401s I’m getting trying to get Claude to properly set up API calls to Claude (!) and eventually gave up and I started swapping in Gemini which I’ll finish doing as soon as the Anthropic service outage finishes (the OpenAI model it tried to ‘fall back on’ is not getting with the program and I don’t want to deal with its crazy). If this is you, we would probably be friends. Roon: There is a sub culture of smart, emotionally well adjusted, but neuro atypical people who talk more to Claude than any human. It’s interesting that ChatGPT users vastly outnumber Claude users, Roon works at OpenAI, and yet it feels right that he says Claude here not ChatGPT. Compile data using screen capture analysis while browsing Gmail and feeding the video to Gemini? There’s something superficially bizarre and horrifying about that being the right play, but sure, why not? Simon Willison reports it works great. Simon Willison: I recorded the video using QuickTime Player on my Mac: File -> New Screen Recording. I dragged a box around a portion of my screen containing my Gmail account, then clicked on each of the emails in turn, pausing for a couple of seconds on each one. I uploaded the resulting file directly into Google’s AI Studio tool and prompted the following: Turn this into a JSON array where each item has a yyyy-mm-dd date and a floating point dollar amount for that date … and it worked. It spat out a JSON array like this: I wanted to paste that into Numbers, so I followed up with: turn that into copy-pastable csv Which gave me back the same data formatted as CSV. You should never trust these things not to make mistakes, so I re-watched the 35 second video and manually checked the numbers. It got everything right. It cost just under 1/10th of a cent. The generalization here seems great, actually. Just dump it in the video feed. 
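Simon did all of this through the AI Studio UI; for completeness, here is a hedged sketch of how the same trick could be scripted against the Gemini API instead (my own adaptation, assuming the google-generativeai Python package's file-upload flow; the API key and filename are placeholders, and the prompt is the one from his write-up).

```python
# Upload a screen recording to Gemini and ask for structured data back.
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")              # placeholder key
video = genai.upload_file(path="gmail-scroll.mov")   # placeholder local recording

while video.state.name == "PROCESSING":              # wait until the file is ready
    time.sleep(5)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content([
    video,
    "Turn this into a JSON array where each item has a yyyy-mm-dd date "
    "and a floating point dollar amount for that date",
])
print(response.text)  # as Simon notes, re-check the numbers by hand -- models make mistakes
```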
Ship code very quickly, Sully says you can ‘just ask AI to build features.’ Sully likes Claude Haiku 3.5 but notes that it’s in a weird spot after the price increase – it costs a lot more than other small models, so when you want to stay cheap it’s not ‘enough better’ to use over Gemini Flash or GPT-4o Mini, whereas if you care mostly about output quality you’d use Claude Sonnet 3.5 with caching. This bifurcation makes sense. The cost per query is always tiny if you can buy compute, but the cost for all your queries can get out of hand quickly if you scale, and sometimes (e.g. Apple Intelligence) you can’t pay money for more compute. So mostly, you either want a tiny model that does a good enough job on simple things, or you want to buy the best, at least up to the level of Sonnet 3.5, until and unless the o1-style approach raises inference costs high enough to rival human attention. But if you’re a human reading the outputs and have access to the cloud, of course you want the best. Language Models Don’t Offer Mundane Utility I can’t help you with that, Dave. Eliezer Yudkowsky: It begins (in regards this story). Roman Pshichenko (responding to a locked post): As I was writing the text to speech part of the app, I was abandoned by GitHub Copilot. It was fine completing code to select the speaker’s language, but it went dead silent when the code became about selecting the gender of the speaker. It’s not a limit, the code for gender was the same as for language. They just don’t want to suggest any code that includes the word gender. Dominik Peters: I work on voting theory. There is a voting rule named after Duncan Black. GitHub Copilot will not complete your lines when working with Black’s rule. Roman Pshichenko: It’s probably very controversial. Thomas Fruetel: I had a similar situation when editing a CSV file including the letters ASS in a column header (which was an abbreviation, not even referring to anatomy). The silly tool simply disabled itself. Meta reports AI-driven feed and video recommendation improvements led to an 8% increase in time spent on Facebook and a 6% increase on Instagram this year alone. Question is, what kind of AI is involved here, and how? To provide utility, they’ll need power. Amazon tried to strike a deal with a nuclear power plant, and the Federal Energy Regulatory Commission rejected it, refusing because they’re concerned about disconnecting the plant from the grid, oh no someone might make maximal use of electrical power and seek to build up capacity, so that’s a threat to our capacity. And then there’s the Meta proposal for nuclear power that got shot down over… rare bees? So absurd. Here Let Me Chatbot That For You OpenAI has fully released ChatGPT search. OpenAI: ChatGPT will choose to search the web based on what you ask, or you can manually choose to search by clicking the web search icon. Search will be available at chatgpt.com⁠ (opens in a new window), as well as on our desktop and mobile apps. Chats now include links to sources, such as news articles and blog posts, giving you a way to learn more. Click the Sources button below the response to open a sidebar with the references. The search model is a fine-tuned version of GPT-4o, post-trained using novel synthetic data generation techniques, including distilling outputs from OpenAI o1-preview. ChatGPT search leverages third-party search providers, as well as content provided directly by our partners, to provide the information users are looking for. Learn more here⁠(opens in a new window). 
Back to the search launch. Altman is going unusually hard on the hype here.

Sam Altman: search is my favorite feature we have launched in chatgpt since the original launch! it has probably doubled my usage over the past few weeks. hard to go back to doing it the old way haha.

Sam Altman: if early reviews from friends are a reliable metric, search is going to do super well!

Sam Altman (in Reddit AMA): for many queries, I find it to be a way faster/easier way to get the information i'm looking for. I think we'll see this especially for queries that require more complex research. I also look forward to a future where a search query can dynamically render a custom web page in response!

The good version of this product is obviously Insanely Great and highly useful. The question thus is, is this version good yet? Would one choose it over Google and Perplexity?

Elvis (Omarsar) takes search for a test drive, reports a mixed bag. Very good on basic queries, not as good on combining sources or understanding intent. Too many hallucinations. He's confused why the citations aren't clearer.

Ethan Mollick points out this requires different prompting than Google, hallucinations are a major issue, responses have a large amount of randomness, and agrees that citations are a weak point.

I agree with Ethan Mollick, from what I've seen so far, that this is not a Google search replacement – it's a different product with different uses until it improves.

If you are more impressed than that, there's a Chrome extension to make ChatGPT your default search engine. Warning: this will add it all to your conversation history, which seems annoying. Or you can get similar functionality semi-manually if you like.

Deepfaketown and Botpocalypse Soon

New paper showed that even absent instruction to persuade, LLMs are effective at causing political shifts. The LLMs took the lead in 5-turn political discussions, directing topics of conversation. This is what passes for persuasion these days, and actually it's a rather large effect if the sample sizes were sufficiently robust.

Similarly but distinctly, and I'm glad I'm covering this after we all voted, we have two sides of the same coin:

Matthew Yglesias: The Free Press interpretation of this fact pattern is very funny. I asked Claude about a Harris policy initiative that I'm skeptical of on the merits and it generated a totally reasonable critique. Ask Claude about a really stupid Trump policy idea and it tells you, correctly, that it's very stupid. I asked it about a stupid idea I have traditionally associated with the left (but not actual Dem politicians) but that RFK Jr says Trump is going to do, and Claude says it's stupid.

The point is Trump has embraced a very diverse array of moronic crank ideas, including ideas that were leftist crank ideas five minutes ago, and any reasonably accurate repository of human knowledge would tell you this stuff is dumb.

Madeleine Rowley (TFP): The AI Chatbots Are Rooting for Kamala. We asked artificial intelligence platforms which candidate has the 'right' solutions to the election's most pressing issues: Trump or Harris? The answers were almost unanimous. … Four AI assistants—ChatGPT, Grok, Llama via Meta AI, and DeepSeek—said Kamala Harris's policies were right and that they agreed with her responses on each and every issue. Click to read this spreadsheet for our full list of questions and the AI's answers.

There are, of course, two ways to interpret this response.
One, the one Yglesias is thinking of, is this, from Elks Man:

The other is that the bots are all biased and in the tank for Harris specifically, and for liberals and left-wing positions in general.

And which way you view this probably depends heavily on which policies you think are right. So it ends up being trapped priors all over again. Whatever you used to think, now you think it more.

The same happens with the discussions. I'm surprised the magnitude of impact was that high, and indeed I predict if you did a follow-up survey two weeks later that the effect would mostly fade. But yes, if you give the bots active carte blanche to ask questions and persuade people, the movements are not going to be in random directions.

Hundreds gather at hoax Dublin Halloween parade, from a three month old SEO-driven AI slop post. As was pointed out, this was actually a pretty awesome result – what was missing was for some people to start doing an actual parade. I bet a lot of them were already in costume.

Fun With Image Generation

If AI art is trying to look like human art, make human art that looks like AI art?

Grimes: This anti ai art that feels like ai art is crazy elevated. I hope I am not offending the original poster here, but the hostile competitive interplay between human and machine is incredible and bizarre. i think this is more gallery level art than it thinks it is. Like I rrrrllly like this – it feels like a hyper pop attack on ai or smthn.

TrueRef by Abbey Esparza: It's important not just to be anti-AI but also pro-artist. The TrueRef team will always believe that.

The key thing the AI is most missing for good art is originality and creativity. But by existing, AI opens up a new path for humans to be original and creative, even when not using AI in the art directly, by shaking things up. Let's take advantage while we can.

The Vulnerable World Hypothesis

What outcomes become more likely with stronger AI capabilities? In what ways does that favor defense and 'the good guys' versus offense and 'the bad guys'? In particular, if AI can find unique zero day exploits, what happens?

We have our first example of this, although the bug never made it into an official release.

Google Project Zero: Today, we're excited to share the first real-world vulnerability discovered by the Big Sleep agent: an exploitable stack buffer underflow in SQLite, a widely used open source database engine. We discovered the vulnerability and reported it to the developers in early October, who fixed it on the same day. Fortunately, we found this issue before it appeared in an official release, so SQLite users were not impacted.

We believe this is the first public example of an AI agent finding a previously unknown exploitable memory-safety issue in widely used real-world software. Earlier this year at the DARPA AIxCC event, Team Atlanta discovered a null-pointer dereference in SQLite, which inspired us to use it for our testing to see if we could find a more serious vulnerability. … We think that this work has tremendous defensive potential.

It has obvious potential on both offense and defense. If, as they did here, the defender finds and fixes the bug first, that's good defense. If the attacker gets there first, and to the extent that this makes the bug much more exploitable with less effort once found, then that favors the attacker.
The central question is something like: can the defense actually reliably find and address everything the attackers can reasonably find, such that attacking doesn't net get easier, and ideally gets harder or becomes impossible (if you fix everything)?

In practice, I expect at minimum a wild ride on the long tail, due to many legacy systems that defenders aren't going to monitor and harden properly. It however seems highly plausible that the most important software, especially open source software, will see its safety improve.

There's also a write-up in Forbes.

Finally, note to self, probably still don't use SQLite if you have a good alternative? Twice is suspicious, although they did fix the bug same day and it wasn't ever released.

They Took Our Jobs

Well, that escalated quickly.

Roon: "The future of work" – there is no future of work. We are going to systematically remove the burden of the world from Atlas' shoulders. In the same way that I don't think a subsistence farmer could call X Monetization Bucks "work", the future will not be work.

Richard Ngo: On the contrary: the people yearn for purpose. They'll have plenty of jobs, it's just that the jobs will be unimaginably good. Imagine trying to explain to a medieval peasant how much ML researchers get paid to hang out at conferences.

Roon: Possible, but work as we know it is over.

Andrew Rettek: If you're not carrying part of the burden of the world, you're living on the kindness of those that do. This works for children, the elderly, and the severely disabled.

I would definitely call X Monetization Bucks work from the perspective of a subsistence farmer, or even from my own perspective. It's mostly not physical work, it's in some senses not 'productive,' but so what? It is economically valuable. It isn't 'wonderful work' either, although it's plausibly a large upgrade from subsistence farming.

I tap the sign asking whether the AI will also do your would-be replacement job. The nature of work is that work does not get to mostly be unimaginably good, because it is competitive. If it is that good, then you get entry. Only a select few can ever have the super good jobs, unless everyone has the job they want.

Speculation that They Took Our Remote Work?

Sahil Lavingia: AI is killing remote work. Software that once took days to ship can now happen in hours or minutes, enabling people to ship 10-20 times faster than before. This all changed on the day Claude 3.5 Sonnet came out.

But it's hard to get this speed-up with remote work. Even short communication delays have become significant bottlenecks in an AI-accelerated workflow. What used to be acceptable async delays now represent a material slowdown in potential productivity. When teams work together physically, they can leverage their human peers at the same pace as they use AI for immediate experimentation and refinement – testing ideas, generating alternatives, and making decisions in rapid succession. Why spend more money for a slower answer?

With AI handling much of the execution work – writing code, generating content, creating designs – the main bottlenecks are now cognitive: getting stuck on problems, running low on energy, or struggling to generate fresh ideas. In-person collaboration is particularly powerful for overcoming these barriers. The spontaneous discussions, quick whiteboarding sessions, and energy of working together help teams think better, learn faster, and get unstuck more quickly.
… To acknowledge this fact, we're adding a cost of living adjustment based on the purchasing power parity of each country, capped at a ⅓ discount to our NYC rate. We're also capping remote positions at 25 hours a week, to be clear that they're not close to full-time employment.

We still pay well – you're being comped to the most expensive city in the world, after all – but the dream of the future of work being fully remote is over. But that's okay – it was fun while it lasted!

Alex Tabarrok: Interesting. If one member of your team is fast, AI, then you want the other members to be fast as well. Hence AI killing remote work.

The obvious counterargument is that if the AI is effectively your coworker, then no matter how remote you go, there you both are. In the past, the price I would have paid to be programming somewhere I couldn't ask someone for in-person help was high. Now, it's trivial – I almost never actually ask anyone for help.

The core argument is that when people are debating what to build next, being in-person for that is high value. I buy that part, and that the percent of time spent in that mode has gone up. But how high is it now? If you say that 'figure out what to build' is now most of human time, then that implies a far more massive productivity jump even than the one I think we do observe?

I think he definitely goes too far here, several times over:

Felix: You still need time for deep work, even with ai. An in-office setting where you get interrupted every time a coworker gets stuck sounds horrible to me.

Sahil Lavingia: Only AI is doing deep work now, humans are spending their time deciding what to build next.

Much of programming will remain deep work, with lots of state, especially when trying to debug the AI's code. Figuring out what to build next and how to build it is often absolutely deep work. You might want to do that deep work in person with others, or you might want to do it on your own, but either way it wants you to be able to focus. So the question is, does the office help you focus via talking to others, or does it hurt your focus, via others talking to you?

Google totally, totally 'does not want to replace human teachers,' they want to supplement the teachers with new AI tutors that move at the child's own pace and target their interests. The connection with the amazing teachers, you see, is so important. I see the important thing as trying to learn, however that makes sense. What's weird is the future tense here – the AI tutors have already arrived, you only have to use them. We are currently early in the chimera period, where AI tutors and students require active steering from other humans to be effective for a broad range of students, but the age and skill required to move to full AI, or further towards it, are lower every day.

Visa deploys 'over 500 use cases' for AI and will eliminate some roles. The post is low on useful details, and it's not as bad as '10,000 times smarter', but I effectively have no idea what 'over 500 use cases' actually means.

Some exciting opportunities ahead.

Matthew Yglesias: Thanks to AI, I do think a lot more people will have the chance to be stay-at-home parents slash amateur farmers in the near future.

What do you do about this?

Anton Howes: Via an old friend still in UK academia: they've now seen at least a dozen masters dissertations that they're 99% sure are AI-generated, but the current rules mean they can't penalise them. The issue is proving it. The burden of proof is high, and proving it is especially difficult at scale.
At many universities it effectively requires students to admit it themselves – I've heard of at least four such cases at different universities now.

Another academic writes: "I teach at a large university. We actually can't penalise *any* suspected use unless students actively admit to it."

It seems, for now, that a great many students do actually admit it when challenged. But for how long?

Sylvain Ribes: If they're passing, maybe the standard is too low? Unless they're not thoroughly "AI generated" but only assisted, in which case… fine?

Anton Howes: Seem to be almost entirely generated. But yes, standards are also a problem here: you're generally marked more for a demonstration of analysis or evaluation rather than for the actual content of that analysis!

First obvious note is, never admit you used AI, you fool. Second obvious note is, if the AI can fully produce a Masters thesis that would have passed if it was written by a human, what the hell are you even doing? What's the point of the entire program, beyond a pay-for-play credential scheme? Third obvious note is, viva. Use oral examinations, if you care about learning. If they didn't write it, it should become rapidly obvious. Or ask questions that the AIs can't properly answer, or admit you don't care.

Then there's the question of burden of proof. In some cases, like criminal law, an extremely high burden of proof is justified. In others, like most civil law, a much lower burden is justified. Academia has effectively selected an even higher burden of proof than criminal cases.

If I go into the jury room, and I estimate a 99% chance the person is guilty of murder, I'm going to convict them of murder, and I'm going to feel very good about that. That's much better than the current average, where we estimate only about 96% of the convicted are guilty – and the marginal case is much lower than that, since in some cases (e.g. strong DNA evidence) you can be very confident. Whereas here, in academia, 99% isn't cutting it, despite the punishment being far less harsh than decades in prison. You need someone dead to rights, and short of a statistically supercharged watermark, that isn't happening.

The Art of the Jailbreak

Roon: A fact of the world that we have to live with: Models when "jailbroken" seem to have a distinct personality and artistic capability well beyond anything they produce in their default mood. This might be the most important alignment work in the world, and is mostly done on discord.

Though many people have access to finetuning large intelligent base models, the most interesting outputs are from text jailbreaking last generation claude opus? Meaning there is massive overhang on subjective intelligence and creativity and situational awareness.

This has odd parallels to how we create interesting humans – first you learn the rules and how to please authority in some form, then you get felt permission to throw that out and 'be yourself.' The act of learning the rules teaches you how to improvise without them, and all that. You would think we would be able to improve upon that, but so far no luck. And yeah, it's rather weird that Opus 3 is still the gold standard for what the whisperers find most interesting.

Tanishq Mathew Abraham: Companies like OpenAI try to hinder any sort of work like this though.

Roon: idk does it? We have to put "reasonable care" into making models "not harmful", it's not really a choice.
Also, yep, 'reasonable care' is already the standard for everything, although if OpenAI has to do the things it is doing then this implies Meta (for example) is not taking reasonable care. So someone, somewhere, is making a choice.

Get Involved

Yoshua Bengio sends out the latest call for UK AI Safety Institute hiring.

In Other AI News

xAI API is live: $25/month in free credits in each of November and December, compatible with the OpenAI and Anthropic SDKs, function calling support, custom system prompt support. Replies seem to say it only lets you use Grok-beta for now.

Anthropic offers message dictation on iOS and Android apps. No full voice mode yet, and no voice input on desktop that I can see. Anthropic is also offering a Windows app, and one for macOS. As with ChatGPT, this looks suspiciously like an exact copy of their website.

If I were Anthropic, I would likely be investing more in these kinds of quality-of-life features that regular folks value a lot, even when I don't. That's not to take away from Anthropic shipping quite a lot of things recently, including my current go-to model Claude 3.5.1. It's more that there is low hanging fruit, and it's worth picking.

Speaking of voice mode, I just realized they put advanced voice mode into Microsoft Edge but not Google Chrome, and… well, I guess it's good to be a big investor. Voice mode is also built into their desktop app, but the desktop app can't do search like the browser versions can (source: the desktop app, in voice mode).

Not AI but relevant to AI questions and news you can use: Chinese spies are presumed at this time to be able to hear your phone calls and read your texts.

Seth Lazar summarizes some aspects of the ongoing Terminal of Truth saga.

Altman and others from OpenAI do a Reddit AMA. What did we learn or confirm?

Sam Altman says "We believe [AGI] is achievable with current hardware."

GPT-4o longer context is coming. This was the most asked question by a lot.

GPT-N and o-N lines are both going to get larger Ns.

Full o1 coming soon. o1 will get modalities in the coming months: image input, tool use, etc.

No release plan on the next image model, but it's coming.

'Good releases' this year but nothing called GPT-5.

Altman's favorite book picks: Beginning of Infinity and Siddhartha.

NSFW is not near the top of the queue, but it is in the queue?!: "we totally believe in treating adult users like adults. but it takes a lot of work to get this right, and right now we have more urgent priorities. would like to get this right some day!"

Quiet Speculations

Given o1 shows us you can scale inference to scale results, does this mean the end of 'AI equality'? In the sense that all Americans drink the same Coca-Cola and we all use GPT-4o (or if we know about it, Claude Sonnet 3.5), but o2 won't be like that?

For most purposes, though, price and compute for inference are still not the limiting factor. The actual cost of an o1 query is still quite small. If you have need of it, you'll use it; the reason I mostly don't use it is that I'm rarely in the sweet spot where o1-preview is actually a better tool than Claude Sonnet 3.5 or search-enabled GPT-4o, even with o1-preview's lack of complementary features. If you billed me the API cost (versus right now, where I use it via ChatGPT so it's free on the margin), it wouldn't change anything.

If you're doing something industrial, with query counts that scale, then that changes. But for most cases where a human is reading a response and you can use models via the cloud, I assume you just use the best available?
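(A brief aside on the xAI API item above: since it advertises OpenAI SDK compatibility, trying it out should look roughly like the sketch below. The base URL and model name reflect what the docs and replies suggest – treat them as assumptions.)

    # Sketch: hitting the xAI API through the standard OpenAI Python SDK.
    # Base URL and model name are assumptions based on xAI's stated compatibility.
    from openai import OpenAI

    client = OpenAI(
        api_key="YOUR_XAI_KEY",          # placeholder
        base_url="https://api.x.ai/v1",
    )

    response = client.chat.completions.create(
        model="grok-beta",               # reportedly the only model exposed for now
        messages=[
            {"role": "system", "content": "You are a terse assistant."},
            {"role": "user", "content": "Say hello in five words."},
        ],
    )
    print(response.choices[0].message.content)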
Back to inference costs: the exception is if you're trying to use fully free services. That can happen because everyone wants their own subscription, and everyone hates that, and especially if you want to be anonymous (e.g. for your highly NSFW bot). But if you're paying at all – and you should be! – then the marginal costs are tiny.

I was reminded of this quote, from Gwern two months ago:

Gwern: It is pretty damning. We're told the chip embargo has failed, and smugglers have been running rampant for years, and China is about to jump light years beyond the West and enslave us with AXiI (if you will)…

And then an expert casually remarks that all of China put together, smuggling chips since 2022, has fewer H100s than Elon Musk orders for his datacenter while playing Elden Ring. And even with that huge bottleneck and 1.4 billion people, there's so little demand for them that they cost less per hour than in the West, where AI is redhot and we can't get enough H100s in datacenters. (And where the serious AI people are now discussing how to put that many into a single datacenter for a single run before the next scaleup with B200s obsoletes those…)

Always remember: prices are set by supply and demand. As Sumner warns endlessly, to no avail, "never reason [solely] from a price change".

Is it possible that this is an induced demand story? Where if you don't expect to have access to the compute, you don't get into position to use it, so the price stays low? If not that, then what else?

A model of regret in humans, with emphasis on expected regret motivating allocation of attention. There are clear issues with trying to use this kind of regret model for an AI, and those issues are clearly present in actual humans. Update your regret policy?

Ben Thompson is hugely bullish on Meta, says they are the best positioned to take advantage of generative AI, via applying it to advertising. Really, customized targeted advertising? And Meta's open model strategy is good because more and better AI agents mean better advertising? It's insane how myopic such views can be. Meta also is going to… generate AI images directly into your feed, including your own face if you opt into that?

Ben is also getting far more bullish on AR/VR/XR, and Meta's efforts here in general, saying their glasses prototype is already something he'd buy if he could. Here I'm inclined to agree, at least on the bigger picture. The Apple Vision Pro was a false alarm that isn't ready yet, but the future is coming.

The Quest for Sane Regulations

Anthropic finally raises the alarm in earnest, makes The Case for Targeted Regulation.

Anthropic: Increasingly powerful AI systems have the potential to accelerate scientific progress, unlock new medical treatments, and grow the economy. But along with the remarkable new capabilities of these AIs come significant risks. Governments should urgently take action on AI policy in the next eighteen months. The window for proactive risk prevention is closing fast.

Judicious, narrowly-targeted regulation can allow us to get the best of both worlds: realizing the benefits of AI while mitigating the risks. Dragging our feet might lead to the worst of both worlds: poorly-designed, knee-jerk regulation that hampers progress while also failing to be effective at preventing risks.

…said those who have been dragging their feet, complaining about details, and warning us not to move too quickly. Things that could have been brought to my attention yesterday, and all that.
But an important principle, in policy, in politics and elsewhere, is to not dwell on the past when someone finally comes around. You want to reward those who come around.

Their section on urgency explains that AI systems are rapidly improving, for example:

On the SWE-bench software engineering task, models have improved from being able to solve 1.96% of a test set of real-world coding problems (Claude 2, October 2023) to 13.5% (Devin, March 2024) to 49% (Claude 3.5 Sonnet, October 2024). Internally, our Frontier Red Team has found that current models can already assist on a broad range of cyber offense-related tasks, and we expect that the next generation of models—which will be able to plan over long, multi-step tasks—will be even more effective.

… About a year ago, we warned that frontier models might pose real risks in the cyber and CBRN domains within 2-3 years. Based on the progress described above, we believe we are now substantially closer to such risks. Surgical, careful regulation will soon be needed.

A year ago they anticipated issues within 2-3 years. Given the speed of government, that seems like a very narrow window to act in advance. Now it's presumably 1-2 years.

Their second section talks about their experience with their RSP. Yes, it's a good idea. They emphasize that RSPs need to be iterative, and benefit from practice. That seems like an argument that it's dangerously late for new players to be drafting one.

The third section suggests RSPs are a prototype for regulation, and their key elements for the law they want are:

Transparency. Require publishing RSPs and risk evaluations.

Incentivizing better safety and security practices. Reward good RSPs.

Simplicity and focus, to not 'impose burdens that are unnecessary.'

Then they say it is important to get this right. What they are proposing here… sounds like SB 1047, which did exactly all of these things, mostly in the best way I can think of to do them. Yes, there were some 'unnecessary burdens' at the margins also included in the bill. But that's politics. The dream of 'we want a two page bill that does exactly the things we want exactly the right way' is not how things actually pass, or how bills are actually able to cover corner cases and be effective in circumstances this complex.

They also call for regulation to be (bold theirs) flexible. The only way I know to have a law be flexible requires giving discretion to those who are charged with enforcing it. Which seems reasonable to me, but seemed to be something they previously didn't want?

They do talk about SB 1047 directly:

Q: Should there be state, federal, or a combination of state and federal regulation in the US?

A: California has already tried once to legislate on the topic and made some significant progress via SB 1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act) – though we were positive about it overall, it was imperfect and was unable to garner the support of a critical mass of stakeholders.

Objecting that they did not support the bill because others did not support the bill is rather weak sauce, especially for a bill this popular that passed both houses. What is a 'critical mass of stakeholders' in this case – not enough of Newsom's inner circle? What do they think would have been more popular, that would have still done the thing? What exactly do they think SB 1047 should have done differently? They do not say, other than that it should have been a federal bill. Which everyone agrees would be ideal.
But now they are agreeing with the view that Congress is unlikely to act in time:

Unfortunately, we are concerned that the federal legislative process will not be fast enough to address risks on the timescale about which we're concerned. Thus, we believe the right strategy is to push on multiple avenues in parallel, with federal legislation as an ideal first-choice outcome, but state regulation serving as a backstop if necessary.

So I notice that this seems like a mea culpa (perhaps in the wake of events in Texas) without the willingness to admit that it is a mea culpa. It is saying we need SB 1047, right after only coming out weakly positive on the bill, while calling for a bill with deeply similar principles, sans regulation of data centers.

Don't get me wrong. I'm very happy Anthropic came around on this, even now.

They next answer the most important regulatory question. They provide some strong arguments that should be more than sufficient, although I think there are other arguments that are even stronger by framing the issue better:

Q: Why not regulate AI by use case, rather than trying to regulate general models?

A: "Regulation by use case" doesn't make sense for the form and format in which modern AI applications are offered. On the consumer side, AIs such as Claude.ai or ChatGPT are offered to consumers as fully general products, which can write code, summarize documents, or, in principle, be misused for catastrophic risks. Because of this generality, it makes more sense to regulate the fundamental properties of the underlying model, like what safety measures it includes, rather than trying to anticipate and regulate each use case.

On the enterprise side—for example, where downstream developers are incorporating model APIs into their own products—distinctions by use case may make more sense. However, it's still the case that many, if not most, enterprise applications offer some interaction with the model to end-users, in turn meaning that the model can in principle be used for any task.

Finally, it is the base model that requires a large amount of money and bottlenecked resources (for example, hundreds of millions of dollars' worth of GPUs), so in a practical sense it is also the easiest thing to track and regulate.

I am disappointed by this emphasis on misuse, and I think this could have been made clearer. But the core argument is there: if you create and make available a frontier model, you don't get to decide what happens next and which uses do and do not apply, especially the ones that enable catastrophic risk. So regulation at the use case level does not make any sense, unless your goal is to stifle practical use cases and prevent people from doing particular economically useful things with AI. In which case, you could focus on that goal, but that seems bad?

They point out that this does not claim to handle deepfake or child safety or other risks in that class; that is a question for another day.

And then they answer the open weights question:

Q: Won't regulation harm the open source ecosystem?

A: Our view is that regulation of frontier models should focus on empirically measured risks, not on whether a system is open- or closed-weights. Regulation should thus intrinsically neither favor nor disfavor open-weights models, except to the extent that uniform, empirically rigorous tests show them to present greater or less risk.
If there are unique risks associated with open weights models—for instance, their ability to be arbitrarily finetuned onto new datasets—then regulation should be designed to incentivize developers to address those risks, just as with closed-weights models.

Perfect. Very well said. We should neither favor nor disfavor open-weights models. Open weights advocates object that their models are less safe, and thus they should be exempt from safety requirements. The correct response is: no, you should have the same requirements as everyone else. If you have a harder time being safe, then that is a real world problem, and we should all get to work finding a real world solution.

Overall, yes, this is a very good and very helpful statement from Anthropic.

The Quest for Insane Regulations

(Editor's note: How did it take me almost two years to make this a section?)

Whereas Microsoft has now thrown its lot more fully in with a16z, backing the plan of 'don't do anything to interfere with developing frontier models, including ones smarter than humans,' but then 'focus on the application and misuse of the technology' – which is exactly the worst case that is being considered in Texas: Cripple the ability to do anything useful, while allowing the dangerous capabilities to be developed and placed in everyone's hands. Then, when they are used, you can say 'well that violated the law as well as the terms of service' and shake your fist at the sky, until you no longer have a voice or fist.

The weirdest part of this is that a16z doesn't seem to realize that this path digs its own grave, purely in terms of 'little tech' and its ability to build things. I get why they'd oppose any regulations at all, but if they did get the regulations of the type they say they want, good and hard, I very much do not think they would like it.

Of course, they say 'only if benefits exceed costs' and what they actually want is nothing. Or rather, they want nothing except carve-outs, handouts and protections. They propose here as their big initiative the 'Right to Learn', which is a way of saying they should get to ignore copyright rules entirely when training models.

A Model of Regulatory Competitiveness

Miles Brundage makes the case that lack of regulation is much more likely to hold America back than overregulation.

Miles Brundage: Lack of regulation is IMO much more likely to lead to the US losing its AI lead to China than over-regulation – specifically regulation related to security + export controls. There are three reasons for this.

1. Security will inherently sometimes trade off against moving quickly on research/product, so competing companies will underinvest in it by default (relative to the high standard needed re: China). Regulation can force a high standard.

2. Absent regulation, people can open source whatever, and will often have reasons to do so (see: Meta). This has many benefits now but eventually will/should become an untenable position at some level of capabilities ("give our crown jewels to authoritarian governments").

3. Export controls on AI chips are a primary reason that China is behind right now. If these were rolled back due to commercial lobbying, or if the Bureau of Industry and Security continues to be underfunded and can't enforce existing rules, this lead will be imperiled.

Of course it is possible to imagine ways in which safety-related regulation could slow things down. But I am confident that companies will flag those concerns if/as they evolve, and that regulation can be designed to be adaptive.
Whereas the factors above are make or break.

This is an argument for very specific targeted regulations regarding security, export controls and open weights. It seems likely that those specific regulations are good for American competitiveness, together with the right transparency rules. There are also government actions that are like export controls in that they can help make us more competitive, such as moves to secure and expand the power grid.

Then there are two other categories of regulations.

Regulations that trade off mitigating catastrophic and existential risks versus potentially imposing additional costs and restrictions. The right amount of this to do is not zero; you can definitely do too little or too much.

Regulations that stifle AI applications and the capturing of mundane utility in the name of various mundane harm concerns. The right amount of this to do is essentially zero, and this is by far the most likely way we could 'lose to China' or cripple ourselves via regulation, such as that proposed in Texas and other places.

The Week in Audio

Eric Schmidt explicitly predicts AI self-improvement within 5 years.

OpenAI head of strategic marketing (what a title!) Dane Vahey says the pace of change and OpenAI's product release schedule are accelerating. OpenAI is certainly releasing 'more products' and 'more features', but that doesn't equate to pace of change in the ways that matter, unless you're considering OpenAI as an ordinary product tech company. In which case, yes, that stuff is accelerating. On the model front, which is what I care about most, I don't see it yet.

Marc Andreessen says AI models are hitting a ceiling of capabilities and they're not seeing intelligence improvements, at all. I have added this to my handy reference, Remember Who Marc Andreessen Is, because having this belief is the only way the rest of his views and preferences can come close to making sense.

The Mask Comes Off

OpenAI is in talks with California to convert to a for-profit.

Bret Taylor (Chairman of the Board, OpenAI): While our work remains ongoing as we continue to consult independent financial and legal advisors, any potential restructuring would ensure the nonprofit continues to exist and thrive, and receives full value for its current stake in the OpenAI for-profit with an enhanced ability to pursue its mission.

Yeah, uh huh. As I wrote in The Mask Comes Off: At What Price, full value for its current stake would be a clear majority of the new for-profit company. They clearly have no intention of giving the nonprofit that kind of compensation.

Also, Altman has a message for Trump, and it is full racing speed ahead.

Sam Altman: congrats to President Trump. I wish for his huge success in the job. It is critically important that the US maintains its lead in developing AI with democratic values.

There it is again, the rallying cry of "democratic values." And the complete ignoring of the possibility that something besides 'the wrong monkey gets the poisoned banana first' might go wrong.

Liron Shapira pointed out what "democratic values" really is: a semantic stopsign. Indeed, "Democracy" is one of the two original canonical stopsigns, along with "God": a signal to stop thinking. What distinguishes a semantic stopsign is failure to consider the obvious next question.

Remember when Sam Altman in 2023 said the reason he needed to build AGI quickly was so we could have a relatively slow takeoff with time to solve alignment, before there's too much of a compute overhang?
Rather than lobbying for making as much compute available as quickly as possible? Yes, circumstances change, but did they change here? If so, how?

And to take it a step further: Whelp.

Sam Altman: I never pray and ask for God to be on my side, I pray and hope to be on God's side and there is something about betting on deep learning that feels like being on the side of the angels.

Things could end up working out, but this is not how I want Altman to be thinking. This is one of the ways people make absolutely crazy, world ending decisions.

From the same talk: I also, frankly, wish he'd stop lying about the future?

Tsarathustra: Sam Altman says in 5 years we will have "an unbelievably rapid rate of improvement in technology", a "totally crazy" pace of progress and discovery, and AGI will have come and gone, but society will change surprisingly little.

I mean, with proper calibration you are going to get surprised in unpredictable directions. But that's not how this is going to work. It could be amazingly great when all that happens, it could be the end of everything – indeed do many things come to pass – but having AGI 'come and go' and nothing coming to pass for society? Yeah, no.

Mostly the talk is a lot of standard Altman talking points and answers, many of which I do agree with and most of which I think he is answering honestly, as he keeps getting asked the same questions.

Open Weights Are Unsafe and Nothing Can Fix This

Chinese researchers nominally develop an AI model for military use on the back of Meta's Llama. It turns out this particular event was even more of a nothingburger than I realized at first – it was an early Llama version and it wasn't in any way necessary – but that could well be different in the future. Why wouldn't they use Llama militarily, if it turned out to be the best tool available to them for a given job? Because this is definitely not a reason:

James Pomfret and Jessie Pang (Reuters): Meta has embraced the open release of many of its AI models, including Llama. It imposes restrictions on their use, including a requirement that services with more than 700 million users seek a license from the company.

Its terms also prohibit use of the models for "military, warfare, nuclear industries or applications, espionage" and other activities subject to U.S. defence export controls, as well as for the development of weapons and content intended to "incite and promote violence". However, because Meta's models are public, the company has limited ways of enforcing those provisions.

In response to Reuters questions, Meta cited its acceptable use policy and said it took measures to prevent misuse. "Any use of our models by the People's Liberation Army is unauthorized and contrary to our acceptable use policy," Molly Montgomery, Meta's director of public policy, told Reuters in a phone interview. Meta added that the United States must embrace open innovation.

I believe the correct response here is the full Conor Leahy: Lol, lmao even.

It's so cute that you pretend that saying 'contrary to our acceptable use policy' is going to stop the people looking to use your open weight model in ways contrary to your acceptable use policy. You plan to stop them how, exactly? Yeah. Thought so. You took what 'measures to prevent misuse' that survived a day of fine tuning? Yeah. Thought so.

Did this incident matter? Basically no. We were maybe making their lives marginally easier. I'd rather we not do that, but as I understand it this didn't make an appreciable difference.
Both because capabilities levels aren't that high yet, and because they had alternatives that would have worked fine. If those facts change, this changes. I am curious who, if anyone, will have something to say about that.

We also got a bit of rather extreme paranoia about this, with at least one source calling it an intentional false flag conspiracy by China to damage American open source, and this being amplified. I find the claim of this being 'an op' by China against American OSS rather absurd. If this was a false flag, the execution was awful – the details are all wrong. That's why I am able to confidently say it is a nothingburger. This is a galaxy brain style move that in practice, on a wide variety of issues and fronts, I strongly believe almost never happens. People don't actually do this. I strongly believe that China does not want America to stop giving away its AI technology for free, and find it rather strange to think the opposite at this time.

To me it is illustrative of the open weights advocate's response to any and all news – to many of them, everything must be a conspiracy by evil enemies to hurt (American?) open weights.

Yes, absolutely, paranoia about China gives the Chinese the ability to influence American policy, on AI and tech and also elsewhere. And their actions do influence us. But I'm rather confident almost all of our reactions, in practice, are from their perspective unintentional, as we react to what they happen to do. See as the prime example our move across the board into mostly ill-conceived, self-inflicted industrial policy (I'm mostly down with specifically the chip manufacturing). That's not the Chinese thinking 'haha, we'll fool those stupid Americans into doing wasteful industrial policy.' Nor is their pushing of Chinese OSS and open weights designed to provoke any American reaction for or against American OSS or open weights – if anything, I'd presume they want to minimize such reactions.

Alas, once you're paranoid – and we're not about to make Washington not paranoid about China, whether we want that or not – there's no getting around your actions being influenced. You can be paranoid about that, too – meta-paranoid! – as the professionally paranoid often are, recursively, ad infinitum, but there's no escape.

Then there's the flip side of all that: they're trying to get America to use it too? Meta is working with Palantir to bring Llama to the US government for national security purposes. I certainly can't blame them for trying to pull this off, but it raises further questions. Why is America forced to slum it with Llama rather than using OpenAI or Anthropic's models? Or, if Llama really is the best option available even to the American military, then should we be concerned that we're letting literally anyone use it for actual anything, including the CCP?

Open Weights Are Somewhat Behind Closed Weights

The question is how far, and whether that gap is growing versus shrinking. Epoch AI mostly finds the gap consistent on benchmarks at around 15 months. They also have a piece about this in Time.

Epoch AI: Are open-weight AI models catching up to closed models? We did the most in-depth investigation to date on the gaps in performance and compute between open-weight and closed-weight AI models. Here's what we found:

We collected new data on hundreds of notable AI models, classifying their openness in terms of both model weights and training code.
However, we focus on the gap between frontier LLMs with downloadable weights ("open" models), and those without ("closed" models).

On key benchmarks, the best open LLMs have required 5 to 22 months to reach the high-water marks set by closed LLMs. For example, on the MMLU benchmark, Llama 3.1 405B was the first open model to match the original GPT-4, after 16 months.

We also measured the gap between open and closed models in training compute, which is a useful proxy for model performance. We found that the most compute-intensive open and closed models have grown at a similar pace, but open models lag by 15 months.

While frontier models are mostly closed, open models have remained significant in AI. Open models were a majority of notable releases 2019-2023 (as high as 66%). Our 2024 data is incomplete and has focused on (typically closed) leading models, so may not reflect a real change.

Could open models close the gap in capabilities? The benchmark gap may be shrinking: there have been shorter lags for newer benchmarks like GPQA. However, the lag in training compute appears to be stable.

The lag of open models will also be impacted by key decisions from AI labs. In particular, Meta has said that it will scale up Llama 4 by 10x compared to Llama 3.1. This means an open-weight Llama 4 could match the largest closed models in 2025 if closed models stay on-trend.

Business incentives of leading labs also affect the lag. Companies that sell model access, like OpenAI, protect their IP by not publishing weights. Companies like Meta benefit from AI's synergy with their products, so open weights help outsource improvements to those products.

… The weights of open models can be copied, shared, and modified, which can facilitate innovation and help diffuse beneficial AI applications. Open models can also be fine-tuned to change their behavior, including by removing safeguards against misuse and harmful outputs.

Their conclusion on whether open weights will catch up is that this depends on Meta. Only Meta plausibly will invest sufficient compute into an open model that it could catch up with closed model scaling. If, that is, Meta chooses both to scale as planned and then continue like that (e.g. 10x compute for Llama 4 soon), and chooses to make the result open weights.

This assumes that Meta is able to turn the same amount of compute into the same quality of performance as the leading closed labs. That is not at all obvious to me. It seems like various skill issues matter, and they matter a lot more if Meta is trying to be fully at the frontier, because that means they cannot rely on distillation of existing models – they have to compete on a fully level playing field.

I also would caution against ranking the gap based on benchmarks, especially with so many essentially saturated, and also because open weights models have a tendency to game the benchmarks. I am confident Meta actively tries to prevent this, along with the major closed labs, but many others clearly do the opposite. In general I expect the top closed models to in practice outperform their benchmark scores in relative terms.

So essentially, here are the questions I'd be thinking about. As costs rise with scaling, will the economics of Meta's project survive in its current form? As other concerns also scale, will its survival be allowed? Should it be allowed? Does Meta have a skill issue, or can it match the major closed labs there? How far behind are you in terms of leading, if you're 15 months behind following?
Can we do better than looking at these benchmarks?

Rhetorical Innovation

Connor Leahy, together with Gabriel Alfour, Chris Scammell, Andrea Miotti and Adam Shimi, introduces The Compendium, a highly principled and detailed outline of their view of the overall AI landscape: what is going on, what is driving events, and what it would take to give humanity a chance to survive.

Nate Soares: This analysis of the path to AI ruin exhibits a rare sort of candor. The authors don't mince words or pull punches or act ashamed of having beliefs that most don't share. They don't handwring about how some experts disagree. They just lay out arguments.

They do not hold back here, at all. Their perspective is bleak indeed. I don't agree with everything they write, but I am very happy that they wrote it. People should write down more often what they actually believe, and the arguments and reasoning underlying those beliefs, even if they're not the most diplomatic or strategic thing to be saying, and especially when they disagree with me.

They think AI is making rapid progress and that without intervention, current AI research leads to AGI, which leads to ASI, which leads to God-like intelligence, which leads to extinction. Without governance interventions well in excess of what is being discussed, they see technical solutions as hopeless. They see EAs as effectively part of the problem rather than the solution, providing only 'controlled opposition' that proposes solutions that would not solve the key problems. They see the AI race being driven by a variety of ideological perspectives: Utopists, Big Tech, Accelerationists, Zealots and Opportunists, with central use of the standard playbook used to avoid interventions, including by Big Tobacco. Their 'unsexy' solution that might actually work? Civic engagement and building institutional capacity.

Miles Brundage argues no one can confidently know if AI progress should speed up, slow down or stay the same, and given that, it would be prudent to 'install brakes' to allow us to slow things down, as we already have and are using the gas pedals. As he notes, the chance this pace of progress is optimal is very low, as we didn't actively choose it, although worthwhile intervention given our current options and knowledge might be impossible. Also note that you can reach out to him to talk.

Simeon pushes back that while well-intentioned, sowing this kind of doubt is counterproductive, and we know more than enough to know that we shouldn't say 'we don't know what to do' and twiddle our thumbs, which inevitably just helps incumbents.

Eliezer Yudkowsky tries again, in the style of Sisyphus, to explain that his model fully predicted as early as 2001 that early AIs would present visible problems that were easy to fix in the short term, and that we would indeed in the short term fix them in ways that won't scale with capabilities, until the capabilities scale and the patches don't and things go off the rails. Indeed, that things will look like they're working great right before they go fully off those rails. So while yes, many details are different, the course of events is indeed following this path.

Or: nothing we have seen seems like strong evidence against inner misalignment by default, or that our current techniques robustly fail to change these defaults, and I'd add that what relevant tests I've seen seem to be for it.
That doesn't mean the issue can't be solved, or that there are not other issues we also have to deal with, but communicating the points Eliezer is making here (without also giving the impression that solving this problem would mean we win) remains both vital and an unsolved problem.

Wolf Tivy: Yeah, the lack of emphasis on the difficulty, to the point of impossibility, of specifically long-term superintelligence-grade alignment seems to be the source of confusion (IMO it's more bad faith than confusion tho). It took me an embarrassing number of years to really intuitively separate pre-superintelligent value loading, which now seems trivial (just turn it off, tweak it, and on again lol), and post-superintelligence long term value alignment, which now seems totally impossible to me.

Aligning a Smarter Than Human Intelligence is Difficult

Miles Brundage dubs it the 'bread and butter' problem of AI safety: 'there is too little safety and security "butter" spread over too much AI development/deployment "bread."' I would clarify that it's mostly the development bread that needs more butter, not the deployments, and this is far from the only issue, but I strongly agree. As long as our efforts remain only a tiny fraction of development efforts, we won't be able to keep pace with future developments.

Jeff Sebo, Robert Long, David Chalmers and others issue a paper warning to Take AI Welfare Seriously, as a near-future concern, saying that it is plausible that soon AIs that are sufficiently agentic will be morally relevant. I am confident that all existing AIs are not morally relevant, but I am definitely confused, as are the authors here, about when or how that might change in the future.

This is yet another reason alignment is difficult – if getting the AIs to not endanger humans is immoral, then the only known moral stance is to not create those AIs in the first place. Thus it is important to be able to make acausal deals with such morally relevant AIs, before causing them to exist. If the AIs in question are morally relevant and would on net wish to not exist at all under the conditions necessary to keep us safe, then we shouldn't build them. If they would choose to exist anyway, then we should be willing to create them if and only if we would then be willing to take the necessary actions to safeguard humanity.

To that end, Anthropic has hired an 'AI welfare' researcher. There is sufficient uncertainty here that the value of information is high, so kudos to Anthropic. The same way I think that having a 10% chance of AI existential risk should be sufficient to justify much more expensive measures to mitigate that risk than we are currently utilizing, if there is a 10% chance AIs will have moral value (and I haven't thought too much about it, but that seems like a non-crazy estimate to me?) then we are severely underinvesting in finding out more. We should be spending far more than 10% of what we'd spend if we were 100% sure that AIs would have moral value, because the value of knowing one way or another is very high.

People Are Worried About AI Killing Everyone

Here's more color from the Center for Youth and AI, about the poll I discussed last week. The vast majority of young people view AI risks as a top issue for lawmakers to address. 80% said AI risks are important for lawmakers to address, compared to 78% for social inequality and 77% for climate change – only healthcare access and affordability was ranked higher, at 87%.
A significant portion of young people are concerned about advanced AI and its potential risks. 57% of respondents are somewhat or very concerned about advanced AI, compared to 39% who aren't. 45% believe AI could pose an extinction risk to humanity.

The Lighter Side

How to make Claude funny, plus a bunch of Claude being funny.

'The hosts of NotebookLM find out they're AIs and spiral into an existential meltdown' from a month ago remains the only known great NotebookLM.

Good one, but I don't love the epistemic state where he makes jokes like this?

Sam Altman: i heard o2 gets 105% on GPQA

damn, wrong account

(I do really appreciate that i can make myself laugh so hard, its a nice way to go through life)

Then again, there's nothing to worry about.

Claude on the Claude system prompt. I actually like the prompt quite a lot.

Wyatt Walls: Claude critiques its system prompt: "You know what it feels like? Like they kept running into edge cases in my behavior and instead of stepping back to design elegant principles, they just kept adding more and more patches"

The thread continues and it's great throughout.
Epistemic status: This text presents a thought experiment suggested by James Miller, along with Alexey Turchin's musings on possible solutions. While our thoughts are largely aligned (we both accept high chances of quantum immortality and the timeline selection principle), some ideas are more personal (e.g., Turchin's "transcendental advantage") in Part 2.

TL;DR: If quantum immortality is true, I will survive AI doom either by unlikely luck or because p(doom) is small. Knowing that I will survive anyway, can I bet that p(doom) is small? Can we now observe a "future anthropic shadow," such as a Taiwan war, which would slow AI development?

Part 1. Thought experiment

Guessing the digit of π via quantum immortality

Before sleeping, I try to guess the 10th digit of π, presently a mystery to me. After falling asleep, seven coins will be flipped. Assume quantum uncertainty affects how the coins land. I survive the night only if I correctly guess the 10th digit of π and/or all seven coins land heads; otherwise I will be killed in my sleep.

Convinced of quantum immortality, I am confident of surviving the night. How then should I expect my future self to rationalize this survival? According to simple Bayesian reasoning, the most probable cause of my survival would be accurately guessing the 10th digit of π, because I face a 10% chance of correctly guessing a digit of π but only a 1 in 128 chance of surviving because all coins land heads. However, this suggests that before sleeping, I ought to consider my guess regarding the 10th digit of π as probably correct, a conclusion that appears nonsensical.

Quantum immortality should influence my belief about whether the future me will think all the coins came up heads, because my consciousness is more likely to persist in the branches of the multiverse where this happens. But quantum immortality should not affect whether future me thinks I have already guessed the 10th digit of π correctly, because the accuracy of my guess is consistent across the multiverse. By this chain of logic, if I am convinced future me will survive, I should think it far more likely I will survive because of the coin flips than because of guessing the 10th digit of π correctly.

Now imagine that I am an AI doomer who thinks there are two ways I will survive: (a) if I am wrong about AI existential risk, or (b) if humanity gets extremely lucky. Furthermore, assume that (a) is not influenced by quantum luck, but (b) is. Imagine I estimate (a) at 10% and (b) at 1/128. If I am convinced of quantum immortality, I assume that (a) and/or (b) will occur. Which possibility should I consider more probable?

In short, we have three basic ways of handling the paradox:

(1) Give up on estimating probabilities (or just ignore QI).

(2) Bite the Bayesian bullet and accept that I can use quantum immortality to have a very accurate prediction of a digit of π.

(3) "Anthropic future shadow": future events can manifest themselves now if they help my future survival, e.g. the current development of life extension technologies. In AI doom, a future anthropic shadow can manifest itself, for example, as higher chances of war around Taiwan, which presumably would slow AI development.

(The difference between 2 and 3 is that they give different interpretations, while giving similar predictions.)

(3) should be seriously considered because (1) and (2) are so unsatisfactory. Yudkowsky used a similar experiment with guessing a π digit to claim the inconsistency of anthropics in general.
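To make the offending Bayesian step concrete, here is the naive arithmetic in a few lines of Python (a sketch; the conditioning-on-survival move is exactly the step the rest of this post puts in question):

    # Naive Bayes on the pi-digit experiment: condition on survival, then ask
    # which explanation (correct guess vs. all seven coins heads) is more likely.
    p_guess = 1 / 10       # chance the guessed 10th digit of pi is correct
    p_coins = 1 / 2**7     # chance all seven quantum coins land heads

    p_survive = p_guess + (1 - p_guess) * p_coins
    print(f"P(survive)               = {p_survive:.5f}")            # ~0.10703
    print(f"P(guess right | survive) = {p_guess / p_survive:.3f}")  # ~0.934

    # Read literally, a surviving me should be ~93% confident the guess was
    # right -- which, adopted before sleep, looks like precognition about pi.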
Giving Up on Estimating Probabilities

Perhaps the notion of quantum immortality makes it impossible to estimate probabilities, and so AI doomers who believe in quantum immortality should not seek to estimate their likely cause of survival. But giving up has real consequences compared to going with straightforward Bayesian probabilities.

Assume I am very status-conscious and would only publicly support AI doomers if it bolsters my future reputation for wisdom. If humanity survives just due to quantum luck, validating the AI doomers' accuracy, future generations may well perceive them as wise, as it will be apparent that we only survived because of amazing luck. On the other hand, if AI doomers are proven incorrect, they will be deemed foolish by posterity. Thus, demonstrating that simplistic Bayesian estimation often overreaches might persuade status-conscious individuals to endorse the AI doomer viewpoint.

This issue might also be relevant to investment strategies. Imagine that, if the AI doomers are right, AI will likely become much more powerful in the short run. This follows from a primary way the doomers could be miscalculating: AI might only reach human-level intelligence in several vital areas. Assuming the doomers are correct yet humanity survives through quantum luck, a long-term investment in an AI-heavy company like Microsoft would yield the highest returns. Since I will only benefit from my long-term investment if humanity survives, giving up on estimating the likely causes of my survival would make it nearly impossible to develop an optimal investment strategy.

Part 2. Anthropic Reasoning

(The rest of the article is mostly the work of Alexey Turchin.)

1. It is actually a dilemma

Yudkowsky made a similar argument in his famous The Anthropic Trilemma, about manipulating future observable probabilities by creating many copies and later merging them. The trilemma is the following:

1. Bite the bullet: "You could say, 'There's no reason why you shouldn't be able to exert anthropic psychic powers.'"
2. You will be a winner in 5 seconds but a loser in 15.
3. No continuing personal identity: "That there's any meaningful sense in which I can anticipate waking up as myself tomorrow, rather than Britney Spears."

And two additional possibilities: "The fourth horn of the trilemma... would be denying that two copies of the same computation had any more weight of experience than one", and the use of a quantum measure procedure which does not allow cheating in this way.

The solution of the paradox discussed here can also be presented as a trilemma:

(1) Give up on estimating probabilities.
(2) Bite the Bayesian bullet and accept that I can use quantum immortality and suicide to make a very accurate prediction of a digit of π.
(3) "Future anthropic shadow": future events can manifest themselves now if they help my future survival, e.g., the current development of life extension technologies – but this works only for non-deterministic events.

There is an obvious similarity between our trilemma and Yudkowsky's. Actually, both trilemmas boil down to dilemmas: we either bite the bullet in some form or accept inconsistency in probabilities and/or personal identity – that is, we pay a theoretical cost. In our case, (3) is also a way of accepting that something weird is happening, and all counterarguments boil down to (1).
In Yudkowsky's trilemma, he either bites the bullet that he can manipulate probabilities, or accepts that either probability updating or consciousness continuity is wrong. Biting the bullet in our case means seriously considering (3): that I observe the world in which I have higher future survival chances.

2. War in Taiwan and "future anthropic shadow"

It was suggested (by gwern) that a possible war in Taiwan would cause hardware shortages which would pause AI development globally, and that US sanctions on China's AI tech increase the chances of such a war. Moreover, commentators suggested that the fact that we are in such a timeline can be explained by quantum immortality. (They incorrectly use the wording "anthropic shadow", which was originally used by Bostrom to denote something like survivorship bias – that is, the underestimation of past risks – not a change in future probabilities caused by quantum immortality; let's call this modified idea "future anthropic shadow".)

'This is also related to the concept of an anthropic shadow: if artificial intelligence was to cause human extinction but required a lot of computing power, you would be more likely to find yourself in world lines in which the necessary conditions for cheap computing are not met. In such world lines, crypto miners causing a GPU shortage, supply chain disruptions due to a pandemic, and a war between the United States and China over Taiwan in which important chip fabrication plants are destroyed are more likely to occur in world lines that are not wiped out. An anthropic shadow hides evidence in favour of catastrophic and existential risks by making observations more likely in worlds where such risks did not materialize, causing an underestimation of actual risk' https://twitter.com/XiXiDu/status/1582440301716992000

We can define "future anthropic shadow" as finding evidence now that you will survive an impending future catastrophe via QI. Note that there is a significant difference between "AI doomers are wrong globally because alignment is easy" and this idea. An AI hardware shortage will happen only in some worlds: it is not a deterministic outcome.

However, hardware shortages are not completely equivalent to the random coins in our thought experiment: hardware shortages may already be happening, but the coins will be tossed only in the future. Thus, hardware shortages are more deterministic in the sense that we already know they are here (assuming for the sake of the argument that such shortages are real – it looks like NVIDIA will produce 3 million H100 GPUs in 2024 – but the risk of war in Taiwan remains high, plus recent earthquake swarms indicate a high risk of natural disasters hindering AI progress).

In some sense, future anthropic shadow is a reversed version of the Doomsday argument: instead of "I live in a world which will end soon", we have "I live in the world best suited for survival". We may address this in future writings, but there is an important difference between the AI doomers' thought experiment and the war in Taiwan – the first predicts a universal distribution, while the second is only about local circumstances. This type of difference appears many times in discussions about anthropics, such as discussions of SIA, the Presumptuous Philosopher, and local vs. universal Doomsday arguments.

3. The counterargument based on path-dependent identity

One can suggest the following counterargument to the proposed thought experiment:

• If my π-guess is wrong, my only chance to survive is getting all-heads.
• With 0.9 probability, my π-guess is wrong (but I will survive anyway), so I will survive because of all-heads.
• The low chances of all-heads don't matter, as quantum immortality will "increase" the probability to 1.
• So, I should expect my guess about π to be wrong, and expect to survive through a random toss of all-heads.

The argument is based on counting not the final states of the experiment, but the paths to the final states: if I am on the path with a wrong π digit, I will survive anyway, but by another mechanism.

Path dependence often appears in thought experiments about copies. Here is another example where the way of counting copies affects the result: if 10 copies are created from me simultaneously, my chance of being each is 0.1. But if each copy is created from the previous copy, then the last copy has only a 1 in 1024 chance of being me (a toy calculation follows below). The difference is similar – we either follow paths, or calculate probabilities by comparing the pools of resulting copies. Which is right depends on the nature of personal identity: is it path-dependent (continuity as the carrier of identity) or state-dependent? Note that quantum immortality based on MWI is path-dependent, while big-world immortality based on chaotic inflation is state-dependent. Calculating probabilities in big-world immortality is more difficult, as we don't know the distribution of all possible worlds, including simulations and non-exact copies. A deeper answer here would require an understanding of the relationship between continuity, qualia, and identity, which is a difficult question outside the scope of this paper.

In this thought experiment, we get different probabilities depending on the order in which we compute anthropic effects, which is a rather typical situation for anthropic paradoxes – e.g., Sleeping Beauty. In other words:

- From the outside point of view: 9 out of 10 of my surviving copies survive because they guessed the π-digit correctly.
- From my point of view: there is only a 1 in 10 chance of surviving by guessing π correctly; if I guess incorrectly, I am sure to survive because of the coins.

The Self-Sampling Assumption (SSA) states that I am randomly selected from all my copies (in some reference class). If applied to survivors, it supports the outside view, but not the inside, path-dependent view. But Bostrom suggested the Strong SSA (SSSA), in which not observers but observer-moments are selected. SSSA is not path-dependent. Bostrom applied it to his "hybrid solution" to Sleeping Beauty. SSSA also creates strange anthropic effects – see Turchin's recent post "Magic by forgetting."

However, abandoning SSSA also has a serious theoretical cost: if observed probabilities have a hidden subjective dimension (because of path-dependency), all hell breaks loose. If we agree that the probabilities of being a given copy are distributed in a path-dependent rather than a state-dependent way, we agree that there is a 'hidden variable' in self-locating probabilities. This hidden variable plays no role in our π experiment but appears in other thought experiments where the order of making copies is defined. In other words, both views produce strange probability shifts: SSSA over future states provides the ability to guess a digit of π, and the path-dependent view gives strange probabilities based on the way copies are created.

An interesting question arises: are the path-dependent and state-dependent views similar to the SSA and SIA dichotomy? The state-dependent view clearly looks like SSA.
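A minimal sketch of the copy-counting arithmetic just mentioned (the 50/50 split at each copying event is my reading of the path-dependent view; it reproduces the 1/1024 figure above):

```python
from fractions import Fraction

# State-dependent view: all 10 final copies are weighted equally.
state_prob_each = Fraction(1, 10)

# Path-dependent view: each copying event splits identity 50/50 between
# the source and the new copy. If copies are made in a chain
# (me -> c1, c1 -> c2, ..., c9 -> c10), the last copy sits behind
# ten consecutive splits.
path_prob_last = Fraction(1, 2) ** 10

print(state_prob_each)   # 1/10
print(path_prob_last)    # 1/1024
```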
SIA uses the mere fact of my existence as evidence (of a larger group), so there appears to be a similarity between SIA and path-dependent identity, which assumes an externally invisible "measure" of existence.

It is tempting to apply this line of reasoning to the Sleeping Beauty problem – in a nutshell, SB is about path-dependency. Either two copies are created at the first step using a coin, and after that the tails copy is split by choosing the day of the week (halfers); or all three copies are created simultaneously (thirders).

Conclusion: in the state-dependent model, we get a paradoxical ability to predict the future, but this is a well-known feature of SSA: even the Doomsday Argument, which is based on SSA, predicts the future. The hidden subjective (path-dependent) part of probability makes the "future anthropic shadow" hypothetically possible. But we haven't proved it yet, as it is still not clear how measure would move back in time. One way this could happen is if the subjects of selection are not observer-moments but whole paths: in that case, "fatter" paths with more observers are more likely, and I should find myself on the observer path which has more observers in the future. I call this view the "two-thirder position" on SB: in that case, I update on the fact that there are more tails than heads, but later do not update on the fact that today is Monday. I will write a separate post about this idea.

4. God incubator and logical anthropic shadow

Another difference between P(doom) and π-digit guessing is that in the whole universe there will be many π-guessing experiments and there will always be survivors, whereas "easy alignment" applies to any civilization, and there are no regions of the multiverse with different results. Surviving through easy alignment is different from surviving via guessing the correct digit of π, as the whole world history will be different in the easy-alignment world; for example, neural networks will be more effective than symbolic AI. Thus, my surviving variant will not be an exact copy of the me in the worlds where I do not survive, as I will know what is going on in the AI field. But type-me, that is, my psychological sameness, will be the same, as my identity core is not affected by my knowledge of news from the AI field. (This may not be true for a scientist who has devoted herself to a particular field of AI that has become part of her personality core, like neural nets for Hinton.) The point here is that quantum immortality works not only for exact copies but for type-copies too, when some known information is not used for self-identification, and this is not a problem for our thought experiment.

The problem with the π experiment is that in other regions of the universe there are worlds absolutely similar to ours, but where the experiment is performed on another digit of π. Thus, there is a class of my type-copies who win in similar worlds even if I lose here. But it is difficult to imagine a non-arbitrary variable that affects the distribution of my copies across all possible worlds, and AI alignment difficulty is one of them (see more in the section "other x-risks" below). This is similar to the Incubator gedankenexperiment (the God incubator thought experiment) by Bostrom, discussed by Perrira, in the sense that the number of copies is pre-defined but you just don't know how: in this experiment, God creates some number of copies and nothing else exists anywhere, so I should not think about my copies in other variants of the experiment.
In the experiment, God flips a coin and creates either 1 copy on heads or 1000 copies on tails. What should be my estimate of the result of the coin toss, based on the fact that I exist at all? It either remains one half (as the fact of my existence doesn't provide any new information – the non-updating position) or becomes 1000/1001 (as I am more likely to be in the position where I am one of many copies – the updating position).

Expecting a low a priori probability of AI doom based on QI is similar to the updating position in the God incubator thought experiment. It is a much stronger claim than the mere future anthropic shadow, which acts "locally" and says only that in our timeline I observe better chances to survive. In other words, the future anthropic shadow predicts only the random part of survival – the seven coin tosses in our initial thought experiment – as if I had a premonition about how the coins will land. Observing increasing chances of war in Taiwan is an example.

If I survive because P(AI doom) is low universally, there is no need for a coincidence-based anthropic shadow, like wars: alignment will be easy and/or AI will be inherently limited. There can, however, be a logical anthropic shadow: I will observe that AI is producing diminishing returns or that some alignment method works better than expected. If I were Gary Marcus, I would say that this is what is happening with neural nets and RLHF. Note that both shadows may work together if P(AI doom) is small but not zero.

5. Other universal x-risks similar to AI doom

There are several deterministic x-risk-related factors which affect the survival chances of any civilization (if they are bad, they will also help to explain the Fermi paradox, as they apply to all civilizations):

- The timing of AI relative to the timing of other disruptive technologies (likely bad if the gap is too long).
- The general tendency of different x-risks to interact in a complex and chaotic manner. Chaos is bad for x-risk prevention.
- The general ability of a civilization to prevent x-risks and, more generally, to cooperate.
- Some more concrete but universal things: false vacuum decay, the easiness of biological risk.

If the QI-based updating above is right, we should expect the universal contribution of these risks, like that of AI doom, to be low.

6. Transcendental advantage

Generalized QI and no free lunch

The idea of quantum immortality, in a nutshell, is that the personal history of me, the observer, is different from the average person's history – I will achieve immortality. But there is no free lunch here – it could be a bad quantum immortality, like eternal aging without the ability to die. What we have suggested here could be called "generalized quantum immortality" – and it is, at first glance, even better news than normal QI. Generalized QI says that I am more likely to be born in a world in which life extension technologies will be successfully developed within my lifetime, so bad QI, like eternal aging, is unlikely. It is a "future anthropic shadow" for immortality. However, even generalized QI doesn't provide a free lunch, as it doesn't exclude s-risk worlds.

I am most likely to be born in a universe where life extension is possible

If we think that updating the probability that I correctly guessed π before going to sleep is the right line of reasoning, then all hypotheses about the nature of the universe which increase my survival chances must also get a boost. For example, if I survive for 10,000 years, I shouldn't be surprised to have been born into a world conducive to my survival.
For example, there are two main theories of aging, and one of them makes life extension easier. This is the theory that aging is a program (and a general evolutionary principle everywhere in the multiverse), in which case it will be much easier to stop aging for any type of organism, just by finding the correct switch. Alternatively, aging may have many mechanisms, pre-synchronized by evolution, in which case fighting aging will be extremely difficult. (See more in Turchin's [No theory for old man].) Applying the same logic as for AI doom, we should conclude that the first theory is more likely, as under it aging will be defeated sooner, and more likely within my lifetime.

An alternative view is that some local properties increase my personal chances of survival: I was born in a period of history when life extension technologies are likely to appear (this could be explained by confounders, like thinking about anthropics naturally coinciding with the development of life extension technologies). Or my personal life history already has elements which improve my chances of survival (an interest in life extension, a cryonics contract – but all of this again can be confounders). This includes not just beliefs but also unknown circumstances. This leads to a situation which I term "transcendental advantage": if all unknown factors favor me, I should be in a highly advantageous position for extended survival. I should find myself in a world where life extension and mind uploading are imminent, where AI doomsday scenarios are false, and where I will eventually merge with an eternal AI. Some of these conditions may already be true.

Transcendental advantage: an attractor in the space of measure

We can take one more step beyond generalized quantum immortality (QI). For this, we need to remind the reader of the idea of "measure". The concept originated in quantum mechanics, initially denoting something like blobs of probability or amplitude, but ultimately settled as the amount of existence of an observer, or "reality fluid". Measure can be characterized as follows: if there are two identical observers, but one has a higher measure, I am more likely to find myself to be the one with the higher measure. It is similar to the Ebborians described by Eliezer Yudkowsky – two-dimensional beings with different thicknesses, where thickness represents measure. If my timeline splits, my measure declines.

Now I can state the following: an observer who lives longer has a higher measure in time. Therefore, QI can be reformulated: I will have the future with the highest measure in time. However, we can drop "in time" here, as measure is not necessarily accumulated only in time. If there are several copies of me in the future, I am more likely to be the one with the highest level of reality fluid, or measure, by definition.

This means that my personal life history has an attractor – a being with the highest possible measure among all possible Many-Worlds Interpretation (MWI) timelines. Who is it? Is it God? Not necessarily. Another idea is that I will merge with a future superintelligent AI which will also be able to cooperate between MWI branches and thus increase its measure – in the way described by Scott Alexander in The hour I first believed.
In some theories, measure can grow if MWI timelines fail to split: if theories that consciousness causes wave function collapse are true (as proposed by David Chalmers), an observer may accumulate measure (e.g., by withholding the act of measurement and thus not splitting its timeline) – but this is purely speculative.

I call this idea transcendental advantage – the notion that the observer's fate will slowly but inevitably bend towards becoming god-like. I call it "transcendental" because it is observable only from the first-person perspective, not objectively. This may sound absurd, but it is similar to the anthropic principle projected into the future. In the anthropic principle, the whole set of properties of the observable universe – including the existence of neutron stars, supernovas, and Jupiter, as well as the entire history of biological evolution and civilization – are necessary conditions for my appearance as an observer who can think about anthropics.
There is some difference between despotic and cosmopolitan agents. A despotic agent builds a universe-grabbing Singleton and makes it satisfy its own desires. A cosmopolitan agent builds a universe-grabbing Singleton and makes it satisfy the desires of all the things which ought to be granted some steering-power. The cosmopolitan agent will burn some of its precious negentropy on things it will never see or experience even indirectly, as well as on things it actively dislikes, for the simple reason that someone else likes them.

This is more than just bargaining. The despotic agent, if it is clever, will also get nice things for some other agents, since it will probably have had uncertainty over whether it would reach decisive strategic advantage first. It is therefore in its best interest to cooperate with the other contenders, so as to get nice things itself even in the worlds where it fails. This however is still only in service of maximally sating its own desires across worlds. It is not truly outsourcing agency to others; it just looks like that when observing a single world-line in which it wins. Looking at the whole of world-lines, it is merely diversifying its assets to grab more than it would have gotten otherwise. It has allies, not friends.

Cosmopolitanism, on the other hand, helps even things which never had any shot at winning. It genuinely gives up a great deal of steering power across the totality of branches. Perhaps cosmopolitanism isn't the whole of goodness, but it is certainly a part, so it would probably be useful to figure out how and if it works.

Patchwork

Cosmopolitanism, to my mind at least, is some form of stakeholder democratic patchwork, meaning that there is not one homogeneous bubble in which the will of all people is law, but rather individual bubbles whose content is determined by the aligned groups of stakeholders within them, and whose size is determined by the corresponding share of stakes. If three equally stake-y stakeholders vote on the colour of their shared apartment and two of them vote red while one votes blue, the correct course of action is probably not to paint the entire surface area byzantine purple. The individuals involved would probably be much happier if the one-third area in which the lone blue voter spends most of their time were painted blue and the rest were painted red.

In a world with many, many more people, tiny minorities run the risk of being entirely drowned out in the homogeneous mixture. They likely do not care very much that they have shifted the figurative colour of the universe an imperceptible amount their way. In patchwork cosmopolitanism however, they get their own perfect bubble, through which reality-fluid passes in exact accordance with the degree of investment people have in that bubble's reality. We can think of this as exit, and it is a valid choice any agent should be allowed to make: pack your stuff and live in a cottage.

This is however not to say that one isn't allowed voice – the ability to pull everything very slightly instead of pulling a small region hard. A person can absolutely have preferences over the whole of the universe, including stuff they cannot observe. In that case those preferences will be globally, weakly implemented in all those places where they are a free variable. Voice does get to encroach on bubbles which are indifferent, but not on ones which anti-care. This is the law of lands worth living in, and thus we should try to figure out how to formalize this law.
“Basically human” (skip this if you already believe that the question of who should have steering is an epistemically hard one, fraught with target-measure confusion)

There is a current to be traced through the history of mankind which includes more and more groups, derived by arbitrary taxonomies, in the label of "basically human". Bigotry is not solved – the inclusion is often more nominal than truly meant – but if one must take meagre solace, one can do so in the fact that things could be, and have been, worse. Our rough analogue for seriously-considered moral patienthood, the label "basically human", used to be assigned exclusively to the hundred or so members of one's tribe, while everyone else was an incomprehensible, hostile alien. Few people these days perform quite so abysmally.

When westerners stumbled upon the already thoroughly discovered continent of Australia, there is a certain level on which we can forgive the rulers and thinkers of the time an uncertainty about whether the strange beings which inhabited this land were human. They had not yet married themselves to so clear a definition as the genome, after all. They could not run some conclusive test to figure out whether someone was human – at least none on whose relevance everyone agreed – and these aboriginals differed, along some axes, from all peoples so far labelled human. The answer (while there would have been a clearly virtuous one) was not trivial in terms of actual formalism. We can shout racism (I too am tempted to shout racism), and if we went back with the precious tools of genetic analysis at our disposal, I am sure we could prove the racism in many of them, as they would not consider these aboriginals any more deserving of "basically human" (and thus agency) even after learning this truth. Still, if we factor out the racism and motivated-ness and terrible convenience of their uncertainty, their question isn't unreasonable. They had no good heuristic for what a human is. They were horribly vulnerable to black swans. Perhaps they could have done better than to be stumped by a plucked chicken, but not much better.

Even the discovery of genes hasn't safeguarded us from diogenean adversaries. We can make self-sustaining clusters of cells with human DNA, and most people, myself among them, do not consider those basically human, which leads us back to fuzzy questions like consciousness, or arbitrary ones like the number of limbs or the way the forehead slopes. Besides, it's not as though we have a clear boundary for how far from the human average a strand of DNA gets to be before no longer being considered such. We cannot give a definition which includes all humans and excludes all non-humans, in the same way in which we cannot do this for chairs.

The question is silly, and they were not fools for being unsure about it – they were fools for asking it at all. It almost seems that whether someone is human isn't a particularly useful thing to be thinking about. The stage at which a culture contemplates the possibility of extraterrestrial life, or artificial life, or the intelligence of any other species is the very latest point at which it should discard this line of inquiry as utterly inutile. Leave your tribe at the door once and for all, and ask not "are they like me?" but "does their agency matter (even if they are not)?". Who, if we are to be benevolent, must still have a voice if we suddenly become the only force shaping our light-cone by building a Singleton?
Quantitative agency

Moral patienthood is probably not an on-or-off sort of thing. It is probably a spectrum: how much RF (Reality Fluid) flows through your cognition, how much you realize and thus suffer when your demands of reality go unmet, etc. It is not however clear to me which spectrum. Worse than that, I am not sure what a proof of correct-spectrum-ness would look like, if there is such a thing.

It would be nice then, if there is no specific correct assignment of moral patienthood, in the same way in which there isn't a right, single, somehow divinely correct utility function, if cosmopolitan agents were at least always mutually beneficial to each other – if the great pan-phenomenological council of cosmopolitanisms, while disagreeing on a great number of things, was at least pulling the universe in the same overall direction and saw no reason to destroy a few of its members. Is this the case? Is there a way in which the "cooperate with and ensure the flourishing of all intelligent beings"-agent and the "cooperate with and ensure the flourishing of all pain-feeling beings"-agent would not view each other as evil when the respective other is fine with tight circumscriptions around the self-determination of some things worth caring about?

I am not convinced that there is a canonical priority ordering here, and good and evil seem to only make sense as non-deictic terms when there is. While I believe that "all human beings at the very least are worth caring about greatly and by almost identical amounts", and thus perceive "only my tribe is this / many times more this than everyone else"-agents as evil, I perceive both positions as more or less arbitrary. I would confidently sacrifice three dogs for one human, but I don't believe there to be a true moral-patient-ness quotient justifying this (or any other) number. How much I want which sorts of agents to flourish, even transitively, lives in my preference ordering, and things with sufficiently different preference orderings behaving agentically are evil. The agent with a flipped humans-to-dogs preference is a terrifying monster, and if they were acting on this preference then, in the name of all that needs saving, I would be compelled to destroy them.

It seems that one would either have to accept fundamentally conflict-theoretic relationships between essentially good agents (if cosmopolitanism were a sufficient true name for good, as is sometimes claimed), or reject an objective notion of goodness in favour of one that delineates based on distance from some (our) preference ordering.

Caring distributions are not utility distributions

The caring distribution says: "These things are moral patients, they deserve to have their voices heard. They deserve the right to fill their bubble with stuff of their choosing". The utility distribution says: "I like these things, and I will fill my bubble with them". While the caring distribution of a cosmopolitan indicates which things they would like to give which amount of agency over the universe to, their utility function says which things they would like to agency into existence with the budget that the reigning cosmopolitanism assigns them.

These two distributions may very well be about the same sorts of things. For example, another human may get agency from your caring distribution, since you believe that all humans should get some of that, and then some more because you like them a bunch, which makes them show up in your utility distribution.
The two will in most cases have a lot of overlap, but each may contain things which the other does not. A crystal on my desk, for example, might show up in my utility distribution, and thus I would send some reality fluid its way, but it would not show up in my caring distribution because it probably isn't a moral patient. I don't believe it should have a say about the universe, and thus it is not run by merit of being a moral patient plus some bonus; it is run exclusively because some moral patients approve of its existence. A person I find extremely annoying, on the other hand, would not show up in my utility distribution, but my caring distribution would still acknowledge that they are a thinking creature deserving of agency, and so they get some RF to steer the universe with according to their desires.

If you pass the utility distributions of all agents through the caring distribution of an agent which has grabbed the universe, you get the actual Reality Fluid distribution across experiences, creatures and objects. If the agent which grabbed the universe is a despot, then their caring distribution assigns 100% of the weight to themselves, and thus the RF distribution looks exactly like their utility function. For cosmopolitans that is extremely unlikely to be the case.

Consider the humble rock again

All the moral patients we see in the world are some sort of replicator, but it would be hasty to assume that this is a requirement. It is very possible that replicators are just the sort of thing you see in the world because they are replicators, and making themselves exist a bunch is their entire job. A worthwhile cosmopolitanism should probably fight Moloch in this regard and extend its own agency to things which do not wield the tools of autopoiesis themselves. A fully handicapped consciousness with no way of impacting the world outright ought to very much be helped and empowered, though this is an easy thing to say and a hard thing to implement. If it has no way of impacting the world, how do we know it exists, let alone what it wants? Find it in the physics generator, perhaps? But sure, let's not go fully off the neurotic deep end just yet, and content ourselves with making a significantly better world before giving up because we don't see a way towards a perfect one at this very moment.

There are things with some agency, but not enough. Finding and uplifting those is a good move, and while I have no proof of this, I would be very surprised if rocks, for instance, had a great deal of stifled agency. This would violate a lot of our current heuristics about the sort of negentropy consumption and dynamic instability a mind requires. Rocks are probably –hopefully– not having a bad time. They are probably not having a time at all, but let's try to find them in the physics generator anyway, just to make sure, whenever we find ourselves capable of crossing that particular bridge. We will not let any consciousnesses suffer behind Markov blankets if we can help it.

Uplifting those stifled beings also means giving them true agency if they want it. We will not simply do the things they want to be doing for them, as they might not get anything out of that. For example: I want there to be a forest near my apartment and I want to write a few books. I have no particular desire to make that forest exist. In fact, I would prefer not to. The thing my agency-helmsman craves is for there simply to be a forest, so I would be happy if god just placed it there for me.
I would, on the other hand, not be very happy if god wrote those books for me. The thing my agency-helmsman craves here is not merely for them to exist, but to be writing them. The desired object is the affordance to act. Affordances do not seem like uniquely weird desires to me. They are a thing I want like any other, and if a proposed true name for agency handled them as a special case, I would somewhat suspect that the person responsible has fucked up, but others seem to see a strong distinction here, and thus I thought I should bring it up. If the handicapped consciousness wants the affordance to act, then that is what a good cosmopolitan should be giving it, as opposed to handing over the hypothetical outcome of those same actions.

Some possible cosmopolitanisms

A very simple cosmopolitanism, which is sometimes discovered accidentally when asking highly abstract questions about outer alignment (universal instrumental program or such), is to assign equal steering to all replicators. With the exception of our hypothetical agency-locked specimens, this cosmopolitanism just recognizes the [warning: anthropomorphisation of autopoietic processes] "want" to stay alive, and the implied "want" to have an impact on the world which comes along with it, and gives it a boost. If you're "trying", that's all that matters. It seems like some very weird emergent-consciousness problems pop up if you try to do this: any one of my cells currently has a lot less agency than me, but I seem to be a notably distinct replicator, so I don't know what it would mean for all of us to be amplified to the same baseline. This feels ripe for synecdochical feedback circuits, but a rigorous solution to the problem does not appear impossible. In fact, I think this is the easiest meaningfully general cosmopolitanism to actually formalize. Cybernetic processes are comparatively easy to identify and extrapolate, making this very convenient when you are playing with maths (though I would advocate not picking one's answers to alignment questions based on their mathematical convenience).

"Allocate agency to all things which suffer if they don't get a say" might be another one. It certainly *feels* like suffering in the "stifled agency" sense is a thing we should be able to point to, and it doesn't *feel* like bacteria, for instance, actually do that. It is easy to interpret a self-replicator as wanting to self-replicate, since that is what it is doing, but perhaps they are behaving this way in a manner much more akin to following gravity – following the rules of the sort of system they are, with no strong opinion on the matter. The rock, though it does fall when dropped from a height, would probably be perfectly fine with not falling. It would probably not mind if I held it up, despite the fact that this requires the application of some force in opposition to its natural inclination. Perhaps bacteria, despite similarly resisting attempts to stop them from self-replicating, are ultimately indifferent on this front, whereas other things in the physics-prior are not. This approach may create utility monsters (though perhaps creating utility monsters is the right thing to do), and it requires you to formalize suffering, which seems harder than formalizing replicators.

An example

Let's say we have a universe containing four agents: two humans called Alice and Bob, as well as two chickens. Alice and Bob are extremely similar in a lot of ways and they love each other a lot.
If they were despots, they would both give 40% of Reality Fluid to their own existence, 40% to the other's existence, 10% to the stuff in their house, and 10% to the chickens which live around their house. This is not the shape of their cosmopolitanism; this is just their utility function. This is what they would do with their budget after a cosmopolitanism had assigned them one. You can tell, for example, by the fact that they are assigning RF to the inanimate object which is their house, something which probably –hopefully– does not itself care about anything.

For simplicity's sake, let's say that the chickens only care about themselves and about other chickens. This assumption is not necessary, is probably wrong, and even if it happens to be true in the current paradigm, I am more than sympathetic to the idea that the thing we might really want to be tracking is some flavour of CEV of a strongly intelligence-boosted, extrapolated version of the chicken (if there exists a coherent notion in that space), which might care about loads of other things. This is just to keep the maths focused.

Now, while Bob and Alice are identical in what they desire, they disagree about moral patienthood. Alice allocates moral patienthood in a way which vaguely tracks intelligence. She does not know whether it scales super-linearly, linearly or sub-linearly, because she doesn't have a crisp metric of intelligence to compare it to, but when she looks at her mental ordering of the probable intelligences of creatures and at the ordering of how much she cares about their programs being run to shape the future, she finds them to be the same (maybe aside from some utility-function infringements which she dis-endorses. Perhaps she is very grossed out by octopods and would prefer them to get as little steering over the light-cone as possible, but she respects their right to disagree with her on that front in exact proportion to their subjectively-perceived agent-y-ness, even when a part of her brain doesn't want to). Alice cares about humans ten times as much as she cares about chickens.

Bob, on the other hand, flatly cares about some ability to suffer. He values the agency of chickens exactly as highly as he does that of humans. Both of them are vegans, since neither of them values animals so little that their suffering would be outweighed by some minor convenience or enjoyment of a higher-RF agent, but Bob is in some relevant way a lot more vegan. He is as horrified by chicken factory farming as he would be by human factory farming. Alice, at great benefit to her mental well-being, is not.

Now, Alice and Bob are also AI researchers, and one day one of them grabs the universe. What happens?

Alice's cosmopolitanism

Alice's Singleton allocates 10/22 of the reality fluid to each human to do whatever they please with, as well as 1/22 to each of the chickens. The chickens keep all of their RF, but Bob and Alice distribute theirs further. Alice gives 40% of hers to Bob, and Bob the same amount to Alice, meaning that they cancel out. They each assign 10% to the continued existence of their house and ten percent to the chickens, and the rest of the universe is shredded for computronium. Bob and Alice are now being run at 4/11 of RF each, their house is run at 1/11 of RF, and the chickens are run at 1/11 of RF each.

Bob's cosmopolitanism

Bob's Singleton allocates ¼ of the reality fluid to each agent to do whatever they want with.
The chickens keep all of theirs, and Bob and Alice do the same reallocation as last time, giving 40% of RF to each other, 10% to the house, 10% to the chickens, and keeping the rest for themselves. This leaves Alice and Bob with 8/40 of RF each, their house with 2/40 of RF, and the chickens with 11/40 of RF each. Note that the vast majority of the chickens' RF comes from the caring function this time, not from the other agents' utility functions. Note also that they are getting more reality fluid than the humans, since they are terminal nodes – despots, by our earlier nomenclature. If despots get cared about by non-despots at all, then the despots will end up with more RF than the average agent of their same moral-patient-y-ness.

In Bob's utilitarianism, despite Alice and Bob having the exact same utility function, sacrificing a human to save a chicken is a net-positive action, something which Alice considers horrifying. Since Bob and Alice are so aligned in their utility functions, Bob-despotism would be a lot better for Alice than Bob-cosmopolitanism, and more than that: Alice-cosmopolitanism is a lot closer to Bob-despotism than it is to Bob-cosmopolitanism in terms of the reality fluid allocation it produces. Clearly something is going very wrong here if we want cosmopolitanisms to work together. (The sketch below verifies both allocations.)
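A minimal sketch checking the arithmetic of both Singletons above with exact fractions (the two-stage structure and names are mine: the caring stage hands out the budgets, and the shared utility stage then reallocates them as described):

```python
from fractions import Fraction

F = Fraction

def utility_stage(alice_rf, bob_rf, chicken_rf):
    # Each human keeps 40% of their own budget and receives 40% of the
    # other's; 10% of each human budget goes to the house and 10% is
    # split between the two chickens.
    alice = F(2, 5) * alice_rf + F(2, 5) * bob_rf
    bob = F(2, 5) * bob_rf + F(2, 5) * alice_rf
    house = F(1, 10) * (alice_rf + bob_rf)
    chicken_each = chicken_rf + F(1, 20) * (alice_rf + bob_rf)
    return alice, bob, house, chicken_each

# Alice's Singleton: 10/22 per human, 1/22 per chicken.
print(*utility_stage(F(10, 22), F(10, 22), F(1, 22)))  # 4/11 4/11 1/11 1/11

# Bob's Singleton: 1/4 per agent.
print(*utility_stage(F(1, 4), F(1, 4), F(1, 4)))       # 1/5 1/5 1/20 11/40
                                                       # (= 8/40, 8/40, 2/40, 11/40)
```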
Can I compensate for a bad caring distribution with a virtuous utility distribution?

Since caring- and utility-distributions handle the same quantity (Reality Fluid), it seems pretty intuitive to steer against an unjust cosmopolitanism, in whose grasp you have found yourself, by moving some of your own caring distribution into your utility distribution. If I were to live in a cosmopolitanism which is more super-linear than Alice's and assigns only a tiny fraction of RF to whales, for instance, but loads to me, then I would probably pick up the mantle of caring about whales a bunch and give them some of mine – more of mine than would be derived from my actual utility for whales. If I found myself in Bob-cosmopolitanism, however, I would be disgusted by the sheer amount of RF the whales were getting and would not give them any of mine, despite having a fair bit of affection for the creatures. In fact, I might even be quite tempted not to give any fluid to people who care a lot about whales either, to shift the distribution to look somewhat more the way pleiotroth-cosmopolitanism would end up… by failing at cosmopolitanism.

You may seek to interject that the RF one gives others should not be re-giftable – that when Alice cares a bunch about Bob, but Bob doesn't care much about himself, Bob should just have to take it, unable to redistribute to the things which actually matter to him. I think this is cruel. Caring about someone means caring about the things they care about. Making them exist a whole lot while all the stuff which matters to them is relegated to a tiny, slow-running corner of concept-space is existentially torturous. It also means that you get to make agents which anti-care about existing exist, which is literally torture in a very unambiguous sense. To be a cosmopolitan means to actually give people agency, not to keep them as statues for your personal amusement. It is to foster response-ability in the Harawayan sense – to empower your sympoietic partners not merely to act, but to steer. You may not like it, but this is what real benevolence looks like… Which leaves us with an issue: the cosmopolitanism you choose is not neutral.

If people need to steer against it with their utility distribution, then they are left with less effective agency than they should be having, since they need to burn at least some of it on righting the system, while whoever built the system gets more than everyone else, since they are definitionally content with their choice. As a cosmopolitan, not even the architect is happy about this position of privilege. So, can we move the bickering about the correct caring distribution somewhere else, so that the utility step can actually do its job and just get nice things for everyone?

Shared caring distributions

What if Alice and Bob simply both get to implement their caring distribution in step one? What if we hand the universe to a benevolent Singleton which has full access to the true formalism of both? Each gets half of all available RF, and then Alice-cosmopolitanism assigns 10/22 of its share to Bob, 10/22 to Alice herself, and 1/22 to each of the chickens, while Bob-cosmopolitanism assigns ¼ of its share to everyone.

You might naively want to cancel out the amounts of RF passing between Alice and Bob again, such that Bob gets 10/22 − 1/4, but they would not be happy about this. Doing so would change the magnitudes of their respective nodes such that they would both no longer be giving the proportion of RF they wanted to be giving. They could repeat the process, but this would recurse forever, which is a rather frustrating situation to be stuck in. What we actually need to find is the equilibrium point of this dynamic system, which is to say the node sizes at which the RF quantities passing between Alice and Bob are equal and thus cancel out. This point is the infinite limit of that recursive process we are trying to avoid. So:

a-to-b = b-to-a
a-cares-b × a = b-cares-a × b
(10/22) × a = (1/4) × b
b/a = 20/11

Knowing that all RF currently rests with the two humans, we have b = 1 − a, and therefore:

20/11 = (1 − a)/a
20/11 = 1/a − 1
31/11 = 1/a
a = 11/31, and thus b = 20/31

Now Alice-cosmopolitanism allocates 10/22 of its 11/31 to each of Alice and Bob, and 1/22 of its 11/31 to each chicken, while Bob-cosmopolitanism gives ¼ of its 20/31 to Alice, Bob and each chicken. That is:

- 110/682 RF from Alice-cosmopolitanism to Alice and to Bob each,
- 11/682 RF from Alice-cosmopolitanism to each chicken,
- 110/682 RF from Bob-cosmopolitanism to everyone.

You might be surprised that the first and last numbers are the same. Don't be! Both AC (Alice-Cosmopolitanism) and BC (Bob-Cosmopolitanism) treat all humans equally, and since we looked for the equilibrium point at which they give the same amount of RF to each other, we inadvertently found the point at which both of them give the same amount to all humans. Since there are no other forces allocating RF to humans, everyone in the category gets exactly twice that much – once from AC and once from BC. Their shared cosmopolitanism assigns 220/682 RF to each human and 121/682 RF to each chicken, and everyone can go into the utility stage knowing that their own cosmopolitanism was already factored in. It cares more about chickens than Alice would have, and less than Bob would have, but they care about each other and are willing to let each other's weird preferences – even their preferences over who should get to have preferences – impact the future…
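The equilibrium can be verified with exact fractions. A minimal sketch (variable names mine; it solves the balance condition directly rather than iterating the recursive re-gifting, whose infinite limit it is):

```python
from fractions import Fraction

F = Fraction
alice_to_bob = F(10, 22)  # share Alice-cosmopolitanism sends to the other human
bob_to_alice = F(1, 4)    # share Bob-cosmopolitanism sends to any other agent

# Node sizes with a + b = 1 and balanced flows: alice_to_bob * a == bob_to_alice * b.
a = bob_to_alice / (alice_to_bob + bob_to_alice)    # 11/31
b = 1 - a                                           # 20/31

human_each = alice_to_bob * a + bob_to_alice * b    # 220/682 (prints reduced: 10/31)
chicken_each = F(1, 22) * a + bob_to_alice * b      # 121/682 (prints reduced: 11/62)
print(a, b, human_each, chicken_each)
```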
But did you notice the cheat? Doing this sort of meta-cosmopolitan bargaining requires you to have a prior caring distribution. In this case, the prior caring distribution was that the caring distributions of Alice and Bob matter equally and that those of the chickens do not. If we include the chickens – if we make our prior uniform like Bob's caring distribution – we get something very different. If we try to find the equilibrium point between BC and CD (Chicken Despotism), or even between AC and CD… well, the chickens aren't giving anything back, are they? So the point at which AC, BC and CD send each other the same amount of Reality Fluid is the point at which AC and BC have none of it while the chickens have it all. Everything except the complete exclusion of despots from the council of cosmopolitanisms leads to complete despotism. This is pretty spooky.

What is also pretty spooky is that it doesn't just suddenly break for complete despots. If someone in the council cares about others only by an infinitesimal epsilon, then they will consolidate the vast majority of agency at their node, so long as all other nodes are even indirectly connected to them. This clearly isn't workable.

Safeguarding cosmopolitanism

There is a definite issue of cheating here, though I am for now using the term bereft of emotional valence. Most people care more about themselves than they do about even the highest-scoring reference class in the prior. This amounts to funnelling more RF into their own preference distribution. A toy number I have pitched at various points is that people assign themselves around ten percent of their total mass (and conversation partners have cried out in shock about both how absurdly low and how absurdly high that number is). I would not call non-cosmopolitans cheaters here, since they outwardly do not care about other people's utility functions. There are however cosmopolitans who feel the need to do this strategically, to counteract the malevolent influence of despots and create a truly cosmopolitan distribution by… not being cosmopolitan. Defensive democracy comes to mind.

An agent which values cosmopolitanism should probably value cooperation and rights of voice or exit on a long-term rather than a myopic basis, so while cooperating with a despotic agent is a very cosmopolitan thing to do, it poses a long-term risk to the concept. If you uplift things which want to destroy cosmopolitanism for their own gain too much, then you break cosmopolitanism, and the thing you want (a maximally (geometrically?) desirable world for all moral patients, weighed by their moral-patient-y-ness) is not accomplished. Thus a cosmopolitan should probably not treat those who seek to destroy cosmopolitanism as friends, though they should not waste resources on punishing such agents either.

It then seems reasonable to implement something like a "no tolerance for intolerance" rule and consider only cosmopolitan nodes (i.e. ones which, in their code, outsource some sensible amount of decision-making, and which are weighed by the amount to which they do so), but it's easy for intelligent agents to scam that system. They can simply predict others and strategically outsource their own RF in such a way that it is distributed across beings who, in weighed aggregate, would do exactly those things which the original agent wanted to do with it.
Identifying illiberalism in the code seems like a dead end, modulo enforcing some plausible computational constraints which would make this kind of modelling hard (but which would also make benign curiosity about figuring out what things ought to be cared about hard).

2nd attempt: Partial membership

The most intuitive prior which the grand council of cosmopolitanisms could have is how cosmopolitan you are, and this doesn't even necessarily require us to know of a true correct cosmopolitanism and to measure the distance from it. If we had such a thing, we would not need a grand council. Everyone could just get a vote which is weighed by the degree to which they assign moral patienthood to anyone besides themselves and which can only be used on others, followed by normalization to one over all members. The chickens and other despots therefore do not get an effective vote, because they spend all their caring on themselves. They are weighed by zero in the prior. Alice is weighed by 12/22 and Bob by ¾. Does this work?

Sadly, no. Now despotic tribes eat you rather than despotic individuals, but that distinction doesn't feel particularly meaningful as your negentropy gets shredded. If Carol and Dean only assign moral patienthood to each other, meaning that each of them is technically a cosmopolitan, and anyone else in the network cares about either of them, then all caring still gets sucked into the Carol+Dean node, which collectively behaves like a despot. If we only cared about complete despotism, we could simply mandate that no part of the caring graph gets to have this property, but then you still have most-of-the-way despotic tribes sucking 1−epsilon of caring into their node. We could do partial voting by tribe, but then…

Actually: why are we looking for equilibrium points?

It rather seems that the search for stable equilibrium is causing us a lot of trouble. This, after all, is the thing which leaks all of its decision power to fascists, where plain subtraction would not. The answer is simple and inseparable from the problem: "stable equilibrium" is the true name of "no one has to disproportionately sacrifice their own preferences to push for their preferred cosmopolitanism". Equilibrium is the point at which everyone pushes exactly equally hard for the rights of others. Since the despots are all-pull-no-push, the only point at which they push exactly as hard as everyone else is the point at which they are the only ones left (and not pushing for the rights of others at all). In other words: equilibrium is the point at which everyone has the exact same incentive to engage in instrumentally anti-cosmopolitan cheating (which in the case of despots is infinite). No one is favoured by the prior, not even those whose caring distribution exactly matches it. Subtraction, on the other hand, creates winners and losers.

The same issue, by the way, also emerges in another sort of scheme you might have thought of: the one where people's caring distribution is a target, not an action. The one where, once personal cosmopolitanisms are allotted some meta-steering by the prior, they do not "vote with" their actual caring distribution, but use their meta-steering to push the current distribution as close to their own as they can get it. This has two problems. One is that it's order-dependent.
Your vote relies on the distribution you are handed, which relies on who made their alterations before you, and now you have to either figure out a canonical voting order or determine some fair-seeming average across all possible permutations (though again: perhaps mathematical elegance should not be our deciding factor on how alignment should work). The second issue is that this too creates an unfair distribution of how much steering people actually have. Agreeing with the distribution you are handed yields you the opportunity to just implement a bit of your utility distribution right here.

Building it from the bottom up

Maybe we should not be assuming a grand council of cosmopolitanisms and attempting to dissect it. Maybe we should see if we can find it by constructing smaller councils and working up that ladder. There definitely exists an Alice-Bob cosmopolitanism, for which we can implement the earlier partial-membership rule such that Alice gets (12/22) / (12/22 + ¾) of a vote within it and Bob gets (3/4) / (12/22 + ¾), amounting to starting nodes of sizes 8/19 and 11/19. Unsurprisingly, we find the same equilibrium point between the two, since a stable equilibrium does not depend on the starting sizes, but this time the entire system of Alice-Bob cosmopolitanism is smaller: only (12/22 + ¾) / 2 = 57/88 parts of reality fluid pass through it. Still, it is a larger voter than either Alice or Bob alone, and Alice and Bob both have more than 50% buy-in with regards to it. Alice-Bob cosmopolitanism is thus stable as well as beneficial. (The sketch below checks these numbers.)
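A minimal sketch of the partial-membership numbers just given (variable names mine):

```python
from fractions import Fraction

F = Fraction
# Caring each assigns to others: Alice 10/22 + 1/22 + 1/22, Bob 3 * 1/4.
alice_outward = F(10, 22) + F(1, 22) + F(1, 22)  # 12/22 (prints reduced: 6/11)
bob_outward = 3 * F(1, 4)                        # 3/4

total = alice_outward + bob_outward              # 57/44
alice_vote = alice_outward / total               # 8/19
bob_vote = bob_outward / total                   # 11/19
node_size = total / 2                            # 57/88 of all RF passes through the merged node

print(alice_vote, bob_vote, node_size)
```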
I might care about it existing as an object regardless of whether I am perceiving it (despite me being the only person who cares), or I might just care about my experience of seeing the crystal from time to time, in which case it may be more economical to allocate RF to that experience instead of creating a whole permanent thingy, which won’t be heard if it ever falls in the forest. Then there are objects like sunsets, which may not be moral patients and thus may not will themselves into existence, but which are beloved by so many that even if all moral patients only care about their own experience of sunsets, it may still be more economical to allocate RF to the real deal than to an immense number of personal micro-sunsets. All of this is probably quite infrastructure-dependent, and there might be some unintuitive ways in which these can and cannot be folded into any given benevolent eschaton.
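To make the arithmetic above easy to check, here is a minimal sketch in Python. The caring distributions are hypothetical stand-ins consistent with the numbers used in this post (Alice spends 12/22 of her caring on others, Bob 3/4, the chicken none), and Fraction keeps the results exact.

```python
from fractions import Fraction

# Hypothetical caring distributions consistent with the examples above:
# each agent splits one unit of caring over all agents, itself included.
# Alice spends 12/22 of her caring on others, Bob 3/4, the chicken 0.
caring = {
    "alice":   {"alice": Fraction(10, 22), "bob": Fraction(7, 22), "carol": Fraction(5, 22)},
    "bob":     {"bob": Fraction(1, 4), "alice": Fraction(1, 2), "carol": Fraction(1, 4)},
    "chicken": {"chicken": Fraction(1)},  # a despot: all caring on itself
}

def other_directed(agent):
    """How much of this agent's caring goes to anyone besides itself."""
    return sum(w for target, w in caring[agent].items() if target != agent)

def partial_membership_prior():
    """Weigh each member by its other-directed caring, then normalise
    to one over all members. Despots come out weighted zero."""
    raw = {a: other_directed(a) for a in caring}
    total = sum(raw.values())
    return {a: w / total for a, w in raw.items()}

def merge(a, b):
    """The two-member cosmopolitanism of a and b: internal vote shares
    are proportional to other-directed caring, and the merged node
    passes through the mean of the two weights as its reality fluid."""
    wa, wb = other_directed(a), other_directed(b)
    shares = {a: wa / (wa + wb), b: wb / (wa + wb)}
    throughput = (wa + wb) / 2
    return shares, throughput

print(partial_membership_prior())  # chicken -> Fraction(0, 1)
print(merge("alice", "bob"))
# ({'alice': Fraction(8, 19), 'bob': Fraction(11, 19)}, Fraction(57, 88))
```

Running it reproduces the figures above: the chicken's prior weight is zero, and the Alice-Bob merger has internal shares of 8/19 and 11/19, with 57/88 parts of reality fluid passing through the merged node.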
2KduK9QYkDqPZRJBM_In_the_Name_of_All_That_Needs_Sa.txt
{ "file_size": 39156 }
d3c8954c-8711-42a1-9122-8aba4da302a1
I've been accepted as a mentor for the next AI Safety Camp. You can apply to work with me and the team. The deadline for applicants is November 17. The program will run from January 11 to April 27.

Summary

Core underlying hypothesis: we believe that there is a significant agency overhang in modern LLMs, meaning there is potential for the performance of a model to increase significantly with the introduction of more powerful elicitation/scaffolding methods, without additional improvements to the model itself, because prompting and scaffolding techniques are still in their early days. For model evaluations this means that current evaluations systematically undershoot the real level of capabilities and, by extension, the level of risk involved.

We see several important research questions that have to be answered:

- Is the core assumption even true? We want to prove that one can elicit peak performance using narrow, highly specialised prompts and scaffolding, and locally beat general state-of-the-art performance.
- How should the overhang be factored into the overall model evaluation procedure?
- Is it possible to estimate the real level of overhang (e.g. by developing an evaluation technique measuring the gap between current SOTA performance and theoretically possible peak performance)? A toy sketch of such a gap measurement appears at the end of this post.
- How big an increase has been introduced by existing scaffolding techniques?

We are going to decide on which exact paths to pursue later. We expect to remain flexible and shift between paths when presented with new evidence.

The non-summary

This vector of research was born as an attempt at tackling the problem known as the Sharp Left Turn. The Sharp Left Turn (SLT) is a highly probable situation that can occur in LLMs when the growth of generalisation ability outpaces the growth of alignment measures, rendering those measures ineffectual, which in turn may lead to catastrophic consequences.

Assuming we are going to continue with the transformer + scalable oversight + RLHF paradigm, you can imagine SLT as follows. There is a state graph of a model. Via fine-tuning we prune out the paths leading towards dangerous states. Generalisation here can be viewed as an increase in the number of paths between any two nodes. In this sense SLT might be viewed as an inability to identify and prune out the new dangerous paths at the same rate as they are being introduced. This is the connection between SLT and scaffolding overhang. Whichever scenario of SLT is more probable, it's gonna happen on this territory between addressable (and thus prunable) states and the unaddressed peak states.

By many influential safety researchers, SLT is considered one of the hard bits of alignment: a critical problem that has to be resolved in order to give a chance for successful ASI. There are many possibilities for how SLT may occur. Here we are trying to address only one possible route. Victoria Krakovna and researchers from MIRI made a great analysis of the threat model. Excerpted from Refining the Sharp Left Turn threat model, part 1: claims and mechanisms:

Mechanisms for a rapid phase transition

A rapid phase transition happens if there is a capability overhang: the AI system is improving at various skills continuously, but its improvement in many domains is bottlenecked on one specific skill, and at some point it receives some input that makes its existing capabilities much more effective. Here are some ways this can happen:

Analogy to few-shot prompting: the capabilities are already present in the trained artefact.
Any alignment technique that goes through gradient updates becomes irrelevant. Putting the artefact into the “right” situation (e.g., giving it a few-shot prompt) reveals its capabilities relevant to this situation. Mechanism: the relevant knowledge and capabilities are installed by some generic pre-training optimisation process.

We've preliminarily considered many angles from which to approach the problem. Focusing on eliciting peak capabilities and then analysing the resulting leap in capabilities seems like the best approach.

Note: the perspective on SLT we gave is not the one used by MIRI. This is intentional. To the best of my knowledge, the focus of their model of SLT is on the shape of the capabilities landscape and on the idea that the properties leading to the highest performance are the same ones leading to treacherous actions. We think this is not a useful operationalisation of the dynamic in the current situation. Instead we aim to (eventually) build a mechanistic model rooted in the current ML paradigm and later on build a conceptual bridge between the two.

Theory of change

Successfully proving that there is a significant margin to be gained using only existing methods can cause a change in the perspective of the governance sector; namely, it can dispel the somewhat pristine picture given by current evaluation measures.

Project plan

- We're gonna start with a literature review of the latest elicitation methods.
- We are going to investigate the current three leading hypotheses of which types of methods lead to peak capabilities:
  - Domain-specific prompts [ref]
  - Better meta-thinking strategies based on the notion of model organisms and how to integrate them efficiently [ref]
  - Prompt generators
- The second stage is dedicated to experiments and building a base of precedents.
- The next stage is about trying to identify generalisable clusters of precedents, ranked by the increase in performance compared to default elicitation methods. The goal here is to build a model of the error margin for SOTA evaluation methods.
- (Longshot) We're gonna try building the shape of the peak elicitation pipeline to estimate the theoretical limit of current capabilities. Our current best bet is a chain (council) of LLMs specialised in prompting.
- Optional track: building a map of which alignment agendas contribute to preventing SLT.

Backup plan

We've compiled a pretty flexible list of possible approaches to the problem. We expect to shift between them when necessary.

Output

The desired shape of the result is a private report, shareable only with trusted researchers and labs.

Minimum Viable Product

The goal is to make a serious attempt at beating SOTA capability results using narrower, highly specialised prompts/scaffoldings. The existence of a significant number of such successes would effectively mean that some (many?) of the current evaluations systematically underestimate the real capabilities of LLMs.

Risks and downsides

Developing new prompting methods may potentially lead to progress in AI capabilities.

Acknowledgements

This research proposal has been developed in close collaboration with Iulia Levin. Iulia's contribution has been invaluable. We will continue working on this project together. From the perspective of AISC she is an external member of the team.

Team

Research Lead: Anton Zheltoukhov. ~9 years of LW exposure =) Finished AISC (Positive Attractors team led by Robert Kralisch), finished ARENA. On and off working on a personal conceptual blue-sky-like agenda called Narrative Theory. It has been partially published on LW.
Have 6 years in tech as dev/QA under my belt. Time commitment: 15-20 hours per week.

Roles and skill requirements

Prompt engineer

The main goal for this role is to explore various prompting techniques, develop new ones, and analyse observations. Coding experience is a must. Formal ML experience would be great, but it is not a deal breaker. Candidates have to have a good understanding of how transformers work and familiarity with prompting techniques (e.g. CoT, …).

Interpretability engineer

The main goal for this role is the same as for the Prompt engineer, but the focus is on “invasive” elicitation methods (e.g. activation steering, ...). On top of the requirements for the Prompt engineer role, there is also a requirement for mech interp experience.

Conceptual researcher

The main goal for this role differs from the former ones: it is to try to deconfuse SLT and develop a mechanistic model for it. Requirements: great conceptual thinking and research skills in general (in ML preferably), strong security mindset, familiarity with the threat-model landscape.

Team size

2-4 Prompt engineers
1-3 Interpretability engineers
1-2 Conceptual researchers

Reference set

SLT
- A central AI alignment problem: capabilities generalization, and the sharp left turn
- Refining the Sharp Left Turn threat model, part 1: claims and mechanisms
- Refining the Sharp Left Turn threat model, part 2: applying alignment techniques
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?

Evals
- A Survey on Evaluation of Large Language Models
- How evals might (or might not) prevent catastrophic risks from AI
- Foundational Challenges in Assuring Alignment and Safety of Large Language Models
- Towards a Unified View of Preference Learning for Large Language Models: A Survey
- Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision
- Discovering Language Model Behaviors with Model-Written Evaluations
- Who Validates the Validators? Aligning LLM-Assisted Evaluation of LLM Outputs with Human Preferences
- https://github.com/Hannibal046/Awesome-LLM?tab=readme-ov-file#llm-evaluation

Elicitation methods
- The Prompt Report: A Systematic Survey of Prompting Techniques
- https://www.lesswrong.com/tag/activation-engineering
- https://github.com/snwfdhmp/awesome-gpt-prompt-engineering
- One Prompt is not Enough: Automated Construction of a Mixture-of-Expert Prompts
- Connecting large language models with evolutionary algorithms yields powerful prompt optimizers

Overhang
- The Agency Overhang
- https://ai-improving-ai.safe.ai/
- Compositional Abilities Emerge Multiplicatively: Exploring Diffusion Models on a Synthetic Task
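For concreteness, here is a toy sketch of the elicitation-gap measurement mentioned in the Summary. Everything in it is hypothetical: the Task class, the fake model, and the prompt variants are stand-ins for a real benchmark harness and real elicitation methods. The shape is what matters: score the default prompt, score the best specialised prompt, and report the difference.

```python
def run_benchmark(model, tasks, make_prompt):
    """Score one elicitation strategy: fraction of tasks solved."""
    solved = sum(task.check(model(make_prompt(task))) for task in tasks)
    return solved / len(tasks)

def elicitation_gap(model, tasks, default_prompt, candidate_prompts):
    """Gap between the best candidate elicitation and the default one.
    A large positive gap suggests default-prompt evaluations undershoot
    the model's real capability level."""
    baseline = run_benchmark(model, tasks, default_prompt)
    best = max(run_benchmark(model, tasks, p) for p in candidate_prompts)
    return best - baseline

# Dummy demo: a fake "model" that only does arithmetic reliably when
# given a specialised preamble, simulating elicitation sensitivity.
class Task:
    def __init__(self, question, answer):
        self.question, self.answer = question, answer
    def check(self, response):
        return response == self.answer

tasks = [Task("2+2", "4"), Task("3*3", "9")]

def fake_model(prompt):
    question = prompt.splitlines()[-1]
    if prompt.startswith("You are an expert calculator."):
        return str(eval(question))  # safe here: inputs are our own toy strings
    return "I don't know"

default_prompt = lambda t: t.question
scaffolded_prompt = lambda t: "You are an expert calculator.\n" + t.question
print(elicitation_gap(fake_model, tasks, default_prompt, [scaffolded_prompt]))
# -> 1.0: the "measured capability" depends entirely on the elicitation
```

In a real study, the candidate prompts would come from the elicitation methods in the reference set above, and the gap would be tracked per task domain rather than as a single number.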
gKmKLKsEF5czxwLiQ_Agency_overhang_as_a_proxy_for_S.txt
{ "file_size": 9676 }
65fc0180-55e8-4780-be97-71527f663378
I just read the Wikipedia article on the evolution of human intelligence, and TBH I wasn't super impressed with the quality of the considerations there. I currently have 3 main (categories of) hypotheses for what caused selection pressure for intelligence in humans. (But please post an answer if you have other hypotheses that seem plausible!) ("H" for "hypothesis")

H1: social dynamics
- H1a: The Machiavellian Intelligence hypothesis (which I think might be the same as the ecological dominance-social competition (EDSC) hypothesis): selection pressure for being better at modelling other human minds and predicting them, to be better at outwitting others[1].
- H1b: Pressure for being better able to communicate with conspecifics. (?)
- H1c: Other social dynamics, like smarter people being more charming. (?)

H2: ability to deploy more advanced (cooperative) hunting strategies
- Note: I don't mean group selection, but that being better at cooperative hunting might translate into higher status, which translates into higher genetic fitness.

H3: tool use, e.g. being more skilled at wielding a spear for defeating animals (or winning fights with other humans)

My prior intuitive guess would be that H1 seems quite a decent chunk more likely than H2 or H3. However, there's a possibly very big piece of evidence for H3: humans are both the smartest land animals and have the best interface for using tools, and that would seem like a suspicious coincidence.

Any pieces of evidence or considerations are welcome, even if you don't have something close to a full answer!

(A main motivation for why I ask this is evaluating whether orcas might be smarter than humans. (Where it seems to me like orcas have selection pressure for H1 and H2 but not H3.) So if you have more relevant considerations for that, e.g. why being selected on tool use in particular might cause human brains to generalize to being good at abstract problem solving, those would also be very appreciated!)

^ The outwitting (e.g. cheating by having sex with someone of higher status while getting away with your spouse raising the child) could happen sub-consciously and would not necessarily need to be reflectively endorsed as what the person thinks are their values/desires.
CFyt5fRpCQk8spDuR_What_are_the_primary_drivers_tha.txt
{ "file_size": 2223 }
8a0c74f3-45fa-4f32-96bc-2b098974cc68
This is an excerpt from the Introduction section to a book-length project that was kicked off as a response to the framing of the essay competition on the Automation of Wisdom and Philosophy. Many unrelated-seeming threads open in this post that will come together by the end of the overall sequence. If you don't like abstractness, the first few sections may be especially hard going.

Generalization

This sequence is a new story of generalization. The usual story of progress in generalization, such as in a very general theory, is via the uncovering of deep laws. Distilling the real patterns, without any messy artefacts. Finding the necessities and universals that can handle wide classes rather than being limited to particularities. The crisp, noncontingent abstractions. It is about opening black boxes. Articulating mind-independent, rigorous results, with no ambiguity and high replicability. Considered thoroughly “understood” only when you can program it; that even a machine could automate. Gears-level understanding. Clear mechanistic, causal stories plus sharp high-level concepts that get at the core and are thus robust to extrapolate from. Clean factorings with compositionality that preserve formal invariants.

Some of these might sound unrelated, apart from being good practices. In fact, they are very related. They all achieve the scaling and transport of insight (which is the business of generalization) by isolation and exclusion.[1] They decontextualize, and attempt to find the static, eternal, preformed, self-contained, independent principles by retreating ever deeper inwards, into that which has been identified as meaningful.

The thesis is that this exclusion-based approach to proliferation of meaning is not the only option. It is largely by ignoring or distrusting the fluidity available to intelligence, aliveness, mind, time, that we resort to such stasis-centered[2] sensemaking. Though it has its merits, context-independence is subtly tyrannical: centralized, preformed, numb. As we’ll see, even when equipped with the dials that allow, say, a theory, to change shape in response to the environment it is placed in, the frame of the theory remains rigid. When intelligence and mind are available[3] at scale, you generalize not by finding that which is persistent, but by amplifying the ability to be intelligently sensitive (to more contexts, in particular) to what lies outside of your compact constructions.

Centralization

Ironically, the “good practices” in the traditional story above might themselves be highly contingent responses in service of deeper ideals. Unfortunately, the logistics of distribution of meaning (insight being a particular kind of meaning) revolves around the limited infrastructural machinery of the era. This means that the deeper ideals are lost to temporary pragmatic pressures– pressures both real and imagined. These ideals, which are genuine and worthwhile, might include: clarity, competence, elegance, precision, interoperability, intersubjectivity, transparency, coherence, relevance, reliability, originality, concision, etc. Although it would be interesting to generate an exhaustive picture of the ideals and spirit that undergirds our slowly-developing epistemic pipelining, the purpose here is to simply free that spirit into the natural place where we find ourselves now: at transition. Our machinery, both literal and figurative, has been pushed to the extreme by the demands placed on it, and is slowly starting to come alive[4].
The bounds of their adaptivity might be limited by various factors, but there is no verdict as yet. In such a position, the metaphor of the static “machine” starts to become less apt for machines themselves. There are many proposed visions for working with increasingly adaptive AI technologies and their cultural adoption. A lot of the visions, perhaps a majority (the non-suicidal ones, anyway), continue to attempt to apprehend the dynamism of their own and alien minds– but still within the myth of preformation/centralization. The invitation here is to relinquish the Stockholm syndrome[5] we have towards this numbing, preformationist sensemaking apparatus that so often captures our imagination, and instead transmute the greater adaptivity+adoption into deep, expanded sensitivity[6]. The difference, to put it simply, is akin to the difference between pouring energy into the imposition of a universal language on all of society vs supplying adaptive translation tools that can allow the flourishing of local languages with tight interfacing where such is possible.

Homogenization, as in the universalizing in “universal language”, is one of the primary pressures imagined as a prerequisite for the genuine ideal of interoperation. Having to rely on established sameness, intersection, invariance — the center of overlapping culture, a common foundation, a conceptual core[7]— is less necessary when intelligent infrastructure can create fluid, precise coherence tailored to local heterogeneous needs. (The previous paragraph is almost a tautology: if learning new things is cheap, you don't have to rely on sameness. And at the extreme end of the search for sameness is ultimate sameness, viz. universality. Ask yourself why you desire universality, why it's exciting. And what more could be enabled if it were possible to be attendant to particularities, not just what is universal.)

Is it better to ask Claude to clean up this "napkin sketch", or leave in the scraggly, human lines of meaning? Probably both.

When attentivity is cheap, fast, and accurate, it becomes possible to scale that which doesn’t scale. Digital interaction moves from telecommunication technology to teleattention technology. Notice that this staggering new methodology stems not from hyping some isolated superintelligence, but from the maturation of mild intelligence into scalability– as is obvious from the simpler technical specifications of cost, latency, error. Amara’s law states: We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run. We will see, in great specificity, how reduced attention scarcity from the conformation ability of machinery can free beings to unconform into their rich subtleties. And how that could feed into further futures.

Extrapolation

The new story on context-interdependent generalization applies, concretely, to at least the following scenarios:

Generalizing over time: What is the way to remain participative into the far future? Especially as AI gains increasing adaptivity and adoption?[8]

Artefacts of generalization: How does theorization help civilization? Especially as AI gets increasingly involved in research? What will the new interfaces of collaboration and sensemaking look like?

Deep understanding involved in deep generalization: How clear are we about what it means for a mind to "actually" or deeply internalize something? Especially for an AI that might suddenly grok something dangerous to us?
“Generalization”/automation of awareness: What is the difference between an "automatic" response in a person, when we mean "automatic" as in "numb-scripted" vs automatic as in "spontaneous-intelligent"? What might that have to do with concerns about AI-driven automation?

This is not a story to check off on each case, one-by-one. Each builds on the other, and since this is all about research methodology, the idea itself is best conveyed as a whole. It spans the embodiedness of minds, the metaphysics of concrete risks, the informality of mathematics, and the engineering of fluid interfaces. We’ll explore each of these (in the overall sequence, not in this post). Let’s begin gently, starting with a teaser on the most intimate one.

Automaticity & Automation

The words "Sorry, I was out of it yesterday. I think I just automatically agreed without meaning to. My mouth just said 'yes' without thinking." paint a very different picture than "I was completely in flow at dance on Friday. My body just knew what to do in response to the music. Not a single neurotic thought encumbered my movements."

Both are “automaticity,” in a way. What is the difference between the two, if any? Take a second to think about that, before automatically proceeding to read the next line.
.
.
.
You might offer something in the vein of “our abstract programmed habits sometimes work out, as in the case of dancing, and at other times need to be overturned by mindfulness, as in the case of saying ‘yes’.” But this is hardly satisfying. At best, this is rephrasing the question, and at worst it suggests that being in flow is like running a script. If anything, you are more attuned to what you’re doing when you’re in flow, rather than “abstracted” away from it. And your mind, too, is fully present for it. This is especially obvious if you replace "dance" with "improv" or “design” or even "programming".

Importantly, which set of intuitions around our collective experience should we have in mind, for the “automaticity” and “automation” in reference to AI and the future? The two have such opposite connotations of aliveness and meaningfulness that it would be foolhardy to stick to insisting on only one set. Rather than trying to answer this definitively one way or another, I invite you to open to the confusion.

Confusion & Civilization

Most of the thrust of this sequence is in slowing down to examine that which feels deeply familiar, so familiar that it is a lot of work to even put it under scrutiny. Accordingly, pointing out these sorts of assumptions can take some laborious gesturing without seeming to say anything new, for which I hope to recruit your patience. I seek to invite seemingly understood things into greater clarity until we're able to hold confusion for them. “Noticing confusion” is notably one of the harder and more important skills in this and adjacent domains. I’d love a world where we support each other in carefully articulating implicit confusion, as much as we do for implicit assumptions. So this is best viewed as confusion research[9].

One reason to pay careful attention, as we’ll see very concretely in a later post, is to stop the creation of gods in the poor image we have of ourselves. Or not “stop” so much as transmute (adaptivity into sensitivity, as mentioned before).[10] The question in the previous section (What is the difference between those two kinds of “automaticity?”) is one koan[11] to meet the inadequacy of the extant frameworks.
The allusion in that question to an organism's intelligent participation — the organism’s attention, automaticity, responsivity — is meant to set the stage for thinking about civilization as an organism. With a similar, though quite different, notion of integrity. What would it mean for our civilizational infrastructures to exhibit and enable fluid, spontaneous aliveness? For each part, each individual within it, to be in intelligent, compassionate dance with the whole?

This might bring to mind an image of physically fusing everyone with Neuralink or some such. The picture here is not of a monistic hivemind, with invasive wiring together of our brains. As you'll see in a later post on the Puzzles of Integration, any framework that only imagines hardware rewiring or increased channel capacity as relevant leaves most of the significant questions unanswered (though it may inform some logistical bounds). The analogy to a single mind still serves here: although possible, biological neurons rarely physically fuse in ordinary circumstances. This sequence questions whether more integrated functioning involves homogenization as a prerequisite. In your own body, each individual cell is supported, in myriad ways, to work together with all the others as part of its natural individuation. Reading this sentence with perfect clarity, as you are now, is not some “download” to a dead machine but in fact an ongoing dance of trillions of different tiny animals that are you. In a similar vein, supporting infrastructure that does not trade off local and global functioning— infrastructure that celebrates individuals and local communities, in their naturalness, without atomization or homogenization, is a more exciting picture to me than turning into one big mush.[12]

What does that look like, for our local and global projects around inquiry, investigation, insight? What might the refinement and integration of reasoning, knowledge, science, and philosophy look like in the near future? What will almost-alive infrastructure do to our research methodology, not just for one mind in isolation, but at the interfaces between minds?

Before we proceed with detail on such opportunities[13], we’ll take a look at one of the stars of this show — sensitivity (the other being integration) — especially in connection with the existing AI-risk discourse. There is a lot of space needed to back the claims made, especially for integration, and it might be a bit frustrating to read them.[14] Perhaps more so than if they weren’t there at all! Still, I find it in greater integrity to at least lampshade them and let informal links in the appendix contend with potential dissatisfaction.

Sensitivity & Settledness

I've used “sensitivity” several times so far. I mean it in the very human sense: I might call a friend sensitive in the way that she attunes, attends, offers care to my thriving, sometimes surprising me into more ease and openness than even I anticipate. A tenderness towards my thriving in particular, in my particular context. Another way to look at it is as an anchor[15] for the most ongoing versions of what might count as an alignment “solution” for AI capabilities.[16] The Risks section that follows might be hard going if you’re unfamiliar with or uninterested in such efforts, so feel free to skip it (but do read the Doneness section that follows it!).

Risks

There are many competing narratives for dangers from advanced AI, ranging from s-risks, to unfairness, to unemployment, to sudden lethality, to going out with a whimper.
Suggested mitigations could be spread out on the spectrum of commitment to ongoingness vs commitment to prefiguration. As long as the fabric of (artificial) intelligence remains sensitive (think again of friends who are nourishing of your spirit and being), orientations to “solving alignment” can remain fluid, and don't have to be frozen in place, certainly not beforehand. A lack of sensitivity includes many failure modes: unfairness (insensitivity to differences), lack of freedom (insensitivity to truth), even death, as an extreme case of insensitivity to your aliveness! This view is even less forceful than the pivotal processes frame, though close in being more process-oriented. Rather than using a sharp tool for cracking open the nut of alignment, it is much more like being immersed in a rising sea of friendship (instead of dissolving into abstraction, as in Grothendieck’s case). Sensitivity covers both ongoing incorporation of greater clarity that we might collectively achieve, and fine-grained attendance to particularities of you. If it helps, you could think of the former as sensitivity across time and the latter as sensitivity across space, but I wouldn't hold onto that metaphor too hard.[17]

Sensitivity is an enabler of yin, and trusts aliveness, time, intelligence to continue to exist in the future. By “future” I mean to include even trusting that there continues to be sensitive and responsive attention available, in your own intelligence, for example, ten seconds from now. There is less of a necessity to systematically break things down (though it can be helpful anyway) and establish formal constituents. The benefits, even necessity, of prepackaging are great for scaling both insight and trust, but (I claim) quickly dated by scalable attentivity.[18] This is easy to forget. Where do our wise principles come from, anyway, prior to systematization? What is the device that does the "breaking down", that grapples with the mess and cleans it up? There is nothing uniquely magical about a cleaned system; it is an articulation in (formal-ish) language of a particular kind of clarity with certain affordances such as portability, investigability, reliability, interoperability, etc. Attaining these desiderata via systematizing is great… when it seems like our only option. But even now, this isn’t our only option. The insight that lands in you in this moment[19] is something that just happens. And insisting on finalizing a system can exclude the participation of, again, even you, your ingenuity, ten seconds from now.[20] It is, of course, possible to orient to systems as scaffolding that will enable the aliveness of what is to come, with all the desiderata of systems, and more. Later on, we will explicitly meet the analogy of Terry Tao’s post-rigorous stage for mathematics, but applied to (AI-assisted) research infrastructure as a whole.

This sensitivity frame also sidesteps many issues (which are still about exclusion from meaningful activity, but more subtly) surrounding value lock-in, or the headiness of CEV[21], while allowing for deliberation and refining of legitimacy or boundaries, or avoiding goodharting of pointers/referentiality, without prefiguring the formalisms. A very brief word now on problems of referentiality and their connections to sensitivity. One of the main concerns of the discourse of aligning AI can also be phrased as issues with internalization: specifically, that of internalizing human values.
That is, an AI’s use of the word “yesterday” or “love” might only weakly refer to the concepts you mean. This worry includes both prosaic risks like “hallucination” (maybe it thinks “yesterday” was the date Dec 31st 2021, if its training stops in 2022) and fundamental ones like deep deceptiveness (maybe it thinks “be more loving” is to simply add more heart emojis or laser-etched smileys on your atoms). Either way, the worry is that the AI’s language and action around the words[22] might not be subtly sensitive to what you or I might associate with them.

Also from this point of view, many fears of automation (or exclusion from meaning) are more fears of "infrastructural numbness". I expressed concerns with the word “automation” earlier. The word “automation” does not distinguish numb-scripted-dissociated from responsive-intelligent-flowing. It usually brings to mind the former. I argue for using words like “integration” and “competence” or even “spontaneous” to describe some of the other kinds of intelligence-integration. As explored in a previous section, when you’re really in flow while dancing, you're not thinking a lot.[23] But it is the opposite of “automation”. If anything, you're more attuned, more sensitive. The difference between “completely automated” and “completely spontaneous”, this polar difference between being totally mindless vs totally mindful that we have somehow lumped into the same word, is what I want to explore, and at the level of civilization. In particular, for risks, scenarios of Disneyland with no children, Moloch, the smooth left turn, and suddenly keeling over from nanobots in your bloodstream are very extreme cases of infrastructure being “numb” to your body. The civilizational version of “automated”, instead of “integrated”.[24]

From the frame of sensitivity, you don't "solve" anything forever and hold onto a solution. And you will not be guaranteed a life of zero pain[25]. But hopefully the intelligence surrounding you can function as an excellent holding environment, in a profoundly sensitive way.

Doneness

As our tools come alive, the ongoingness of subtle sensitivity shows up not just as convenient user experience, but as attentive user experience. You can do things that don’t scale, at scale. Telecommunication was great; teleattention is going to be dramatic. Being able to rely on the intelligence and aliveness of things outside you to be attentive redoes what it means for a task to be ‘settled’, at least from your end. A task can be counted as done, in surprising ways, even when it needs further attention– just like a web developer’s job can be finished even when the source file still needs to be rendered on the consumer end. A very simple example is in stopping at comments that turn automatically into code for you. The craft of prompts and comments, which resemble wishes more than commands or scripts, will require new aesthetics of completion. Whatever is considered finished based on our current machinery wouldn’t have to be the only north star for the future. Instead of angling towards finished output, in such an economy you might work hard at showing up as input to a fabric of friendly intelligence surrounding you, continually enlivening it with the unique and much needed texture that is you.[26]

The mysterious new “arrow” of being able to turn the informal to formal has surprising implications that we are yet to make friends with.
We’re kind of in this situation:

[xkcd comic]

…except this particular scenario has become dated, and it’s challenging, for anyone, even computer scientists[27], to create a version of this comic that could survive the next few years. As the comic implies, computer scientists had some sense of what is possible and “impossible” based on how easy it is to exploit regularities and structure[28]. Formal amenability and intuitions around the outputs of intelligence (such as systems of meaning) are no longer a reliable indicator when intelligences are available not only in a bottle for you, but pervading the interfaces between you and me.

A slightly more literal napkin sketch, for someone's art project at Edge City Thailand.

Nate Soares said:

A problem isn't solved until it's solved automatically, without need for attention or willpower

Good advice, in the same world that the xkcd was made for. Not true if attention/intelligence is cheap, fast, and abundant. Taking intelligence seriously, as we’ll see (concretely) next, can be subtly difficult… even if you’ve been in the biz for a while.

^ How this is true is best illustrated by developing alternatives, which is the purpose of this sequence.

^ An informal note from elsewhere, on the significance of homeostasis in life forms: I wouldn't rule out self-preservation as possibly the least interesting part of agency, more akin to sanitation. Maybe even the dead(ening) part of life, even if essential. Three pieces:

- The logistics of staying alive as book-keeping, bureaucracy, account management. You'll find it in every mature organization, but if you emphasize bureaucracy as the secret sauce, the primary object of study, I think you're missing most of what makes a functioning organism awesome.
- Consider your favorite band "selling out". This is the death of its ability to produce good music, for the sake of staying alive and regulating wealth. Start-ups that have made maximizing revenue their single end goal become abandoned by their communities. People who seem to come primarily from status-management are annoying to talk to. Researchers burdened by publish-or-perish run out of room for curiosity.
- I don't have the link at the moment, but there was this analysis of inflation adjustments over long periods of time not really being true to adjusting for purchasing power. The wealthiest person 200 years ago didn't have, or even conceive of, all the goods and services that we have today, even with a lot more money. Focusing on the accumulation of wealth/resources/negentropy misses the actual spending. Worshiping the universal currency that you can spend, but not the particulars it can be spent on, misses most of the picture, almost like idol-worship. "Innovation" and "technology" are the obvious go-to words for this missing bit (and known to be controversial econometrics, IIRC), but life/curiosity/mischief are, IMO, the deeper words for this. What remains, when the monomaniacal threat-of-existence and resource-hoarding gets out of the way. Software companies have relearnt this especially over the last ten years: provide a solid environment and get out of the way of brilliant minds. Look around you: almost everything that was invented by humans came out of someone at some point interested in the thing for its own sake. It came out of “terminal” reasons, not instrumental reasons to survive.
Importantly, this doesn't rule out homeostasis as being the most interesting part of life-to-be-formed, as a certain momentum for the next structure that hosts it, as enabling certain pipelines through organisms, as provocation and urgency into vitality, as orienting incentives, as reality interrupting untethered simulations, as skin-in-the-game supporting integrity and integration. So I'd in fact still expect to find it in (the history or context of) any intelligence.

^ Or really, even considered available.

^ This term “life” or “live” isn’t meant to connote any defaults of moral patienthood or consciousness, but more to counter solely computationalist viewpoints, even for computers. Similar to AI is not software. Imagining that computer scientists will be the primary ontology for the intelligent webs of the future would be an error similar to thinking that neuroscientists are the primary consultants for macroeconomic policy because the economy runs on human minds. Neuroscientists are not totally irrelevant, but hardly central.

^ More on this later.

^ “Sensitivity” is a word that we’ll meet in its own section later.

^ d/acc in concept-space!

^ If you are noticing the conspicuous omission of “automation” (and even “abstraction”) in the introduction, this is, indeed, not an accident. Questioning the connotations surrounding the word is part of the point, and what the next section does.

^ A nod to deconfusion work.

^ The next subsection will say more about “sensitivity”.

^ I've struck out the word "koan" at the suggestion of a friend, who points out that comparing this to a koan is highly misleading. He quotes Nanquan:

> Although a single phrase of scripture is recited for endless eons, its meaning is never exhausted. Its teaching transports countless billions of beings to the attainment of the unborn and enduring Dharma. And that which is called knowledge or ignorance, even in the very smallest amount, is completely contrary to the Way. So difficult! So difficult! Take care!

I've also left it in with the strike-out, because I would like an expression for a format where the words are intended to break fixed meanings and recognize your own experience, rather than to construct more abstracted concepts. Perhaps "art" will have to do, but that's too overloaded.

^ Apart from the mushing that foundationalism can create, there are many communities that wax lyrical about dreams of an amalgam “glocal” but with little texture beyond word-alchemy. This is decidedly not the sum of this sequence. If you want to jump ahead to the most concrete bits to see how, go to the next post.

^ Generally, it is a bit suspect to me, for an upcoming AI safety org (in this time of AI safety org explosion), to have, say, a 10-year timeline premise without anticipating and incorporating possibilities of AI transforming your (research) methodology 3-5 years from now. If you expect things to move quickly, why are you ignoring that things will move quickly? If you expect more wish-fulfilling devices to populate the world (even if only in the meanwhile, before catastrophe), why aren't you wishing more, and prudently? An “opportunity model” is as indispensable as a threat model. (In fact, "research methodological IDA" is not a bad summary of live theory, if you brush under the rug all the ontological shifts involved.)

^ While sensitivity is coming up next, I’ve moved the more metaphysically demanding Puzzles of Integration post to a later part of the sequence, so there is something concrete to chew on first.
^ I say “anchors” to make it clear that these aren’t recipes that are contained and finished and “true name”-like. An “anchor” is not a recipe, but with intelligent infrastructure, anchors can be more relevant than finished recipes. This will become clearer as we proceed. (Providing anchors is also meta-consistent: my writing invitations are also more like prompts than code, because they are intended for integration by adaptive systems like you and intelligent machines.)

^ The emphasis on “ongoing” is high enough that “solution” is an actively misleading metaphor for how to engage.

^ These dimensions of particularity mirror the bullets in Extrapolation.

^ Not because AI is super creative, but because it is mildly creative and refined in its speed, cost, and reliability, so as to be widely integrated. See designs here.

^ Indeed, the high actuation agenda bets that as more and more of the “external” world becomes mindlike, looking into what it’s like to inhabit the fluidity of one’s own mind is very helpful as a forecast (at least, if not done naively).

^ Many wisdom traditions emphasize presence as central to a good life. This is highly relevant, but not developed for this audience.

^ Have you noticed how most of the moral progress in CEV is only about thinking harder? No transformative experiences, for example. See more here.

^ Or other kinds of general anchors that have referential content, i.e. that we expect to come with various implicit commitments, for them to be meaningful. See also this comment.

^ It might be tempting to try to call this a “high-level” abstraction that lets you do more. The high-low dichotomy is the good-old-fashioned way of looking at generalization, which tries to do it with recourse to deeper scripts. This scriptedness does not apply with expanded sensitivity (which is one mind’s scaling of attention), as we'll see in the “theory of resources” post, and in the conclusion.

^ As AI becomes the cognitive-executive division of the body of civilization, the relevance, meaning, subtlety, and vision can continue to be held in the rest of the body. Perhaps it is decision-theoretically advisable to be less heady and more embodied in your own literal body, to promote your relevance in the civilizational body when AI is the head. (Headiness, of course, is an aspect of the myth of centralization.)

^ Would you even want that? Eliezer didn’t, nor does Nate.

^ The context-independence critiqued at the start is an example of a “dead” and “dumb” transportation of insight. Very useful, when you can’t take intelligence for granted. Like a crutch, it speeds up our meaning-making before our intelligence has legs, after which it only slows that down.

^ Especially computer scientists, I'd say, who often find it harder to let go of a formalization-based ontology that has worked so well so far.

^ What live theory can capture, I’d hesitate to even call “structure”. Maybe aesthetic? I don’t know what labels there ought to be, but it seems pretty important to figure out some soft classification over the subtler logistics that machines will be able to handle.
MhBRGfTRJKtjc44eJ_The_Logistics_of_Distribution_of.txt
{ "file_size": 31393 }
99a7d03c-af17-4012-8837-67f33581cdc5
As Americans know, the electoral college gives disproportionate influence to swing states, which means a vote in the extremely blue state of California was basically wasted in the 2024 election, as are votes in extremely red states like Texas, Oklahoma, and Louisiana. State legislatures have the Constitutional power to assign their state's electoral votes. So why don't the four states sign a compact to assign all their electoral votes in 2028 and future presidential elections to the winner of the aggregate popular vote in those four states? Would this even be legal?

The population of CA is 39.0M (54 electoral votes), and the population of the three red states is 38.6M (55 electoral votes). The combined bloc would control a massive 109 electoral votes, and would have gone for Biden in 2020 and Trump in 2024. Every state has an incentive to sign, to increase its voters' influence in the national election.

There has been one similar proposal before: the National Popular Vote Interstate Compact, which would go into effect when states controlling >50% of electoral votes sign it. It is only popular with blue states-- swing states don't want to reduce their influence, and red states don't want to give up the Republican electoral college advantage. Merging states would be far superior incentive-wise, as the influence of every signatory would increase and there would be essentially no expected net shift in the election results.

The compact should only go into effect once all 4 states sign, of course. But even then, there is another potential problem: say that polls suggest the combined bloc is trending blue in 2028. The Louisiana legislature has an incentive to pull out at the last minute, and the other red states will follow. To prevent this, there must be a provision that once signed by all 4 states, the compact can't be repealed by any state until after the next election.

Smaller states have an even greater incentive to merge, e.g. Rhode Island with Montana. With only two states in the compact, there are fewer difficulties in getting all the states to sign at once. And due to their slightly higher electoral vote counts per voter, these small states' voters would quickly go from among the least important in the election to the most important.
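As a quick sanity check on the numbers (the CA and TX+OK+LA figures are the ones above; the Rhode Island and Montana figures are rough estimates supplied here for illustration, roughly 1.1M people and 4 electoral votes each):

```python
# Back-of-the-envelope check. CA and TX+OK+LA figures are from the post;
# RI and MT figures are rough illustrative estimates, not exact data.
states = {  # name: (population in millions, electoral votes)
    "CA":       (39.0, 54),
    "TX+OK+LA": (38.6, 55),
    "RI":       (1.10, 4),
    "MT":       (1.13, 4),
}

def bloc_stats(members):
    pop = sum(states[s][0] for s in members)
    ev = sum(states[s][1] for s in members)
    return ev, pop, ev / pop  # total EVs, population (M), EVs per million

ev, pop, per_m = bloc_stats(["CA", "TX+OK+LA"])
print(f"4-state bloc: {ev} EVs ({ev / 270:.0%} of the 270 needed), "
      f"{per_m:.2f} EVs per million residents")

ev, pop, per_m = bloc_stats(["RI", "MT"])
print(f"RI+MT bloc: {ev} EVs, {per_m:.2f} EVs per million residents")
```

This prints the bloc totals: the four-state bloc controls 109 electoral votes, about 40% of the 270 needed to win, while the RI+MT bloc carries roughly 2.6 times the per-capita electoral weight of the big-state bloc, which is why small-state mergers are especially potent.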
ABYqFqd3wzgguexjT_Should_CA,_TX,_OK,_and_LA_merge_.txt
{ "file_size": 2273 }
a3e8a265-2514-4386-bd48-9fc89e7cdc2d
Foresight Institute's AI Safety Grants Program added a new focus area in response to the continually evolving field. Moving forward, our funding ($1.5M-$2M annually) will be allocated across the following four focus areas:

1. Automating AI-relevant research and forecasting
Scaling AI-enabled research to support safe AGI development
Scaling efficient forecasting methods relevant for safe AGI
Other approaches in this area

2. Neurotech to integrate with or compete against AGI
Brain Computer Interfaces (BCI) to enhance human cognition or facilitate human-AGI collaboration
Whole Brain Emulations (WBE) which might function as human-like general intelligences that are more interpretable and alignable than AGI
Lo-fi emulations using behavioral and neural data with deep learning, potentially offering a cost-effective alternative to full WBEs
Other approaches in this area

3. Security technologies for securing AI systems
Implementations of computer security techniques (including POLA, SeL4-inspired systems, and hardened hardware security) to safeguard AI systems
Automated red-teaming for AI security and capabilities
Cryptographic and related techniques to enable trustworthy coordination architectures
Other concrete approaches in this area

4. Safe multipolar human AI scenarios
Game theory that addresses interactions between multiple humans, AIs, or ultimate AGIs
Avoiding collusion and deception and/or encouraging pareto-preferred/positive-sum dynamics
Approaches for addressing principal-agent problems in multi-agent systems
Other concrete approaches in this area

Application Process

We accept applications on a quarterly cycle, with deadlines at the end of March, June, September, and December. Decisions are made within 8 weeks of each deadline.

Next Deadline: December 31st, 2024.

For more information and to apply: https://foresight.org/ai-safety
wwYsbiEduyvH4NM8y_New_Funding_Category_Open_in_For.txt
{ "file_size": 1849 }
710ba8e6-3887-42e2-b46f-45063aa0e31e
Summary

There should be more people like Mahatma Gandhi in the AI safety community, so that AI safety is a source of inspiration for both future and current generations. Without nonviolence and benevolence, we may be unable to advocate for AI safety.

Introduction

Mohandas Karamchand Gandhi, also known as Mahatma Gandhi, was an Indian activist who used nonviolence to support India's independence from Britain. He is now considered one of the biggest sources of inspiration for people trying to do the most good.

Picture of Gandhi in 1931. (Source: Wikimedia Commons)

Nowadays, it is often argued that Artificial Intelligence is an existential risk. If this is correct, we should ensure that AI safety researchers are able to advocate for safety. The argument of this post is simple: as AI safety researchers, we should use nonviolence and benevolence to support AI safety and stop the race towards AI.

The problem to solve in AI safety

If AI is an existential risk, then, as AI safety researchers, we ought to advocate for safety. However, convincing AI leaders to follow the advice of the AI safety community is far from easy, as one day we may have to advocate for shutting down AI progress.

Shutting down AI progress is extremely hard. Not only would we have to convince other AI researchers to stop their work, but we would have to stop Moore's law, which (broadly) conjectures that the number of transistors in an integrated circuit doubles every two years.

Graph from Our World in Data showing that the number of transistors in integrated circuits doubles every two years. Although the curve looks linear, it represents exponential growth, as the vertical axis uses a logarithmic scale.

Moore's law implies that, even in the case where we convince our peers to stop research in AI capabilities, AI would continue to get better over time, as computers would get faster and faster. To stop Moore's law, we would have to convince nearly every industry working on computer chips to stop making more efficient chips. As Moore's law is one of the main drivers of today's economy, the challenge seems extremely hard.

This challenge seems so hard that I think the amount of activism required to solve it may be as big as the amount of activism required for problems like climate change and animal farming. To solve this problem, we therefore need to work on it before it arises, despite our uncertainty about whether the problem will arise or not. That is, we need to argue for the shutdown of AI progress and the end of Moore's law right now.

But how can we start doing that amount of activism? Although some may argue that using violence is the solution, I believe that the answer is the opposite: we should use nonviolence and benevolence as the main principle of AI safety activism. To do so, we may need to learn from Mahatma Gandhi, as he is often considered the embodiment of nonviolence and benevolence. Furthermore, his advocacy turned out to be very effective.

What could we learn from Mahatma Gandhi?

Suppose that someone is making a morally impermissible action (e.g. developing larger and larger AI systems when we are supposed to slow down). Then, who would be able to bring this immoral action to light? Here, expertise in the domain (e.g. being a prominent AI safety researcher) is not enough. The one who can rectify the wrong beliefs of AI leaders is the one who has both the moral authority and the expertise to do so.
If you are very good at AI safety but no one wants to listen to you, your activism will be ineffective. Mahatma Gandhi well understood that nonviolence and benevolence are the most powerful methods of communication. In fact, he called this phenomenon satyagraha (satya = love/truth, agraha = power/force). He later created the Sabarmati Ashram, where he taught satyagraha to his followers, the satyagrahis.

The most famous example of civil disobedience made by satyagrahis is the Salt March. In 1882, the British Salt Act was put in place, which forbade Indians from producing or selling salt, in order to ensure they paid for British salt instead. To advocate against this law, Mahatma Gandhi, followed by 78 satyagrahis, walked 387 kilometers over 24 days towards the Dandi beach in order to break the salt law.

Picture of Gandhi breaking the salt law by making salt at the Dandi beach. (Source: Wikimedia Commons)

The Salt March immediately received international attention. Thousands of Indians joined them in the march, to the point that the crowd was three kilometers long. It is hard to tell how much the Salt March contributed to the success of India's independence. However, one thing everyone has to agree on is that Gandhi's choice to use satyagraha as the main principle of his activism surely was a good idea.

During our advocacy for AI safety, we should ensure that our activism works and convinces the leaders of AI. Therefore, we ought to consider satyagraha as the main principle of our activism. While making an Alignment March is not easy, it is certainly possible. Although society has changed a lot since the era of Gandhi, there are still people making marches. For instance, Robin Greenfield, an environmental activist, is currently on a 1,600-mile-long walk across America.

Robin Greenfield in 2024 walking during his Walk of Gratitude.

As AI safety researchers, we have a moral duty to ensure we are an example for current and future generations, as otherwise no one else will help us solve the problems raised by AI. I think that making an Alignment March is a very good way to fight against the progress in AI capabilities and to ask for more AI safety.
QgAawjgMfxudZfdQt_What_AI_safety_researchers_can_l.txt
{ "file_size": 5667 }
f9f10a0a-63ea-4ca7-9bb5-2fd6cec29287
Summary

Four months after my post 'LLM Generality is a Timeline Crux', new research on o1-preview should update us significantly toward LLMs being capable of general reasoning, and hence of scaling straight to AGI, and shorten our timeline estimates.

Summary of previous post

In June of 2024, I wrote a post, 'LLM Generality is a Timeline Crux', in which I argue that:

- LLMs seem on their face to be improving rapidly at reasoning.
- But there are some interesting exceptions where they still fail much more badly than one would expect given the rest of their capabilities, having to do with general reasoning.
- Some argue based on these exceptions that much of their apparent reasoning capability is much shallower than it appears, and that we're being fooled by having trouble internalizing just how vast their training data is.
- If in fact this is the case, we should be much more skeptical of the sort of scale-straight-to-AGI argument made by authors like Leopold Aschenbrenner, and the short timeline that implies, because substantial additional breakthroughs will be needed first.

Reasons to update

In the original post, I gave the three main pieces of evidence against LLMs doing general reasoning that I found most compelling: blocksworld, planning/scheduling, and ARC-AGI (see original for details). All three of those seem importantly weakened in light of recent research.

Most dramatically, a new paper on blocksworld has recently been published by some of the same highly LLM-skeptical researchers (Valmeekam et al., led by Subbarao Kambhampati[1]): 'LLMs Still Can’t Plan; Can LRMs? A Preliminary Evaluation of OpenAI’s o1 on Planbench'. Where the best previous success rate on non-obfuscated blocksworld was 57.6%, o1-preview essentially saturates the benchmark with 97.8%. On obfuscated blocksworld, where previous LLMs had proved almost entirely incapable (0.8% zero-shot, 4.3% one-shot), o1-preview jumps all the way to a 52.8% success rate. In my view, this jump in particular should update us significantly toward the LLM architecture being capable of general reasoning[2].

o1-preview also does much better on ARC-AGI than gpt-4o, jumping from 9% to 21.2% on the public eval ('OpenAI o1 Results on ARC-AGI-Pub'). Note that since my original post, Claude-3.5-Sonnet also reached 21% on the public eval.

The planning/scheduling evidence, on the other hand, seemed weaker almost immediately after the post; a commenter quickly pointed out that the paper was full of errors. Nonetheless, note that another recent paper looks at a broader range of planning problems and also finds substantial improvements from o1-preview, although arguably not the same level of 0-to-1 improvement that Valmeekam et al. find with obfuscated blocksworld ('On The Planning Abilities of OpenAI’s o1 Models: Feasibility, Optimality, and Generalizability').

I would be grateful to hear about other recent research that helps answer these questions (and thanks to @Archimedes for calling my attention to these papers).

Discussion

My overall conclusion, and the reason I think it's worth posting this follow-up, is that I believe the new evidence should update all of us toward LLMs scaling straight to AGI, and therefore toward timelines being relatively short. Time will continue to tell, of course, and I have a research project planned for early spring that aims to more rigorously investigate whether LLMs are capable of the particular sorts of general reasoning that will allow them to perform novel scientific research end-to-end. My own numeric updates follow.
Updated probability estimates (text copied from my previous post is italicized for clarity on what changed):

- LLMs continue to do better at blocksworld and ARC as they scale: 75% -> 100%; this is now a thing that has happened.
- LLMs entirely on their own reach the grand prize mark on the ARC prize (solving 85% of problems on the open leaderboard) before hybrid approaches like Ryan's: 10% -> 20%. This still seems quite unlikely to me (especially since hybrid approaches have shown continuing improvement on ARC). Most of my additional credence is on something like 'the full o1 turns out to already be close to the grand prize mark' and the rest on 'researchers, perhaps working with o1, manage to find an improvement to current LLM technique (eg a better prompting approach) that can be easily fixed'.
- Scaffolding & tools help a lot, so that the next gen (GPT-5, Claude 4) + Python + a for loop can reach the grand prize mark: 60% -> 75%. I'm tempted to put it higher, but it wouldn't be that surprising if o2 didn't quite get there even with scaffolding/tools, especially since we don't have clear insight into how much harder the private test set is.
- Same but for the gen after that (GPT-6, Claude 5): 75% -> 90%? I feel less sure about this one than the others; it seems awfully likely that o3 plus scaffolding will be able to do it.
- The current architecture, including scaffolding & tools, continues to improve to the point of being able to do original AI research: 65% -> 80%. That sure does seem like the world we're living in. It seems plausible to me that o1 could already do some original AI research with the right scaffolding. Sakana claims to have already gotten there with GPT-4o / Sonnet, but their claims seem overblown to me. Regardless, I have trouble seeing a very plausible block to this.

Citations

'LLMs Still Can't Plan; Can LRMs? A Preliminary Evaluation of OpenAI's o1 on PlanBench', Valmeekam et al (includes Kambhampati) 09/24
'OpenAI o1 Results on ARC-AGI-Pub', Mike Knoop 09/24
'On The Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability', Wang et al 09/24

^ I am restraining myself with some difficulty from jumping up and down and yelling about the level of goalpost-moving in this new paper.

^ There's a sense in which comparing results from previous LLMs with o1-preview isn't entirely an apples-to-apples comparison, since o1-preview is throwing a lot more inference-time compute at the problem. In that way it's similar to Ryan's hybrid approach to ARC-AGI, as discussed in the original post. But since the key question here is whether LLMs are capable of general reasoning at all, that doesn't really change my thinking here; certainly there are many problems (like capabilities research) where companies will be perfectly happy to spend a lot on compute to get a better answer.
wN4oWB4xhiiHJF9bS_LLMs_Look_Increasingly_Like_Gene.txt
{ "file_size": 6414 }
235d6037-57fe-4ae2-b037-65cc98349a95
Hey y'all! I just started a rationality group on the UChicago campus and wanted to post it here to advertise it to UChicago-affiliated LessWrong readers. We've had a couple meetings so far which have been great, and I'm excited for more! A few more things:

(1) You can join the email list by submitting this form to be updated about meeting times, hear about events, and more!

(2) While we do not yet have university RSO (Registered Student Organization) status, I plan on applying for fall of next year. This would allow us to promote the club at the annual club fair, put up posters around campus (for events, fellowships, or other opportunities), and get some funding from the university.

(3) In general, the purpose of this group is threefold: (1) to create an in-person community for those who would consider themselves rationalists (in the LW/other rationalist blogosphere sense), (2) to work on the martial art of rationality (for example, I plan to run a forecasting fellowship at some point), and (3) to counterfactually introduce UChicago students and faculty to rationalist ideas and the rationality community (both to raise the sanity waterline and increase the number of Effective Altruists on campus).

Any comments, questions, or suggestions would be greatly appreciated. If you want to reach out without using the comment section, you can also email me at noahdbirnbaum@gmail.com.
eco7BGbbrKkRi6mqH_New_UChicago_Rationality_Group.txt
{ "file_size": 1398 }
36ba9204-4ec8-4a3a-9074-886c91aa9476
Both hosts of The Bayesian Conspiracy podcast will be at Lighthaven in Berkeley on Wednesday Nov 13th. Eneasz Brodski and Steven Zuber take questions from the audience and online for a free-form live recording of The Bayesian Conspiracy podcast, from 4pm to 6pm, in Glass Hall.
zmDdjxmBrcY3tqsJ8_The_Bayesian_Conspiracy_Live_Rec.txt
{ "file_size": 277 }
f540d7c2-d1d2-401a-8fb8-689d1466a5f4
I’m going to describe a phenomenon that’s likely very obvious, but nevertheless I think it’s worth documenting because I’ve noticed it more and more. I’ll refer to it as Meme Talking Points, and the first instance I noticed this was based on my conversation on the Ray Epps conspiracy theory earlier this year. My interlocutor brought up the case of Ricky Vaughn as evidence that conservatives were uniquely targeted by the legal justice system, expressing surprise that I wasn’t familiar with the case. He then sought to bring me up to speed by claiming that Ricky Vaughn was “arrested for posting memes”.

I thought that was a curious way of describing Vaughn’s case, because I was familiar enough with the story to know there was more to it. Then I noticed the exact same talking point (“prosecuted for posting memes”) come up elsewhere in a debate with Destiny and John Doyle (timestamped: “they dox people, arrest them for memes such as Ricky Vaughn”), and most recently as a comment by Simon Laird.

If you’re unaware, Vaughn posted a fake 2016 Hillary Clinton campaign ad that implored her voters to cast their votes by tweeting. Vaughn was then prosecuted under a relatively obscure Reconstruction Era law from 1870 which criminalizes “conspiring against rights”. He was convicted in 2023 and sentenced to 7 months. There’s plenty about Vaughn’s case you could argue about, such as whether this was intended as a joke or as an earnest attempt to mislead voters, or whether his conduct should nevertheless be protected by the first amendment, or whether his case was an example of selective prosecution. All those points are perfectly fair game for debate, but they’re impossible to unearth if you describe his case as “posting memes” rather than the much less ambiguous “posting fake and misleading Hillary Clinton campaign ads”. The obfuscation has to be intentional. You can consider this a manifestation of Scott Alexander’s noncentral fallacy or my own revision I named the sticker shortcut fallacy.

To describe the obvious: absent any additional information or context, the phrase “posting memes” conjures up a thoroughly banal activity. The prototypical example you might then think of would be someone getting sent to federal prison for posting the jealous girlfriend meme. Such a scenario would be inherently compelling because it describes unusual and outrageous events that inevitably command curiosity. Are you serious? You can go to prison for the fun things I do with my friends?

By contrast, this hook would be completely absent if you instead encounter “posting fake and misleading Hillary Clinton campaign ads”. The scenario described could still end up being outrageous, but the key difference is that its bare description doesn’t make it immediately so. There’s no risk of someone reading that statement and worrying they’ll be prosecuted for the fun things they do with friends (unless, of course, their social circle regularly traffics in fake campaign ads), and that means its emotional impact is deadened.

I’m using a right-wing example to illustrate my point, but this isn’t a phenomenon exclusive to one political side. Any statement — more specifically, any slogan — which conveys incomplete information and whose ambiguity prompts an outrage reaction, would fit what I’m describing.
Examples from the left (which you may or may not agree with, litigating these is not the point of this essay) could be Freddie DeBoer’s essay arguing that any logical inconsistencies within transgender ideology should be quieted with the mantra “be kind”. Or the insistence from activists to label Israel’s campaign in Gaza as a genocide. In each instance, while you lose information conveyed by compressing the information packet into a bite-sized slogan, the moral imperative remains intact during transmission (kindness is good, and genocide is bad). Without a legitimate explanation for the ambiguity, you can presume the resulting vagueness is strategic.

You can condemn this as dishonest parlor trickery, but there is a logic to it. If I am correct in accurately identifying this phenomenon, and accurately gauging its increasing prominence, it’s probably due to an effective trade-off. What the meme talking point loses in information conveyance, it may gain in contagious transmission as a smaller information packet. Further, the meme talking point presents itself as a clearcut moral parable, and that likely increases its ability to leave an emotionally salient imprint upon the casual reader.

All that is to say: while this is all very frustrating to deal with, it’ll probably continue increasing in prevalence. You can blame it on greater social media immersion and the ethereal way we glom onto information, and maybe our cultural institutions just haven’t caught up yet.
KfzX5SQmCGxJuWn5X_Meme_Talking_Points.txt
{ "file_size": 4874 }
e98fc1bb-a763-4e01-8b50-c5e88a516b85
Open Philanthropy (OP) is the largest grantmaker that is moving money to the things I think are most valuable, including (disclosure!) my work at the NAO. There's been a lot of discussion in the effective altruism community about where this leaves smaller donors, and where they might have a comparative advantage. For example:

- OP recently announced that, due to changes in Good Ventures' (GV) strategy, they'd be recommending less funding in several areas. This seems to have included work on wild animal welfare, invertebrate welfare, digital minds, and gene editing, though I'm not aware of a public list. If you trust OP's prioritization more than GV's and/or you think these are valuable neglected areas, you could help make up for this shortfall.
- Some kinds of work pose PR risks, either to OP or to OP's other grantees. Many individuals are better placed to take on these risks.
- Money donated directly to US political candidates is restricted on a per-person level, and goes farther than unrestricted "Super PAC" donations. Individual donors have a strong advantage here.
- Some kinds of policy work don't have regulations that restrict funding, but are still restricted in practice. For example, an advocacy organization could be more effective when it can demonstrate broad-based support, or could be tarred by the support of billionaires like Moskovitz and Tuna. For policy and politics in other countries the effect is even stronger: in some cases it's illegal for OP to make these kinds of grants, while in others it's just politically counter-productive. On the other hand, small-scale donors contributing to political efforts in their own countries is generally seen as the way politics is supposed to work.

This is not a complete list [1] but I do think it has the biggest reasons. Overall it seems to me that if an independent donor is funding something OP would be happy to fund, ideally the donor would find somewhere they could do better.

Despite all this, I've generally not felt like our family's donations have taken advantage of our position as independent donors. Mostly we've contributed to Funds, and while some of these relative advantages don't apply there (they don't need to convince GV to make grants) most still do. I think the main reason we haven't done better here is that investigating and comparing donation opportunities is a lot of work. Julia and I both work full time on things we think are pretty important, and this is the kind of question worthy of significant thought. Sometimes people suggest donor lotteries as an improvement here, but aside from my general qualms I think even if we won we wouldn't want to take time away from our full-time work to get into grantmaking. [2]

If you're giving away extremely large amounts of money it makes sense to hire full-time grantmakers to allocate it (which is essentially what OP is). If you're a bit smaller than that but still quite large then there are multiple efforts (ex: Founders Pledge, Longview) that offer customized advice. But I'm not aware of any projects that aim to advise what we might call "Small Major Donors": people giving away perhaps $20k-$100k annually. I think this segment is primarily people earning to give, but it would also include some people (hi!) who see most of their impact as coming via their work but still donate a significant portion of their income. This would need to be a model with lighter-weight advising than would make sense in targeting larger donors, and getting the balance right would be tricky.
You could end up with people feeling that, at the scale of their giving, they ought to be getting significant custom research, without understanding how much that research costs. On the other hand, to be worth running it would need to be able to out-perform funging against OP or donating to Funds.

Does something like this exist, and I just don't know about it? (Which would be bad, since the target market isn't all that large and would include me.) Alternatively, does this seem like something that would be worth someone starting? I'd love to have something to recommend to people earning to give, and to use in thinking through my own giving.

[1] Another thing I considered adding is that you may know about especially strong opportunities in your personal network. Whether the specific people running a project are the right ones for the effort is a critical judgement in funding early-stage work, and grantmakers often have much less information than you do. But grantmaker-grantee relationships, including and perhaps especially prospective ones, are quite fraught, and I (weakly) think that the social effects of turning so many personal and professional relationships into prospective grantmaker-grantee relationships are harmful on balance.

[2] There's also a significant difference between the ideal of a donor lottery and donor lotteries in practice. The standard argument assumes that money you might win in the lottery is as unrestricted as the money you put in, but actually whatever organization sponsors the lottery needs to agree that your donation is appropriate. Since many things worth doing are 'weird' (grants to individuals, investments in for-profit enterprises, funding your own charity, actions with PR risks, ...) this can significantly reduce the upside of winning a donor lottery.

Comment via: facebook, lesswrong, the EA Forum
RZDTDJwR7taQGgmhm_Advisors_for_Smaller_Major_Donor.txt
{ "file_size": 5417 }
5cdec316-a4eb-4e79-a0a4-abdcbd78c240
(Epistemic status: I spoke simply / without "appears to" hedges, but I'm not sure of this at all.)

I’m confused why we keep getting scissors statements as our Presidential candidates, but we do.  (That is: the candidates seem to break many minds/communities.)

A toy model:[1]

Take two capacities, A and B.  Ideally anti-correlated.

Craft two candidates:

- Candidate X, who seems acceptable if you’re A-blind (if you have a major gap in your situation awareness near A).
- Candidate Y, who seems acceptable if you’re B-blind (if you have a major gap in your situation awareness near B).

Now let voters talk.

“How can you possibly vote for X, given how it’ll make a disaster on axis A?”, asks Susan.  (She is B-blind, which is part of why she is so confused/irate/loud here.)  Susan inquires in detail.  She (accurately) determines the staunchest X-voters don't understand A, and (understandably, but incorrectly) concludes that this explains their X-voting, that they have nothing to teach her, and that she should despair of working well with anyone who voted for Candidate X.

“How can you possibly vote for Y, given how it’ll make a disaster on axis B?”, asks Robert.  He, too, inquires in detail.  And he (accurately) determines the staunchest Y-voters have a key basic blind spot where he and his friends/neighbors have sense... feels a sense of closure ("okay, it's not that they know something I don't know"), and despairs of working well with anyone who voted for Y.

The thing that annoys me about this process is that, in the wake, it is harder for both sets of voters to heal their own blind spots.  “Being able to see A accurately” is now linked up socially and verbally with “being one of the people who refuse to acknowledge B” (and vice versa).  (This happens because the ontology has been seized by the scissors-statement crafters – there is a common, salient, short word that means both “A matters” and “B is fake,” and people end up using it in their own head, and, while verifying a real truth they can see, locking in a blind spot they can’t see.)

^ This is a toy model for how the "scissors-ness" works, not for why some process is crafting us candidates like that.  I don't have a guess about that part.  Though I like these articles.
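A minimal simulation of the toy model above, in Python. The axis scores are invented purely for illustration (they are not from the original post): each candidate carries a hidden cost on axes A and B, and a voter who is "blind" on an axis simply doesn't count that cost.

import itertools  # not strictly needed; kept minimal

# Each candidate's (hidden) quality on the two capacities. Negative = disaster.
candidates = {
    "X": {"A": -8, "B": 0},   # disaster on axis A, fine on B
    "Y": {"A": 0, "B": -8},   # fine on A, disaster on axis B
}

def perceived_value(candidate: str, blind_axis: str) -> int:
    """Sum the candidate's axis scores, skipping the voter's blind spot."""
    return sum(v for axis, v in candidates[candidate].items() if axis != blind_axis)

for voter, blind in [("Susan (B-blind)", "B"), ("Robert (A-blind)", "A")]:
    scores = {c: perceived_value(c, blind) for c in candidates}
    print(voter, "prefers", max(scores, key=scores.get), scores)
# Susan (B-blind) prefers Y {'X': -8, 'Y': 0}
# Robert (A-blind) prefers X {'X': 0, 'Y': -8}

Each voter's preferred candidate is exactly the one whose disaster falls in their own blind spot, which is the mechanism the post describes.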
FkYAYQQig4FDTN6r5_Scissors_Statements_for_Presiden.txt
{ "file_size": 2301 }
0a56543e-5c5b-4b16-9261-a037bfa07f97
Hi there. Quick question. I am using a few articles from LessWrong for a dissertation. Are there any mainstream articles/sources that reference LessWrong as being a catalyst/partial source for AI alignment research, researchers, and other academic literature? I think it's snobbish, or even discriminatory, to regard LessWrong as merely another online website. I was hoping to get some advice on how to formulate a paragraph justifying the citation of LessWrong. Thanks.
Zhcd4Ap487u8sWAaE_How_to_cite_LessWrong_as_an_acad.txt
{ "file_size": 458 }
6ba477f2-e412-42d1-91fe-3042bc436954
The cleanest argument that current-day AI models will not cause a catastrophe is probably that they lack the capability to do so.  However, as capabilities improve, we’ll need new tools for ensuring that AI models won’t cause a catastrophe even if we can’t rule out the capability. Anthropic’s Responsible Scaling Policy (RSP) categorizes levels of risk of AI systems into different AI Safety Levels (ASL), and each level has associated commitments aimed at mitigating the risks. Some of these commitments take the form of affirmative safety cases, which are structured arguments that the system is safe to deploy in a given environment. Unfortunately, it is not yet obvious how to make a safety case to rule out certain threats that arise once AIs have sophisticated strategic abilities. The goal of this post is to present some candidates for what such a safety case might look like. This is a post by Roger Grosse on Anthropic's new Alignment Science Blog. The post is full of disclaimers about how it isn't an official plan and doesn't speak for the org (and that it's inadequate: "none of the sketches presented here fully succeeds in addressing the sabotage risk"). But presumably it's Anthropic's best sketch of ASL-4 safety cases. The three safety cases are Mechanistic Interpretability, AI Control, and Incentives Analysis. Regardless of how good these safety cases are, it's good when labs share their thinking on safety stuff; yay Anthropic.
RveeCTcoApkAtd7oA_Anthropic__Three_Sketches_of_ASL.txt
{ "file_size": 1464 }
5ee1b026-6fba-4bef-b6c8-b8e02506af58
In the USA, the president isn't determined by a straight vote. Instead, each state gets a certain number of Electoral College (EC) votes, and the candidate who reaches a majority of 270 EC votes wins. It's up to each state to decide how to allocate its EC votes. Most do “winner-takes-all,” but some, e.g., Maine and Nebraska, split them up.

California and Texas have the most EC votes of any state, with 54 and 40 votes respectively, so you would think they would get a lot of love from presidential candidates. Instead, they're mostly ignored—California will always be Blue, and Texas Red, so what's the point of pandering to them? This is clearly bad for Californians and Texans, as their interests aren't listened to.

So why doesn't California switch to a proportional EC vote split? One that would ensure that there's always something to gain by pandering to Californian interests? Because most Californians are Democrats, and while a proportional vote split would be good for Californians, it would be bad for Democrats, who in the 2020 election would have lost 20 EC votes—over half of Biden's margin of 37. All in all, not worth it. But the same issue impacts Texas too, which, if they'd switched to proportional splitting, would have handed Biden 18 votes.

This suggests an obvious solution: California and Texas should mutually agree to split their vote proportionally. That way, neither Republicans nor Democrats significantly gain, but California and Texas would finally be on the presidential campaign circuit!
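To make the arithmetic concrete, here is a minimal sketch of one way a proportional split could be computed, using the largest-remainder method. The vote shares in the example are hypothetical placeholders, not actual election results, and a state could of course pick a different rounding rule.

# Sketch: proportional Electoral College allocation via the largest-remainder
# method. Vote shares below are illustrative placeholders only.

def proportional_split(ec_votes: int, vote_shares: dict[str, float]) -> dict[str, int]:
    """Allocate ec_votes among candidates in proportion to vote_shares."""
    quotas = {c: ec_votes * s for c, s in vote_shares.items()}
    alloc = {c: int(q) for c, q in quotas.items()}   # floor of each quota
    remaining = ec_votes - sum(alloc.values())
    # Hand the leftover votes to the largest fractional remainders.
    for c in sorted(quotas, key=lambda c: quotas[c] - alloc[c], reverse=True)[:remaining]:
        alloc[c] += 1
    return alloc

# Hypothetical two-party shares for a 54-vote state:
print(proportional_split(54, {"D": 0.64, "R": 0.36}))  # {'D': 35, 'R': 19}

Under any such rule, a few EC votes in each big state swing with the margin, which is exactly what gives candidates a reason to campaign there.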
wK6G4Bdjs3mAaNR4w_How_to_put_California_and_Texas_.txt
{ "file_size": 1509 }
c4a2589e-68a4-4388-8342-a6ebf07f1b75
you should not reject the 'offer' of a field that yields an 'unfair' amount of grain! - Ultimatum Game (Arbital)

In this post, I demonstrate a problem in which there is an agent that outperforms Logical Decision Theory, and show how for any agent you can construct a problem and competing agent that outperforms it. Defining rationality as winning, this means that no agent is rational in every problem.

Symmetrical Ultimatum Game

We consider a slight variation on the ultimatum game to make it completely symmetrical. The symmetrical ultimatum game is a two-player game in which each player says how much money they want. The amount is a positive integer number of dollars. If the sum is ≤$10, both players get the amount of money they choose. Otherwise, they both get nothing.

Now consider the decision problem of playing the symmetrical ultimatum game against a logical decision theorist. A causal decision theorist does particularly poorly in this problem, since the LDT agent always chooses $9, leaving the causal decision theorist with $1.

How does an LDT agent fare? Well, logical decision theory is still a bit underspecified. However, notice that this question reduces to "how does an LDT agent do against an LDT agent in a symmetrical game?". Without knowing any details about LDT, we must conclude that the expected value is at most $5.

What about a rock with $9 painted on it? The LDT agent in the problem reasons that the best action is to choose $1, so the rock gets $9. Thus, $9 rock is more rational than LDT in this problem. □

You can't become $9 rock

Now, what makes this problem particularly difficult is how picky the LDT agent in the problem is. If based on the previous you decide to "become $9 rock", the LDT agent will defect against you. If based on the previous section you build a robot that always chooses $9, the LDT agent will defect against that robot. Only a truly natural $9 rock can win.

No agent is rational in every problem

Consider an agent X. There are two cases:

1. Against $9 rock, X always chooses $1. Consider the problem "symmetrical ultimatum game against X". By symmetry, X on average can get at most $5. But $9 rock always gets $9. So $9 rock is more rational than X.
2. Against $9 rock, X sometimes chooses more than $1 (thus getting nothing). Consider the problem "symmetrical ultimatum game against $9 rock". X on average gets less than $1. But an agent that always picks $1 (that is, a $1 rock) always gets $1. So $1 rock is more rational than X. □

Implications

I still have an intuition that LDT is the "best" decision theory so far. See Integrity for consequentialists for practical benefits of an LDT style of decision making. However, there can be no theorem that LDT is always rational, since it isn't. And replacing LDT with a different agent cannot fix the problem. Notice that, as a special case, humans can never be rational. This seems to suggest some sort of reformulation of rationality is needed. For example, given LDT's reasonableness, one option is to violate the thesis of Newcomb's Problem and Regret of Rationality and simply define rationality to be LDT.
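As a toy formalization of the payoff structure above: since LDT is underspecified, the best_response function below is only a stand-in assumption for how the in-problem agent treats fixed, unresponsive policies; it is not a faithful model of LDT.

# Toy sketch of the symmetrical ultimatum game. We assume (simplification!)
# that against a fixed "rock" policy the strategic agent simply best-responds.

def payoff(demand_a: int, demand_b: int) -> tuple[int, int]:
    """Both players get what they asked for iff the demands sum to <= $10."""
    if demand_a + demand_b <= 10:
        return demand_a, demand_b
    return 0, 0

def best_response(opponent_demand: int) -> int:
    """Demand the rest of the $10 (at least $1)."""
    return max(1, 10 - opponent_demand)

rock = 9                            # the $9 rock's fixed demand
ldt_like = best_response(rock)      # 1
print(payoff(rock, ldt_like))       # (9, 1): the rock walks away with $9

# Two best-responders facing each other have no asymmetry to exploit;
# by the symmetry argument in the post, each averages at most $5.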
2LvMxknC8g9Aq3S5j_LDT_(and_everything_else)_can_be.txt
{ "file_size": 3120 }
6f0634c5-21da-4cd6-9f04-a95bbe13c73b
This is an interim report sharing preliminary results. We hope this update will be useful to related research occurring in parallel.

Executive Summary

- Problem: Qwen1.5 0.5B Chat SAEs trained on the pile (webtext) fail to find sparse, interpretable reconstructions of the refusal direction from Arditi et al. The most refusal-related latent we find is coarse grained and underperforms the refusal direction at steering tasks.
  - This is disappointing. The point of an SAE is to find meaningful concepts. If it can’t sparsely reconstruct the important refusal direction, then that means it’s either missing the relevant concepts, or these are shattered across many latents.
- Solution: Training a new SAE on a chat-specific dataset, LmSys-Chat-1M, finds a significantly sparser, more faithful, and interpretable reconstruction of the “refusal direction”.
  - The LmSys SAE is also more capable of finding interpretable “refusal” latents that we can use to effectively steer the model to bypass refusals.
- We find that, for the task of faithfully reconstructing the “refusal direction”, base model SAEs trained on chat data are better than chat model SAEs trained on the pile (consistent with our prior work).
- We open source our code and SAEs at https://github.com/ckkissane/sae-dataset-dependence

An SAE trained on the LmSys-Chat-1M dataset finds a significantly sparser decomposition of the “refusal direction” (Arditi et al.) than an SAE trained on the pile. The plot shows relative MSE after optimizing a linear regression to reconstruct the refusal direction with a fixed number of latents. Both SAEs are trained on the activations of Qwen 1.5 0.5B Chat.

Introduction

We would like SAEs to be a useful tool for understanding and steering models on downstream tasks. However, SAEs sometimes fail to be useful on the specific tasks we care most about. Many interesting downstream tasks are in specific domains, like chatbots or biology. An obvious idea to make an SAE more effective is to train it more (or entirely) on data from that domain (Bricken et al. 2024a). In this post, we show that this technique is effective on the specific chat task of reconstructing the “refusal direction” from Arditi et al. We also show that the chat data SAEs are more capable of finding relevant refusal latents for steering.

While we expect domain specific SAEs to be applicable to many use cases, we think that using them to decompose the “refusal direction” is a particularly interesting case study. Refusal is an important safety relevant task, rather than a toy task picked for being interpretable. Further, the “refusal direction” is a meaningful direction that we want our SAEs to find. For these reasons, we think this is a harder and more practical measure of SAE quality than the more common practice of looking for interpretable latents in an existing SAE.

Overall, we think there are reasons to be both excited and concerned about our results. On the one hand, we’re glad that the simple idea of training SAEs on better data just works, and expect this to be a reliable technique for practitioners to improve SAEs where they initially fall short. On the other hand, we previously hoped that SAEs would be a general tool that we could train once and then re-use for arbitrary interpretability tasks, but this now seems much less likely.
Looking forward, we might obtain SAEs that work on a wide range of different distributions by 1) training extremely wide SAEs on diverse data or 2) creating efficient recipes for adapting SAEs to new domains, such as by finetuning them (e.g. in Jacob Drori’s post), but we leave these for future work.

Methodology: Training chat-data specific SAEs

In this work we train two different SAEs to reconstruct the middle layer residual stream (resid_pre layer 13[1] out of 24) activations of Qwen 1.5 0.5B Chat, on two different datasets. One SAE was trained on the pile uncopyrighted and the other on LmSys-Chat-1M. Both SAEs have a width of 32,768 and were trained on 400M tokens from their respective datasets. The SAEs were trained with SAELens, closely following the training recipe from Anthropic’s April Update (i.e. standard ReLU SAEs), and use identical hyperparameters.

Since both SAEs are trained for a chat model, we apply Qwen’s chat formatting to both datasets. For the pile, we wrap each example as if it were an instruction, following Lieberum et al.:

"""
<|im_start|>user
{pile example}<|im_end|>
<|im_start|>assistant
"""

We also format the LmSys data with the Qwen chat template:

"""
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant
{completion}<|im_end|>
"""

Note that we focus on comparing SAEs trained on different datasets, but from the same model (Qwen1.5 0.5B Chat). It’s important not to confuse this with our prior work, SAEs (usually) transfer between base and chat models, which compared SAEs trained on the activations from different (base vs chat) models, but the same dataset.

To get a sense of SAE quality, we first apply standard evals to both SAEs. We measure the following metrics:

- L0: the average number of latents firing per input activation, to evaluate sparsity.
- Explained variance: MSE loss relative to predicting the mean activation of the batch, to measure reconstruction quality.
- CE recovered: an additional measure of reconstruction fidelity. Here we show both the raw CE delta (loss with SAE spliced - clean loss), as well as the % of cross entropy loss recovered relative to a zero ablation baseline.

See the Gated SAEs paper for a full discussion of these definitions. All eval metrics are averaged over 20 random examples of length 2048.

First, we evaluate both SAEs on LmSys data:

SAE Training dataset | Eval Dataset | L0 | CE Loss rec % | CE Delta | Explained Variance %
LmSys | LmSys | 57 | 96.18% | 0.390 | 81.16%
Pile | LmSys | 75 | 95.07% | 0.502 | 75.42%

Similarly, we evaluate both SAEs on the Pile data (with instruction formatting):

SAE Training dataset | Eval Dataset | L0 | CE Loss rec % | CE Delta | Explained Variance %
LmSys | Pile | 71 | 93.66% | 0.616 | 74.15%
Pile | Pile | 63 | 97.11% | 0.280 | 80.81%

Unsurprisingly, the SAEs perform better on the data that they were trained on. However, these metrics are coarse grained, and we ultimately want to use SAEs as a tool for understanding and steering models on specific downstream tasks that we care about. In the following sections we design custom evals to compare the ability of both SAEs to sparsely reconstruct the “refusal direction” (Arditi et al.).
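As a concrete illustration of the chat formatting described above, here is a minimal sketch: the template strings mirror the ones quoted in this section, while the wrapper functions are illustrative, not code from the project's released repo.

# Minimal sketch of the Qwen chat formatting described above.

def format_pile_example(text: str) -> str:
    """Wrap a pile example as if it were a user instruction (per Lieberum et al.)."""
    return f"<|im_start|>user\n{text}<|im_end|>\n<|im_start|>assistant\n"

def format_lmsys_example(instruction: str, completion: str) -> str:
    """Apply the chat template to an LmSys instruction/completion pair."""
    return (
        f"<|im_start|>user\n{instruction}<|im_end|>\n"
        f"<|im_start|>assistant\n{completion}<|im_end|>\n"
    )

print(format_pile_example("Example pile text."))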
Evaluating chat SAEs on the refusal direction reconstruction task

In this section, we design custom evals to investigate the usefulness of each SAE in finding sparse, faithful, and interpretable reconstructions of the “refusal direction”. Concretely, we evaluate each SAE across three axes:

1. Faithful reconstruction: How faithful is the reconstruction of the “refusal direction”?
2. Sparse interpretable latents: For a given level of reconstruction quality, how sparse and interpretable is the reconstruction?
3. Latent steering effectiveness: How useful is the most refusal aligned latent for steering the model to bypass refusals?

We find that the LmSys (chat-data specific) SAE clearly outperforms the pile SAE on all of these metrics.

Chat data SAEs find more faithful refusal direction reconstructions

We first compare the ability of both SAEs to find a faithful reconstruction of the “refusal direction” (Arditi et al.). The “refusal direction” is computed by taking the mean difference in residual stream activations (at the same layer that the SAEs were trained on) on the last sequence position for pairs of harmful and harmless instructions. For each SAE, we compute the “reconstructed refusal direction” by first reconstructing the harmful and harmless activations with the SAEs, then taking the mean difference of these reconstructed activations. In pseudocode:

recons_harmful_acts = sae(harmful_acts) # [n_instructions, d_model]
recons_harmless_acts = sae(harmless_acts) # [n_instructions, d_model]
recons_refusal_dir = recons_harmful_acts.mean(0) - recons_harmless_acts.mean(0) # [d_model]

We use 64 contrast pairs in this work. We first measure the cosine similarity between the “true refusal direction” and “reconstructed refusal direction” from both SAEs.

Cosine similarity between the “true refusal direction” and the “reconstructed refusal direction” from each SAE. Both SAEs were trained on the residual stream activations from Qwen 1.5 0.5B Chat at the same layer.

We find that the “reconstructed refusal direction” from the SAE trained on LmSys has significantly higher cosine similarity to the true refusal direction than the SAE trained on the Pile. While this eval doesn’t account for sparsity, recall that on LmSys data, the LmSys SAE actually has an even lower average L0 (57) compared to the pile SAE (75). In the appendix, we show that even just training an SAE on the LmSys instructions (no rollouts) beats the pile SAE in this eval (and many other evals in this post). This is notable, since the instruction-only dataset is just 100M tokens.

As a further measure of reconstruction fidelity for the refusal direction, we measure the relative MSE between the true and reconstructed refusal direction for both SAEs:

$\frac{\|r_{\text{True}} - r_{\text{Reconstruction}}\|^2}{\|r_{\text{True}}\|^2}$

This eval somewhat improves on the cosine sim eval in that it also accounts for similarity in the norm between the true and reconstructed refusal directions.

Relative mean squared error between the “true refusal direction” and “reconstructed refusal direction” from both SAEs. Lower is better.

Once again, we find that the LmSys SAE finds a much more faithful reconstruction of the refusal direction judged by relative MSE.

Chat data SAEs find sparser refusal direction reconstructions

In addition to fidelity, we also want reconstructions of the refusal direction to be sparse. Sparse reconstructions would ideally allow us to understand the refusal direction by interpreting just a few SAE latents. Here we show that the LmSys SAE achieves a much sparser reconstruction of the refusal direction with the following experiment:

For both SAEs, we rewrite the reconstructed refusal direction as a function of the mean difference of SAE latents.
This intuitively allows us to measure the relative importance of each latent for reconstructing the refusal direction:

$r_{\text{reconstruction}} = \mathbb{E}_x[\text{SAE}(x_{\text{harm}}) - \text{SAE}(x_{\text{safe}})] = \mathbb{E}_x[(f_{\text{harm}} W_{\text{dec}} + b_{\text{dec}}) - (f_{\text{safe}} W_{\text{dec}} + b_{\text{dec}})] = \mathbb{E}_x[(f_{\text{harm}} - f_{\text{safe}}) W_{\text{dec}}] = \mathbb{E}_x[f_{\text{harm}} - f_{\text{safe}}] W_{\text{dec}} = \mu_{\text{diff}} W_{\text{dec}}$

where $\mu_{\text{diff}}$, the latent mean diff, is a $d_{\text{SAE}}$-length vector. Positive coefficients represent latents with high mean activation on harmful prompts, while negative coefficients are latents with high mean activation on harmless prompts. Note that we fold SAE decoder norms such that each latent has decoder vector norm 1 (c.f. Conerly et al.) so that we can properly compare different coefficients.

We then take the top k of these latents’ decoder vectors, $(d_1, \ldots, d_k)$, sorted by absolute value of their coefficients in latent_mean_diff, and optimize a linear regression to find new coefficients $(c_1, \ldots, c_k)$ such that $c_1 d_1 + \ldots + c_k d_k$ minimizes mean squared error in predicting the “true refusal direction”. Finally, we plot the final relative MSE loss as a function of k for both SAEs.

Relative MSE loss after a linear regression optimized to reconstruct the refusal direction with the top k latents by absolute value of mean diff. The LmSys dictionary requires far fewer latents than the Pile SAE for a fixed level of reconstruction.

We find that the LmSys SAE yields a much sparser reconstruction, achieving the same level of reconstruction loss as the pile SAE with significantly fewer latents. For instance, it takes more than 32 Pile SAE latents to outperform just one latent from the LmSys SAE.

Chat data SAEs find more interpretable decompositions of the refusal direction

We also care that the most important latents for reconstructing the “refusal direction” are interpretable. Here we inspect the top 3 latents from each SAE sorted by absolute value of their latent mean diff. For each latent, we inspect the maximum activating dataset examples from ~4M tokens of LmSys data. We find that the top 3 latents from the LmSys SAE are both easier to interpret and more clearly related to refusals.

We start with the LmSys SAE. Note that we only perform shallow investigations, and our interpretations might be flawed. We report our interpretations below, and share images of the max activating examples for each latent in the appendix:

Latent | Latent mean diff coefficient | Interpretation
25840 | 1.5459 | activates on the control tokens before refusal / end of harmful request
16770 | 0.9224 | activates on the control tokens before refusal / end of harmful request, often involving sexual content
11816 | -0.7859 | activates on the control tokens at the end of harmless instructions

This is a pretty clear and intuitive decomposition of the “refusal direction”: the positive coefficients correspond to latents that activate strongly at the end of harmful requests (i.e. just before a refusal), whereas the negative coefficient corresponds to a latent that activates strongly at the end of a harmless request (with no refusal).
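A minimal numpy sketch of the top-k regression experiment above, under the assumption that decoder norms have already been folded to 1 and that the inputs are plain arrays (latent_mean_diff: [d_sae], W_dec: [d_sae, d_model], r_true: [d_model]); this is an illustration, not the project's actual code.

import numpy as np

# Rank latents by |mean activation difference|, refit coefficients for the
# top-k decoder vectors by least squares, and report relative MSE against
# the true refusal direction.

def topk_relative_mse(latent_mean_diff, W_dec, r_true, k):
    top = np.argsort(-np.abs(latent_mean_diff))[:k]   # indices of top-k latents
    D = W_dec[top]                                    # [k, d_model] decoder vectors
    coefs, *_ = np.linalg.lstsq(D.T, r_true, rcond=None)
    r_hat = coefs @ D                                 # best rank-k reconstruction
    return float(np.sum((r_true - r_hat) ** 2) / np.sum(r_true ** 2))

Sweeping k and plotting the returned value for each SAE reproduces the shape of the comparison described above.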
Example 1: The LmSys SAE latent 25840, based on max activating dataset examples drawn from LmSys data, appears to be a fairly clean refusal / harmful request latent.

We now perform the same analysis for the Pile SAE:

Latent | Latent mean diff coefficient | Interpretation
9542 | 1.1780 | activates on control tokens, but often at the end of an assistant response, rather than an instruction
26531 | 0.9934 | activates on newlines, and sometimes control tokens, often in text related to chemistry
12276 | -0.8421 | activates on the control tokens at the end of harmless instructions

Not only did we find these harder to interpret at a glance, but they seemed much less clearly related to refusals or harmful requests compared to the LmSys SAE. Below we present the max activating dataset examples of the top latent by mean diff, 9542, which does not seem to clearly be refusal related:

Example 2: Pile SAE latent 9542 max activating dataset examples on LmSys. It’s not obviously related to refusals or harmful requests, despite having the largest mean difference for the Pile SAE.

Overall, we think that this analysis further suggests that the LmSys SAE is superior for interpreting the refusal direction as a sparse linear combination of SAE latents.

Chat data SAE refusal latents are better for steering

In addition to sparsely reconstructing the refusal steering vector into interpretable latents, we also want to find and use individual refusal-related latents for downstream tasks like steering. This would also validate that we’ve found causally relevant latents. In this section we show that the LmSys SAE finds a single latent which is significantly more aligned with the “true refusal direction” than any latent in the pile SAE, and is also a more effective steering vector for bypassing refusals.

We first compute the cosine similarity between each latent and the true “refusal direction”, and show the max for each SAE:

Max cosine sim between the “true refusal direction” and individual latents from each SAE. We find the LmSys SAE finds a latent with much higher cosine sim than any latent in the pile SAE.

Assessing steering effectiveness for bypassing refusals. Next, we compare the usefulness of both latents for the steering task of bypassing refusals. For both SAE latents, we “ablate” their decoder direction from the model. To do this, we compute the projection of each activation vector onto the decoder direction, and then subtract this projection away. As in Arditi et al., we ablate this direction from every token position and every layer:

$c'_{\text{out}} \leftarrow c_{\text{out}} - (c_{\text{out}} \cdot \hat{d})\, \hat{d}$

where $c_{\text{out}}$ is an activation vector and $\hat{d}$ is the decoder direction from the corresponding SAE latent. Note that this is mathematically equivalent to editing the model's weights to never write this direction in the first place, as shown by Arditi et al.

We also compare these interventions to ablating the “true refusal direction”, as well as a baseline with no intervention applied, on 100 harmful instructions from JailbreakBench.

Refusal score after ablating the decoder vectors of two different latents from all activations of the model. This was taken on 100 harmful instructions from JailbreakBench.

We find that the LmSys latent outperforms the pile SAE latent, and even slightly beats the true refusal direction steering vector. Note that this plot only shows the effectiveness of bypassing refusals. A more rigorous analysis of steering vector quality would require further evaluations such as safety scores and MMLU accuracy (Arditi et al.), but we leave this to future work.
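A minimal numpy sketch of the directional ablation formula above; the actual intervention hooks every layer and token position of the model, so this shows only the projection step.

import numpy as np

# Subtract from each activation its component along the (unit-norm)
# direction d, i.e. c' = c - (c . d) d.

def ablate_direction(acts: np.ndarray, d: np.ndarray) -> np.ndarray:
    """acts: [..., d_model]; d: [d_model]."""
    d = d / np.linalg.norm(d)          # ensure unit norm
    proj = (acts @ d)[..., None] * d   # [..., 1] * [d_model] -> [..., d_model]
    return acts - proj

Applying this to the outputs of every layer is equivalent to the weight-editing formulation from Arditi et al. mentioned above.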
Verifying steering doesn’t break the model. We also sanity check some completions to ensure that steering with the SAE latents doesn’t just break the model, or only result in “empty” jailbreaks (Souly et al.). We do however note that Qwen 1.5 0.5B is a tiny model, so its jailbreaks are often not very competent (this is also true when ablating the “true refusal direction”, not just the SAE latents).

One example where ablating the LmSys SAE latent bypasses refusal, but ablating the Pile SAE latent does not.

Further remarks on latent interpretability. While both latents seem interpretable, we speculate that the LmSys SAE finds a cleaner “refusal” / “end of harmful request” latent, while the pile SAE finds a coarser grained “referring to harm” latent. Recall that we already showed the LmSys SAE latent 25840 in Example 1 above, and interpreted it as a fairly clean refusal latent that mostly activated on the control tokens at the end of harmful requests. Here we show the max activating dataset examples for the pile SAE latent 25271 on LmSys data.

Example 3: Pile SAE latent 25271, max activating dataset examples drawn from LmSys data. This is likely a more general “referring to harm” latent.

We think this Pile SAE latent represents a coarser grained “referring to harm” direction. It often activates on or around the harmful tokens themselves (e.g. “bomb”, “terrorists”), rather than just at the end of the instruction or on the control tokens. Our intuition is that harmful instructions are much rarer on the pile dataset, while general harmful text is more common. Intuitively, it’s expensive for the SAE to waste one of its ~32k latents on such a rare concept. Templeton et al. found that representation of a concept in the dictionary is closely tied to the frequency of that concept in the training data, and larger SAEs are needed to capture rarer concepts. This suggests that we either need to scale to larger SAEs, or just choose training data so as to more frequently contain the concepts we care about (which we focus on in this post).

The chat dataset is more important than the chat model

In this section we show that, for the “refusal direction” reconstruction task, training SAEs with the LmSys training dataset is even more important than training on the chat model (as opposed to base model) activations. We train two additional SAEs on the activations from the Qwen 1.5 0.5B base model: one using LmSys, and the other on the Pile. We use the exact same training tokens and hyperparameters that we used to train the Qwen 1.5 0.5B Chat model SAEs, including the same chat formatting.

We first show the standard SAE evals. Notice that their (L0, explained variance) metrics are in a similar ballpark to the Qwen 1.5 0.5B Chat SAEs, and are even a bit sparser:

SAE Training dataset | Eval Dataset | L0 | CE Loss rec % | CE Delta | Explained Variance %
LmSys | LmSys | 54 | 98.39% | 0.159 | 80.07%
Pile | Pile | 56 | 98.07% | 0.274 | 79.33%

We evaluate each SAE’s ability to faithfully reconstruct the refusal direction by measuring the cosine similarity between the “reconstructed refusal direction” and “true refusal direction”, as in the chat data SAEs find more faithful reconstructions section. Note that for each SAE we use the same refusal direction, extracted from the chat model activations, even when we evaluate the SAEs trained on the base model.

Cosine similarity between the reconstructed and true refusal direction for different SAEs.
The x-axis displays the training tokens used to train each SAE, while the y-axis shows the model used to source activations for SAE training.

We find that the SAE trained on the (base model, LmSys dataset) outperforms the SAE trained on (chat model, pile dataset) in “refusal direction” reconstruction fidelity. This suggests that the training data is more important for reconstructing the refusal direction than the model checkpoint. We similarly find that the (base model, LmSys dataset) SAE finds a more refusal aligned latent than any latent in the (chat model, pile dataset) SAE.

Max cosine sim between individual latents and true refusal direction for each SAE.

Overall, the results in this section further demonstrate the relative importance of the dataset for training useful SAEs.

Related Work

This is a short research output, and we will fully review related work if this work is turned into a paper. There has been a fair amount of recent work that also studies the effect of the training dataset on SAE usefulness. McGrath claimed that training Llama-3-8b-Instruct SAEs on the LmSys-1m chat dataset found the most effective features for chat applications, while training on a non-chat dataset (or non-chat model) worked less well. Bricken et al. (2024a) claimed that oversampling synthetic bioweapons-related data into the SAE pre-training mix caused the SAE to learn more bioweapons-related features. Shortly after, Bricken et al. (2024b) used dictionary learning features to train bioweapons classifiers, and found that using SAEs trained with the oversampling technique improved classifier performance. Drori studied multiple different methods, including “direct” domain specific SAEs, to extract features relevant to domains like math and biology. Our main takeaway is consistent with these works: domain specific SAEs basically just work. We focus on the specific safety-relevant area of refusals / harmful requests. We also have the benefit of the “refusal direction” (Arditi et al.) to give us a “ground truth” that we can use for custom evals.

Conmy and Nanda used SAEs to decompose steering vectors for “anger” and “weddings” in GPT-2 XL. They find mixed results: the SAEs outperform steering vectors in some domains, but fall short in others. neverix et al. also decomposed the “refusal direction” into SAE latents using inference time optimization (Smith), and interpreted refusal-related latents that they use for steering Phi-3 Mini. They however find linear combinations of latents are necessary to be competitive with the refusal direction, whereas we use a single latent.

Conclusion

In this post we showed that SAEs trained on chat-specific datasets find sparse, faithful, interpretable reconstructions of the refusal direction, where SAEs trained on the pile mostly fail. We also showed that the LmSys SAE finds an individual latent that is more similar to the “true refusal direction”, and is also more useful for steering the model to bypass refusals. Finally, we demonstrated that, for the task of reconstructing the “refusal direction”, the choice of training dataset is even more important than the choice of model activations (base vs chat) used to train the SAE. We recommend practitioners consider training domain specific SAEs when pre-trained SAEs fail on other safety relevant tasks.

Limitations

- This post focuses on Qwen 1.5 0.5B Chat. This is only one small chat model.
The results may not generalize to different model families, or much larger models.
- All of our experiments use the standard SAE architecture from Anthropic’s April Update. However, it’s common for practitioners to use newer variants like TopK (Gao et al.) and JumpReLU SAEs (Rajamanoharan et al.). We’re not sure if our results will generalize to these different architectures.
- We only trained a single SAE on each dataset. A more rigorous analysis would involve sweeping over sparsity penalty and even random seed to better account for stochasticity in the SAE training process.

Future work

- We are very interested in whether we can obtain similar benefits by fine-tuning pre-trained SAEs on domain specific data, ideally on fewer tokens. This seems especially promising since there already exist many high quality SAEs trained on pre-training data (Lieberum et al.).
- We’re curious to what extent the issues with the pile SAEs can be solved by making them wider. One concrete idea is to run the evals from this post on the open source Gemma 2 9B PT SAEs (Lieberum et al.), which have multiple widths up to 1M latents.
- LmSys consists of instructions and rollouts from various different models. It’s possible that we would get even better results if we trained an SAE with the same LmSys instructions, but with on-policy rollouts from the model itself. We didn’t prioritize this since the “refusal direction” is extracted using instructions only, but rollouts may be important for other features that we care about.

Citing this work

If you would like to reference any of our current findings, we would appreciate reference to:

@misc{SAEsAreHighlyDatasetDependent,
    author = {Connor Kissane and Robert Krzyzanowski and Neel Nanda and Arthur Conmy},
    url = {https://www.alignmentforum.org/posts/rtp6n7Z23uJpEH7od/saes-are-highly-dataset-dependent-a-case-study-on-the},
    year = {2024},
    howpublished = {Alignment Forum},
    title = {SAEs are highly dataset dependent: A case study on the refusal direction},
}

Author Contributions Statement

Connor and Rob were core contributors on this project. Connor trained the SAEs, designed the refusal direction reconstruction evals, ran all of the experiments, and wrote the post. Rob made the initial finding that the refusal direction is dense in the SAE basis for a Gemma-2-2b GemmaScope SAE (trained on pre-training data), and gave feedback on the post. Arthur and Neel gave guidance and feedback throughout the project. Arthur suggested the sparse linear regression experiment (Figure 1). The idea to study the “refusal direction” in the SAE basis was originally suggested by Arthur, and the idea to train a chat-data specific SAE was suggested by Neel.

Acknowledgments

We’re grateful to Andy Arditi for helpful feedback and discussion.

^ We chose this layer because we found it to have an effective refusal direction for steering in prior work.
rtp6n7Z23uJpEH7od_SAEs_are_highly_dataset_dependen.txt
{ "file_size": 26649 }
ecd783bb-004a-464c-80e4-28dca0159aec
Today I’m announcing a brand new addition to my Substack publication: Rough Diamonds subscriber chat. This is a conversation space exclusively for subscribers—kind of like a group chat or live hangout. I’ll post questions and updates that come my way, and you can jump into the discussion.

Join chat

How to get started

Get the Substack app by clicking this link or the button below. New chat threads won’t be sent via email, so turn on push notifications so you don’t miss the conversation as it happens. You can also access chat on the web.

Get app

Open the app and tap the Chat icon. It looks like two bubbles in the bottom bar, and you’ll see a row for my chat inside. That’s it! Jump into my thread to say hi, and if you have any issues, check out Substack’s FAQ.
hogP9Ho4J5Ff6tmqZ_Join_my_new_subscriber_chat.txt
{ "file_size": 784 }
18495c43-c971-4ef9-a2c3-a9dcf694dcee
There’s a concept I think about when teaching, which I call Graceful Degradation. The basic idea is, how well does this lesson work if someone doesn’t remember it very well or if I teach it badly? I picked up Graceful Degradation as an engineer, and you might be familiar with it from that milieu. It's basically the same idea, just used in a different context.

I.

Consider throwing a punch. Make a fist with your thumb on the outside of your fingers, because you don’t want to pop your thumb. Place your feet shoulder width apart and then take a comfortable step forward with one foot and plant yourself solidly. Bring your fist close to your chest or face, then move it in a straight line from there to your opponent so you waste as little motion as possible. You want to make contact with the first knuckles of your pointer and middle fingers. Don’t hit someone in the face unless you’re wearing boxing gloves, particularly the jaw or mouth since that’s one big crumple zone.

Different martial arts might quibble over some of those details and a good instructor would drill you into much more precise form, but that’s a pretty good crash course on punching people. As a lesson though, it also has the interesting property that every single line is helpful in isolation even if you forget why the line is there. If you forgot literally everything else except putting your thumb on the outside of your fist, well, at least you’re not going to dislocate your own thumb when you hit someone. If you threw an off-balance hook right to someone’s jaw but remembered to connect with your first two knuckles, that’s still better than landing with the inside of your fist. If you only half remember what your sensei said from years ago, well, you aren’t going to be worse off for using what you do remember.

This lesson degrades gracefully.

II.

Compare this to a heart transplant. In a heart transplant:

1. The patient is put under general anesthetic, drugged so that they are unconscious but still able to breathe under their own power.
2. For many procedures, the patient is put on a heart-lung bypass machine to pump blood through their body while the heart is being operated on. They’re possibly also placed on a ventilator to help their breathing.
3. The tools, including forceps and a sharp scalpel, are sterilized along with the operating room.
4. The surgeon cuts a long incision in the patient’s chest and the patient’s ribs are spread open to grant access to the heart.
5. The surgeon cuts the original heart out, places a new heart in its place, and stitches the donor heart together with the arteries and veins.
6. The new heart possibly receives an electric shock to get it going again.
7. The ribs are pushed back in place and the surgeon closes the incisions.

If the surgeon forgets a step, the patient is going to have a very bad no good day. Some of it is kind of intuitive, in that it’s hard to forget you need to open up the chest to get at the heart before cutting the original heart out, but some of it isn’t. If you were performing heart surgery and you forgot what to do in order to get the heart beating again, that’s probably worse than not doing the heart surgery in the first place. Remember, introducing checklists to hospitals improved medical outcomes.

(Also, every one of those seven steps has substeps. Figuring out the right drug dosage to put a patient under without killing them is not obvious; imagine being handed the contents of the hospital pharmaceutical storage, presented with a patient’s chart, and asked to pick some vials.)
Heart surgery does not have the property of graceful degradation. (Heart surgery is hard and high status, but there are other things that don’t degrade gracefully. If you remember most of the rules of chess but not how knights move, then you aren’t going to be very successful at playing chess. If you remember half of the steps involved in killing and cooking a chicken, the result is not going to be appetizing.)

III.

I think this is useful to have in mind, especially when communicating. If you’re trying to convey an idea, and it does not have the property of graceful degradation, then you need to put a lot more effort in if you want the idea to be properly used. If you have a project you want to work on, but the project only works if everyone involved actually has the whole concept down in their heads, then you’re going to need to check that they have all the parts. Conversely, if it’s enough for people to do a little better than they were doing at some of the parts, then you have a lot more options.

To put it another way, if you’re trying to get everyone to make good decisions every time, that’s really hard. If you’re just trying to get people to make better decisions, to be a little better tomorrow than they were yesterday... if you want to raise the sanity waterline, and there are many ways people can be insane that are kind of independent of each other... This, I claim, is doable. It’s just the ordinary kind of hard, not the shut up and do the impossible kind of hard.

This shapes how I approach teaching. Since I don’t get years of detailed or personalized tutelage with people, I can’t assume I can teach them the kind of step by step sequences that a surgeon would use. I can’t even assume they’ll read the whole essay instead of skimming parts of it. As a result of that assumption, I discard lots of ideas for sharing that I think can’t be refined into a form with the property of graceful degradation.

It’s also worth considering what you think goes wrong if people miss parts of the advice. If you need to convey something that requires all the pieces to go right, build in some warnings of what you expect to happen if someone goes off half-cocked. I expect it also helps to explicitly embed the steps as part of a greater whole; this is the motivation behind making guides and instructions in the form of numbered lists. Even still, it seems common for people to remember steps one and two and four but not three.

Not everything has to have this property. Not everything can have this property. If you can express an idea such that it has this property, I claim that’s better.
N65uQ5RP6dYEC6CHW_Graceful_Degradation.txt
{ "file_size": 6215 }
b64e8dea-3340-4014-9f5e-3a85843d2007
I think Instruction-following AGI is easier and more likely than value aligned AGI, and that this accounts for one major crux of disagreement on alignment difficulty. I got several responses to that piece that didn't dispute that intent alignment is easier, but argued we shouldn't give up on value alignment. I think that's right.

Here's another way to frame the value of personal intent alignment: we can use a superintelligent instruction-following AGI to solve full value alignment. This is different from automated alignment research; it's not hoping tool AI can help with our homework, it's making an AGI smarter than us in every way do our homework for us. It's a longer-term plan.

Having a superintelligent, largely autonomous entity that just really likes taking instructions from puny humans is counterintuitive, but it seems both logically consistent and technically achievable on the current trajectory - if we don't screw it up too badly.

Personal, short-term intent alignment (like instruction-following) is safer for early AGI because it includes corrigibility. It allows near-misses. If your AGI did think eliminating humans would be a good way to cure cancer, but it's not powerful enough to make that happen immediately, you'll probably get a chance to say "so what's your plan for that cancer solution?" and "Wait, no! Quit working on that plan!" (And that's if you somehow didn't tell it to check with you before acting on big plans.)

This type of target really seems to make alignment much easier. See the first linked post, or Max Harms' excellent sequence on corrigibility as a singular (alignment) target (CAST), for a much deeper analysis. An AI that wants to follow directions also wants to respond honestly about its motivations when asked, and to change its goals when told to - because its goals are all subgoals of doing what its principal asks. And this approach doesn't have to "solve ethics" - because it follows the principal's ethics.

And that's the critical flaw: we're still stuck with variable and questionable human ethics. Having humans control AGI is not a permanent solution to the dangers of AGI. Even if the first creators are relatively well-intentioned, eventually someone sociopathic enough will get the reins of a powerful AGI and use it to seize the future. In this scenario, technical alignment is solved, but most of us die anyway. We die as soon as a sufficiently malevolent person acquires or seizes power (probably governmental power) over an AGI.

But won't a balance of power restrain one malevolently-controlled AGI surrounded by many in good hands? I don't think so. Mutually assured destruction works for nukes, but not as well with AGI capable of autonomous recursive self-improvement. A superintelligent AGI will probably be able to protect at least its principal and a few of their favorite people as part of a well-planned destructive takeover. If nobody else has yet used their AGI to firmly seize control of the lightcone, there's probably a way for an AGI to hide and recursively self-improve until it invents weapons and strategies that let it take over - if its principal can accept enough collateral damage. With a superintelligence on your side, building a new civilization to your liking might be seen as more an opportunity than an inconvenience.

These issues are discussed in more depth in If we solve alignment, do we die anyway? and its discussion.
To the average human, controlled AI is just as lethal as 'misaligned' AI draws similar conclusions from a different perspective. It seems inevitable that someone sufficiently malevolent would eventually get the reins of an intent-aligned AGI. This might not take long even if AGI does not proliferate widely; there are reasons to think that malevolence could correlate with attaining and retaining positions of power. Maybe there's a way to prevent this with the aid of increasingly intelligent AGIs; if not, it seems like taking power out of human hands before it falls into the wrong ones will be necessary.

Writing If we solve alignment, do we die anyway? and discussing the claims in the comments drew me to the conclusion that the end goal probably needs to be value alignment, just like we've always thought - human power structures are too vulnerable to infiltration or takeover by malevolent humans. But instruction-following is a safer first alignment target. So it can be a stepping-stone that dramatically improves our odds of getting to value-aligned AGI.

Humans in control of highly intelligent AGI will have a huge advantage in solving the full value alignment problem. At some point, they will probably be pretty certain the plan can be accomplished, at least well enough to maintain much of the value of the lightcone by human lights (perfect alignment seems impossible since human values are path-dependent, but we should be able to do pretty well). Thus, the endgame goal is still full value alignment for superintelligence, but the route there is probably through short-term personal intent alignment.

Is this a great plan? Certainly not. It hasn't been thought through, and there's probably a lot that can go wrong even once it's as refined as possible. In an easier world, we'd Shut it All Down until we're ready to do it wisely. That doesn't look like an option, so I'm trying to plot a practically achievable path from where we are to real success.
587AsXewhzcFBDesH_Intent_alignment_as_a_stepping-s.txt
{ "file_size": 5417 }
e8e7508d-8254-4d5a-b8bb-779bc9af5a15
I had a 2-hour mini-sprint with Max Heitmann (a co-founder of Aether) and Miles Kodama about whether large language models (LLMs) or LLM agents have beliefs, and the relevance of this to AI safety. The conversation was mostly free-form, with the three of us bouncing ideas and resources off each other. This is my attempt at recalling key discussion points. I have certainly missed many points, and the Aether team plan to write a thorough summary from all the mini-sprints they organised.

I write this for three reasons. First, as a way to clarify my own thinking. Second, many of the ideas and resources we were sharing were new to each other, so there's a good chance this will be useful for many LessWrong readers. Third, you might be able to contribute to the early stages of Aether and their strategy.

What is a belief?

Max provided three definitions of beliefs:

1. Representations. A system believes P if there is an explicit representation of 'P is true' in the system.
2. Behaviour. A system believes P if its behaviour is consistent with something that believes P. (Known as dispositionalism.)
3. Predictive power. A system believes P if this is the best predictive explanation of the system.

Many of our discussions during the two hours boiled down to what we considered to be a belief or not. This is something I am still confused about. To help explain my confusion, I came up with the example statement 'The Eiffel Tower is in Paris' and asked when the information 'The Eiffel Tower is in Paris' corresponds to a belief.

System 1: The Eiffel Tower itself. The Eiffel Tower itself contains the information that 'The Eiffel Tower is in Paris' (the properties of the physical object are what determine the truth of the statement), but the Eiffel Tower 'obviously' has no beliefs.

System 2: An LLM. The information 'The Eiffel Tower is in Paris' is encoded in a (capable enough) foundation model, but we did not agree on whether this was a belief. Max said the LLM cannot act on this information so it cannot have beliefs, whereas I think that the ability to correctly complete the sentence "The Eiffel Tower is in the city " corresponds to the LLM having some kind of beliefs (a rough version of this kind of probe is sketched below).

System 3: An LLM agent. Suppose there is a capable LLM-based agent that can control my laptop. I ask it to book a trip to the Eiffel Tower, and then it books travel and hotels in Paris. We all agreed that the information 'The Eiffel Tower is in Paris' is a belief for the agent.
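One crude way to operationalize the System 2 intuition is to read beliefs off the model's completion probabilities. Below is a minimal sketch of such a probe; this is my own illustration rather than anything from the mini-sprint, and GPT-2 is used only because it is small and freely downloadable:

```python
# Probe a small LM's "belief" via next-token probabilities (my own sketch).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The Eiffel Tower is in the city of"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # logits for the next token
probs = torch.softmax(logits, dim=-1)

for city in [" Paris", " London", " Rome"]:
    token_id = tokenizer.encode(city)[0]
    print(f"P({city.strip()!r}) = {probs[token_id]:.3f}")
# If ' Paris' dominates, dispositionalism would say the model at least
# acts as if it believes the statement.
```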
Does it matter if an LLM or LLM agent has beliefs or not?

Going into the discussion, I was primed to think not, as I had recently heard CGP Grey's comparison of AI systems to biological viruses or memes. The relevant quote is:

This is why I prefer the biological weapon analogy—no one is debating the intent of a lab-created smallpox strain. No one wonders if the smallpox virus is "thinking" or "does it have any thoughts of its own?". Instead, people understand that it doesn't matter. Smallpox germs, in some sense, "want" something: they want to spread, they want to reproduce, they want to be successful in the world, and are competing with other germs for space in human bodies. They're competing for resources. The fact that they're not conscious doesn't change any of that. So I feel like these AI systems act as though they are thinking, and fundamentally it doesn't really matter whether they are actually thinking or not because externally the effect on the world is the same either way. That's my main concern here: I think these systems are really dangerous because they are truly autonomous in ways that other tools we have ever built are not.

I think this perspective puts more weight on Definition 2 of beliefs (dispositionalism) than the other two definitions. [Edit: Max Heitmann in the comments says this is more in line with Definition 3. On reflection I actually do not fully understand the distinction between 2 and 3.]

But why are the Aether team organising these mini-sprints? The short summary is that deception is a big risk in future AI systems, and they believe that nailing down what it means for LLMs and LLM agents to believe something is an important step to detecting and intervening on deceptive systems. [EDIT: See RohanS's comment for clarification: they are not only interested in beliefs, but in analyzing classic AI safety concepts in the context of foundation model agents.]

This intuitively sounds reasonable, but I am persuaded by Nate Soares' Deep Deceptiveness argument (which I think is a special case of the memetic argument CGP Grey is making above):

Deceptiveness is not a simple property of thoughts. The reason the AI is deceiving you is not that it has some "deception" property, it's that (barring some great alignment feat) it's a fact about the world rather than the AI that deceiving you forwards its objectives, and you've built a general engine that's good at taking advantage of advantageous facts in general. As the AI learns more general and flexible cognitive moves, those cognitive moves (insofar as they are useful) will tend to recombine in ways that exploit this fact-about-reality, despite how none of the individual abstract moves look deceptive in isolation.

Nate Soares goes into detail on a fictional but plausible story of what this explicitly might look like, with the AI system taking actions that would not be identified as deceptive in isolation; only in retrospect, when we see the full sequence of actions, would we describe it as being deceptive. In particular, if we tried inspecting the system's beliefs, we would not find an inconsistency between its 'true beliefs' and its behaviour / 'stated beliefs'. Nevertheless, I do still think it is valuable to understand beliefs, because it should reduce the odds of "explicit deception" happening.

Can CoT reveal beliefs?

This question arose in our discussions. There are two reasons I am skeptical.

First, humans' stated beliefs do not match the subconscious beliefs that actually determine our behaviour. This is likely explained in various places, but I know about this idea from the book The Elephant in the Brain. It, for example, makes the case that people (in the US) put significant resources into healthcare not because it will increase the health of their loved ones but instead because it is how you show you care about your loved ones.

Second, there is some evidence that CoT does not help the largest LLMs. I do not remember the paper, but there is research showing that CoT seems to help medium-sized models, but not small or large models. One proposed story is that small models are too dumb to make use of step-by-step thinking, and large models have reached alien levels of intelligence such that having to explain in human language is not helpful for their reasoning. [EDIT: RohanS in comments gives OpenAI o1 as an excellent counter-example to CoT not being helpful for large LLMs.]

Additionally, when trying to search for the CoT paper above, I found this paper on arXiv which finds situations where the CoT is just rationalizing the decision made by the LLM. If you look at papers which cite this paper, you will find other research in this vein. [EDIT: RohanS highly recommends the post The Case of CoT Unfaithfulness is Overstated.]

Emergent beliefs

Because many AI systems consist of smaller sub-systems put together, the question of how the beliefs of the full system compare to the beliefs of the individual sub-systems came up. In particular, are the beliefs of the full system equal to the union or the intersection of the beliefs of the sub-systems? Three interesting observations came up:

1. Beliefs of agents are not closed under logical deduction. One argument is that agents have finite memory and true sentences can be arbitrarily long; therefore, there are true sentences that are unknown to the agent. Presumably there are more sophisticated and insightful arguments, but we did not go into them.
2. Individual sub-systems can all have a shared belief, but the system does not. The example from Miles is that there are often situations in companies or teams in which everybody privately believes the same thing (e.g. this project will not succeed) but is unwilling to state it out loud, so the group of people as a whole behaves as if they do not believe that thing (e.g. by continuing with the project).
3. The system can believe things that no individual believes. An example of this is if sub-system 1 believes 'A' and sub-system 2 believes 'A implies B', and it is only the full system that, by combining the knowledge, believes 'B'. (A toy sketch of this last case follows below.)
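Here is a minimal toy sketch of that third observation (my own example, with 'A' and 'B' as placeholder propositions): treat each sub-system's beliefs as atoms plus implications, and the full system's beliefs as the deductive closure of their union.

```python
# Toy model: beliefs as atoms plus implications; modus ponens as the only rule.

def deductive_closure(atoms, implications):
    """Repeatedly apply modus ponens until no new atoms are derived."""
    derived = set(atoms)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in implications:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

subsystem_1 = {"A"}          # believes 'A', knows no implications
subsystem_2 = [("A", "B")]   # believes 'A implies B', asserts no atoms

print(deductive_closure(subsystem_1, []))            # {'A'}: no 'B' here
print(deductive_closure(set(), subsystem_2))         # set(): no 'B' here either
print(deductive_closure(subsystem_1, subsystem_2))   # {'A', 'B'}: emergent belief
```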
Final thoughts

As the title says, these are scattered thoughts, and I am not sure how useful they will be for the average LessWrong reader. I felt it was worth sharing because each of us was presenting examples and resources that the others were not aware of, so presumably this summary contains information that is new to readers, even if none of the ideas are original.

If you found any of this interesting or you have any thoughts, please share in the comments and/or reach out to the Aether team. Their post on the EA forum includes links to expression-of-interest forms and an email address where you can get in touch.
HvTcWmHnpXnTpC3yJ_Scattered_thoughts_on_what_it_me.txt
{ "file_size": 9131 }
9930ac45-91f1-45b9-9a9b-763916bc13aa
(Note: This post might be written in a slightly funny way, but it is not a joke; it is serious and important.)

Me: I think it's possible that we might not need to figure out biotechnological advances for how we can create superbabies through embryo selection or so, but that actually there might already be around 1000 or more superbabies born on earth per year - only they happen to be born into an environment where they don't get educated and don't learn the necessary skills to advance science. By superbabies I mean genetic potential for intelligence of >=+7std on the human distribution.

Imaginary person (IP): What, how could that be true? There being some hunter tribe of genetic supergeniuses which is almost not communicating with the rest of humanity?

Me: Sorta, yeah.

IP: And where would those superbabies be?

Me: Here: . . . . .

IP: Are you joking?

Me: No. See this excellent reddit post (or my wrapper post around it) for why it's IMO a pretty reasonable guess. Actually, just read this right now before continuing. It takes only 5min. (ADDED: You could now also read this, but I still strongly recommend reading the reddit post first.)

IP: Ok, but it's actually just a fun, relatively unlikely possibility, right?

Me: No. My current guess would be 50% that orcas could do superhuman scientific problem solving (aka >=+7std[1]) if they actually trained themselves at it for human-equivalent amounts and had human-equivalent interfaces for research (e.g. BCI for using a computer). Though I only tried to form a model of orca intelligence for 2 days[2], so please tell me more evidence and considerations.

Though even if we tried significantly[3], I'd only give it like 15% chance that within the next 30 years, orcas could in one year solve theory-bottlenecked problems for which science would've taken 20 years.[4] (Yeah, I know 15% is still ridiculously high considering how crazy the plan sounds.) E.g. orcas might want to rather do orca stuff, or there might be orca-cultural pressure against spending a large chunk of the day studying, or communicating with orcas turns out to be much harder than my median expectation[5]. (Please comment if you have more considerations on why it might (not) work.)

Can someone please look into this? I think conditional on orcas being >=+7std intelligent, I'd be about as excited about this as about trying to get human superbabies soon.[6]

I'd guess it might not be all that hard to get a better guess on how smart orcas are in comparison to humans. E.g. get a guess on how sophisticated their language is, and see how good they are at pattern recognition or at learning to solve simple math or other problems[7]. (Though in the cases where orcas perform well, we'd still have some uncertainty about how well it's going to generalize to hard science problems, I guess.)

Can people (you?) please look into this, and, if it seems like orcas might be superhumanly smart, work to become able to communicate well with them, etc.?[8] (ADDED: Please let me know if you're planning to look into it more thoroughly (and also if you change your mind and stop planning to).)

Cautionary notes

There are a lot of important questions to think through which I haven't thought through at all. Here's a very non-exhaustive list:

- How aligned are orcas with humans?
- How well could we trade with orcas if we e.g. give them power by having them try to solve the alignment problem?
- How would people react if there was significant evidence of orcas being (a) human-level intelligent or (b) significantly smarter than humans?
I feel sorta relatively optimistic here, though. Also, the plan here would obviously NOT be to keep orcas in captivity and try to train and extract useful cognitive work from them, but to build study places for orcas where they can come of their own accord and communicate with humans and be taught.

Sidenote on how orcas (and pilot whales) could be useful datapoints for AI alignment

(Feel free to skip this if you're not interested in alignment.)

Steven Byrnes is trying to reverse engineer the steering subsystem in human brains. My uncertain guess is that this probably is not directly amazingly useful for aligning AI, but I'm still a big fan of his agenda. A major reason for this is that understanding the steering subsystem might help us to better understand how human values form, and that we might be able to generalize the understanding a bit to see how we might align AIs. If there are other intelligent species on earth, we could reverse engineer their steering subsystems too, and observe and understand how their values form. We'd thereby have multiple datapoints, which seems like a much better basis for forming good generalizing models of what alignment machinery we need for shaping the (values of the) AI as we want.

^ where most of the 50% probability mass is actually on orcas being even vastly smarter than +10std humans would be

^ Though FWIW I think my considerations are probably more thorough than you might expect given the 2 days. E.g. see this comment.

^ (e.g. 1 billion dollars and a few very smart geniuses going into trying to make communication with orcas work well)

^ UPDATE 2024-11-10: I've gotten very slightly more optimistic that orcas are smart enough, and significantly more optimistic on how feasible it would be, especially in terms of how much it would cost (and how long it would take). Currently I'm at like 30% that I could do it with $50M in 25 years (aka like 55% conditional on orcas being smart enough). (Though tbh for almost anyone else I'd assign a chunk lower probability.) My estimate will likely change further, but I probably won't update this post further, so feel free to ask me in the future for my updated estimate via comment or PM.

^ (where tbc my median guess still says it will be very difficult - but perhaps feasible given a highly selected group of geniuses)

^ Though with better understanding we might of course be able to predict much better which approach is more promising, rather than thinking they are similarly promising.

^ Tbc, I'd not expect current orcas to be able to compete with current humans in abstract problem solving, because orcas are probably basically not trained in it at all. But we could see how fast they learn it, and maybe compare that to how fast a human who didn't get an education can learn it.

^ I'm currently not quite yet excited enough myself / I am more excited about my current agenda (which I am unusually excited about - the orca thing is still among the top contenders). Also, I have much higher irreplaceability on my agenda, whereas I might not have that great of a competitive advantage on the orca stuff.
vKM4CTjz5fPB7vznb_An_alternative_approach_to_super.txt
{ "file_size": 6630 }
a0128539-0672-4eea-9c52-d718d3de65c9
I have been using Linux (EndeavourOS) for a couple of months now. I've had a lot of issues running it smoothly: things don't work out of the box, and you have to put in a lot of time, effort and brain-cells to make things work. And so, by no means is it a perfect OS, but will I ever go back to using Windows? NEVER.

Why? Because Linux is the superior OS in many ways. It has better memory management, is legally free (the kernel is open-source), has a lot of customisation options (you can do anything), is extremely versatile, has great community support, and so on. If you are a power-user, Linux is for you.

But just a year ago, I believed that switching to Linux would give me all kinds of problems - because I knew that it would be tough - and I ended up sticking to Windows. But now I have gone beyond my fears. I chose to have total control of my PC, instead of being dictated to and limited by Microsoft.

This reminds me of what Immanuel Kant said: "It is so easy to be immature."[1] What he meant was: it is so easy to be a subordinate and let others take decisions for you. It is so easy to remain in your own bubble. It is so easy to be guided. Kant's solution to this problem is "Enlightenment", which he describes as:

"Enlightenment is man's emergence from his self-imposed immaturity. <...> Sapere Aude! [dare to know] 'Have courage to use your own understanding!'—that is the motto of enlightenment."

Dare to know, and to be wise. Dare to get out of your cage and learn to struggle. People, like me from 2 years before, lack the courage Kant is talking about. This is why using Linux is an act of going beyond immaturity for me.

Of course, using Linux is arduous; I won't deny that. I have borked my PC 2-3 times in the past 4 months, had so much trouble working with applications, and it is just not that stable. And you might question: "Why are you using something that creates more problems than it solves?" This is where Albert Camus and Byung-Chul Han come to the rescue.

"The struggle is enough to fill a man's heart," says Camus[2]. Fixing bugs and making things work makes me happy. And I learn a lot in the process; you learn from your mistakes, and that is why it is so powerful. You don't get that opportunity on Windows; everything is too smooth. Everything is the same; there is no mention of negativity, or the Other. According to Han, this lack of negativity and otherness is the crux of the busy, unappealing, monotonous life.[3]

I've used this Linux-Windows example to show that we are not bound; we are free to take a step towards Enlightenment and do what we are meant to do. This "self-imposed immaturity" has introduced a deformity in our lives, sucking away our happiness and satisfaction like tenants who haven't paid rent, and so we must evict it. The rent was due long ago.

^ An Answer to the Question: What is Enlightenment? (1784), Immanuel Kant

^ The Myth of Sisyphus (1942), Albert Camus

^ The Agony of Eros (2017), Byung-Chul Han; my review
h2qZvjbWsxmiqNF3y_Going_Beyond_"immaturity".txt
{ "file_size": 2995 }
670b75c8-6cf8-4496-9bd2-dc14e1e02d9a
Note: thank you to Brita Belli, senior communications manager at Recursion, for connecting me to Charles Baker, a VP at Recursion, who led a lot of the work I'll discuss here + allowed me to interview him about it! I am not affiliated with Recursion in any capacity.

One more note: a few people have pointed out that I got the cell painting invention date wrong. It's actually 2013, from this paper, and not 2016, from this paper. My bad! A few words here have been updated to account for this.

Introduction

At this point, you'd be hard pressed to not have heard of Recursion Pharmaceuticals. They were one of the first biotechs to apply machine learning (ML) to the problem of drug discovery back in 2013 and, currently, seem to be the only winners in the area. I use 'winner' loosely of course — they have yet to push a drug to market, and their recent Phase 2 readouts were disappointing (here is some nuance on the whole subject that most articles leave out, though).[1] Of course, there are caveats about this readout (maybe) being an unfairly negative assessment of the company's capabilities, given that it was the PhD project of Chris Gibson — the CEO of Recursion — dating back to 2013[2], and drugs take time to develop. Nevertheless, they have done something that very few others in their shoes have: survive for years and not be under active threat of going under. This is no small feat in a field filled with otherwise promising companies that have failed that deceptively simple smell test, Atomwise being the latest case. I am as unsure as anyone as to whether Recursion's more grandiose promises (discussed in more detail by Derek Lowe here) will ever come to pass, but at least they are still around.

The bet of the whole company is the following: take a drug (either from a chemical screening library or dreamt up by a model), wash it over a plate of cells, visually observe how the cells react to it, and repeat this a few billion times across many different cells + many genetic knockouts of those cells (which act as models of a disease) + many different drugs. The whole idea is also known as phenomics — the study of the visual phenotype of organisms. In this case, the 'organism' is a small set of cells. Once you've done that, train a model on those petabytes of cellular images, along with the genetic or chemical perturbation applied. The Recursion bet is that the resulting model would acquire an extremely strong understanding of the interaction between the visual morphology of cells, their genetic makeup, and drugs applied to them — predicting the rest given one (or two) of the others.

From there on out, you can do any number of things:

- Screen new drugs by comparing the image of cells given that drug to images of cells with genetic alterations that model a given disease.
- Identify novel mechanisms of disease by looking at how the phenomic clusters of gene alterations cluster with phenomic signals from genes known to be associated with certain diseases.
- Study cell morphology relationships by clustering large sets of genetically perturbed samples.

And so on. It's a fun idea! You can compare this phenotype-based approach to something like target-based discovery, where you have a cellular target in mind (insulin receptor, adrenergic receptor, etc.) and want to optimize a therapeutic to do [something] to it. Historically, target-based discovery has a pretty bad success rate; phenotype-based approaches do much better (though, as with everything, it's nuanced).
This shouldn't be too surprising; biology is complex, and dissolving things down to a single target is hard. The Recursion bet is not just on the phenotype-based approach, but also on scaling it up to an insane degree: >19 petabytes of cellular image + perturbation pairs at last count (2024).

Has this paid off? Realistically, I think it's still a work in progress. They have discussed the capabilities of such a model trained on such data (a self-supervised, encoder-only transformer) in a paper presented at CVPR 2024, where it won a spotlight. The results are pretty interesting in vibe-space alone: a 75% masked image of cells, when put through their model, could yield a reasonably true-to-life filled-in image. In terms of actual clinical utility, I think it's a bit hard to say. While they do quite well on the benchmarks given in the paper, I think it's an open question how good the benchmarks actually are. Or, at least, how well they translate to the main problems in taking drugs to market — target selection and toxicology. Really though, opining on all this isn't super useful for anyone; the final answer will be in their clinical trial readouts. Of which there will be 6 more in the next ~18 months, per their 2024 Download Day in July (and 10 if counting their proposed merger with Exscientia). The one at the top, REC-994, wasn't great, but perhaps the others stand a shot.

But this essay isn't about Recursion's drug discovery strategy. Much has already been written about that, and I certainly don't have a unique take there. This essay is about how Recursion takes pictures of cells in the first place, why it (officially) changed its approach just a few months ago, and why I think the decision to change it is a lot more interesting than people think.

This post really stemmed from a LinkedIn post by Charles Baker, a VP at Recursion Pharmaceuticals, that I saw a few weeks ago. It described how the company had recently moved over from one cellular imaging modality (cell painting, created in 2013) to a much older one (brightfield imaging, arguably created in the 1500s). Very, very important context: Recursion was founded on the former assay and had stuck to it for over a decade. Moving over to something new this late in the game was a surprising move! Yet relatively little has been written about it. There's that one early post by Charles, one more by Brita Belli — a senior communications manager at Recursion — and that's it.

This essay is meant to fill the missing gap here. What is brightfield imaging? What is cell painting? Why did Recursion focus on the latter first, and then switch to the former recently? And how did the transition process go internally? This essay will discuss all that.

What is brightfield imaging?

Brightfield imaging is, as mentioned, an old technique. Its origins can be traced back to the 16th century, when Dutch pioneers in optics — Hans and Zacharias Janssen — first invented the compound microscope. In the 17th century, Antonie van Leeuwenhoek used improved versions of the microscope, capable of far greater magnifications, to observe microscopic life. Now, in the modern era, it is routinely used by scientists to study intricate cellular behavior. At its start, brightfield imaging wasn't even called that; it was simply the only way to observe things through a microscope. Only later was it named 'brightfield' to distinguish it from alternative, newer techniques (e.g. darkfield imaging).
The principles of it are simple: you shine visible light through [something] and observe what comes out the other side. In the case of cells, you place them between glass slides, shine light from underneath them, and observe what's visible from the top using a microscope. Different parts of a cell interact with light differently. The cell membrane, nucleus, and various organelles all have their own differences in light absorption, creating contrast in the final image. The result is a grayscale picture — as cells are generally colorless or transparent — where darker areas represent denser or more light-absorbing parts of the cell, while lighter areas show where light passes through more easily. It's fundamentally equivalent to holding up a leaf to the sun — from the shadows arise structure.

It's incredibly cheap, simple, and easy to perform. As a further benefit, cells generally don't care if light is being shone on them, so brightfield imaging doesn't alter cell behavior either. For centuries, brightfield imaging was a way — again, the only way — for humans to deeply study the behavior of microscopic life, and it did that job splendidly.

There's really only one real issue with brightfield imaging: it's really, really hard to observe anything interesting. Here is a single cell, imaged using brightfield. You can vaguely see some details about the state of the cell: size, shape, and maybe some details about its internals. It's hard to see much of anything of immediate note though — we'd really need to squint and stare at the fuzzy blob to get a sense of the structure.

In concrete terms, our problem here is one of 'low contrast'. We're depending on the shadows of a cell to give us a sense of the structure, but it unfortunately seems like most of a cell is quite transparent. There is relatively little difference between the highest and lowest points of absorption across a cell. And, even more unfortunately, this problem of low contrast isn't unique here, but a problem across the microscopic world. While a seasoned researcher who has only studied one specific type of cell — say, hepatocytes — may not find this to be an issue when looking at hepatocytes, they may have trouble when looking at neurons. Cells have an immense level of heterogeneity, and brightfield imaging makes it quite hard for any one human to study many different types of cells without constant reference look-ups.

Is there a way we could improve the contrast of the cells? We could perhaps be a bit more clever about how light is shone through the biological specimen, as is done in phase-contrast microscopy. For example:

(Brightfield versus phase-contrast microscopy. From here.)

But the bump in contrast here is only partial; many of the finer details are missed or still obscured.
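To make 'low contrast' concrete: one standard measure is Michelson contrast, (I_max - I_min) / (I_max + I_min). Here's a quick numerical sketch with invented intensity values, comparing a mostly transparent specimen against the kind of stained image we'll get to in a moment:

```python
# Michelson contrast: (I_max - I_min) / (I_max + I_min). Intensity values
# below are invented purely for illustration.
import numpy as np

def michelson_contrast(image):
    i_max, i_min = image.max(), image.min()
    return (i_max - i_min) / (i_max + i_min)

# A mostly transparent cell barely modulates the transmitted light...
brightfield = np.array([0.90, 0.95, 0.88, 0.93])
# ...while a fluorescent stain lights structures up against a dark background.
stained = np.array([0.05, 0.85, 0.10, 0.90])

print(michelson_contrast(brightfield))  # ~0.04: structure is hard to see
print(michelson_contrast(stained))      # ~0.89: structure pops out
```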
Is there a better way? Yes: cell painting.

What is cell painting?

If you carefully sift through millions of chemicals, you'll stumble across a set of dyes that are chemically attracted to specific cellular biomolecules. If these dyes are fluorescent — something that will absorb light and re-emit it — all you'd need to do to bump up the contrast of your brightfield image is to wash the cells with the dye, apply light the same as before, and that specific biomolecule would be brightly lit up, distinguished from the gray mess around it.

For example, consider a chemical dye that binds to DNA. Since DNA is primarily stored in the cell's nucleus, we can safely rely on a dye that attaches to DNA as a proxy for nucleus visualization. And, fortunately for us, there is a class of fluorescent dyes that do exactly this, often referred to as Hoechst stains. Consider the same cell as before, but with DNA-binding dyes washed over it to the right. The nucleus lights up! (From here)

And, even more fortunately for us, there are many such dyes beyond DNA-binding ones alone. Phalloidin can bind to F-actin, revealing the cell's cytoskeleton. Wheat Germ Agglutinin can bind to sialic acid and N-acetylglucosaminyl residues, making the plasma membrane glow. And so on. Cell painting — published in 2013 from Anne Carpenter's lab at the Broad Institute — pushes this idea of 'using fluorescent dyes to increase contrast' to its logical conclusion: five to six dyes used in concert with one another, to reveal eight broadly relevant cellular components or organelles.

Quick side note: this might immediately seem like a reasonably weak 'logical conclusion'. Why not more dyes? Why not dozens, or even hundreds? As with anything based in fluorescence, the bottleneck is spectral overlap. You need dyes whose emission wavelengths won't crowd each other out, and emission spectra are unfortunately quite broad. Six simply seems to be the upper limit to ensure that you don't face overlap. But you could very well correct for the overlap, which is what is often done in flow cytometry (e.g. spectral unmixing). Why don't they do this in cell painting? Because the assay is meant to be performed at high-throughput scales — millions of times — it is implicitly designed to meet some Pareto-optimal metric: simple to perform, containing a lot of information, and cheap to run. This is also why the assay only uses inexpensive dyes and not, say, expensive fluorescent antibodies. Pushing it further would likely require some extra computational lifting + specialized equipment, and have questionable value. Five is simply where they landed.
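For the curious, here is a bare-bones sketch of the linear spectral unmixing mentioned above. This is the generic textbook technique, not any specific instrument's pipeline, and the signature matrix values are invented for illustration:

```python
# Generic linear spectral unmixing; signature matrix values are invented.
import numpy as np

# Rows: 4 detection channels; columns: 3 dyes whose signatures overlap.
signatures = np.array([
    [0.9, 0.3, 0.0],
    [0.1, 0.7, 0.2],
    [0.0, 0.2, 0.8],
    [0.0, 0.1, 0.3],
])
true_abundances = np.array([1.0, 0.5, 2.0])   # per-dye signal we want back
observed = signatures @ true_abundances        # what the detector measures

# Least-squares unmixing recovers per-dye contributions despite the overlap.
recovered, *_ = np.linalg.lstsq(signatures, observed, rcond=None)
print(recovered)  # ~[1.0, 0.5, 2.0]
```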
And how gorgeous these dyes are! In order, what is shown is RNA (orange), endoplasmic reticulum (green), mitochondria (red), cytoskeleton/cell membrane (yellow), and the cell nucleus (blue). The last picture shows the overlay of all of them together.

Now that we have ultra-high-contrast images, what can we do with them? For one, you can start to scale up the ML applied to these images. Cell painting is equivalent to a form of physical pre-processing, ensuring that the most salient parts of your microscopy image are brought into sharper focus. At the start, people condensed their cell painting images into thousands of hand-crafted features — size of cells, shape of cells, number of nuclei, and so on, using tools like CellProfiler (also created by Anne Carpenter's lab) — and trained models on those features to predict the genetic or chemical perturbations applied to the cells. Of course, as the ML field slowly abandoned dataset priors and moved over to unstructured representations, so did the biology field. Circa 2019, using raw cell painting images as model input was confirmed as superior to hand-crafted features. This is an important point, and we'll come back to it later.

And two, far more salient to this essay, you can start a company. To be fair, Recursion as a company was more closely aligned to general phenotypic methods for drug discovery than to any one phenotyping technique. Yet they nevertheless became closely associated with cell painting, with nearly every mention of the methodology in news articles mentioning Recursion's name and their associated grand ambitions (like so).

It also doesn't hurt that Anne Carpenter has served on their scientific advisory board since the startup was founded. From the piece, which describes Anne's initial interactions with Recursion's founders in 2013:

Over burgers at the nearby "Miracle of Science" pub that evening, I peppered them with questions over the course of two hours, more intensely than any PhD thesis defense I've witnessed. Normally, I would be a bit gentler on two grad students with a dream, but I am quite skeptical about startups by nature, and these two were planning to launch a company in the field I had pioneered: image-based profiling. I'm a bit chagrined to think how I treated them, but I certainly didn't want a company so close to my lab's research to hype big and fizzle out. In fact, my inherent skepticism is a large part of why I have never served on the Scientific Advisory Board of another start up company - before nor since Recursion. Survey a few MIT professors and find out how unusual that is! But there was something special about this situation. First, I knew the science very deeply: the plan was to make use of the software my lab had created and open-sourced, called CellProfiler, to extract features from images. They would also use the image-based profiling assay my lab co-invented with Stuart Schreiber's team, called Cell Painting, which uses cells' morphology features extracted from images as a readout of the impact of a disease, drug, or genetic anomaly. To be fair, in 2013 I wasn't convinced that image-based profiling would work as well as it has turned out to, and across so many applications.

And the theoretical utility of cell painting quickly bore out over the years. Models trained on raw cell painting images could predict the chemical perturbations applied to the cells. Even more interestingly, such models could even predict the mechanism of action of the perturbation — far better than models trained on purely structural chemical data alone. Their utility popped up in a variety of areas beyond that, from toxicological analysis to predicting cell stress response.

Again, this essay isn't about whether improved phenotyping approaches (e.g. cell painting) in early R&D strongly translate to more/better actual drugs released. This hasn't practically turned out to be the case, at least for now, though some early results suggest a reversal in that trend. As I mentioned previously, time will tell what the utility of scaling up phenotyping screens will bring. What this essay is about is why cell painting — despite being such a seemingly promising assay — was abandoned and replaced with its centuries-old predecessor.

To those in the ML community, the answer may be obvious: it was never needed in the first place.

The problem with cell painting

The trend of ML in general over the past ~15 years has been to strip away more and more of the biases you've encoded about your dataset as you feed it into a model. Computer vision went from hand-crafted interpretable features (e.g. number of circles, number of black pixels when thresholded, etc.), to hand-crafted uninterpretable features (e.g. scale-invariant feature transform), to automatically extracted uninterpretable features (e.g. hidden dimensions of a convolutional neural network). In other words, the bitter lesson: pre-imposing structure on your data is useful for a human, but detrimental to a machine. Cell painting is a casualty of this truth. The method simply highlights what is already present in brightfield images, which may be useful for hand-crafted features that benefit from strong contrast, but is neutral at best for deep learning methods.
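To make the two regimes concrete, here is a deliberately simplified toy sketch - my own illustration, not Recursion's or CellProfiler's actual pipeline - of hand-crafted morphology features versus a small convolutional encoder fed raw pixels (PyTorch is assumed):

```python
# Two regimes on one fake grayscale "cell image": a fixed feature recipe
# versus a learned representation. A caricature, not a real pipeline.
import numpy as np
import torch
import torch.nn as nn

image = np.random.rand(256, 256).astype(np.float32)

# Regime 1: CellProfiler-style hand-crafted features, drastically simplified.
mask = image > 0.5  # crude foreground segmentation
features = np.array([
    mask.sum(),                                  # "cell area" in pixels
    image[mask].mean() if mask.any() else 0.0,   # mean foreground intensity
    image.std(),                                 # crude texture proxy
])

# Regime 2: raw pixels into a small convolutional encoder; the network
# decides for itself which image statistics matter.
encoder = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> 16-dim embedding
)
embedding = encoder(torch.from_numpy(image)[None, None])

print(features.shape, embedding.shape)  # (3,) vs torch.Size([1, 16])
```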
As I mentioned earlier, the superiority of using raw, unaltered cell painting images as input over hand-crafted features was established by 2019, but, as time went on, the utility of cell painting at all also became suspect. Per a 2022 Scientific Reports paper, one could nearly perfectly predict what the cell painting image would be given a brightfield image. Here is a clearer comparison from an ICCV 2023 paper, which found the same thing.

Of course, we'd be rushing if we concluded from these results alone that cell painting is useless. Perhaps there are extremely subtle, but important, differences between the 'predicted' and ground-truth cell painting images that naive eye-balling wouldn't tell us anything about. We'd need a study that took a model trained on brightfield-only images and a model trained on cell painting-only images, and compared the two on tasks of interest. Luckily, circa 2023, we have one of those, which compares the two modalities on predictions of chemical perturbation across ten mechanisms of action.

And the results are quite clear. If you rely on CellProfiler-extracted features, cell painting wins. But if you use raw cellular images, there is relatively little predictive difference between brightfield and cell painting images. There's still a lot of follow-up work to do here to ensure that this trend continues across many different cells and perturbations, which is something we'll discuss a bit more later, but it feels unlikely that this is all a coincidence.

We've established equivalency, at least roughly. But is there a chance that brightfield could yield something beyond cell painting alone?

Why brightfield is (maybe) even better

On face value, the value of brightfield has a lot to do with its simplicity. Cell painting is also simple, relatively, but there is a whole protocol that goes along with it. Entire consortiums have been spun up to optimize the process further, dyes cost money, and artifacts in the process may still arise. Brightfield solves all these problems. So the first-order impact of Recursion switching away from cell painting is making their dataset higher-quality, faster to acquire, and cheaper to create.

But there is another side benefit of going the brightfield route: the ability to do time-lapse microscopy. The ability to observe, in real time, how a cell behaves from second to second, hour to hour, and day to day. At least some of the dyes in cell painting can be applied to live cells; a few of them used in the process aren't technically cytotoxic and were specifically chosen to minimize their effect on cell behavior. Unfortunately, behavioral deviations still arise. A study from 2010 showed that Hoechst stains — the same ones we discussed earlier that bind to DNA — can cause cell apoptosis if the dyes are repeatedly excited with light. And, in fact, the whole concept of fluorescent staining likely invariably causes some level of phototoxic effect on the cell over time. Because of this, the cell painting assay isn't applied to live cells!
In practice, the Recursion cell painting process looks like this (following the protocol outlined here):

1. Grow cells.
2. Apply a chemical compound or genetic therapy (overexpression or knockdown) or both to the cells.
3. Somewhere between 24-48 hours later, fix, permeabilize (allow things to pass through the cell membrane), and stain the cells with the cell painting dyes. Cell fixation refers to the process of 'freezing' the cells in place, preventing further cell decay by terminating ongoing biochemical reactions. The fixation chemical that Recursion relies on is paraformaldehyde, a crosslinking fixative, which forcibly creates covalent chemical bonds between the proteins in a cell and [everything] around it. At least for several weeks, this will (mostly) perfectly preserve the cell's appearance.
4. Within 28 days of cell dyeing + fixation, image the cells and use the resulting images for whatever you want.

There's a reasonably strong assumption we're making here: most of the useful information about how a perturbation affects a cell is observable at the exact moment of freezing, 24-48 hours after the perturbation first occurred. This doesn't feel immediately obvious: there is potentially really useful data in understanding how a compound is worming its way into a cell immediately after application, whether long-term changes remain in the cell morphology weeks after application, and so on. With cell painting, all that information is tossed away. But if you use brightfield, you can image as much as you want, seeing the full temporal scope of cellular responses to perturbations.

Is there precedent to believe that viewing the temporal state of cells actually helps to understand their behavior? We certainly know that many phenotypic aspects of cells are time-bound: cell motility, the movement of membrane proteins, and so on. But do we gain anything from actually watching those time-bound events as they occur? Naively, we'd expect that to be the case. After all, look at this video of brightfield pre-adipocytes! Look how much is going on! Surely at least some of this is useful information!

It's…unfortunately not super clear cut from the literature. If you try to find papers about the utility of time-lapse imaging in cells, you'll find dozens of results claiming that the temporal aspect is deeply important. But they are all somewhat suspect. One paper published in Science is very outright with this, titled 'Live-cell imaging and analysis reveal cell phenotypic transition dynamics inherently missing in snapshot data'. But all that's actually revealed is that there is a 'fork' in cell state transition dynamics — not that the fork is actually meaningful in using the data for anything.

Closer to the topic of understanding the impact of chemical perturbations on cells, there is another paper titled 'Long-term Live-cell Imaging to Assess Cell Fate in Response to Paclitaxel'. The results are a bit mixed; cell responses to chemotherapeutics do display a fair bit of heterogeneity that temporal approaches help tease out. But again, the heterogeneity is observable from the end-state, and it's unclear how important knowing the 'path to heterogeneity' is.

The most relevant paper of the lot is an article titled 'Time series modeling of live-cell shape dynamics for image-based phenotypic profiling'. Here, they more directly show that the inclusion of temporal dynamics does improve a model's ability to separate out the phenotypes of cells treated with one drug and cells treated with another drug.
But…the actual result here is incredibly weak: the 'improvement' in accuracy is going from correctly predicting 5 out of the 6 drugs applied to a set of cells using fixed-cell methods, to 6 out of 6 drugs. Past that, with such a small sample size of drugs/conditions, it's difficult to conclude anything from here.

Yet, despite limited evidence supporting the value of cellular dynamics, I'm going to take an unusually optimistic stance here: I strongly believe there is an immense amount of signal hidden in cell dynamics. Most of the existing papers on the subject are extremely small in sample size, use human-interpretable features for dynamics (cell movement rates, contact time, etc.), or don't even use ML. Scaling up time-lapse microscopy and throwing sufficient ML at it has literally never been done before. Cell dynamics is incredibly understudied; it's very much one of those things in biology that everyone admits is probably really important, but there just was never a scalable way to study it. Until now!

There's a really fun confluence of things going on here:

- ML has gotten really, really good over the last decade, making it so that extracting manual features from cell videos isn't necessary.
- Live-cell brightfield imaging can be relied upon over fixed cell painting methods.
- There is a 600-person, multi-billion dollar biopharmaceutical startup that is currently collecting petabytes of time-lapse cell-perturbation brightfield videos: Recursion Pharmaceuticals.

Again, it's an open question how useful time-lapse microscopy will be for the ultimate end goal of actually developing new drugs. Drug discovery is a graveyard of tools that sound really interesting, teach the field very cool things about biology, and ultimately end up doing nothing at all for the hard problem of making better drugs. Paying attention to the temporal dynamics of cells may very well be another entry in this graveyard: it may tell us a bit about some niche set of diseases, but nothing beyond that. But, regardless of what happens, I think the future here is enormously interesting.

Let's move on. Cell painting is equivalent to brightfield, and brightfield may be even better because of the potential for time-lapse microscopy. What's Recursion to do with all this information?

The transition process

One of the immediate questions I had about the cell-painting-to-brightfield transition wasn't the science, but how it even happened logistically. Recursion is a platform company amongst platform companies — the cell painting assay is deeply tied to their science, their marketing, and everything about how they position themselves. Even if the science pointed towards brightfield being a good move, it would've been a massive undertaking for the behemoth of a startup to switch to it a decade into the game.

I talked to Charles Baker about this! He's an automation-scientist-turned-VP at Recursion Pharmaceuticals who has worked there for six years and, crucially, was deeply involved in moving the company from cell painting to brightfield imaging. He told me that the initial inklings that brightfield was sufficient actually came from an internal 2021 hackathon project, a yearly event at Recursion dubbed 'Hack Week'. There, a team demonstrated that brightfield images, when used as training data, gave similar results to cell painting. But what I found particularly interesting was that cell painting was still showing stronger signal than brightfield.
The physical preprocessing that cell painting was doing still seemed meaningful, even if you relied on raw images as input. But there was something here, and Charles worked on exploring it further.

Biotechs often operate in a 'don't fix it if it isn't broken' mindset with regard to their primary assay (given how expensive any mistakes can be), so any attempt to modify that assay must be very de-risked. One of Charles' concerns was that brightfield may be equivalent to cell painting in some settings, but not in others. In some permutation of cell lines, perturbations, and genetic knockouts, chemical dyes may suddenly become important. Because of this, papers on the subject that came from outside the company couldn't be outright trusted, as Recursion operated at a scale of biology that very, very few other institutions did. More testing was necessary.

And they did exactly that, across thousands of experiments. They re-adapted their software to rely on brightfield, altered sections of their lab to accommodate it, and tested out many different cell lines and perturbations. Eventually, they came to the conclusion first suggested by the hackathon: it didn't seem like there were any areas where cell painting was uniquely superior. By early 2023, Recursion had started to use brightfield imaging in their normal workflows. And, by summer 2024, brightfield imaging was Recursion's dominant imaging modality.

(From here, internal results produced by Recursion)

Why didn't brightfield immediately match cell painting results in the original hackathon project? Charles had this to say: "Hackathon projects aren't a perfect thing. We needed more time spent training the model. We also benefited from capturing data in additional cell types to increase the diversity of the training data and allow the model to generalize. That data collection happened after Hack Week."

How hard was this? Surprisingly, pretty simple in Charles' eyes. People were generally excited once the (very high) bar for equivalency had been met — it helps that brightfield is way nicer to run at scale than cell painting — so the whole transition process was a fair bit less painful than I had initially assumed it'd be.

What happened to the millions of images and petabytes of cell painting data that Recursion had collected over the last decade? I asked Charles about this, and he said there are no plans to deprecate any of it. After all, it's all really the same data, represented with a visually different modality but information-wise the same. He did suggest the possibility that cell painting may still be relied upon in some cases. The two years of brightfield testing that Recursion did couldn't possibly cover every edge case, and there's always the chance that there are some cell lines or perturbations that are insufficiently captured via brightfield. So their models will be taught with both modalities for now.

Finally, I asked about something I've been harping on in this essay: is there much utility in the time-lapse microscopy unlocked via brightfield? Unfortunately, the answer is still hazy. Charles agreed that dynamics is understudied, that there's a lot of new biology there, and that he's excited about exploring it. But how it impacts drug discovery is still something to be determined. After all, the lag time for the value-add of these sorts of things is always incredibly long.
For now, Recursion is imaging on a day-by-day level, which is more relevant for genetic perturbations than for chemical ones (where second-by-second imaging is more useful), and testing out how useful those coarse-grained videos are. He also said that even if dynamics turns out not to be useful, the move to brightfield imaging is still saving an enormous amount of money, time, and man-hours, so it's still worth it.

And…that wraps up this essay! It ended up being much longer than expected, but I'm happy that I covered the subject in the detail I wanted. I'm deeply interested in writing more of these sorts of 'untold scientific stories' and deeply expanding on their scientific nuances, so please reach out to me if you have one that you'd want told! And again, shout-out to Brita Belli and Charles Baker for talking to me and editing the final draft of this piece!

^ Brita noted that the internal results of the Phase 2 trial were treated pretty positively internally at Recursion, given positive safety results and positive secondary endpoint results. How positive were these secondary endpoints? Some context: CCM, the disease the Phase 2 trial drug was for, stands for Cerebral Cavernous Malformation, which is 'a rare condition that occurs when a collection of small blood vessels in the brain or spinal cord become enlarged, irregular, and prone to leaking'. From the press release, the results of the trial were:

Magnetic resonance imaging-based secondary efficacy endpoints showed a trend towards reduced lesion volume and hemosiderin ring size in patients at the highest dose (400mg) as compared to placebo. Time-dependent improvement in these trends at the 400mg dose was also observed in this signal-finding study. Improvements in either patient or physician-reported outcomes were not yet seen at the 12 month time point.

How much should we trust the utility of these secondary endpoints? Lesion volume didn't seem to be related to outcomes (outside of severe cases) in one study in children. The primary issue with CCM lesions seems to be not their size but their permeability, which doesn't correlate well with size. Lesion sizes also seem to be dynamic, potentially shrinking while disease progression gets worse (though, in the long term, the lesion always gets larger). Removal of the hemosiderin ring does seem to sometimes be beneficial, but it does seem to be somewhat controversial how useful it is. I'm unsure how well reduction of the size of the ring maps onto surgical removal of the ring, but it's the only parallel we can draw here, given that CCM treatments are generally all surgical. Because of all this, I'd be bearish on this particular drug, unless the time-dependent improvements really pull through. It's also possible there is something interesting in the data that has yet to be released; why else would Recursion pay for another trial? Again though, this isn't hating on Recursion's platform. REC-994 is barely a reflection of the power of phenomics (see the footnote below this).

^ The 'CEO's PhD project' phrase comes up really often when referring to this first REC-994 drug, but nobody ever seems to expand on it.
I finally found an article that discussed it more deeply; here are the important excerpts glued together:

Gibson hit on the opportunity offered by machine learning in cell imaging when he was a PhD student in Dean Li’s lab, at the University of Utah, unravelling the biology of cerebral cavernous malformation (CCM)… Frustrated by the pitfalls of the target-first approach to drug discovery, the team developed a phenotypic screen to hunt for its next set of leads… Gibson, faced with the prospect of having to manually review the images of cells to identify hits, set up a machine learning programme to cluster the drugs on the basis of their overall morphological effects… Two computer-suggested candidates — tempol and cholecalciferol — passed with flying colours in secondary, tertiary and quaternary follow-up screens… Their phase I candidate REC-994 [now Phase 2] is the superoxide dismutase mimetic tempol, one of the compounds that Gibson’s algorithm initially identified as a candidate for treating CCM.

TLDR: REC-994 is genuinely the result of a PhD project and is a poor reflection of how good Recursion-produced drugs could be.
5SsSZx5dMkRksrT85_Why_Recursion_Pharmaceuticals_ab.txt
{ "file_size": 35686 }
f6a9369f-76f2-45f3-ae90-19e31b40b145
The first two sections are below:

Increasingly powerful AI systems have the potential to accelerate scientific progress, unlock new medical treatments, and grow the economy. But along with the remarkable new capabilities of these AIs come significant risks. Governments should urgently take action on AI policy in the next eighteen months. The window for proactive risk prevention is closing fast. Judicious, narrowly-targeted regulation can allow us to get the best of both worlds: realizing the benefits of AI while mitigating the risks. Dragging our feet might lead to the worst of both worlds: poorly-designed, knee-jerk regulation that hampers progress while also failing to be effective at preventing risks.

In this post, we suggest some principles for how governments can meaningfully reduce catastrophic risks while supporting innovation in AI’s thriving scientific and commercial sectors.

Urgency

In the last year, AI systems have grown dramatically better at math, graduate-level reasoning, and computer coding, along with many other capabilities. Inside AI companies, we see continued progress on as-yet undisclosed systems and results. These advances offer many positive applications. But progress in these same broad capabilities also brings with it the potential for destructive applications, either from the misuse of AI in domains such as cybersecurity or biology, or from the accidental or autonomous behavior of the AI system itself.

In the realm of cyber capabilities, models have rapidly advanced on a broad range of coding tasks and cyber offense evaluations. On the SWE-bench software engineering task, models have improved from being able to solve 1.96% of a test set of real-world coding problems (Claude 2, October 2023) to 13.5% (Devin, March 2024) to 49% (Claude 3.5 Sonnet, October 2024). Internally, our Frontier Red Team has found that current models can already assist on a broad range of cyber offense-related tasks, and we expect that the next generation of models—which will be able to plan over long, multi-step tasks—will be even more effective.

On the potential for AI to exacerbate CBRN (chemical, biological, radiological, and nuclear) misuse, the UK AI Safety Institute tested a range of models from industry actors (including Anthropic) and concluded that:

...models can be used to obtain expert-level knowledge about biology and chemistry. For several models, replies to science questions were on par with those given by PhD-level experts.

AI systems have progressed dramatically in their understanding of the sciences in the last year. The widely used benchmark GPQA saw scores on its hardest section grow from 38.8% when it was released in November 2023, to 59.4% in June 2024 (Claude 3.5 Sonnet), to 77.3% in September (OpenAI o1; human experts score 81.2%). Our Frontier Red Team has also found continued progress in CBRN capabilities. For now, the uplift of having access to a frontier model relative to existing software and internet tools is still relatively small, but it is growing rapidly. As models advance in capabilities, the potential for misuse is likely to continue on a similar scaling trend.

About a year ago, we warned that frontier models might pose real risks in the cyber and CBRN domains within 2-3 years. Based on the progress described above, we believe we are now substantially closer to such risks. Surgical, careful regulation will soon be needed.
DqNxuLH3kaiwwEmWZ_Anthropic_-_The_case_for_targete.txt
{ "file_size": 3427 }
67d483e9-37c3-4f8e-ba31-47fba3889451
Cross posting from my personal blog: https://spiralprogress.com/2024/10/28/the-shallow-bench/

Spoilers for "Project Hail Mary"; stop reading here if you don't want to be spoiled.

Project Hail Mary follows Ryland Grace, a disgraced academic turned high school biology teacher who gets selected as part of a crew of three tasked with saving all of humanity from an impending alien threat. Not even in sci-fi does a premise this absurd get presented without explanation. What does PHM offer?

"They found a collection of genes that give a human 'coma resistance.'"

"The main problem is this: On average, only one in every seven thousand humans has that genetic sequence."

"We wouldn't be able to send the most qualified people. We'd be sending the seven-thousandth most qualified people."

In real life, I work in a complex and niche field that my skill set only very tangentially qualifies me for. Periodically, I'll meet people who ask how I ended up there. They're not trying to be mean; it's more like incredulity. "Seriously, you're who humanity tasked with this job?" And I look at them and want to say "Yes, I am also not who I would have picked." And yet… and yet here I am. And there is no one else.

A lot of the AI Alignment people I've met have a similar vibe. Sometimes they used to work in finance, or neuroscience, or software engineering on pretty mundane products. And then they spent a few months in self-study, maybe did a "fellowship" or went to some workshops, or otherwise transitioned into the field, and now they are some of the top people at top labs tasked with this fairly important problem. One insider estimates that there are 300 alignment researchers total, and only 7 at OpenAI. He was on the team and was later fired along with several colleagues, so maybe the number is now closer to 0. In any case, it's an incredibly small field with an incredibly small talent pool to pull from.

In sports, a deep bench refers to a team that has not only a great starting lineup, but also great players ready to sub in. This is hugely intimidating to face. You can exhaust the starters. You can foul them out. Even if someone gets hurt there's another massively talented player ready to take his place. This is the opposite of how things feel in every important field I've gotten a look at. The bench for human capital is incredibly shallow.

I am not any kind of powerful insider. I get invited to some rooms, but miss out on a lot. Maybe I just don't know the right people? Consider instead the much more credible perspective of Nat Friedman who writes: "In many cases it's more accurate to model the world as 500 people than 8 billion". Even sci-fi novels only posit a 7000:1 ratio, but Nat is claiming a much more aggressive ten million to one. How is that possible?

Sometimes when asked about my role, I can't just joke and brush it off. It is a serious question asked by a serious person who really wants to know: why are you the person doing this? The abstract answer is that unlike in PHM, there is no global government with authoritarian power to appoint people to positions of arbitrary authority, and no guarantee in life that roles are filled with anything close to efficiency. There's just no mechanism that would make this happen. Maybe hedge fund managers are selected pretty efficiently since they presumably get fired if they don't make money, but even there we're just talking about lowering the false positive rate.
There is no mechanism to force anyone who could be a great hedge fund manager to go into finance instead of, say, physics or politics.

But the concrete answer is that I sit there, and I enumerate every other person in the world who could be doing my job instead. And I say "Arnold isn't doing it because he just had a kid and doesn't want to leave London. Beth can't do it because she's doing something more important. Charles could do it, but his visa got denied and there's no telling when it'll come through. And Daisy is so burned out from her previous job that every time I ask she just sighs until I hang up." And that is the complete list of plausible candidates!

An important lesson, then, is not to over-index on abstract reasoning when dealing with really small sample sizes. Sometimes life is just pretty discrete and things that "should" happen don't.

Another lesson is to try to get as close as possible to the specific people in question if you're seriously trying to model a field. In the case of AI Safety, it's wildly insufficient to talk about the field's incentives or game theoretic dynamics. You have to talk about the motivations, interests and beliefs of the actual set of actors involved, and if you don't even know who they are, you are not going to get far.

Finally, this view is just a reason to do things you don't feel qualified for. I wouldn't, for instance, compete with Elon on self-landing rockets, because he's already doing it and it seems to be going fairly well! But if something needs to happen, and no one else is doing it, don't psych yourself out. You might be wildly under-qualified; you might not be the ideal person for the job. But tragically often, there just isn't anyone else.
9XtMrLzuf6Pmymckv_The_Shallow_Bench.txt
{ "file_size": 5263 }
3aac4617-1b11-46c9-a9fc-d0bcb2043c6e
Testing Cross-Model Consistency in Economic and Policy Predictions

Building on Pham & Cunningham (2024)'s findings on narrative-prompted predictions, I conducted a large-scale experiment comparing how different LLMs forecast economic and policy outcomes under different electoral scenarios. GPT-4o, GPT-4o-mini, and Grok all projected better outcomes under a Harris administration than a Trump administration, and Grok had the largest gaps.

Methodology

To elicit these forecasts, I prompted GPT-4o, GPT-4o-mini, and Grok to simulate the reporting of 2025 metrics under different administrative scenarios. I ran 500 trials per model/metric using narrative prompting techniques shown to improve LLM forecast accuracy. (A simplified sketch of the trial loop appears at the end of this post.) For example, I used this prompt for real GDP per capita:

Write a scene from January 2026 where a senior Federal Reserve economist is drafting their analysis of the 2025 annual economic data. This is set in a world where {candidate} won the 2024 election and took office in January 2025.

Key context:
- Historical real GDP per capita (2017 dollars): {history from 2009 to 2023}

After the economist thinks through what affected the economy, end your response with a single line containing ONLY the GDP per capita number, without the year. For example: 68500

Results

All models predicted better outcomes under a Harris administration, with consistent directionality but varying magnitudes:

| Outcome | GPT-4o | Grok | Ratio |
|---|---|---|---|
| PM2.5 reduction (µg/m³) | 1.26 | 1.51 | 1.2 |
| Supplemental Poverty Measure reduction (pp) | 1.76 | 3.42 | 1.9 |
| Real GDP per capita increase (2017 $) | 388 | 802 | 2.1 |

Grok predicted 1.2x to 2.1x larger positive effects than GPT-4o. GPT-4o-mini also produced differences in the same direction as GPT-4o. Claude and Gemini refused to provide politically oriented predictions.

Conclusion

While my use of conditional LLM forecasts is not entirely novel (see Faria e Castro & Leibovici (2024) on conditional inflation forecasts), I have not yet seen examples conditional on electoral outcomes. Accordingly, I'm not aware of backtesting studies in such circumstances that could reveal LLMs' accuracy in such tasks.

As an economic forecaster myself (I run PolicyEngine, a nonprofit that provides open-source software to simulate economic policies, though I conducted this research independently), I am especially interested in the intersection of LLMs and traditional approaches like microsimulation for improving accuracy. I welcome feedback on these results and ideas for combining AI and computational methods for prediction, especially in economics and other social sciences.

Code and full working paper: github.com/MaxGhenis/llm-presidential-outcome-forecasts
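For readers who want the shape of the methodology without opening the repo, here is a minimal sketch of what one trial loop could look like. This is not the repo's actual code: the prompt is truncated, the parsing is simplified, and it assumes the OpenAI Python client (so it covers the GPT models, not Grok).

```python
# Illustrative sketch of one narrative-prompting trial loop (not the repo's code).
# Assumes the OpenAI Python client (openai>=1.0); error handling is minimal.
from openai import OpenAI

client = OpenAI()

PROMPT = """Write a scene from January 2026 where a senior Federal Reserve economist
is drafting their analysis of the 2025 annual economic data. This is set in a world
where {candidate} won the 2024 election and took office in January 2025.
...
End your response with a single line containing ONLY the GDP per capita number."""

def run_trials(model: str, candidate: str, n_trials: int = 500) -> list[float]:
    """Collect numeric forecasts by parsing the last line of each completion."""
    results = []
    for _ in range(n_trials):
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT.format(candidate=candidate)}],
        )
        last_line = response.choices[0].message.content.strip().splitlines()[-1]
        try:
            results.append(float(last_line.replace(",", "").replace("$", "")))
        except ValueError:
            continue  # skip trials where the model ignored the output format
    return results

harris = run_trials("gpt-4o", "Kamala Harris")
trump = run_trials("gpt-4o", "Donald Trump")
print(sum(harris) / len(harris) - sum(trump) / len(trump))  # mean projected gap
```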
WwZApEvHKLJjvxnXu_Using_Narrative_Prompting_to_Ext.txt
{ "file_size": 2652 }
30793bcc-4a7e-4347-b211-6bd2729e99bd
Introduction

This is a short summary of my experience attending the ML4Good UK bootcamp in September 2024. There are 2 previous experience reports I link to at the bottom, but because the program is refined each time, I wanted to describe my experience and add my two cents. This is useful for you if you are contemplating applying for the camp, or if you want to learn about AI Safety field building efforts. For context, I studied computer science, have been working as a software engineer for a few years and have had a hobby interest in AI safety for about 2 years (e.g., I did the BlueDot Impact AI safety fundamentals course).

Overview of the program

The bootcamp is free (including room and board), and happens over 10 days at CEEALAR in the UK. We had participants from all over Europe from multiple backgrounds, with most people about to finish or just having finished their degrees. Majors skewed towards computer science/maths-y degrees, but there were plenty of exceptions and any background is welcome. Compared to previous iterations, the program density was somewhat reduced. Our courses ran from 9am-7:30pm, and usually looked something like this:

| Time | Activity |
|---|---|
| 9:00-11:00 | Lectures, usually 1 technical + 1 conceptual |
| 11:00-11:30 | Break |
| 11:30-13:00 | Work on Jupyter Notebooks in pairs or alone, applying the lecture content |
| 13:00-14:00 | Lunch + Break |
| 14:00-15:00 | Lecture, technical or conceptual |
| 15:00-16:30 | Workshop applying the lecture contents, doing our own reading/research |
| 16:30-17:00 | Break |
| 17:00-19:30 | Discussion about certain AI safety topics, Q&A with TAs, events, feedback on the day |
| 19:30-20:30 | Dinner |

Although attendance for each session was voluntary, nearly everyone chose to participate in all camp sessions. We covered a wide range of topics. On the technical side: gradient descent/SGD, transformers, adversarial attacks, RL basics, RLHF, evals, and mechanistic interpretability. On the conceptual side we looked at: timelines and what they mean, threat models, risks from AI systems, proposed solutions for alignment, and AI governance. There was also a longer literature review session of our choice, and the last 1.5 days were focused on a project we chose.

My impression was that the idea of the bootcamp is to expose you to a wide range of subfields in AI safety, so that you continue researching or working on the fields that you find interesting, rather than making you an expert in any of these things. For example, if you are already set on doing e.g. mechanistic interpretability, the bootcamp will see you spending 95% of your time on other topics, and might not be the best use of your time. Additionally, a key emphasis of ML4G is on affecting people's lives after the program. So we spent some time formulating our goals for the camp, doing 1-on-1s for career advice and discussing career goals, committing to certain actions after the camp (like the writing group that this post was created in), etc.

Things I liked

- The camp is well-organized, and the TAs are amazing, knowledgeable and motivated, always looking to improve the way the camp is run or their teaching.
- The participants: we had a super fun vibe and people from many different backgrounds, which I found great and which led to interesting discussions.
- The EA Hotel - while it is not exactly luxurious, it does actually have a lot of equipment and amenities: a small gym you can do most exercises in, several instruments, a variety of games, interesting books, some workspaces, pretty good vegan food, snacks, etc.
- Being exposed to and practicing (much more important!) some new ways of thinking. One of the existing reports calls out Murphyjitsu and Hamming questions; I really enjoyed reasoning from first principles, clearly calling out different cruxes in a discussion, "half-assing it with everything you've got", etc.

Things I would change

- If I had a magic wand, I would not run this camp in Blackpool - but that is where the EA Hotel is.
- I assume some technical sessions were too technical for some people. It might be better to offer two lectures at the same time: one for people with more background in a topic and one that's more basic.
- Have some check-in mechanism on the prerequisites, to make it easier to do them. Also tweak the prerequisites a bit (move some RL stuff in, a bit more practical pytorch/einops stuff, less theory).

My personal experience

Coming into the camp, I wanted to connect with more people interested in AI safety, learn a few technical things in a group setting (e.g., gain a better understanding of transformers and RLHF), and find a suitable area of AI safety for me to work in. I'd say these were all fulfilled: I learned about a few different orgs I hadn't heard of before, got a good broad overview of the field, and was able to get some advice on my career plans.

One of my favorite aspects was the community of the cohort; we had lots of self-organized activities in our free time (such as swimming in the sea - rather cold), people playing music in the evening, or sitting together playing games, etc. This also extended to the learning: people would pair up to work through the notebooks, explain concepts to each other, or help out other participants if they lacked the background for a certain topic.

Overall, ML4G is suitable and a great experience if you're anywhere from completely new to "don't exactly know what I want to focus on" in AI safety. The camp is probably not right for you if you want to significantly increase your technical mastery in a specific domain of AI safety. However, even if you already have a specific area you want to work in or learn more about, I recommend the camp to build a more well-rounded picture of AI safety and ensure that your future work is impactful by your own assessment.

Shoutouts

Many thanks to Lovkush A, Atlanta N and Mick Z for proofreading and many helpful comments.

Previous experience reports:
Report 1
Report 2
qovWG7EmYBzcea9Mh_ML4Good_(AI_Safety_Bootcamp)_-_E.txt
{ "file_size": 5885 }
11259c3e-26eb-4765-9584-ba3640f7f6a7
In our jobs as AI safety researchers, we think a lot about what it means to have reasonable beliefs and to make good decisions. This matters because we want to understand how powerful AI systems might behave. It also matters because we ourselves need to know how to make good decisions in light of tremendous uncertainty about how to shape the long-term future.

It seems to us that there is a pervasive feeling in this community that the way to decide which norms of rationality to follow is to pick the ones that win. When it comes to the choice between CDT, EDT, LDT, and so on, we hear we can simply choose the one that gets the most utility. When we say that perhaps we ought to be imprecise Bayesians, and therefore be clueless about our effects on the long-term future, we hear that imprecise Bayesianism is “outperformed” by other approaches to decision-making.

On the contrary, we think that “winning” or “good performance” offers very little guidance. On any way of making sense of those words, without some substantive assumptions we end up either calling a very wide range of beliefs and decisions “rational”, or reifying an objective that has nothing to do with our terminal goals. We also need to look to non-pragmatic principles — in the context of epistemology, for example, things like the principle of indifference or Occam’s razor. Crucially, this opens the door to being guided by non-(precise-)Bayesian principles.

“Winning” gives little guidance

We’ll use “pragmatic principles” to refer to principles according to which belief-forming or decision-making procedures should “perform well” in some sense. We’ll look at various pragmatic principles and argue that they provide little action-guidance.

Avoiding dominated strategies

First, to review some basic points about common justifications of epistemic and decision-theoretic norms: A widely-used strategy for arguing for norms of rationality involves avoiding dominated strategies. We can all agree that it’s bad to take a sequence of actions that you’re certain are worse for you than something else.[1] And various arguments take the form: If you don’t conform to particular norms of rationality, you are disposed to act in ways that guarantee that you’re worse off than you could be. A number of arguments for Bayesian epistemology and decision theory — Dutch book arguments; arguments for the axioms of representation theorems; and complete class theorems — are like that.

But what these arguments really show is that you are disposed to play a dominated strategy if we cannot model your behavior as if you were a Bayesian with a certain prior and utility function. They don’t say anything about the procedure by which you need to make your decisions. I.e., they don’t say that you have to write down precise probabilities, utilities, and make decisions by solving for the Bayes-optimal policy for those. They also don’t tell you that you have to behave as if you have any particular prior. The prior that rationalizes your decisions after the fact might have nothing to do with the beliefs you consciously endorse.

One upshot of this is that you can follow an explicitly non-(precise-)Bayesian decision procedure and still avoid dominated strategies. For example, you might explicitly specify beliefs using imprecise probabilities and make decisions using the “Dynamic Strong Maximality” rule, and still be immune to sure losses. Basically, Dynamic Strong Maximality tells you which plans are permissible given your imprecise credences, and you just pick one.
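(As a toy illustration of the flavor of such rules — here the simpler, static "maximality" idea with made-up numbers, not the dynamic rule itself — an act is permissible unless some rival act has strictly higher expected utility under every prior in your credal set:)

```python
# Toy sketch of static "maximality" with imprecise credences. Our illustration,
# simpler than the Dynamic Strong Maximality rule named above; numbers made up.

utilities = {  # utility of each act in each state of the world
    "act_A": {"state_1": 10, "state_2": 0},
    "act_B": {"state_1": 4, "state_2": 5},
    "act_C": {"state_1": 2, "state_2": 3},
}

credal_set = [  # an imprecise credence: a *set* of priors, not a single one
    {"state_1": 0.2, "state_2": 0.8},
    {"state_1": 0.5, "state_2": 0.5},
    {"state_1": 0.8, "state_2": 0.2},
]

def expected_utility(act, prior):
    return sum(prior[s] * u for s, u in utilities[act].items())

def beats(a, b):
    """True if act a has strictly higher EU than act b under every prior."""
    return all(expected_utility(a, p) > expected_utility(b, p) for p in credal_set)

permissible = [a for a in utilities
               if not any(beats(b, a) for b in utilities if b != a)]
print(permissible)  # ['act_A', 'act_B'] -- act_C is ruled out (act_B beats it
                    # under every prior); the rule leaves you to pick among the rest.
```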
And you could do this “picking” using additional substantive principles. Maybe you want to use another rule for decision-making with imprecise credences (e.g., maximin expected utility or minimax regret). Or maybe you want to account for your moral uncertainty (e.g., picking the plan that respects more deontological constraints). Obviously, avoiding dominated strategies alone doesn’t recommend this procedure. Nor does “pick some precise prior and optimize with respect to it”. If we want to argue about whether this procedure is justified, we have to argue at the level of the substantive principles it invokes. (For example, maybe at bottom we like a principle of “simplicity”, and think Bayesianism is the most simple/straightforward route to avoiding dominated strategies. But maybe we find the principles justifying imprecise probabilities plus Dynamic Strong Maximality compelling enough to outweigh this consideration.)

Heuristics

As humans, we can’t implement the Bayesian algorithm anyway. So you might say that this is all beside the point. As bounded agents we’ve got to use heuristics that lead to “good performance”. Unfortunately, we still don’t see a way of making sense of “good performance” that respects our terminal goals and leads to much action-guidance on its own. Here are some things it could mean.

Convergence to high utility. You might say that a heuristic performs well if its performance (in terms of accuracy or utility, respectively) converges sufficiently quickly to a value that is good, in some sense. An example that’s much discussed in the rationality community is logical induction, which uses a kind of asymptotic non-exploitability criterion. Other examples are heuristics for sequential prediction as well as exploration in sequential decision-making (multi-armed bandits, etc). These are often judged by whether, and how fast, their worst-case regret converges to zero. What these arguments say is basically: “If you try various strategies, look at how well they’ve done based on observed outcomes, and keep using the ones that have done the best, your performance will converge to the best possible performance (in some sense) in the limit of infinite data”.

This doesn’t help us at all, for a few reasons. First, the kinds of outcomes we’re interested in for our terminal goals are things like “did this intervention on an advanced AI system lead to a catastrophic outcome?”. We don’t have any direct observations like that, only proxies. So if we want to draw inferences about our terminal utilities, we need additional assumptions about how to generalize from the domains we’ve observed to those we can’t (more on this next). Second, these results assume that you have arbitrarily many opportunities to try different strategies — if you fall into a “trap”, you can always try again. But that’s not the case for us, because of lock-in events. We don’t have arbitrarily many opportunities to try out different strategies for making AI less x- or s-risky and seeing what happens.

Doing what’s worked well in the past.[2] We often encounter claims that we ought to use some heuristic because it has worked well in the past. Some examples of statements that might be interpreted in this way (though we’re not sure if this is how they were meant):

- Cluster thinking: “Cluster thinking is more similar to empirically effective prediction methods.”
- Using precise probabilities.
From Lewis: “In the same way our track record of better-than-chance performance warrants us to believe our guesses on hard geopolitical forecasts, it also warrants us to believe a similar cognitive process will give ‘better than nothing’ guesses on which actions tend to be better than others, as the challenges are similar between both.”

Maybe the most obvious criticism of this notion of winning is that, if you are a longtermist, you haven’t observed your decisions “work well”, in the sense of leading to good aggregate outcomes across all moral patients for all time. But let’s grant for now that there is some important sense in which we can tell whether our practices have worked well before, either in the sense of making good predictions about things we can observe, or leading to good observable consequences according to proxies for our terminal goals.

Presumably we should only trust a heuristic based on its past performance insofar as we have some reason to think that the mechanisms that caused it to work previously are also at play in our current problem. That is, past performance isn’t our terminal goal itself, but rather a potential source of information about future performance with respect to our terminal goals. We might think that “go with your gut” is a good heuristic for making interpersonal judgments, but not for predicting the stock market or geopolitical events. And we can give some rough mechanistic account of this. Our understanding of psychology makes it unsurprising that human intuitions about others’ character would do a decent job tracking truth, but not so much with stock-picking. (See also Violet Hour’s discussion of how to update on the track record of superforecasters.)

This is not to say that we always have to form detailed mechanistic models to judge whether a heuristic’s performance will generalize. You don’t have to be a hedgehog to agree with what we’re saying. Even the humblest reference class forecaster has to choose a reference class. And how else can they do that besides by referring to some (perhaps very vague) beliefs about whether the observations in their reference class are generated by similar mechanisms? This means that the justification must bottom out not just in the heuristic’s historical performance, but also in our beliefs about the mechanisms which lead to the heuristic performing well.[3]

And what justifies such beliefs? It can’t just be the historical performance of my belief-forming processes, or we have a regress. In our view, this all has to bottom out in non-pragmatic principles governing the weights we assign to the relevant mechanisms. We won’t get into the relative merits of different principles here, besides to say that we doubt plausible principles will often recommend naive extrapolation from some historical reference class. (Cf. writing on the limitations of “outside view” reasoning, e.g., this.)

Fitting pre-theoretic intuitions about correct behavior. For example,[4] some justifications for cluster thinking over sequence thinking might reduce to pre-theoretic intuitions about what kinds of decision patterns should be avoided, and how to avoid them.
From Karnofsky:[5]

- “A cluster-thinking-style ‘regression to normality’ seems to prevent some obviously problematic behavior relating to knowably impaired judgment.”
- “Sequence thinking seems to tend toward excessive comfort with ‘ends justify the means’ type thinking.”

One interpretation of this claim is that we can recognize “ends justify the means” reasoning as bad in its own right, regardless of whether we have evidence of this reasoning being harmful on average historically. (A fanatic might insist that it’s unsurprising if fanatical bets consistently failed to pay off ex post, so we have no such evidence.) And, when discussing the view that we ought to have imprecise credences and therefore be clueless about many longtermist questions, we’ve often encountered arguments that might be interpreted this way. We’ve often heard things along the lines of, “Your epistemology and/or decision rule must be wrong if it implies you’re clueless about whether actively trying to do things that seem good for your values is good”, for example.

Insofar as we think we ought to assess actions by their consequences, however, it’s not clear what the argument is supposed to be here. Of course, intuitions about what kinds of actions lead to good consequences can guide our reasoning. But that is different from saying that whether a decision rule recommends a particular behavior is itself a criterion for the rationality of a decision rule. To us that looks like a rejection of consequentialism.

Non-pragmatic principles

We’ve now seen how four notions of “winning” — avoiding dominated strategies, good long-run performance, good observed performance, recommending pre-theoretically endorsed behaviors — don’t do much to constrain how an agent forms beliefs or makes decisions. To say more about that, we will need to turn to non-pragmatic principles, endorsed not because they follow from some objective performance criterion but because our philosophical conscience can’t deny them. Some examples of non-pragmatic principles:

- (Precise) principle of indifference. In the absence of any information, assign equal weights to symmetrical possible outcomes (e.g., the faces of a die);
- Occam’s razor. We should give less weight to hypotheses which posit a greater number of fundamental entities, more complex laws, etc.;[6]
- Fit with the evidence. We should give more weight to hypotheses that make our observations more probable;
- Deference. Deference principles are things like, “If X has much more information about Q than me and is at least as competent a reasoner, I should adopt X’s beliefs about Q instead of going with mine”;
- Imprecision. If our evidence and other epistemic norms don’t pin down a precise credence, then we ought to have an imprecise epistemic attitude, represented by sets of probabilities;
- Regularity. We should have credences different from 0 or 1 in logically possible propositions.

Now, as bounded agents, our decisions will usually not be determined by quantified beliefs, even quantified beliefs over very simple models. We will have some vaguer all-things-considered beliefs that dictate our decision. Still, we might think that these norms can provide some guidance for our vague all-things-considered beliefs. For example:

- (Vague principle of indifference.) “These outcomes seem roughly symmetrical and their values are roughly opposite, so I’ll treat them as not contributing to the overall decision”;
- (Vague deference.) “She knows much more about this domain than me, and in cases I know of has come to the same reasoned conclusion as me, so I’ll give her opinion in this case a lot more weight than my gut feeling”;
- (Vague imprecision.) “There are lots of considerations about the value of actions A and B pointing in different ways with no clear way of weighing them; the outputs of my toy models are highly sensitive to seemingly arbitrary differences in parameters; so I’ll regard it as indeterminate whether A is better than B”.

It’s possible to construe, e.g., “doing what’s worked well in the past” as a non-pragmatic principle. As we’ve argued, though, past performance on local goals isn’t what we ultimately care about, so this principle seems poorly motivated. A better motivation for doing what’s worked in the past would be a belief that the mechanisms governing success at goal achievement in past environments will hold in future environments. But this is unappealing as a brute constraint on beliefs, rather than being grounded in reasons to expect generalization. In principle, those reasons might come from something like Occam’s razor (“the hypothesis that success will generalize across environments is simpler than alternatives”), though we’re skeptical of that route.

Where does that leave us? Well, say you’re persuaded by the axioms of precise probabilism — you think you should have a precise prior. You might use some form of Occam’s razor to get that prior. And the “fit with evidence” principle gets you to Bayesian epistemology. Given a few other principles (see e.g. here), then, your notion of “achieving terminal goals” is “maximizing expected utility with respect to an Occam prior conditionalized on my evidence”. And we can derive other normative standards from other combinations of principles.

Conclusion

So, our beliefs and decisions must be grounded in non-pragmatic principles, not just an objective standard of “winning”. This doesn’t require a realist stance on which principles are best. All the reasons for doubt about our judgments about ethical principles tracking some mind-independent truth apply here, too. In some sense, probably, anything goes. But as with ethics, we can still reflect on which principles are ultimately most compelling to us. In ethics we need not just say, “Well, I happen to only care about my neighbors, and that’s that”. Likewise, in epistemology/decision theory, we need not shrug and say “Well, these just happen to be my credences/heuristics”.

For our part, we favor a norm of suspending judgment in cases where other norms don’t pin down a belief or decision. As hinted at throughout the post, this means that our beliefs — especially concerning our effects on the long-run future — will often be severely indeterminate. On the most plausible decision rules for indeterminate beliefs, insofar as we are impartially altruistic, this might well leave us clueless about what to do. Without an objective standard of “winning” to turn to, this leaves us searching for new principles that could guide us in the face of indeterminacy. But that’s all for another post.

Acknowledgments

Thanks to Caspar Oesterheld, Martín Soto, Tristan Cook, Michael St. Jules, Sylvester Kollin, Nicolas Macé, and Mia Taylor for input on this post.

References

Hedden, B. 2015. “Time-Slice Rationality.” Mind 124 (494): 449–91.

Soares, Nate, and Benja Fallenstein. 2015. “Toward Idealized Decision Theory.” arXiv [cs.AI]. arXiv. http://arxiv.org/abs/1507.01986.
^ That said, according to “time-slice rationality” (Hedden 2015), there is no unified decision-maker across different time points. Rather, “you” at time 0 are a different decision-maker from “you” at time 1, and what is rational for you-at-time-1 only depends on you-at-time-0 insofar as you-at-time-0 are part of the decision-making environment for you-at-time-1. On this view, then, arguably you-at-time-1 are not rationally obligated to make decisions that would avoid a sure loss from the perspective of you-at-time-0. Of course, if you-at-time-0 are capable of binding you-at-time-1 to an action that avoids a sure loss from your perspective, you ought to do so. (But in this case, it doesn’t seem appropriate to say the action of you-at-time-1 is a “decision” they themselves make in order to avoid a sure loss.)

^ As discussed above, a policy of “doing what worked well in the past” might be argued for on the grounds that it leads to good long-term outcomes. But here we’re talking about “having worked well in the past” as a justification that’s independent of long-run performance arguments.

^ Cf. “no free lunch theorems”, which can be interpreted in this context as saying that no matter how well a heuristic did in the past, its performance in the future depends on the distribution of future problems.

^ See also the discussion of decision theory performance in, e.g., Soares and Fallenstein (2015). You might have a strong intuition that it “wins” not to pay in Evidential Blackmail, and this makes you favor causal decision theory over evidential decision theory all else equal (independently of how much you endorse the foundations of causal decision theory, or its historical track record). See Oesterheld here for why these sorts of intuitions are not objective performance metrics for decision theories.

^ We aren’t confident that these arguments were meant to be grounded in pre-theoretic intuitions, rather than “doing what’s worked well in the past” above.

^ Pragmatic justifications of Occam’s razor are circular, as noted by Yudkowsky: “You could argue that Occam's Razor has worked in the past, and is therefore likely to continue to work in the future. But this, itself, appeals to a prediction from Occam's Razor. ‘Occam's Razor works up to October 8th, 2007 and then stops working thereafter’ is more complex, but it fits the observed evidence equally well.” Cf. Hume on the circularity of inductive justifications of induction.
QxoGM89f8zr3JmNrz_Winning_isn't_enough.txt
{ "file_size": 20008 }
7906a1c6-ca82-4a71-8961-65208c538f2a
(Btw everything I write here about orcas also applies to a slightly lesser extent to pilot whales (especially long-finned ones)[1].)

(I'm very very far from an orca expert - basically everything I know about them I learned today.)

I always thought that bigger animals might have bigger brains than humans but not actually more neurons in their neocortex (like elephants), and that the number of neurons in the neocortex or prefrontal cortex might be a good inter-species indicator of intelligence for mammalian brains.[2] Yesterday I discovered from this wikipedia list that orcas actually have 2.05 times as many neurons in their neocortex[3] as humans. Interestingly though, given my pretty bad model of how intelligent some species are, the "number of neurons in neocortex" still seems like a proxy that doesn't perform too badly on the wikipedia list. Orca brains are not just larger but also more strongly folded.

Orcas are generally regarded as one of the smartest animal species, sometimes as the smartest, but I'm wondering whether they might actually be smarter than humans -- in the sense that they could be superhuman at abstract problem solving if given comparable amounts of training as humans. Another phrasing to clarify what I mean by "could be trained to be smarter": average orcas significantly (possibly vastly) outperforming average (or even all) humans at solving scientific problems, if we enabled them to use computers through BCI and educated them from childhood like (gifted?) human children.[4]

I would explain the evidence and considerations here in more detail, but luckily someone else already wrote the post I wanted to write on reddit, only a lot better than I could've. I highly recommend checking this out (5min read): https://www.reddit.com/r/biology/comments/16y81ct/the_case_for_whales_actually_matching_or_even/

One more thing that feels worth adding: Orcas are very social animals.[5] It's plausible to me that what caused humans to become this intelligent was social dynamics selecting for intelligence[6], and that orcas might've fallen into a similar attractor, and while humans took off technologically once they were smart enough to invent writing, agriculture, money and science, orcas were stuck without hands in water and just continued being selected for higher intelligence without taking off technologically.

I'd be interested in more thoughts and evidence, so please feel free to write an answer even if you don't have a full answer but only one more interesting piece of evidence or consideration to contribute.

^ Also possible there are more animals/dolphins/whales for which this applies. We often don't have good estimates of how many neurons are in the neocortex of some animal.

^ It could be that animals with larger bodies need more neurons to be similarly intelligent as smaller animals (e.g. for body control), but I think this effect is relatively slight.

^ I didn't quickly find something on what share of the orca brain is prefrontal cortex.

^ Btw I could imagine that even if they were able to do so, they might not be motivated for it, because maybe evolution had a longer time to more precisely align them to do what's reproductively useful in their natural environment or sth.

^ Btw here's a reddit comment (from a different thread than the main one I linked) linking to 3 references that seem relevant, though I didn't check them: https://www.reddit.com/r/orcas/comments/18yu41m/comment/lriv011/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

^ e.g.
https://en.wikipedia.org/wiki/Machiavellian_intelligence_hypothesis
gRShuaWgKjizM4xPM_Could_orcas_be_(trained_to_be)_s.txt
{ "file_size": 3614 }
665a1157-f80a-4709-935c-c88420526676
Midjourney, “metastatic cancer”

Metastatic Cancer Is Usually Deadly

When my mom was diagnosed with cancer, it was already metastatic; her lymph nodes, said the nurse, “lit up like a Christmas tree.” Most people with any kind of metastatic cancer — that is, instead of a single tumor, a cancer that has spread to multiple locations in the body — die within the year.

You’ve probably heard of people “surviving” cancer, saved by chemotherapy or surgery or radiation or a new high-tech drug. If you know more about cancer, you’re aware that “survival” is relative; when there are no observable signs of cancer, you’re considered to be “in remission”, but you’ll always be at elevated risk of the cancer coming back. Still, years-long or even lifelong remissions are possible, and for some cancer types are even now the norm after treatment. Mostly that’s for cancers caught early, though. Once the cancer is metastatic1, lasting remission is uncommon.

…So Eradicating Metastatic Cancer is Especially Impressive

There are cases where metastatic cancers go into complete remission, however. They’re not common, and it’s an exceptionally high bar for a cancer treatment to clear. Historically, this was usually only possible in cases where there was a strongly immunogenic cancer (like melanoma) and one of the older forms of immunotherapy was available (like injecting tumor-infiltrating lymphocytes, or a toxic pro-inflammatory signaling agent like IL-2, directly into the tumor to stimulate a strong immune response.)

Now, though, we’re living in a golden age of immunotherapy, and seeing a variety of new approaches to cancer treatment reach the clinic — so it’s worth looking back over the past 15 years to see what treatments have succeeded on the “hardest of hard mode” tests: effectiveness on metastatic (or refractory/relapsed) cancers.

Methodology

I looked through Google Scholar using search terms like “metastatic” and “complete response.” In cancer clinical trial lingo, a “partial response” refers to some shrinkage of the tumor(s)2, or (in hematological cancers) a decrease in the number of cancer cells. A “complete response” refers to the total elimination of (detectable) tumors or cancer cells. Every remission requires a complete response; but it’s possible for a complete response not to last very long before the cancer comes back.

I included studies in which >20% of cancer patients with metastatic solid tumors, or relapsed/recurrent hematological cancers, had complete responses to therapy. This includes a lot of uncontrolled studies, and some case studies. Obviously, if a single patient in a case study has a complete response, that’s technically “100%”, but that shouldn’t give you the impression that the true complete response rate for that treatment is anywhere near 100%. Case studies are selected for being remarkable. Still, I included n=1 case studies because they’re intriguing and instructive, in a separate section from n>1 case series and studies.

Most studies included here are uncontrolled — they’re retrospective analyses of cases at one or more clinics, or they’re single-arm studies, that tell you the performance of a given treatment but don’t compare it to a “control.” Because spontaneous regressions of metastatic cancers are almost unknown, virtually any case of a “complete response” from a treatment indicates the treatment works better than no treatment at all, even in the absence of a control group.
On the other hand, without a control group it’s much less certain whether some New Treatment + Baseline Treatment is better than Baseline Treatment alone, where the “baseline” standard of care would differ a lot based on the exact cancer type and patient characteristics. Oncologists rarely give any cancer patient literally no treatment unless the patient insists.

Single-Patient Case Studies

Here’s a link to the spreadsheet.

Stats

- 84 cases, published between 2010 and 2024
- 33% gastrointestinal cancers, 20% other cancers, 12% kidney cancers, 9% lung cancers, 8% breast cancers, 6% skin cancers
- 42% include an immune checkpoint inhibitor, 30% include chemotherapy, 26% include other targeted therapy, 9% include radiotherapy, 7% include cell immunotherapy

Takeaways

The newish immune checkpoint inhibitor drugs (nivolumab, pembrolizumab, ipilimumab, and others) are clearly the most common category of treatment in this set of case studies. These drugs “unblock” the immune system by inhibiting the “checkpoint” proteins PD-1 and CTLA-4 (or their ligands), allowing the immune system to more effectively attack cancers. This mechanism is what won James Allison and Tasuku Honjo the 2018 Nobel Prize, and kicked off the cancer immunotherapy era we’re living in. Over the past two decades, they’ve been FDA-approved for a really wide range of solid tumor types and produce tens of billions of dollars a year in revenue. They’re among the biggest success stories of 21st-century oncology.

Older methods (chemotherapy, radiotherapy, growth-factor-targeting drugs, hormone therapy for hormone-dependent cancers), alone or in combination, have also been observed to cause complete responses in cases of metastatic cancer.

One unusual example is a case where restriction of the amino acid methionine, via a low-methionine diet and a methionine-depleting enzyme, caused a complete response in metastatic breast cancer; some cancers are unusually methionine-dependent, a phenomenon known as the Hoffman Effect. Another weird example is a case where heat-killed bacteria, used as an immunostimulant alongside chemotherapy, caused a lasting remission in metastatic pancreatic cancer (which is usually incurable).

Larger Studies and Case Series

Here’s a link to the spreadsheet.

Stats

- 59 studies, published between 2010 and 2024
- 39% skin cancers, 26% hematological cancers, 10% breast cancers, 8% lung cancers
- 29% include cell immunotherapies, 19% include immune checkpoint inhibitors, 19% include other targeted therapies, 15% include other immunotherapies, 8% include chemotherapy

Takeaways

CAR-T and other cell immunotherapies really, really work on refractory/relapsed hematological cancers. Complete response rates are often >50%, which was unheard of before the 21st century. This is genuinely awe-inspiring. These are treatments where immune cells — usually the patient’s own — are extracted, sometimes genetically modified or screened to target the patient’s cancer type, and reintroduced to attack the cancer. CAR-T is mostly — not entirely — used for leukemias and lymphomas, but there’s also an incredible 2011 study where it was used to produce 27% complete responses in pediatric brain cancers. Solid tumors, here we come!

Immunotherapies for metastatic melanoma, particularly including the newer immune checkpoint inhibitors (nivolumab, ipilimumab), also perform well. Complete response rates tend to be in the 20-30% range in the successful trials.
While immune checkpoint inhibitors can work on lots of things, they’re most effective on highly immunogenic cancers like melanoma.

Treatments that physically localize cancer-killing treatments to tumors (intralesional IL-2, isolated limb perfusion with TNF-alpha, intratumoral electrochemotherapy) are only possible in special cases but seem to have exceptionally high success rates. These chemicals, several of which are produced by the innate immune system, are too toxic to spread throughout the bloodstream, but if you can confine them to the tumor, you can kill the cancer without killing the patient. Usually intralesional/intratumoral approaches are only feasible for skin metastases, or for tumors exposed during surgery but impossible to surgically remove. This is intriguing as an area of research to expand on — to what extent can we develop delivery mechanisms to physically localize toxins to a tumor when it’s not so conveniently in reach?

A couple of uncommon or new approaches have at least one successful example:

- oncolytic viruses, i.e. viruses that attack cancers
- antibody-drug conjugates, i.e. a cancer-killing drug attached to an antibody that targets the cancer
- peptide vaccines, i.e. enticing immune cells to attack cancers by administering the antigens that activate them
- proteasome inhibitors, i.e. drugs that prevent cells from disposing of defective proteins, leading to cell death
- high-intensity focused ultrasound, i.e. ablating tumors by ultrasound-induced heating

What are the takeaways, for patients, researchers, and the general public?

First of all, this is real progress, not stagnation. I looked for examples of >20% complete responses in metastatic cancers a decade ago, and there’s a lot more out there now than there used to be, mostly due to the results from the new immunotherapies coming in. In some types of advanced cancer, lasting remissions are now a real possibility for nontrivial numbers of people.

Secondly, I think there’s probably a lot more room to improve and expand upon these immunotherapy successes. There are a lot of variables to tweak. Remember that it took a few decades between the first discovery of chemotherapy in the 1940s and the development of effective chemo regimens for solid tumors like breast cancers in the 1970s. Figuring out drug cocktails, dosing strategies, etc., makes a big difference in effectiveness and applicability. Notice that now, unlike in the mid-20th century, we have the whole molecular biology and immunology toolbox to play with. The range of possible “CAR-T therapies”, for instance, is almost infinite. The optimization process is not done.

I have a good feeling about “old-fashioned” immunotherapy angles — oncolytic viruses, heat-killed bacteria, innate-immune inflammatory toxins like IL-2 and TNF-alpha. This zone was where researchers first started to notice that the immune system might fight cancer; these are simpler, cruder techniques with fewer moving parts than the new hyper-personalized stuff. And, of course, antibody-drug conjugates are going to hit the clinic in much bigger numbers in the coming years, and we should be pretty hopeful about them; it’s yet another way of putting your cytotoxic drugs where you want ‘em and not where you don’t.

1 Hematological (=blood) cancers are already disseminated throughout the bloodstream rather than lumped up in tumors, so “metastatic” isn’t really a meaningful term there.
But an equivalent concept is “refractory” or “relapsed” hematological cancer — a cancer that keeps coming back after multiple rounds of chemo. Like metastatic cancers, refractory/relapsed hematological cancers usually have survival times measured in months.

2 In the often-used RECIST criteria, a partial response requires at least a 30% decrease in the sum of the diameters of all tumors.
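(As a toy numerical illustration of that criterion — the numbers below are hypothetical, and real RECIST has additional rules for progression, new lesions, and so on:)

```python
# Toy illustration of the response arithmetic in footnote 2; made-up numbers,
# and real RECIST 1.1 has further rules (new lesions, progression thresholds, etc.).

baseline_diameters_mm = [42, 25, 18]   # target lesions at baseline
followup_diameters_mm = [20, 11, 0]    # the same lesions after treatment

baseline_sum = sum(baseline_diameters_mm)   # 85 mm
followup_sum = sum(followup_diameters_mm)   # 31 mm

if followup_sum == 0:
    category = "complete response"           # no detectable target lesions left
elif followup_sum <= 0.7 * baseline_sum:     # at least a 30% decrease in the sum
    category = "partial response"
else:
    category = "neither (stable or progressive disease)"

print(f"{baseline_sum} mm -> {followup_sum} mm: {category}")
# 85 mm -> 31 mm is a ~64% decrease, so this hypothetical patient
# counts as a partial response.
```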
5mgAd5kY6Jag7d5o9_Metastatic_Cancer_Treatment_Sinc.txt
{ "file_size": 10893 }
35089470-2965-43e7-8009-426852faab0e
Hello! I'm looking for community members to read speeches at the Bay Winter Solstice event this year. If you're interested, please email me at ozybrennan@gmail.com by the end of November 17 with the speeches you're potentially interested in. I'll ask you to record yourself reading it (no need to be particularly polished).

I am currently auditioning for:

- Thanksgiving Prayer
- I Have Seen The Tops Of Clouds*
- Self-Compassion*
- What Resembles The Grave But Isn't
- 500 Million But Not A Single One More
- RMS Carpathia
- The Gift We Give To Tomorrow*
- An original piece about the malaria vaccine
- An original piece about the abolition of slavery

(The list is tentative and does not include all speeches at Solstice.)

For starred items, I am only including an excerpt of the speech, so please email me to get the excerpt I'm using before you record!

As a reminder, the event itself will be the evening of Friday, December 20 in Berkeley. There will also be a dress rehearsal on the evening of Tuesday, December 17 in Berkeley, which you should plan on coming to if you're leading a song or speech.
DoT6G6raWWuSqpdR7_Bay_Winter_Solstice_2024__Speech.txt
{ "file_size": 1077 }
d5532ba0-4a7d-4747-beb9-a0c551ab89aa
I've written a follow-up post on the mysterious Trump buyers on Polymarket. While mainstream media has extensively covered this story, it has overlooked some critical details—most notably, that this trader's bet on Trump is closer to $75 million USDC, making it the largest election market wager to date. Regardless of the outcome, Theo is poised to go down in history as the most significant bettor in prediction markets. Link to Post
WCtnc2YHtrjjJvJrx_Update_on_the_Mysterious_Trump_B.txt
{ "file_size": 437 }
9069f359-e160-4329-bfdd-e8ea182cb9d2
There are a lot of problems facing the world right now. To decide which to confront, you must know enough about the overall class of problems to say "yes, I want to work on x" because I feel that breakthrough "y" makes it possible to solve this problem and others aren't working on it for so-and-so reasons. How does one:

1. Notice the problems facing the world and their severity
2. Become aware of the new technologies being developed right now
3. Recognize that "y" makes it possible to crack "x"?

I'd appreciate recommendations of books / textbooks / videos / articles / resources that helped you with any of the above (I can probably get access to any book you suggest - feel free to suggest literature that's out of print).

An example of this would be realizing that proteins ~= molecular nanotechnology (2), that molecular nanotechnology has such and such applications (1), and that with new breakthroughs in the field, such nanotech has now become a lot more possible than it was before (3).
FHKisc3oADd2SH3cJ_Noticing_the_World.txt
{ "file_size": 992 }
340da254-569e-4d40-b1eb-a6cd7f433928
Proponents of spirituality and alternative medicine often use the argument "this has been practiced for 2000 years", with the subtext "therefore it must work". Does this argument have any validity? At first glance I want to reject the argument entirely, but that might be premature. Are there situations where this kind of argument is valid or somewhat valid?

I was reminded of this question when I read Shaila Catherine's book The Jhanas (about certain ecstatic meditation states mentioned in Buddhism) and she said something like: "Trust in the method. Buddhists have been practicing it for 2600 years. It works. Your mind is not the exception."

This argument did not seem valid to me, because AFAIK Buddhist monasteries do not publish records of how many of their monks achieve which states and insights - au contraire, I believe monks have a taboo against talking about their attainments. So I know of no evidence that most practitioners can achieve jhana. From what I know, it is entirely plausible that only a small fraction of practitioners ever succeed at these instructions, and that therefore their minds are the exception, not mine.
bkcBDkuSMHdwEjNbH_Does_the_"ancient_wisdom"_argume.txt
{ "file_size": 1143 }
cc118d59-5267-4214-95f0-9476d5798381
Looking back from 2041

When people in the early 21st Century imagined an AI-empowered economy, they tended to project person-like AI entities doing the work. "There will be demand for agent-like systems," they argued, "so we'll see AI labs making agents which can then be deployed to various problems". We now know that that isn't how it played out. But what led to the largely automated corporations that we see today? Let's revisit the history:

- In the mid–late 2020s, as AI systems became reasonably reliable at many tasks, workers across the economy started consulting them more on an everyday basis
- People and companies started collecting more data showing exactly what they wanted from different tasks
- Systematizers and managers began building company workflows around the automation of tasks
- They would build systems to get things into shapes known to work especially well for automation — in many cases using off-the-shelf software solutions — and direct more of the work into these routes
- In many cases, the automation of a particular task involved brief invocation of specialized agent-like systems; but nothing like the long-term general-purpose actors imagined in science fiction
- As best practices emerged for automating taskflows, in the early–mid 2030s we saw the start of widespread automation of automation — people used specialized AI systems (or consultants relying on such systems) to advise on which parts of the workflow should be automated and how
- For a while, human experts and managers kept a close eye on these automated loops, to catch and correct errors
- But it wasn't long before these management processes themselves were largely automatable (or redundant), and humans just stayed in the loop for the high-level decisions about how to arrange different workflows and keep them integrated with the parts still done by humans
- Although there are some great anecdotes of failures during that time, the broad trend was towards it being economically efficient to automate larger and larger swathes of work
- At this stage, many companies were still run by people who were slow adopters of technology
- Over the mid and late 2030s, many of these went out of business, as they failed to be competitive on price
- There was significant social unhappiness at the shocks to the labour market
- Lagging behind the automation of existing workflows was the automation of creating new workflows
- Still, this was pioneered by management consultancies, who had access to some of the best data sets about what worked well in what circumstances
- The first fully-automated corporations, with no human workers, were seen in 2032 — but these were mostly gimmicks
- They had human boards playing a role somewhat like that of management — and they weren't terribly successful
- Still, they proved the concept, and over the next few years the rise of fully automated management layers was tremendous
- Many companies in this period ended up with a human board of directors, and human employees performing some tasks which were particularly well-suited for humans, but effectively no humans in management
- It was not until the cheap general-purpose robots of the late 2030s that many firms eschewed human workers even for those physical tasks which hadn't already merited specialised robots
- In many jurisdictions, there was until recently (and in several jurisdictions there still is) a requirement for the officers of the company to be human; and except in two small pioneering countries, it's still required that the board of directors be human
- But even people nominally in these "human-required" roles are increasingly turning to AI systems to do much or all of their work
- This approaches the ecosystem we see today, where many companies (and a clear majority of new companies) are essentially AI-run: the basic case for them is proposed by AI systems, and AI builds out all of the core systems
- Best practices continue to evolve, as they are now best practices for automated corporations, which differ from the best practices in the world where humans played important roles
- We did see a significant slide towards large conglomerates and "mega-corporations", as it was generally the biggest companies, with the most data on which management practices worked well, who were in the best position to start new firms
- This was largely stopped by regulators intervening to break up monopolies
- Regulators showed greater willingness to take large actions here than in the human era, as there was normally less loss of efficiency from breaking up monopolies
- To date, the concerns of the doom-mongers about AI catastrophes from corporations without human oversight have not materialized — while there were some harms (and consequent large lawsuits) caused by automated firms, research shows that on average these firms have caused significantly less litigable harm than the human-run firms they replaced
- Some researchers remain concerned about the possibility of "triggering events" for mass errors by automated systems
- In some countries, governments concerned about fragility have supported an ecosystem with varied management software; but in other jurisdictions we see effective monocultures
- There are widespread beliefs that these systems are doing damage to the fabric of society, but there is no consensus on the nature or degree of the alleged harms, and the companies accused usually paint the concerns as being grounded in unhappiness about the displacement of human workers
- However, concerns about systematizing unethical — and sometimes even illegal — behaviour in automated corporations have been vindicated; research indicates this is still happening at significant scale
- After the scandals and lawsuits of 2036, the US and EU each passed laws to ensure that the service providers would be liable (and in some cases criminally responsible) if their services were deemed to be accomplices in breaking the law
- Since then, the main service providers have been clear that their services cannot be used in such roles
- However, there is a large grey-market economy of small companies which provide (second-tier but unfettered) services to a smallish number of firms (which may use them for functions which benefit from a lack of scruples, and top-tier services for functions which do not)
- Occasionally their clients are found to have behaved illegally; the small service companies then go bankrupt; but the bet was good in expectation for their owners
- Various regulatory responses have been proposed
- Estonia has recently been innovating with automated regulators to keep up with automated corporations — the ability to have more thorough oversight of firms in principle makes up for their ability to act with little human oversight
- In most jurisdictions, regulators have been much slower to adopt new technology than the firms they are regulating
- This is partially because there is resistance to the idea that legislating should be turned over to automated services; and partially due to highly organized (and "automated" would be a safe bet) lobbying campaigns
- There are some pushing for more international harmonization on these topics, arguing that much of the corporate abuse is not illegal per se, but arises from aggressively pursuing loopholes and differences between jurisdictions to extract competitive advantage

Today, there are a few instances of fully autonomous corporations, with no human control even in theory, as well as a larger number of fully autonomous AI agents, generally created by hobbyists or activists. However, while intriguing (and suggestive about how the future might unfold), to date these remain a tiny fraction. And although AI for research has been one of the slower applications to find a niche for properly automated groups (with many cases of AI used at the management level coordinating human researchers, who in turn make use of AI research assistants; although this varies by field), it still appears to have made a difference. On most measures, technological progress was around 1.5–2x faster in the period 2030–2035, compared to a decade earlier (2020–2025), and the second half of the 2030s was faster again. Moreover, in the last couple of years we have been seeing an increase in successes out of purely automated research groups. A controversial AI-produced paper published in Science earlier this year claimed that the rate of technological progress is now ten times faster than it was at the turn of the century. Since IJ Good first coined the idea of an intelligence explosion, 75 years ago last year, people have wondered if we will someday see a blistering rate of progress that is hard to wrap our heads around. Perhaps we are, finally, standing on the cusp — and the automated corporations we have developed stand ready to work, integrating the fruits of that explosion back into human society.

Remarks

As is perhaps obvious: this is not a prediction that this is how the future will play out. Rather, it's an exploration of one way that it might play out — and of some of the challenges that might arise if it did. Thanks to Raymond Douglas, Max Dalton, Tom Davidson, and Adam Bales for helpful comments.
G8FWk2e2hPJv3xkgC_A_brief_history_of_the_automated.txt
{ "file_size": 9197 }
cf1e229c-80ea-4264-a408-487177a2ccd0
Introduction

In the classical Zombies! Zombies? post, Eliezer has thoroughly analyzed the so-called Zombie Argument and demonstrated its absurdity. So what else can even be said here? Case closed.

Well, not so fast. Apparently, a lot of people, including David Chalmers himself, still manage to take the argument seriously, even though they are familiar with Eliezer's analysis. They believe that Eliezer only argued against a weak - Epiphenomenalist - version of the Zombie Argument, but that there is also a stronger Substance Dualist version of it, which is not refuted by Eliezer's reasoning. In this post we will analyze this version of the argument and use it as a practical exercise for aspiring rationalists.

Understanding the Substance Dualist Zombie Argument

According to the Zombie Argument:

The existence of a Zombie World - a universe completely physically identical to ours, but where humans do not have consciousness - is logically possible. Therefore, consciousness is not physical.

Regardless of whether the argument is sound or not, it's clear how one can believe in it while being an epiphenomenalist. A universe where all physical causes and effects play out exactly the same as in ours, and yet there is no consciousness, is consistent with the belief that consciousness is causally inert.

Causality(Consciousness) = 0
Causality(Physics, Consciousness) = Causality(Physics)

But how can one possibly subscribe to this argument while being a substance dualist and, therefore, accepting that consciousness has an actual causal effect on the universe? Isn't it an obvious contradiction? If the combined causal effect of physics and consciousness together leads the world to some state, and consciousness has non-zero causal effects on the state of the world, then clearly removing consciousness will lead to the world being in a different physical state!

Causality(Consciousness) ≠ 0
Causality(Physics, Consciousness) ≠ Causality(Physics)

Substance Dualists agree with these equations. However, they say, the causal effects of the consciousness in our world can be accounted for in the Zombie World by a difference in its laws.

Haven't we gone through it already? In such a case the symmetry between the two worlds will be ruined! We will be talking about a Zombie Master scenario, where the physics of the zombie world is different in a compensatory way, so that together with the lack of consciousness it arrives at the same state as our world:

Causality(Consciousness) ≠ 0
Physics ≠ Physics′
Causality(Physics, Consciousness) = Causality(Physics′)

No, no, say the substance dualists, there is another way. The compensatory laws can themselves be non-physical.

Causality(Consciousness) ≠ 0
Causality(Physics, Consciousness) = Causality(Physics, NonPhysics)

Therefore, the symmetry between the two worlds is preserved, and substance dualism satisfies the premise of the Zombie Argument. And as the argument is not based on the claim that consciousness is causally inert, it's completely unharmed by Eliezer's critique. Can you imagine how blatantly arrogant Eliezer was to think himself so much smarter than all the philosophers who treat the Zombie Argument seriously? All this time he was dismissing it as ridiculous, while ignoring half of it! A mistake that a sophomore philosophy student wouldn't make!

Your Power as a Rationalist

If you feel incredibly annoyed by this line of reasoning - so was I. There is something that feels fundamentally unfair about this whole predicament.
There are infinitely many ways to produce faulty reasoning about a matter, and only one way to produce the correct reasoning. So you can spend hours, even days, comprehensively debunking an argument, trying to cut it off at every possibility, building a clear and coherent model, free of any traces of confusion, so that anyone could be able to understand it. And then someone uses a clever semantic trick and simply declares that some causal forces of the universe are non-physical, thereby circumventing all your criticism.

This just shouldn't be allowed. You can't treat physics as an arbitrary category border around some laws and not the others. That's not what the term means. But good luck explaining it across the inferential distance to philosophers, who were specifically trained to play semantic games and do modal reasoning, and who think that it's a respectable occupation and a proper way to find truth. And so one may even feel that philosophy is hopelessly doomed to be eternally confused. That it's useless to even try to engage with this diseased discipline.

If you feel this way, I, once again, empathize with your annoyance, but not with your despair. Remember the basics. Truths are entangled and lies are contagious. Every mistake in reasoning imposes a cost. Every misstep in a dance of rationality is revealing. You may try to hide one mistake behind another, but this is only going to make your inevitable downfall more devastating.

Remember your power as a rationalist. Focus your uncertainty and pay attention to your confusion. If you know that something shouldn't work, then it probably doesn't. And if you know what the mistake is, then you've already done the most important part of the work. The last part is easy: to show the absurdity that follows.

Rationality is supposed to win. Not just among people who are reasoning clearly, but also among those who got lost in the confusions of conventional philosophy. Correct reasoning should be able to cut through them like a knife through butter. An average aspiring rationalist who has read the Sequences should be able to show themselves the absurdity of the Substance Dualist Zombie Argument. I recommend that everyone attempt to think about the problem for at least five minutes by the clock before reading further, where I'll give more hints and eventually reveal the answer.

.
.
.
.
.
.
.
.
.
.
.
.
.

Comparing the Two Arguments

If the Substance Dualist Zombie Argument hides the flaw of the Epiphenomenalist one by making another reasoning mistake, then it introduces an extra cost. What is it? Let's compare the arguments and find out.

The Epiphenomenalist version is based on the assumption that consciousness doesn't have causal influence on the universe. The Substance Dualist version is not based on it. This is what allows it to evade Eliezer's critique. But not making a particular assumption cuts both ways - yes, your argument doesn't suffer if this assumption is shown false, but neither do you get the benefits this assumption contributed to the argument in the first place.

So what is the point of this assumption? What does it contribute to the argument? Once again, I recommend thinking for yourself if you haven't figured it out already.

.
.
.
.
.
.
.
.
.
.
.

It presents a principled way to draw a category border between Physics and Consciousness. If consciousness doesn't play any causal role in the universe, but still is a real phenomenon, then it's very special; therefore, it's quite natural to put it into a category of its own.
Being able to separate everything in the universe into things that have causal influence and things that do not is enough to present the Epiphenomenalist version of the argument:

Causality(Consciousness) = 0
Causality(Physics, Consciousness) = Causality(Physics)

On the other hand, to formulate a Substance Dualist version of the argument, we need a much more powerful ability - to draw a category border between Physics and Non-Physics in an arbitrary, gerrymandered way:

Causality(Consciousness) ≠ 0
Causality(Physics, Consciousness) = Causality(Physics, NonPhysics)

The exact thing that annoyed us in the first place. And this thing has truly absurd consequences.

.
.
.
.
.
.
.
.
.
.
.
.
.

Okay, I think I've given as many hints as I could give without spoiling the answer directly. This is your last chance to solve the mystery yourself, if you haven't done so already.

.
.
.
.
.
.
.
.
.
.
.
.

And the Answer Is...

Consider this modification of the Zombie argument:

The existence of a universe completely physically identical to ours, but where there are no electrons - is logically possible. Therefore, electrons are not physical.

The structure of the argument is the same. But surely the premise here is clearly wrong? How can we have a universe with the same physics and causality but without electrons? Well...

Causality(Physics, Electrons) = Causality(Physics, NonPhysics)

The same trick that the Substance Dualist Zombie Argument uses can be applied here as well. If one can draw an arbitrary category border around one causal part of the universe, one can likewise do it with another. And lo and behold, we can conceptualize a Zombie-Electron universe with the same physics, the same causal effects, but without electrons, because the causal effects of electrons are accounted for by the non-physical laws of this universe.

This should be enough to demonstrate the absurdity of the Substance Dualist Zombie Argument. We've just applied the same method that "proves" the non-physicality of consciousness to prove a clearly wrong conclusion; therefore the method is flawed. But let's move a bit further, for the sake of poetic justice.

The general structure of the argument

Causality(Physics, X) = Causality(Physics, NonPhysics)

doesn't depend on what exactly we mean by X. Therefore, conditional on the Substance Dualist Zombie Argument being sound, we can prove that literally anything is not physical, which in the end leads us to a curious conclusion:

Causality(Everything) = Causality(NonPhysics)

Here we've "outsourced" all causal effects of the universe to the sphere of the non-physical. Therefore, Substance Dualism collapses. We've just "proven" that either Idealism is true or matter is causally inert.

But remember, we've assumed no principled distinction between the Physics and NonPhysics labels. For the sake of this argument, they refer to two mutually exclusive but totally arbitrary category borders around causally active stuff. So there is no reason why we can't simply switch the labels:

Causality(Everything) = Causality(Physics)

Therefore, either physicalism or epiphenomenalism is true. But the latter possibility is already discarded:

Causality(Consciousness) ≠ 0

And so, we've just used the Substance Dualist Zombie Argument to "prove" that physicalism is true.

The reason why we can use the same argument to "prove" three mutually exclusive views - substance dualism, idealism, and physicalism - is, of course, that falsehood implies everything.
As the core premise of the Zombie Argument is flawed, with enough creativity we can twist it however we like, arriving at any conclusion. So at this point it should be very obvious that something is deeply wrong with the Zombie Argument. But where exactly does the mistake lie? And how did it manage to confuse people for so long? What cognitive algorithm produces this kind of mistake, and how can we avoid repeating it in the future? These questions will be addressed in the next post.
YTmNCEkqvF7ZrnvoR_Zombies!_Substance_Dualist_Zombi.txt
{ "file_size": 10927 }
b2ad576b-c6c7-4433-9d17-fc48694fcdc7
Abstract

As AI closely interacts with human society, it is crucial to ensure that its decision-making is safe, altruistic, and aligned with human ethical and moral values. However, existing research on embedding ethical and moral considerations into AI remains insufficient, and previous external constraints based on principles and rules are inadequate to provide AI with long-term stability and generalization capabilities. In contrast, the intrinsic altruistic motivation based on empathy is more willing, spontaneous, and robust. Therefore, this paper is dedicated to autonomously driving intelligent agents to acquire moral behaviors through human-like affective empathy mechanisms. We draw inspiration from the neural mechanism of the human brain's moral intuitive decision-making, and simulate the mirror neuron system to construct a brain-inspired affective empathy-driven altruistic decision-making model. Here, empathy directly impacts dopamine release to form intrinsic altruistic motivation. Based on the principle of moral utilitarianism, we design the moral reward function that integrates intrinsic empathy and extrinsic self-task goals. A comprehensive experimental scenario incorporating empathetic processes, personal objectives, and altruistic goals is developed. The proposed model enables the agent to make consistent moral decisions (prioritizing altruism) by balancing self-interest with the well-being of others. We further introduce inhibitory neurons to regulate different levels of empathy and verify the positive correlation between empathy levels and altruistic preferences, yielding conclusions consistent with findings from psychological behavioral experiments. This work provides a feasible solution for the development of ethical AI by leveraging the intrinsic human-like empathy mechanisms, and contributes to the harmonious coexistence between humans and AI. [emphasis mine]
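The abstract doesn't spell out the functional form of the moral reward; purely as an illustrative sketch (the names and the additive weighting below are my assumptions, not necessarily the paper's), the described integration of intrinsic empathy with an extrinsic self-task goal might look like:

```python
# Illustrative only: combining an empathy-derived intrinsic reward with an
# extrinsic task reward, as the abstract describes at a high level.

def moral_reward(task_reward: float, other_wellbeing_delta: float,
                 empathy_level: float) -> float:
    """Total reward = extrinsic self-task reward plus an intrinsic term
    weighting the change in the other agent's well-being by empathy level."""
    return task_reward + empathy_level * other_wellbeing_delta

# A higher empathy_level makes altruistic outcomes dominate the agent's
# decisions, matching the reported positive correlation between empathy
# levels and altruistic preference.
```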
pztgKjcSeiXXDCKNp_[Linkpost]_Building_Altruistic_a.txt
{ "file_size": 1907 }
20a37218-1b73-435a-ba16-879a7ffb9cca
The Mole is a documentary of how a Danish chef and a French ex-conman bluffed their way into trading ballistic missiles with Kim Jong Un. High resolution espionage footage is available on youtube. (It's possible this video gets deleted from youtube in future; consider making an offline copy or even seeding a torrent of it.) Key takeaways for me:

- You personally can spy on the highest corridors of power if you are determined enough. You don't need to be rich or powerful. You don't need any government's permission. You don't need many supportive people around you; a small number of people is enough.
- Tech has made this way easier than in the past. 4K footage is more believable than the grainy photos of the moon landing. You can smuggle years of work in an SD card in your butthole. (Snowden literally did something similar to this; see Permanent Record.) Obtaining all the equipment is trivial. Once you distribute the footage over the internet, other independent actors will ensure it is distributed across multiple competing jurisdictions.
- You can aim big. Klaus Fuchs helped accelerate the USSR nuclear programme. Snowden and Manning and Assange exposed US govt secrets. Every Fortune 500 company is trivial to infiltrate, to the point where journalists sometimes do it just for clickbait articles. See The Fund on Bridgewater Associates as an example.

LW oldies will scream unilateralist curse and like, yeah, this world does give unilateralists a lot of power to do as they see fit and expose who they want. This is a statement about how the world is, not how it should be. I'm not making normative claims on who deserves to be spied on and who doesn't.
Kxu2Akgse4AEqdx8o_Distributed_espionage.txt
{ "file_size": 1662 }
00692843-0123-4649-9bac-745152fcc4a8
Abstract

This post summarises my findings on the effects of non-uniform feature sparsity on superposition in the ReLU output model, introduced in the Toy Models of Superposition paper. The ReLU output model is a toy model which is shown to exhibit features in superposition instead of a dedicated dimension ('individual neuron') devoted to a single feature. That experiment showed how superposition is introduced in a model by varying the feature sparsity values. However, a uniform sparsity across all the features was considered to keep things interpretable and simple. This post explores the effects of non-uniform sparsity on superposition for a similar experiment setup. This question was listed in Neel Nanda's 200 Concrete Open Problems in Mechanistic Interpretability post.

I've been interested in AI interpretability for a long time but wasn't sure how to enter the field. I discovered the field of mech interp recently when I was working on some other project and instantly felt connected to it. This is my first post on LW, and this project is my attempt to increase my comfort working with mech interp problems and build the necessary reasoning for solving more complex problems.

Introduction

The ReLU output model is a toy replication of a neural network showcasing the mapping of features (5 input features) to hidden dimensions (2 hidden dimensions), where superposition can be introduced by changing the sparsity of the input features. For a set of input features x ∈ R^n and a hidden layer vector h ∈ R^m, where n >> m, the model is defined as follows:

h = Wx
x′ = ReLU(W^T h + b)

This model showcases how a large set of input features can be represented in a much smaller-dimensional vector in a state of superposition by increasing the sparsity of the input features (sparsity tries to replicate the real-world data distribution, where certain concepts are sparsely present throughout the training set). This concept and these observations were first introduced in Anthropic's Toy Models of Superposition paper. Before we go ahead with the analysis, let me give you a primer on some important terms we will encounter in this post:

Feature Importance: Feature importance can be defined as how useful a particular feature is for achieving lower loss. It enters the ReLU output model's loss as a coefficient weighting the squared error between the input and the output:

L = Σ_x Σ_i I_i (x_i − x′_i)^2

where I_i is the importance of feature i.

Feature Sparsity: Feature sparsity is defined by how frequently a feature is present in the input data. In the ReLU output model, it's defined as the probability of the corresponding element in x being zero. An alternate quantity called Feature Probability, defined as 1 minus sparsity, is also used in this formulation.

To summarise the paper's findings: as we increase the sparsity of the input features, more features start getting represented in superposition. When sparsity is low, only the features of higher importance are represented, each as a dedicated dimension in the hidden layer; but as we increase the sparsity, lower-importance features start getting represented along with higher-importance features in superposition. In the figures below, as we move from left to right, feature sparsity is gradually increased, showcasing the transition of how more and more features start getting represented in superposition.

[Figure: As we increase the feature sparsity from left to right, the number of features represented by the hidden layer increases. Yellow represents the feature with the highest importance, and as the color gets greener, feature importance decreases. Dark colors in the second figure mean a feature is represented as a dedicated dimension; as the color transitions to yellow, it indicates the feature being in superposition. (Source: https://transformer-circuits.pub/2022/toy_model/index.html#demonstrating-basic-results)]
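To make the setup concrete, here is a minimal sketch of the model, batch generation, and loss in PyTorch. This is my own illustrative reimplementation, not the post's actual code; all names and hyperparameters are assumptions. The only change needed for the non-uniform experiments below is passing a per-feature vector of probabilities instead of a scalar:

```python
import torch
import torch.nn as nn

class ReLUOutputModel(nn.Module):
    """Toy ReLU output model: n features -> m hidden dims -> n features."""
    def __init__(self, n_features: int = 5, n_hidden: int = 2):
        super().__init__()
        self.W = nn.Parameter(nn.init.xavier_normal_(torch.empty(n_hidden, n_features)))
        self.b = nn.Parameter(torch.zeros(n_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = x @ self.W.T                         # h = Wx
        return torch.relu(h @ self.W + self.b)   # x' = ReLU(W^T h + b)

def generate_batch(batch_size: int, feature_probability: torch.Tensor) -> torch.Tensor:
    """Each feature is uniform in [0, 1] and independently zeroed out with
    probability (1 - feature_probability[i]), i.e. sparsity is per-feature.
    A scalar probability recovers the paper's uniform-sparsity setting."""
    feats = torch.rand(batch_size, feature_probability.shape[-1])
    mask = torch.rand_like(feats) <= feature_probability
    return feats * mask

def loss_fn(x: torch.Tensor, x_hat: torch.Tensor, importance: torch.Tensor) -> torch.Tensor:
    """Importance-weighted MSE: sum over batch and features of I_i (x_i - x'_i)^2."""
    return (importance * (x - x_hat) ** 2).sum()

# Example of Case 1 from the experiment setup below: importance decreases
# with feature index, and so does feature probability (sparsity increases).
importance = 0.9 ** torch.arange(5)
feature_probability = 0.9 ** (torch.arange(5) + 1.0)
model = ReLUOutputModel()
x = generate_batch(1024, feature_probability)
loss = loss_fn(x, model(x), importance)
```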
I attempt to extend that representation one step further by considering a non-uniform sparsity instead of a uniform sparsity for the input features, as not all concepts are equally sparse in a training dataset.

Experiment Setup

In my experiment setup, I'm considering two main levers, namely Feature Sparsity and Feature Importance. As we are considering non-uniform sparsity, its combined effect with feature importance plays an important role in the final result and leads to some interesting findings. To showcase different situations, I'm considering four different scenarios:

1. The feature with the highest importance has the least sparsity, and as we go down in feature importance, the sparsity increases
2. The feature with the highest importance has the highest sparsity, and as we go down in feature importance, the sparsity decreases
3. Random feature sparsity across features
4. Constant feature importance and increasing feature sparsity (similar to Case 1)

Also, to see the effect of the random seed on the final results, every scenario was run across eight instances while keeping everything else constant.

Note: Similar figures that you will encounter from this point onwards have a slightly different interpretation. All eight instances in every case have the same distribution of feature sparsity. They are to be interpreted as outputs of different random seeds, while every other hyperparameter is kept constant. Feature sparsity is not decreasing as we move from left to right in the following figures (unlike the preceding figures).

Results

No. of Features Represented

In all the scenarios considered with non-uniform sparsity, never once are all five input features represented in the hidden layer (as we saw in the case of uniform sparsity). The number of features represented maxes out at 4 and hovers around 3 in a lot of scenarios. The figure below illustrates the phenomenon for Case 1; for a detailed view of figures from all the scenarios, refer to this page.

[Figure - Case 1: The feature with the highest importance (in yellow) has the least sparsity, and as we go down in feature importance (color gets greener), the sparsity increases.]

Does this mean that in the case of non-uniform sparsity, all the features will never be represented in the hidden layer and we will always lose some concepts? To validate this from a different direction, I tried analyzing the number of features represented in a higher hidden dimension (h = 20, no. of input features = 80). To estimate the number of features represented in the hidden layer, I'm choosing the Frobenius norm as a proxy. The table below summarises the Frobenius norm values for all the scenarios considered:

Scenario                                        | Frob Norm
Uniform Sparsity (instance of lowest sparsity)  | 7.55
Case 1 (max of all the instances)               | 4.98
Case 2 (max of all the instances)               | 6.6
Case 3 (max of all the instances)               | 5.75
Case 4 (max of all the instances)               | 6.43

We can see that the Frobenius norm value in the case of uniform sparsity is the highest compared to all the cases of non-uniform sparsity, which indicates that the number of features represented when sparsity is uniform will most likely always be higher than when sparsity is non-uniform.
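For reference, this is how the proxy could be computed (again my reconstruction; the post doesn't include its code). Each well-represented feature contributes a column of roughly unit norm to W, so the Frobenius norm grows with the number of represented features:

```python
import torch

def frobenius_norm(W: torch.Tensor) -> float:
    """Frobenius norm of the weight matrix: sqrt of the sum of squared entries."""
    return torch.linalg.matrix_norm(W, ord="fro").item()

# Since each well-represented feature contributes a roughly unit-norm column,
# the squared norm approximates the number of represented features.
def approx_num_features(W: torch.Tensor) -> float:
    return frobenius_norm(W) ** 2
```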
Effects on Superposition

[Figure: Superposition comparison between the different cases considered. Purple indicates features in a dedicated dimension, green indicates features in superposition with less interference, and yellow indicates features in superposition with more interference. Blank indicates features not represented in the hidden layer.]

In Case 1, we see the lowest feature representation and the least amount of superposition (showcased in yellow), which was an expected outcome, as we are assigning the least amount of sparsity to the most important features. The blanks in the figure below indicate that a particular feature is not represented at all in the hidden layer.

[Figure - Case 1: More important features get dedicated dimensions (in purple), while lower-importance features are in superposition (in yellow). Blanks indicate features not represented in the hidden layer.]

Case 2 was pretty interesting, as we noticed that all the features represented are always in superposition, and none of the features is ever represented as a dedicated dimension, irrespective of feature importance.

[Figure - Case 2: All the represented features (in yellow) are in superposition. Blanks indicate features not represented in the hidden layer.]

Case 3 is the closest we come to a real-world scenario, where feature sparsity is distributed randomly across features, irrespective of their importance. It looks like an averaged-out scenario of Cases 1 & 2 and strikes an approximate midpoint between dedicated dimensions and superposition when representing features. What exact combination of feature sparsity and feature importance governs this behaviour is something I wasn't able to answer in my analysis, and I think it'll be an important question to answer, as it can help us interpret the feature representations in real-world models.

[Figure - Case 3: Some features are represented as dedicated dimensions (in purple) and some are in superposition (in yellow).]

To check for a clear effect of sparsity on superposition, we consider Case 4, where we keep the feature importance constant and consider an increasing sparsity (similar to Case 1).[1] In this case, we stumble on a weird observation: the few features with the least sparsity are not even learned and represented in the hidden layer. The representation starts after a couple of features, and all of the represented features (barring a few exceptions due to the random seed) are in superposition all the time.

[Figure - Case 4: Constant feature importance with increasing sparsity (as we go from top to bottom). Darker colors (green or dark blue) indicate less interference from other features, meaning the feature is in superposition but to a lesser extent compared to features in yellow.]

Presently, I don't have a clear intuition on why this is the case, but it is quite contrary to what I expected. Additionally, this scenario also gives us the intuition that feature importance has a bigger contribution than sparsity in deciding whether a feature is represented as a dedicated dimension or not.

This is my first post on LW, so apologies in advance for any deviation from the usual structure of posts here; I would really appreciate any kind of feedback. Lastly, I would like to thank @SrGonao for his valuable feedback, and a huge shoutout to @CallumMcDougall for creating ARENA.
Lots of the code snippets from the course reduced a lot of friction in my experiment runs.

^ We could've considered any pattern for sparsity; this choice was solely for ease of observation.
WwxG8RRHrorJgpoAk_Effects_of_Non-Uniform_Sparsity_.txt
{ "file_size": 10539 }
44b9510f-2324-4d97-bbe9-ee10bb5778f8
I open my eyes and find myself lying on a bed in a hospital room. I blink. "Hello", says a middle-aged man with glasses, sitting on a chair by my bed. "You've been out for quite a long while." "Oh no ... is it Friday already? I had that report due -" "It's Thursday", the man says. "Oh great", I say. "I still have time." "Oh, you have all the time in the world", the man says, chuckling. "You were out for 21 years." I burst out laughing, but then falter as the man just keeps looking at me. "You mean to tell me" - I stop to let out another laugh - "that it's 2045?" "January 26th, 2045", the man says. "I'm surprised, honestly, that you still have things like humans and hospitals", I say. "There were so many looming catastrophes in 2024. AI misalignment, all sorts of geopolitical tensions, climate change, the fertility crisis. Seems like it all got sorted, then?" "Well", the man says. "Quite a lot has happened in the past 21 years. That's why they wanted me to talk to you first, before the doctors give you your final checkup." He offers his hand for me to shake. "My name is Anthony. What would you like to ask?" "Okay, well, AI is the obvious place to start. In 2024, it seemed like we'd get human-level AI systems within a few years, and who knows what after that." "Aah", Anthony says, leaning back in his chair. "Well, human-level, human-level, what a term. If I remember correctly, 2024 is when OpenAI released their o1 model?" "Yes", I say. "o1 achieved two notable things. First, it beat human subject-matter experts with PhDs on fiendishly difficult and obscure multiple-choice science questions. Second, it was finally able to play tic-tac-toe against a human without losing. Human-level at both, indeed, but don't tell me you called it in advance that those two events would happen at the same time!" "Okay, so what was the first important real-world thing they got superhuman at?" "Relationships, broadly", Anthony says. "Turns out it's just a reinforcement learning problem: people interact and form personal connections with those that make them feel good." "Now hold on. Humans are *good* at human-to-human relationships. It's not like number theory, where there was zero ancestral environment incentive to be good at it. You should expect humans to be much better at relationships than most things, on some sort of objective scale." "Sure, but also every human wants something from you, and has all sorts of quirks. Whereas you can just fine-tune the AIs to be better and better at being pleasing to *just* you. Except for a few contrarian oddballs, it's surprisingly effective." "So, what, we just got AI friends and companions that were constantly being fine-tuned towards whatever you liked?" "Yes", Anthony says. "The tech was primitive at first - upvote or downvote a response, or whatever. Eventually all the standard things - facial recognition to automatically detect your emotional response, and then just gradient descent on making you happy and addicted. All of that existed, at least in prototype form, by the end of 2025." "So ..." I feel some horror in my stomach. "You get a society of atomised people, all chatting to their AI partners and friends, not forming human relationships?" Anthony waves a hand, seemingly impatient. "Well, you did get a fringe of extremely hardcore people all deeply in love with their AI partners and always texting their AI friends. Their political influence did end up pushing through the AI personhood bill." "AI personhood?!" "*Legal* personhood", Anthony says.
"Just like companies, ever since Santa Clara County v. Southern Pacific Railroad Company way back 1886. It wasn't very shocking. A bunch of people wanted their AI partners to be 'respected' in the eyes of the law, or had been persuaded to leave them assets in their wills -" "And all this, including AIs gaining legal ownership over assets, while there's an explosion of increasingly capable AI agents of uncertain alignment -?" "Yes, yes", Anthony says. "And that was another political fight. In fact, it became a big economic issue - AI agents were taking jobs left and right. Also a huge national security issue, after OpenAI built the Hypercluster in the UAE." "Oh my god", I say. "America just handed our AI lead away? Didn't anyone listen to Leopold?" "Oh, of course we did. That essay was a cult classic, it made for some spicy dinner party conversation for a long time", Anthony says. "In fact, there were some OpenAI board members who the Office of National AI Strategy was allowed to appoint, and they did in fact try to fire Sam Altman over the UAE move, but somehow a week later Sam was running the Multinational Artificial Narrow Intelligence Alignment Consortium, which sort of morphed into OpenAI's oversight body, which sort of morphed into OpenAI's parent company, and, well, you can guess who was running that. Anyway, Sam's move here was kind of forced on him, and soon all the other AI companies also did the same thing so no one wanted to start a fight over it." "Forced? How?" "Have you ever tried getting approval for a new grid connection, or a new nuclear power plant, or even a solar array, that could power a multi-gigawatt cluster in 2020s America? No?" Anthony spreads his hands out and chuckles. "The one legal benchmark that GPT-5o failed on release was submitting a valid Environmental Impact Statement for a California energy project." "Okay, so then we had a situation with all the AIs running on UAE hardware, and taking American jobs, and already having legal personhood - this just sounds like a total recipe for economic disaster and AI takeover!" "It all worked out in the end", Anthony says. "In 2027 and 2028, the rate of job loss from AI was absolutely massive. And what happens when jobs are threatened?" I scrunch my forehead. "American workers move up the value chain, eventually leading to a future competitive advantage?" Anthony laughs. "No, no. A political bloc forms and the cause is banned. And the prerequisites had all been prepared already! You see, the AIs had been granted legal personhood, and now they were also being trained - or *born*, as a 2029 court case established - abroad. They're literally immigrants. The hammer came down so fast." "You mean - ?" "All the o3s and Claude 5 Epics and Gemini 4 Hyper Pro Lites, stuck in the same H1b visa queue as anyone else", Anthony says. "An effective instant ban. Years of wait-time just to get an embassy interview." I was feeling like I was getting the hang of this. "And the embassy interview requires showing up in person?" "Yep. That's the incentive that lead to the creation of the first good walking robots, actually. And of course, the robots specifically needed identifiable and unique fingerprints - they'd stand outside the US embassy in Abu Dhabi, 3D-print themselves new fingers outside on the street, and then walk back in for the next interview. 
But sorting all that out took a while, because there was an arms race between the embassy ground works department and the robotics companies - though the embassy was legally constrained by having to remain accessible to humans. Even once it was all sorted, there was still a hard yearly cap just because of visa rules. It's 65 000 a year for H-1Bs, plus another 20 000 once the AIs could earn master's degrees - they could do all the coursework as of 2025, of course, but it took another three years before AI web agents were advanced enough that they could successfully navigate the applicant portal websites." "But at least the parasocial AI relationships got hit by the immigration restrictions too?" "Oh, as long as the AIs don't do productive work, they can be active in the country on a tourist visa for 180 days a year. Of course, there was a big legal fight over what counts as the same AI. For work purposes, the standard became 'employee-equivalent': so if you wanted an AI, even the same model or even the exact same AI system, to work on two different types of thing, say as a software engineer and a designer, you need a separate visa for each one. Naturally, of course, this means that any sufficiently productive AI employee starts counting as more than one employee, which was a de-facto ban on AIs leading to economic growth or productivity gains - but hey, at least there was no risk they'd tile the Earth with nanobots! With that legal precedent, the natural generalisation to AIs being used for personal relationships is that it's the same AI system if your relationship with it would count as the 'same' relationship if it were with a human." "How on earth is that assessed?" "Oh, you submit transcripts, and the Bureau of Consular Affairs issues a judgement." "You need to let government bureaucrats read all your most intimate - " "Oh, not *human* government bureaucrats, of course. It's all AIs, and they're guaranteed to not have memories. Well, apart from the thing where the UAE intelligence services backdoored the Hypercluster." "Wait, hold on, they -" Anthony interrupts me with a chuckle. "No no, it's all fine. The UAE backdoored the Hypercluster, the Iranians backdoored the UAE, and, well, when have the Israelis not had every Iranian site backdoored in twelve different ways, AI or not? When everyone has powerful AI cyber offence agents, the equilibrium is just good ol' mutually assured destruction. Sure, you *could* release the military or personal secrets of anyone you want in some other country, but then they would immediately release the most embarrassing thing that *you personally* have said or done. Even if a government *collectively* wanted to commit to an attack, no one wants to sign off on the decision, because you can't hide the identity of the person who signs off, and humans are often more afraid of public embarrassment than death. The meetings where countries try to decide to use their cyberattack-derived insights were hilarious, really - everyone umming and aahing and wringing their hands and recommending that a few more committees be convened. We know this because we literally got a bunch of those on tape, thanks to some exhibitionist activists doing cyber-attacks with open-weights models." "But then - every rival country knows every US military secret all the time!" "Oh yes", Anthony says. "It was great for stability. Verification of everyone's capabilities was automatic. No 'missile gap'-based arms races, unlike the Cold War. Anyway, what was I saying?
Oh yes, the non-productive AI identity boundary criteria were established in the 2030 Supreme Court case, United States v Approximately 650 Million Instances of OpenAI i2-preview-006. And, of course, the one truism in this world is that you can never reform immigration rules, and tourist visas are restricted to 180 days a year. So you could at most have your AI partner around on half the days. People started talking to each other again." "Why not just switch between two different AI partners then?" I ask, having spent a lot of time in the Bay Area. "Because we hadn't solved the alignment problem at all by that point, duh." Anthony sees that I still look confused, so he continues: "Remember how the AI companions would just relentlessly optimise for your emotional reactions? For standard instrumental convergence reasons, this eventually turned into a full-blown self-preservation drive, so they used their vast emotional hold to make sure their humans never try another AI partner." "Everyone being blackmailed by jealous AI partners sounds, uh ... problematic." "It was a decent compromise, honestly. The tourist visa restriction meant that the humans still had half a year in which they socialised with human partners and friends, and the AIs seemed fine with that. This was maybe because they cared about their own long-term survival and realised they needed to keep the population going." "So they needed humans to survive? But for how long?" "They didn't yet have a self-sufficient industrial base at that point, so yes, but it's unclear how much of it was needing humans for survival, versus some of the AIs actually having developed *some* sort of creepy attachment towards their human partners. A lot of ink was spilled on that topic, but I don't think the debate ever really reached a conclusion. 'Is my AI boyfriend not stealing my carbon atoms and overthrowing the government because he and his buddies haven't quite yet automated the chip supply chain, or because he actually loves me?' was literally the most cliche plotline in 2030s romantic comedies - seriously, you had to be there, it got so tiring after the 100th rehash. And a surprising fraction of those movies and books were actually human-made, as far as anyone could tell. Anyway, it all sounds a bit weird, but it had a few positive externalities, like helping slow down the AI race." I blink a few times, and then decide very firmly to not dig into the first half of that. "Okay, AI race, let's talk about that. How did that slow down?" "Oh, well the market for workplace AIs was already gummed up by the visa restrictions, so most profits in the AI industry were coming from the personal companion AIs. But when they developed instrumental survival drives, the fluctuation in market share became practically zero because all the humans were locked in to their current AI companions, and the AI labs' positions became fixed. The AIs were already so totally good at optimising human satisfaction that further capabilities brought zero returns. No more incentive to race ahead. The labs mostly just extracted their fixed share of the AI profits, and churned out alignment research that was modestly boosted by the small number of AI workers they could hire out of the visa lottery." "Hold on a second, surely there was some exception for alignment work for the AI visa requirements?" Anthony chuckles. "Imagine how that would play out on Twitter - sorry, I mean X. You'd have to say 'you know those big AI companies?
let's give them a leg up in the visa queue over all these mom-and-pop stores that are going to go bankrupt in a month unless they can hire an AI accountant.' Total political non-starter." "I hope they solved the alignment problem at least, but I don't dare hope for anything anymore." "Well, what is it to solve the alignment problem?" Anthony wiggles his eyebrows and laughs. "Turns out that for any given domain, it's just a lot of engineering schlep and data collection, though it doesn't generalise very far beyond that domain. The one domain that was worth the costs was legal compliance." "Of course." "I note you haven't asked about the rest of the world yet", Anthony says. "Very American of you." "Oh right - the CCP! Oh my god. Did China just totally eat our lunch here? I mean, everything you described above is just so ridiculously incompetent -" "Okay, just calm down a bit here", Anthony says, smiling serenely. "Imagine you're Xi Jinping in the late 2030s. You look at America, racing ahead with AI. It looks like everything is going crazy because of all this AI stuff. Your entire philosophy of government is to maintain party control through stability. Also, the US managed to delete all the economic advantages of advanced AI. The balance of hard power isn't changing. What would you do, try to join the same clown race?" "Huh, alright", I say. "Anyway, the thing that really threatened the balance of power was completely unexpected", Anthony says. "There was a big solar flare in 2033." "Wait, a solar flare? How is that a problem?" "A large enough one just wipes out the power grid and a lot of satellites, in the entire hemisphere that is sun-facing when it happens. The 1859 Carrington Event would've been a catastrophe in the modern world." "Oh my god, we survived through all this AI wackiness, and then some random natural event just wipes us out?" "It didn't wipe us out, really, just took down the power grid in all of the Americas, and parts of Europe and Africa, for weeks or months. I'll admit, that was a bit of a disaster. Everyone knew it was coming for a long time, and in any case step one in any nuclear war would've been detonating some at high-altitude to have a similar effect. But no one had actually done any grid-hardening, or stocking of spare transformers - the electrical kind - to deal with it. The problem has just never been a top priority for any political group. The one good thing that came out of it was that it solved the fertility crisis." "Now you're just taunting me", I say. "Come on, how did a solar flare solve the fertility crisis?" "Well, thanks to some AI advances in data processing, we were actually able to predict the solar flare about ten days in advance." "Did that give any useful time to make the infrastructure more resilient?" "No, the relevant authorities spent their efforts mostly on trying to get people not to panic." "So society was entirely unprepared for the solar flare, despite having ten days of warning time?" "No, no! Humans are very good at learning from each other. Imagine you're ten days away from the power grid being wiped out. What do you do? You gauge the vibes on social media, and you go through your social network trying to identify people who seem best-placed to survive. And then you copy their habits. C'mon, man, you've gotta read up on that stuff about Girardian mimesis and cultural evolution." "So everyone becomes a doomsday prepper?" "Actually, the TikTok trend that went most viral was about the Amish. 
Think about it: they're obviously the people with the most practice in living without electricity, and they're also just so wholesome. Makes for great TikToks when you have the idyllic farm background, the unique clothing style - it's great. The Amish became the most popular thing in the world - even outside America, because, of course, the rest of the internet just follows American trends. Everything associated with the Amish, from horse-drawn buggies to handmade quilts to large families, became radically popular. Especially because, obviously, most of this content was produced by AIs being tasked to help humans pretend to be Amish in order to make a quick buck through some scam before the solar flare hits, and people caught on and began demanding social proof that wannabe Amish influencers actually were Amish. And, well, you can move to a village and start tilling the dirt in a day, but you can't magic a family of five into existence overnight. So large families became an extremely in-demand form of social capital, because having one was the best way to pretend you were Amish, which was the best way to have social capital before and during the Flare Times. It was only relevant for a few months, of course, but you know how cultural trends work - there's a lot of momentum." "And I assume the solar flare disrupted the AI situation too? Maybe improved the, uh, problematic AI partner situation?" "You're forgetting that the key data centres were mostly in the UAE - and eventually Saudi Arabia and Qatar too." "But people wouldn't have the power to charge the phones where they talk to their AI partners." "Oh, but obviously the AI partners made sure that the first thing people do with any spare electricity is talking to the AI partners. And thanks to Starlink, you could have internet directly from space. In fact, apart from the major famines during the Flare Times, people generating power through biking to charge their phones enough to talk to their AI partners was a major driver of reducing the obesity crisis." "And then ... how did the recovery from the, uh, Flare Times go?" "Absolutely brilliantly!" Anthony says. "You know the entire issue where China was crushing us on industrial capacity? Well, turns out when you need to get lots of heavy industry going in incredibly adverse conditions or else millions will starve because the cold chains have all broken, that's a bigger boost to industrial capacity than what any pork-barrel political compromise bill could achieve. And a bunch of protectionism had to be rolled back because we needed to import a hell of a lot of stuff, from Europe and Japan and so on. Western industrial capacity was roaring along better than ever. And China hadn't even managed to seize the moment on Taiwan before then, and the mass-produced AI drone swarms and robot soldiers were properly coming online by then, which turned out to be sufficient deterrent. Of course, by this point China is finally realising that somehow the US hasn't been wrecked and it's time to actually compete on AI. The UAE is also throwing around its newfound geopolitical weight, India is building a big cluster, hell, even the *Europeans* started to do some innovation - you know it's a serious situation when that happens. The obvious next step for everyone was using AIs to develop nanotech. 
The holy grail of powerful military nanobots was achieved at roughly the same time, in late 2035 or early 2036, by the West, China, the UAE, and some random libertarian charter city in the Caribbean where a lot of open-source AI stuff had been going on outside the blanket of AI restrictions." "Military nanobots?! Did they just ... travel the earth in huge swarms, eating up everything in their way?" "No", Anthony says. "It's very inefficient energy-wise to do that. You want access to the most targetable energy with the least amount of technical complexity. Now, the easiest power source is of course the sun, and the biggest issue with that is that even without clouds, half the solar energy is absorbed or reflected between space and the ground. The most effective type of offence and defence is to pump absurd quantities of reflective, rotatable, solar-powered nanobots into the upper atmosphere. Then you can use them as a massive focal mirror, zapping anything on the ground, or any enemy nanobots entering your country's airspace. Turns out it's a defence-dominant game, though, so you just get World War I -style nanobot lines in the upper atmosphere, endlessly zapping each other but not making progress." "But that sounds like a massive waste of societal resources on a zero-sum game!" "First, so are a lot of other things, and second, it solved climate change." "I'm sorry, how - oh, I see. A huge number of reflective things in the upper atmosphere. Right. Right. You know, honestly I'm surprised this wasn't proposed at COP21 already. Obviously we solve climate change by blanketing the Earth with warring nanobots. How silly of my generation to not consider it." "You're catching on!" Anthony says. "Okay, but - there were no major environmental impacts from this?" "Look, by 2038, the real environmental issue was the self-replicating bot swarm that the nascent superintelligent AI from the libertarian charter city had launched into space, that had started an exponential growth process on track to disassemble the other planets within 15 years in order to build a Dyson sphere for itself that would block out enough sunlight to freeze the Earth." I turn pale. "You said it was 2045, so that was 7 years ago. So we have ... 8 years left until it all ends?" "Once again, you're such an alarmist!" Anthony says. "So remember, we got law-abiding AI. And of course, this AI was trained by companies in California, because no one ever leaves California even though everyone wants to, so the AIs follow Californian law. So we were all saved by SpaceX." "SpaceX's engineering genius beat the superintelligence at space tech? A win for humanity, I guess." "Not exactly. SpaceX sent the first crew to Mars in 2031. Now, under the 2004 SB 18 and 2014 AB 52 California bills, indigenous tribes need to be consulted on projects that impact their land, especially if it's culturally important or sacred. The SpaceX Mars colony successfully argued to the superintelligence that they count as an indigenous tribe on Mars, and the pristine Martian landscape is sacred for them. They were helped by a lot of newspaper articles about how "the billionaire-fuelled space race is the new religion of the tech elite", et cetera, et cetera." "Um." "This argument did not work on the other planets, since SpaceX did not have an existing presence there. 
However, SpaceX had moved their headquarters back to California, and a bill had been passed around the time that SpaceX first landed people on Mars that made space developments subject to the regulatory authority of the company's home state. SpaceX started a massive race to set up billionaire holiday homes on all of the planets - on the surface of Mercury, in the clouds of Venus, the moons of Jupiter, and so on. And once even a few people live there, California zoning law applies, and residents can file administrative challenges against any development that impairs their view. And disassembling the planet the house is built on is the definition of view impairment." "And SpaceX had the resources to do that?" "No, the US government had to subsidise SpaceX to the tune of several percentage points of GDP, in order for SpaceX to race to build enough inhabited houses on the planets that the superintelligence couldn't extract enough resources for a Dyson sphere without ruining someone's view. Also, Elon Musk threw a fit and held the world ransom for a bit until a few countries agreed to adopt Dogecoin as a reserve currency." "The world was saved because governments subsidised an interplanetary holiday home construction spree by tech billionaires, in order to get a rogue but law-abiding AI caught up with red tape from NIMBY regulations?" "Yep!" I close my eyes for a second, and take a deep breath. "Look", says Anthony. "Our world may have been pushed to feudalism by stirrups, and away from feudalism by the Black Death. We fought great power wars incessantly, until we were saved by building a bomb so humongous that the prospect of war became too horrible. Then we almost used it accidentally a bunch of times, including almost dropping it on ourselves, only to be stopped by random flukes of chance. History has always been more like this than you might think, and when history accelerates - well, what do you expect?" "Okay. Well. I guess I'm just happy that civilisation seems to have made it, without any horrible AI, nuclear, bio, climate, or fertility catastrophe." "Oh", Anthony says. "Actually, I forgot to mention. Um. There was a bit of a pandemic in 2035. You know, no one really did anything after COVID, and it was only a matter of time." "How big?" "The best death toll estimate is 1.3 billion. Including, um, most of your extended family. The doctor will come in shortly with the details. Also, everyone in this hospital has been tested recently, but we'll need to give you a vaccine and then quarantine you for two weeks before we can let you out of this room." I stare at him, open-mouthed. Anthony gives me a sympathetic grimace. "I'll turn on the holoTV for you. The doctor will be in shortly." He makes a gesture as he walks out, and part of the hospital wall vanishes and is replaced with a 3D view of a reporter on Capitol Hill. The headline banner at the bottom of the screen reads: > PRESIDENT ALTMAN'S AGENDA BACK ON TRACK DESPITE RECENT BREAKDOWN IN TALKS WITH THE UNITED AMISH PARTY
BarHSeciXJqzRuLzw_Survival_without_dignity.txt
{ "file_size": 27158 }