id | source | formatted_source | text |
|---|---|---|---|
4ad1b2bb-a577-4793-89d4-ccb63a5c3deb | trentmkelly/LessWrong-43k | LessWrong | Interpersonal Entanglement
Today I shall criticize yet another Utopia. This Utopia isn't famous in the literature. But it's considerably superior to many better-known Utopias—more fun than the Christian Heaven, or Greg Egan's upload societies, for example. And so the main flaw is well worth pointing out.
This Utopia consists of a one-line remark on an IRC channel:
> <reedspacer> living in your volcano lair with catgirls is probably a vast increase in standard of living for most of humanity
I've come to think of this as Reedspacer's Lower Bound.
Sure, it sounds silly. But if your grand vision of the future isn't at least as much fun as a volcano lair with catpersons of the appropriate gender, you should just go with that instead. This rules out a surprising number of proposals.
But today I am here to criticize Reedspacer's Lower Bound—the problem being the catgirls.
I've joked about the subject, now and then—"Donate now, and get a free catgirl or catboy after the Singularity!"—but I think it would actually be a terrible idea. In fact, today's post could have been entitled "Why Fun Theorists Don't Believe In Catgirls."
I first realized that catpeople were a potential threat, at the point when a friend said—quotes not verbatim—
"I want to spend a million years having sex with catgirls after the Singularity."
I replied,
"No, you don't."
He said, "Yes I do."
I said, "No you don't. You'd get bored."
He said, "Well, then I'd just modify my brain not to get bored—"
And I said: "AAAAIIIIIIEEEEEEEEE"
Don't worry, the story has a happy ending. A couple of years later, the same friend came back and said:
"Okay, I've gotten a bit more mature now—it's a long story, actually—and now I realize I wouldn't want to do that."
To which I sagely replied:
"HA! HA HA HA! You wanted to spend a million years having sex with catgirls. It only took you two years to change your mind and you didn't even have sex with any catgirls."
Now, this particular case was probably about scope insensiti |
a1e15778-4812-47af-97d5-54952333806c | trentmkelly/LessWrong-43k | LessWrong | Leveling Up in Rationality: A Personal Journey
See also: Reflections on rationality a year out
My favorite part of Lord of the Rings was skipped in both film adaptations. It occurs when our four hobbit heroes (Sam, Frodo, Merry and Pippin) return to the Shire and learn it has been taken over by a gang of ruffians. Merry assumes Gandalf will help them free their home, but Gandalf declines:
> I am not coming to the Shire. You must settle its affairs yourselves; that is what you have been trained for... My dear friends, you will need no help. You are grown up now. Grown indeed very high...
As it turns out, the hobbits have acquired many powers along their journey — powers they use to lead a resistance and free the Shire.
That is how I felt when I flew home for the holidays this December. Minnesota wasn't ruled by ruffians, but the familiar faces and places reminded me of the person I had been before I moved away, just a few years ago.
And I'm just so much more powerful than I used to be.
And in my case, at least, many of my newfound powers seem to come from having seriously leveled up in rationality.
POWER 0: CURIOSITY
I was always "curious," by which I mean I felt like I wanted to know things. I read lots of books and asked lots of questions. But I didn't really want to know the truth, because I didn't care enough about the truth to study, say, probability theory and the cognitive science of how we deceive ourselves. I just studied different Christian theologies — and, when I was really daring, different supernatural religions — and told myself that was what honest truth-seeking looked like.
It took 20 years for reality to pierce my comfortable, carefully cultivated bubble of Christian indoctrination. But when it finally popped, I realized I had (mostly) wasted my life thus far, and I was angry. Now I studied things not just for the pleasure of discovery and the gratifying feeling of caring about truth, but because I really wanted an accurate model of the world so I wouldn't do stupid things like was |
1a27797b-a92f-4ac9-9bac-ad0a9f5e2d94 | trentmkelly/LessWrong-43k | LessWrong | Manifold: If okay AGI, why?
Arguably the most important topic about which a prediction market has yet been run: Conditional on an okay outcome with AGI, how did that happen?
|
5e41b26b-e290-4ab1-b9fd-619639b57c35 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | There should be a public adversarial collaboration on AI x-risk
I think that [adversarial collaborations](https://en.wikipedia.org/wiki/Adversarial_collaboration) are a good way of understanding competing perspectives on an idea, especially if it is polarising or especially controversial.
The term was first [introduced by Daniel Kahneman](https://www.edge.org/adversarial-collaboration-daniel-kahneman). The basic idea is that two people with competing perspectives on an issue work together towards a joint belief. Two people working in good faith would be able to devise various experiments and discussions that clarify the idea and work towards a joint belief. (Kahneman uses the word "truth", but I think the word "belief" is more justified in this context).
AI x-risk is a good place to have a public adversarial collaboration
====================================================================
First, the issue is especially polarising. The [beliefs](https://forum.effectivealtruism.org/posts/8CM9vZ2nnQsWJNsHx/existential-risk-from-ai-survey-results) of people working on AI risk are that AI presents one of the greatest challenges to humanity's survival. On the other hand, AI research organisations, by revealed preference (they're going full speed ahead on building AI capabilities) and [stated preference](https://twitter.com/sama/status/1599113144282275840) (see this [survey](https://aiimpacts.org/what-do-ml-researchers-think-about-ai-in-2022/) too), think the risk is much lower.
In my opinion, having an adversarial collaboration between a top AI safety person (who works on x-risk from AI) and someone who does not think the x-risks are substantial would have clear benefits.
* It would make the lines of disagreement clearer. To me, an outsider in the space, it's not very clear where exactly people disagree and to what extent. This would clear that up and possibly provide a baseline for future debate.
* It would also legitimise x-risk concerns quite a bit if this was to be co-written by someone respected in the field.
* Finally, it would make both sides of the debate evaluate the other side clearly and see their own blindspots better. This would improve the overall epistemic quality of the AI x-risk debate.
How could this go wrong?
========================
* The main failure mode is that the parties writing it aren't doing it in good faith. If they're trying to write it out with the purpose of proving the other side wrong, it will fail terribly.
* The second failure mode is that the arguments on either side are based too much on thought experiments, and it is hard to find a resolution because there isn't much empirical grounding for either side. In Kahneman's example, even with actual experiments to draw on, both parties couldn't reach agreement for *8 years*. That's entirely possible with this as well.
Other key considerations
========================
* Finding the right people from both sides of the debate might be more difficult than I assume. I think there are people who can do it (eg. Richard Ngo and Jacob Buckman [have said that they have done](https://twitter.com/RichardMCNgo/status/1617323147002515456) it in private) and Boaz Barak and Ben Edelman have also [published](https://www.lesswrong.com/posts/zB3ukZJqt3pQDw9jz/ai-will-change-the-world-but-won-t-take-it-over-by-playing-3) a thoughtful critique (although not an adversarial collaboration), but it may be that they're too busy or aren't interested enough in doing it
* A similar version has been [done before](https://www.lesswrong.com/posts/WxW6Gc6f2z3mzmqKs/debate-on-instrumental-convergence-between-lecun-russell) and this might risk duplicating it. I don't think this is the case because the debate was hard to follow and not explicitly written with the intent of finding a joint belief. |
92af10c3-576e-4ee5-afc0-2806eddf3e45 | trentmkelly/LessWrong-43k | LessWrong | Book Review: Affective Neuroscience
After years of clumsily trying to pick up neuroscience by osmosis from papers, I finally got myself a real book — Affective Neuroscience, by Jaak Panksepp, published in 1998, about the neuroscience of emotions in the (animal and human) brain.
What surprised me at first was how controversial it apparently was, at the time, to study this subject at all.
Panksepp was battling a behaviorist establishment that believed animals did not have feelings. Apparently the rationale was that, since animals can’t speak to tell us how they feel, we can only infer or speculate that a rat feels “afraid” when exposed to predator odor, and such speculations are unscientific. (Even though the rat hides, flees, shows stressed body language, experiences elevated heart rate, and has similar patterns of brain activation as a human would in a dangerous situation; and even though all science is, to one degree or another, indirect inference based on observations.)
If anything, Panksepp believed, (and I agree) that the study of emotions in animals is often a better source of knowledge than psychological experiments on humans.
Human psychology experiments tend to be…indirect. For ethical reasons, we do not cut open healthy people’s brains; we do not genetically modify them, drug them, or lesion them to eliminate a neurotransmitter or destroy a brain region. We also generally do not put human experimental subjects in situations that provoke extremes of rage, terror, or lust. We give people questionnaires and little games, and monitor them noninvasively.
This means that we don’t get the kinds of unambiguous, clear-cut information one would ideally like about “this observable physical phenomenon in the brain is a necessary-and-sufficient condition for this emotional state.” We’re limited by people’s ability to self-report inaccurately, by the experimental interventions being imperfect proxies for real-world emotional situations, and by our small repertoire of safe ways to intervene directly on |
15d7db97-768b-4292-976d-8c06cdb2bf65 | trentmkelly/LessWrong-43k | LessWrong | Covid 11/18: Paxlovid Remains Illegal
I had already written most of this week’s post, and also spun out a way-too-long way-too-deep dive on what happened with a retracted FLCCC paper that I was planning to publish today but I can’t yet because the rabbit hole is still getting deeper, when Scott Alexander posted Ivermectin: Much More Than You Wanted To Know.
My reaction to that was a deep sense of gratitude, as I was worried I’d have to write something because someone had to and no one else would. Then Scott Alexander wrote it. So now I didn’t have to. I don’t agree with every decision he made, I definitely ‘have thoughts’ and I’ll probably end up writing about that, but it can wait for another week.
There’s a variety of other stuff going on as well, including a rise in cases that is unlikely to be going away any time soon, none of it terribly surprising. Paxlovid remains illegal and somehow this may not be easy, various mandates were imposed, children noticed it was nice not to wear masks, and those against vaccination have their latest crazy nonsense where they think they can ‘detox’ and undo the vaccination shot. What a time to be alive.
Executive Summary
1. Ivermectin: Much More Than You Wanted To Know.
2. Rise in cases is accelerating. Winter is coming.
3. Paxlovid remains illegal.
Let’s run the numbers.
The Numbers
Predictions
Prediction from last week (accidentally made on Monday 11/15, so it doesn’t count for prediction scoring purposes, but I didn’t get meaningful new data before then and it still attempts to represent my expectations at the time): 507k cases (+8%) and 7,690 deaths (-5%)
Results: 554k cases (+18%) and 7,427 deaths (-8%).
Prediction for next week: 654k cases (+18%) and 7,500 deaths (+1%).
Prediction for two weeks from now: We’ll see fewer deaths than normal due to the Thanksgiving holiday messing up reporting, and the counterfactual Zvi that didn’t put in this note likely would have forgotten to adjust for it.
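For readers who want to check the week-over-week percentages above, here is a minimal sketch in Python. The prior-week baseline counts (roughly 469k cases and 8,070 deaths) are back-derived from the stated percentages rather than quoted from the post, so treat them as assumptions for illustration only.

```python
# Minimal sketch of the week-over-week arithmetic behind the prediction/result lines.
# Baseline (prior-week) totals are back-derived from the stated percentages, not quoted
# from the post, so they are approximate.

def pct_change(current: float, previous: float) -> int:
    """Week-over-week percentage change, rounded to the nearest whole percent."""
    return round(100 * (current - previous) / previous)

prev_cases, prev_deaths = 469_000, 8_070  # assumed prior-week totals

print(pct_change(507_000, prev_cases))   # predicted cases:  +8%
print(pct_change(554_000, prev_cases))   # actual cases:     +18%
print(pct_change(7_690, prev_deaths))    # predicted deaths: -5%
print(pct_change(7_427, prev_deaths))    # actual deaths:    -8%
```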
Winter is coming. Boosters and child vaccinations |
b95be419-860f-4de6-b740-32415cfe1532 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Ngo and Yudkowsky on alignment difficulty
This post is the first in a series of transcribed Discord conversations between Richard Ngo and Eliezer Yudkowsky, moderated by Nate Soares. We've also added Richard and Nate's running summaries of the conversation (and others' replies) from Google Docs.
Later conversation participants include Ajeya Cotra, Beth Barnes, Carl Shulman, Holden Karnofsky, Jaan Tallinn, Paul Christiano, Rob Bensinger, and Rohin Shah.
The transcripts are a complete record of several Discord channels MIRI made for discussion. We tried to edit the transcripts as little as possible, other than to fix typos and a handful of confusingly-worded sentences, to add some paragraph breaks, and to add referenced figures and links. We didn't end up redacting any substantive content, other than the names of people who would prefer not to be cited. We swapped the order of some chat messages for clarity and conversational flow (indicated with extra timestamps), and in some cases combined logs where the conversation switched channels.
Color key:
| | | | |
| --- | --- | --- | --- |
| Chat by Richard and Eliezer | Other chat | Google Doc content | Inline comments |
0. Prefatory comments
=====================
| |
| --- |
| **[Yudkowsky][8:32]** **(Nov. 6 follow-up comment)** (At Rob's request I'll try to keep this brief, but this was an experimental format and some issues cropped up that seem large enough to deserve notes.) Especially when coming in to the early parts of this dialogue, I had some backed-up hypotheses about "What might be the main sticking point? and how can I address that?" which from the standpoint of a pure dialogue might seem to be causing me to go on digressions, relative to if I was just trying to answer Richard's own questions. On reading the dialogue, I notice that this looks evasive or like point-missing, like I'm weirdly not just directly answering Richard's questions. Often the questions are answered later, or at least I think they are, though it may not be in the first segment of the dialogue. But the larger phenomenon is that I came in with some things I wanted to say, and Richard came in asking questions, and there was a minor accidental mismatch there. It would have looked better if we'd both stated positions first without question marks, say, or if I'd just confined myself to answering questions from Richard. (This is not a huge catastrophe, but it's something for the reader to keep in mind as a minor hiccup that showed up in the early parts of experimenting with this new format.) |
| **[Yudkowsky][8:32]** **(Nov. 6 follow-up comment)** (Prompted by some later stumbles in attempts to summarize this dialogue. Summaries seem plausibly a major mode of propagation for a sprawling dialogue like this, and the following request seems like it needs to be very prominent to work - embedded requests later on didn't work.) Please don't summarize this dialogue by saying, "and so Eliezer's MAIN idea is that" or "and then Eliezer thinks THE KEY POINT is that" or "the PRIMARY argument is that" etcetera. From my perspective, everybody comes in with a different set of sticking points versus things they see as obvious, and the conversation I have changes drastically depending on that. In the old days this used to be the Orthogonality Thesis, Instrumental Convergence, and superintelligence being a possible thing at all; today most OpenPhil-adjacent folks have other sticking points instead. Please transform:
* "Eliezer's main reply is..." -> "Eliezer replied that..."
* "Eliezer thinks the key point is..." -> "Eliezer's point in response was..."
* "Eliezer thinks a major issue is..." -> "Eliezer replied that one issue is..."
* "Eliezer's primary argument against this is..." -> "Eliezer tried the counterargument that..."
* "Eliezer's main scenario for this is..." -> "In a conversation in September of 2021, Eliezer sketched a hypothetical where..."
Note also that the transformed statements say what you *observed,* whereas the untransformed statements are (often incorrect) *inferences* about my latent state of mind. (Though "distinguishing relatively unreliable inference from more reliable observation" is not necessarily *the key idea* here or *the one big reason* I'm asking for this. That's just one point I tried making - one argument that I hope might help drive home the larger thesis.) |
1. September 5 conversation
===========================
1.1. Deep vs. shallow problem-solving patterns
----------------------------------------------
| |
| --- |
| **[Ngo][11:00]** Hi all! Looking forward to the discussion. |
| **[Yudkowsky][11:01]** Hi and welcome all. My name is Eliezer and I think alignment is really actually quite extremely difficult. Some people seem to not think this! It's an important issue so ought to be resolved somehow, which we can hopefully fully do today. (I will however want to take a break after the first 90 minutes, if it goes that far and if Ngo is in sleep-cycle shape to continue past that.) |
| **[Ngo][11:02]** A break in 90 minutes or so sounds good. Here's one way to kick things off: I agree that humans trying to align arbitrarily capable AIs seems very difficult. One reason that I'm more optimistic (or at least, not confident that we'll have to face the full very difficult version of the problem) is that at a certain point AIs will be doing most of the work. When you talk about alignment being difficult, what types of AIs are you thinking about aligning? |
| **[Yudkowsky][11:04]** On my model of the Other Person, a lot of times when somebody thinks alignment shouldn't be that hard, they think there's some particular thing you can do to align an AGI, which isn't that hard, and their model is missing one of the foundational difficulties for why you can't do (easily or at all) one step of their procedure. So one of my own conversational processes might be to poke around looking for a step that the other person doesn't realize is hard. That said, I'll try to directly answer your own question first. |
| **[Ngo][11:07]** I don't think I'm confident that there's any particular thing you can do to align an AGI. Instead I feel fairly uncertain over a broad range of possibilities for how hard the problem turns out to be. And on some of the most important variables, it seems like evidence from the last decade pushes towards updating that the problem will be easier. |
| **[Yudkowsky][11:09]** I think that after AGI becomes possible at all and then possible to scale to dangerously superhuman levels, there will be, in the best-case scenario where a lot of other social difficulties got resolved, a 3-month to 2-year period where only a very few actors have AGI, meaning that it was socially possible for those few actors to decide to *not* just scale it to where it automatically destroys the world. During this step, if humanity is to survive, somebody has to perform some feat that causes the world to *not* be destroyed in 3 months or 2 years when too many actors have access to AGI code that will destroy the world if its intelligence dial is turned up. This requires that the first actor or actors to build AGI, be able to do *something* with that AGI which prevents the world from being destroyed; if it didn't require superintelligence, we could go do that thing right now, but no such human-doable act apparently exists so far as I can tell. So we want the least dangerous, most easily aligned thing-to-do-with-an-AGI, but it does have to be a pretty powerful act to prevent the automatic destruction of Earth after 3 months or 2 years. It has to "flip the gameboard" rather than letting the suicidal game play out. We need to align the AGI that performs this pivotal act, to perform that pivotal act without killing everybody. Parenthetically, no act powerful enough and gameboard-flipping enough to qualify is inside the Overton Window of politics, or possibly even of effective altruism, which presents a separate social problem. I usually dodge around this problem by picking an exemplar act which is powerful enough to actually flip the gameboard, but not the most alignable act because it would require way too many aligned details: Build self-replicating open-air nanosystems and use them (only) to melt all GPUs. Since any such nanosystems would have to operate in the full open world containing lots of complicated details, this would require tons and tons of alignment work, is not the pivotal act easiest to align, and we should do some other thing instead. But the other thing I have in mind is also outside the Overton Window, just like this is. So I use "melt all GPUs" to talk about the requisite power level and the Overton Window problem level, both of which seem around the right levels to me, but the actual thing I have in mind is more alignable; and this way, I can reply to anyone who says "How dare you?!" by saying "Don't worry, I don't actually plan on doing that." |
| **[Ngo][11:14]** One way that we could take this discussion is by discussing the pivotal act "make progress on the alignment problem faster than humans can". |
| **[Yudkowsky][11:15]** This sounds to me like it requires extreme levels of alignment and operating in extremely dangerous regimes, such that, if you could do that, it would seem much more sensible to do some other pivotal act first, using a lower level of alignment tech. |
| **[Ngo][11:16]** Okay, this seems like a crux on my end. |
| **[Yudkowsky][11:16]** In particular, I would hope that - in unlikely cases where we survive at all - we were able to survive by operating a superintelligence only in the lethally dangerous, but still less dangerous, regime of "engineering nanosystems". Whereas "solve alignment for us" seems to require operating in the even more dangerous regimes of "write AI code for us" and "model human psychology in tremendous detail". |
| **[Ngo][11:17]** What makes these regimes so dangerous? Is it that it's very hard for humans to exercise oversight? One thing that makes these regimes seem less dangerous to me is that they're broadly in the domain of "solving intellectual problems" rather than "achieving outcomes in the world". |
| **[Yudkowsky][11:19][11:21]** Every AI output *effectuates* outcomes in the world. If you have a powerful unaligned mind hooked up to outputs that can start causal chains that effectuate dangerous things, it doesn't matter whether the comments on the code say "intellectual problems" or not. The danger of "solving an intellectual problem" is when it requires a powerful mind to think about domains that, when solved, render very cognitively accessible strategies that can do dangerous things. |
| I expect the first alignment solution you can actually deploy in real life, in the unlikely event we get a solution at all, looks like 98% "don't think about all these topics that we do not absolutely need and are adjacent to the capability to easily invent very dangerous outputs" and 2% "actually think about this dangerous topic but please don't come up with a strategy inside it that kills us". |
| **[Ngo][11:21][11:22]** Let me try and be more precise about the distinction. It seems to me that systems which have been primarily trained to make predictions about the world would by default lack a lot of the cognitive machinery which humans use to take actions which pursue our goals. |
| Perhaps another way of phrasing my point is something like: it doesn't seem implausible to me that we build AIs that are significantly more intelligent (in the sense of being able to understand the world) than humans, but significantly less agentic. Is this a crux for you? (obviously "agentic" is quite underspecified here, so maybe it'd be useful to dig into that first) |
| **[Yudkowsky][11:27][11:33]** I would certainly have learned very new and very exciting facts about intelligence, facts which indeed contradict my present model of how intelligences liable to be discovered by present research paradigms work, if you showed me... how can I put this in a properly general way... that problems I thought were about searching for states that get fed into a result function and then a result-scoring function, such that the input gets an output with a high score, were in fact not about search problems like that. I have sometimes given more specific names to this problem setup, but I think people have become confused by the terms I usually use, which is why I'm dancing around them. In particular, just as I have a model of the Other Person's Beliefs in which they think alignment is easy because they don't know about difficulties I see as very deep and fundamental and hard to avoid, I also have a model in which people think "why not just build an AI which does X but not Y?" because they don't realize what X and Y have in common, which is something that draws deeply on having deep models of intelligence. And it is hard to convey this deep theoretical grasp. But you can also see powerful practical hints that these things are much more correlated than, eg, Robin Hanson was imagining during the [FOOM debate](https://intelligence.org/ai-foom-debate/), because Robin did not think something like GPT-3 should exist; Robin thought you should need to train lots of specific domains that didn't generalize. I argued then with Robin that it was something of a hint that humans had visual cortex and cerebellar cortex but not Car Design Cortex, in order to design cars. Then in real life, it proved that reality was far to the Eliezer side of Eliezer on the [Eliezer-Robin axis](https://intelligence.org/2017/10/20/alphago/), and things like GPT-3 were built with *less* architectural complexity and generalized *more* than I was arguing to Robin that complex architectures should generalize over domains. |
| The metaphor I sometimes use is that it is very hard to build a system that drives cars painted red, but is not at all adjacent to a system that could, with a few alterations, prove to be very good at driving a car painted blue. The "drive a red car" problem and the "drive a blue car" problem have too much in common. You can maybe ask, "Align a system so that it has the capability to drive red cars, but refuses to drive blue cars." You can't make a system that is very good at driving red-painted cars, but lacks the basic capability to drive blue-painted cars because you never trained it on that. The patterns found by gradient descent, by genetic algorithms, or by other plausible methods of optimization, for driving red cars, would be patterns very close to the ones needed to drive blue cars. When you optimize for red cars you get the blue car *capability* whether you like it or not. |
| **[Ngo][11:32]** Does your model of intelligence rule out building AIs which make dramatic progress in mathematics without killing us all? |
| **[Yudkowsky][11:34][11:39]** If it were possible to perform some pivotal act that saved the world with an AI that just made progress on proving mathematical theorems, without, eg, needing to explain those theorems to humans, I'd be *extremely* interested in that as a potential pivotal act. We wouldn't be out of the woods, and I wouldn't actually know how to build an AI like that without killing everybody, but it would immediately trump everything else as the obvious line of research to pursue. Parenthetically, there is very very little which my model of intelligence *rules out*. I think we all die because we cannot do certain dangerous things correctly, *on the very first try in the dangerous regimes where one mistake kills you*, and do them *before* proliferation of much easier technologies kills us. If you have the Textbook From 100 Years In The Future that gives the simple robust solutions for everything, that actually work, you can write a superintelligence that thinks 2 + 2 = 5 because the Textbook gives the methods for doing that which are simple and actually work in practice in real life. |
| (The Textbook has the equivalent of "use ReLUs instead of sigmoids" everywhere, and avoids all the clever-sounding things that will work at subhuman levels and blow up when you run them at superintelligent levels.) |
| **[Ngo][11:36][11:40]** Hmm, so suppose we train an AI to prove mathematical theorems when given them, perhaps via some sort of adversarial setter-solver training process. By default I have the intuition that this AI could become extremely good at proving theorems - far beyond human level - without having goals about real-world outcomes. |
| It seems to me that in your model of intelligence, being able to do tasks like mathematics is closely coupled with trying to achieve real-world outcomes. But I'd actually take GPT-3 as some evidence against this position (although still evidence in favour of your position over Hanson's), since it seems able to do a bunch of reasoning tasks while still not being very agentic. There's some alternative world where we weren't able to train language models to do reasoning tasks without first training them to perform tasks in complex RL environments, and in that world I'd be significantly less optimistic. |
| **[Yudkowsky][11:41]** I put to you that there is a predictable bias in your estimates, where you don't know about the Deep Stuff that is required to prove theorems, so you imagine that certain cognitive capabilities are more disjoint than they actually are. If you knew about the things that humans are using to reuse their reasoning about chipped handaxes and other humans, to prove math theorems, you would see it as more plausible that proving math theorems would generalize to chipping handaxes and manipulating humans. GPT-3 is a... complicated story, on my view of it and intelligence. We're looking at an interaction between tons and tons of memorized shallow patterns. GPT-3 is *very* unlike the way that natural selection built humans. |
| **[Ngo][11:44]** I agree with that last point. But this is also one of the reasons that I previously claimed that AIs could be more intelligent than humans while being less agentic, because there are systematic differences between the way in which natural selection built humans, and the way in which we'll train AGIs. |
| **[Yudkowsky][11:45]** My current suspicion is that Stack More Layers alone is not going to take us to GPT-6 which is a true AGI; and this is because of the way that GPT-3 is, in your own terminology, "not agentic", and which is, in my terminology, not having gradient descent on GPT-3 run across sufficiently deep problem-solving patterns. |
| **[Ngo][11:46]** Okay, that helps me understand your position better. So here's one important difference between humans and neural networks: humans face the genomic bottleneck which means that each individual has to rederive all the knowledge about the world that their parents already had. If this genetic bottleneck hadn't been so tight, then individual humans would have been significantly less capable of performing novel tasks. |
| **[Yudkowsky][11:50]** I agree. |
| **[Ngo][11:50]** In my terminology, this is a reason that humans are "more agentic" than we otherwise would have been. |
| **[Yudkowsky][11:50]** This seems indisputable. |
| **[Ngo][11:51]** Another important difference: humans were trained in environments where we had to run around surviving all day, rather than solving maths problems etc. |
| **[Yudkowsky][11:51]** I continue to nod. |
| **[Ngo][11:52]** Supposing I agree that reaching a certain level of intelligence will require AIs with the "deep problem-solving patterns" you talk about, which lead AIs to try to achieve real-world goals. It still seems to me that there's likely a lot of space between that level of intelligence, and human intelligence. And if that's the case, then we could build AIs which help us solve the alignment problem before we build AIs which instantiate sufficiently deep problem-solving patterns that they decide to take over the world. Nor does it seem like the reason *humans* want to take over the world is because of a deep fact about our intelligence. It seems to me that humans want to take over the world mainly because that's very similar to things we evolved to do (like taking over our tribe). |
| **[Yudkowsky][11:57]** So here's the part that I agree with: If there were one theorem only mildly far out of human reach, like proving the ABC Conjecture (if you think it hasn't already been proven), and providing a machine-readable proof of this theorem would immediately save the world - say, aliens will give us an aligned superintelligence, as soon as we provide them with this machine-readable proof - then there would exist a plausible though not certain road to saving the world, which would be to try to build a *shallow* mind that proved the ABC Conjecture by memorizing tons of relatively shallow patterns for mathematical proofs learned through self-play; without that system ever abstracting math as deeply as humans do, but the sheer width of memory and sheer depth of search sufficing to do the job. I am not sure, to be clear, that this would work. But my model of intelligence does not rule it out. |
| **[Ngo][11:58]** (I'm actually thinking of a mind which understands maths more deeply than humans - but perhaps only understands maths, or perhaps also a range of other sciences better than humans.) |
| **[Yudkowsky][12:00]** Parts I disagree with: That "help us solve alignment" bears any significant overlap with "provide us a machine-readable proof of the ABC Conjecture without thinking too deeply about it". That humans want to take over the world only because it resembles things we evolved to do. |
| **[Ngo][12:01]** I definitely agree that humans don't *only* want to take over the world because it resembles things we evolved to do. |
| **[Yudkowsky][12:02]** Alas, eliminating 5 reasons why something would go wrong doesn't help much if there's 2 remaining reasons something would go wrong that are much harder to eliminate! |
| **[Ngo][12:02]** But if we imagine having a human-level intelligence which *hadn't* evolved primarily to do things that reasonably closely resembled taking over the world, then I expect that we could ask that intelligence questions in a fairly safe way. And that's also true for an intelligence that is noticeably above human level. So one question is: how far above human level could we get before a system which has only been trained to do things like answer questions and understand the world will decide to take over the world? |
| **[Yudkowsky][12:04]** I think this is one of the very rare cases where the intelligence difference between "village idiot" and "Einstein", which I'd usually see as very narrow, makes a structural difference! I think you can get some outputs from a village-idiot-level AGI, which got there by training on domains exclusively like math, and this will proooobably not destroy the world (*if* you were right about that, about what was going on inside). I have more concern about the Einstein level. |
| **[Ngo][12:05]** Let's focus on the Einstein level then. Human brains have been optimised very little for doing science. This suggests that building an AI which is Einstein-level at doing science is significantly easier than building an AI which is Einstein-level at taking over the world (or other things which humans evolved to do). |
| **[Yudkowsky][12:08]** I think there's a certain broad sense in which I agree with the literal truth of what you just said. You will systematically overestimate *how much* easier, or how far you can push the science part without getting the taking-over-the-world part, for as long as your model is ignorant of what they have in common. |
| **[Ngo][12:08]** Maybe this is a good time to dig into the details of what they have in common, then. |
| **[Yudkowsky][12:09][12:11][12:13]** I feel like I haven't had much luck with trying to explain that on previous occasions. Not to you, to others too. There are shallow topics like why p-zombies can't be real and how quantum mechanics works and why science ought to be using likelihood functions instead of p-values, and I can *barely* explain those to *some* people, but then there are some things that are apparently much harder to explain than that and which defeat my abilities as an explainer. |
| That's why I've been trying to point out that, even if you don't know the specifics, there's an estimation bias that you can realize should exist in principle. |
| Of course, I also haven't had much luck in saying to people, "Well, even if you don't know the truth about X that would let you see Y, can you not see by abstract reasoning that knowing *any* truth about X would predictably cause you to update in the direction of Y" - people don't seem to actually internalize that much either. Not you, other discussions. |
| **[Ngo][12:10][12:11][12:13]** Makes sense. Are there ways that I could try to make this easier? E.g. I could do my best to explain what I think your position is. Given what you've said I'm not optimistic about this helping much. |
| But insofar as this is the key set of intuitions which has been informing your responses, it seems worth a shot. Another approach would be to focus on our predictions for how AI capabilities will play out over the next few years. |
| I take your point about my estimation bias. To me it feels like there's also a bias going the other way, which is that as long as we don't know the mechanisms by which different human capabilities work, we'll tend to lump them together as one thing. |
| **[Yudkowsky][12:14]** Yup. If you didn't know about visual cortex and auditory cortex, or about eyes and ears, you would assume much more that any sentience ought to both see and hear. |
| **[Ngo][12:16]** So then my position is something like: human pursuit of goals is driven by emotions and reward signals which are deeply evolutionarily ingrained, and without those we'd be much safer but not that much worse at pattern recognition. |
| **[Yudkowsky][12:17]** If there's a pivotal act you can get just by supreme acts of pattern recognition, that's right up there with "pivotal act composed solely of math" for things that would obviously instantly become the prime direction of research. |
| **[Ngo][12:18]** To me it seems like maths is *much more* about pattern recognition than, say, being a CEO. Being a CEO requires coherence over long periods of time; long-term memory; motivation; metacognition; etc. |
| **[Yudkowsky][12:18][12:23]** (One occasionally-argued line of research can be summarized from a certain standpoint as "how about a pivotal act composed entirely of predicting text" and to this my reply is "you're trying to get fully general AGI capabilities by predicting text that is *about* deep / 'agentic' reasoning, and that doesn't actually help".) Human math is very much about goals. People want to prove subtheorems on the way to proving theorems. We might be able to make a *different* kind of mathematician that works more like GPT-3 in the dangerously inscrutable parts that are all noninspectable vectors of floating-point numbers, but even there you'd need some Alpha-Zero-like outer framework to supply the direction of search. |
| That outer framework might be able to be powerful enough without being reflective, though. So it would plausibly be *much easier* to build a mathematician that was capable of superhuman formal theorem-proving but not agentic. The reality of the world might tell us "lolnope" but my model of intelligence doesn't mandate that. That's why, if you gave me a pivotal act composed entirely of "output a machine-readable proof of this theorem and the world is saved", I would pivot there! It actually does seem like it would be a lot easier! |
| **[Ngo][12:21][12:25]** Okay, so if I attempt to rephrase your argument: |
| Your position: There's a set of fundamental similarities between tasks like doing maths, doing alignment research, and taking over the world. In all of these cases, agents based on techniques similar to modern ML which are very good at them will need to make use of deep problem-solving patterns which include goal-oriented reasoning. So while it's possible to beat humans at some of these tasks without those core competencies, people usually overestimate the extent to which that's possible. |
| **[Yudkowsky][12:25]** Remember, a lot of my concern is about what happens *first*, especially if it happens soon enough that future AGI bears any resemblance whatsoever to modern ML; not about what can be done in principle. |
| **[Soares][12:26]** (Note: it's been 85 min, and we're planning to take a break at 90min, so this seems like a good point for a little bit more clarifying back-and-forth on Richard's summary before a break.) |
| **[Ngo][12:26]** I'll edit to say "plausible for ML techniques"? (and "extent to which that's plausible") |
| **[Yudkowsky][12:28]** I think that obvious-to-me future outgrowths of modern ML paradigms are *extremely* liable to, if they can learn how to do sufficiently superhuman X, generalize to taking over the world. How fast this happens does depend on X. It would plausibly happen relatively slower (at higher levels) with theorem-proving as the X, and with architectures that carefully stuck to gradient-descent-memorization over shallow network architectures to do a pattern-recognition part with search factored out (sort of, this is not generally safe, this is not a general formula for safe things!); rather than imposing anything like the genetic bottleneck you validly pointed out as a reason why humans generalize. Profitable X, and all X I can think of that would actually save the world, seem much more problematic. |
| **[Ngo][12:30]** Okay, happy to take a break here. |
| **[Soares][12:30]** Great timing! |
| **[Ngo][12:30]** We can do a bit of meta discussion afterwards; my initial instinct is to push on the question of how similar Eliezer thinks alignment research is to theorem-proving. |
| **[Yudkowsky][12:30]** Yup. This is my lunch break (actually my first-food-of-day break on a 600-calorie diet) so I can be back in 45min if you're still up for that. |
| **[Ngo][12:31]** Sure. Also, if any of the spectators are reading in real time, and have suggestions or comments, I'd be interested in hearing them. |
| **[Yudkowsky][12:31]** I'm also cheerful about spectators posting suggestions or comments during the break. |
| **[Soares][12:32]** Sounds good. I declare us on a break for 45min, at which point we'll reconvene (for another 90, by default). Floor's open to suggestions & commentary. |
1.2. Requirements for science
-----------------------------
| |
| --- |
| **[Yudkowsky][12:50]** I seem to be done early if people (mainly Richard) want to resume in 10min (30m break) |
| **[Ngo][12:51]** Yepp, happy to do so |
| **[Soares][12:57]** Some quick commentary from me:
* It seems to me like we're exploring a crux in the vicinity of "should we expect that systems capable of executing a pivotal act would, by default in lieu of significant technical alignment effort, be using their outputs to optimize the future".
* I'm curious whether you two agree that this is a crux (but plz don't get side-tracked answering me).
* The general discussion seems to be going well to me.
+ In particular, huzzah for careful and articulate efforts to zero in on cruxes.
|
| **[Ngo][13:00]** I think that's a crux for the specific pivotal act of "doing better alignment research", and maybe some other pivotal acts, but not all (or necessarily most) of them. |
| **[Yudkowsky][13:01]** I should also say out loud that I've been working a bit with Ajeya on making an attempt to convey the intuitions behind there being deep patterns that generalize and are liable to be learned, which covered a bunch of ground, taught me how much ground there was, and made me relatively more reluctant to try to re-cover the same ground in this modality. |
| **[Ngo][13:02]** Going forward, a couple of things I'd like to ask Eliezer about:
* In what ways are the tasks that are most useful for alignment similar or different to proving mathematical theorems (which we agreed might generalise relatively slowly to taking over the world)?
* What are the deep problem-solving patterns underlying these tasks?
* Can you summarise my position?
I was going to say that I was most optimistic about #2 in order to get these ideas into a public format. But if that's going to happen anyway based on Ajeya's work, then that seems less important |
| **[Yudkowsky][13:03]** I could still try briefly and see what happens. |
| **[Ngo][13:03]** That seems valuable to me, if you're up for it. At the same time, I'll try to summarise some of my own intuitions about intelligence which I expect to be relevant. |
| **[Yudkowsky][13:04]** I'm not sure I could summarize your position in a non-straw way. To me there's a huge visible distance between "solve alignment for us" and "output machine-readable proofs of theorems" where I can't give a good account of why you think talking about the latter would tell us much about the former. I don't know what other pivotal act you think might be easier. |
| **[Ngo][13:06]** I see. I was considering "solving scientific problems" as an alternative to "proving theorems", with alignment being one (particularly hard) example of a scientific problem. But decided to start by discussing theorem-proving since it seemed like a clearer-cut case. |
| **[Yudkowsky][13:07]** Can you predict in advance why Eliezer thinks "solving scientific problems" is significantly thornier? (Where alignment is like totally not "a particularly hard example of a scientific problem" except in the sense that it has science in it at all; which is maybe the real crux; but also a more difficult issue.) |
| **[Ngo][13:09]** Based on some of your earlier comments, I'm currently predicting that you think the step where the solutions need to be legible to and judged by humans makes science much thornier than theorem-proving, where the solutions are machine-checkable. |
| **[Yudkowsky][13:10]** That's one factor. Should I state the other big one or would you rather try to state it first? |
| **[Ngo][13:10]** Requiring a lot of real-world knowledge for science? If it's not that, go ahead and say it. |
| **[Yudkowsky][13:11]** That's one way of stating it. The way I'd put it is that it's about making up hypotheses about the real world. Like, the real world is then a thing that the AI is modeling, at all. Factor 3: On many interpretations of doing science, you would furthermore need to think up experiments. That's planning, value-of-information, search for an experimental setup whose consequences distinguish between hypotheses (meaning you're now searching for initial setups that have particular causal consequences). |
| **[Ngo][13:12]** To me "modelling the real world" is a very continuous variable. At one end you have physics equations that are barely separable from maths problems, at the other end you have humans running around in physical bodies. To me it seems plausible that we could build an agent which solves scientific problems but has very little self-awareness (in the sense of knowing that it's an AI, knowing that it's being trained, etc). I expect that your response to this is that modelling oneself is part of the deep problem-solving patterns which AGIs are very likely to have. |
| **[Yudkowsky][13:15]** There's a problem of *inferring the causes of sensory experience* in cognition-that-does-science. (Which, in fact, also appears in the way that humans do math, and is possibly inextricable from math in general; but this is an example of the sort of deep model that says "Whoops I guess you get science from math after all", not a thing that makes science less dangerous because it's more like just math.) You can build an AI that only ever drives red cars, and which, at no point in the process of driving a red car, ever needs to drive a blue car in order to drive a red car. That doesn't mean its red-car-driving capabilities won't be extremely close to blue-car-driving capabilities if at any point the internal cognition happens to get pointed towards driving a blue car. The fact that there's a deep car-driving pattern which is the same across red cars and blue cars doesn't mean that the AI has ever driven a blue car, per se, or that it has to drive blue cars to drive red cars. But if blue cars are fire, you sure are playing with that fire. |
| **[Ngo][13:18]** To me, "sensory experience" as in "the video and audio coming in from this body that I'm piloting" and "sensory experience" as in "a file containing the most recent results of the large hadron collider" are very very different. (I'm not saying we could train an AI scientist just from the latter - but plausibly from data that's closer to the latter than the former) |
| **[Yudkowsky][13:19]** So there's separate questions about "does an AGI *inseparably need* to model itself inside the world to do science" and "did we build something that would be very close to modeling itself, and could easily stumble across that by accident somewhere in the inscrutable floating-point numbers, especially if that was even slightly useful for solving the outer problems". |
| **[Ngo][13:19]** Hmm, I see |
| **[Yudkowsky][13:20][13:21][13:21]** If you're trying to build an AI that literally does science only to observations collected without the AI having had a causal impact on those observations, that's legitimately "more dangerous than math but maybe less dangerous than active science". |
| You might still stumble across an active scientist because it was a simple internal solution to something, but the outer problem would be legitimately stripped of an important structural property the same way that pure math not describing Earthly objects is stripped of important structural properties. |
| And of course my reaction again is, "There is no pivotal act which uses only that cognitive capability." |
| **[Ngo][13:20][13:21][13:26]** I guess that my (fairly strong) prior here is that something like self-modelling, which is very deeply built into basically every organism, is a very hard thing for an AI to stumble across by accident without significant optimisation pressure in that direction. |
| But I'm not sure how to argue this except by digging into your views on what the deep problem-solving patterns are. So if you're still willing to briefly try and explain those, that'd be useful to me. |
| "Causal impact" again seems like a very continuous variable - it seems like the *amount* of causal impact you need to do good science is much less than the amount which is needed to, say, be a CEO. |
| **[Yudkowsky][13:26]** The amount doesn't seem like the key thing, nearly so much as what underlying facilities you need to do whatever amount of it you need. |
| **[Ngo][13:27]** Agreed. |
| **[Yudkowsky][13:27]** If you go back to the 16th century and ask for just one mRNA vaccine, that's not much of a difference from asking for a ~~million~~ hundred of them. |
| **[Ngo][13:28]** Right, so the additional premise which I'm using here is that the ability to reason about causally impacting the world in order to achieve goals is something that you can have a little bit of. Or a lot of, and that the difference between these might come down to the training data used. Which at this point I don't expect you to agree with. |
| **[Yudkowsky][13:29]** If you have reduced a pivotal act to "look over the data from this hadron collider you neither built nor ran yourself", that really is a structural step down from "do science" or "build a nanomachine". But I can't see any pivotal acts like that, so is that question much of a crux? If there's intermediate steps they might be described in my native language like "reason about causal impacts across only this one preprogrammed domain which you didn't learn in a general way, in only this part of the cognitive architecture that is separable from the rest of the cognitive architecture". |
| **[Ngo][13:31]** Perhaps another way of phrasing this intermediate step is that the agent has a shallow understanding of how to induce causal impacts. |
| **[Yudkowsky][13:31]** What is "shallow" to you? |
| **[Ngo][13:31]** In a similar way to how you claim that GPT-3 has a shallow understanding of language. |
| **[Yudkowsky][13:32]** So it's memorized a ton of shallow causal-impact-inducing patterns from a large dataset, and this can be verified by, for example, presenting it with an example mildly outside the dataset and watching it fail, which we think will confirm our hypothesis that it didn't learn any deep ways of solving that dataset. |
| **[Ngo][13:33]** Roughly speaking, yes. |
| **[Yudkowsky][13:34]** Eg, it wouldn't surprise us at all if GPT-4 had learned to predict "27 \* 18" but not "what is the area of a rectangle 27 meters by 18 meters"... is what I'd like to say, but Codex sure did demonstrate those two were kinda awfully proximal. |
| **[Ngo][13:34]** Here's one way we could flesh this out. Imagine an agent that loses coherence quickly when it's trying to act in the world. So for example, we've trained it to do scientific experiments over a period of a few hours or days. And then it's very good at understanding the experimental data and extracting patterns from it. But upon running it for a week or a month, it loses coherence in a similar way to how GPT-3 loses coherence - e.g. it forgets what it's doing. My story for why this might happen is something like: there is a specific skill of having long-term memory, and we never trained our agent to have this skill, and so it has not acquired that skill (even though it can reason in very general and powerful ways in the short term). This feels similar to the argument I was making before about how an agent might lack self-awareness, if we haven't trained it specifically to have that. |
| **[Yudkowsky][13:39]** There's a set of obvious-to-me tactics for doing a pivotal act with minimal danger, which I do not think collectively make the problem safe, and one of these sets of tactics is indeed "Put a limit on the 'attention window' or some other internal parameter, ramp it up slowly, don't ramp it any higher than you needed to solve the problem." |
| **[Ngo][13:41]** You could indeed do this manually, but my expectation is that you could also do this automatically, by training agents in environments where they don't benefit from having long attention spans. |
| **[Yudkowsky][13:42]** (Any time one imagines a specific tactic of this kind, if one has the [security mindset](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/), one can also imagine all sorts of ways it might go wrong; for example, an attention window can be defeated if there's any aspect of the attended data or the internal state that ended up depending on past events in a way that leaked info about them. But, depending on how much superintelligence you were throwing around elsewhere, you could maybe get away with that, some of the time.) |
| **[Ngo][13:43]** And that if you put agents in environments where they answer questions but don't interact much with the physical world, then there will be many different traits which are necessary for achieving goals in the real world which they will lack, because there was little advantage to the optimiser of building those traits in. |
| **[Yudkowsky][13:43]** I'll observe that TransformerXL built an attention window that generalized, trained it on I think 380 tokens or something like that, and then found that it generalized to 4000 tokens or something like that. |
| **[Ngo][13:43]** Yeah, an order of magnitude of generalisation is not surprising to me. |
| **[Yudkowsky][13:44]** Having observed one order of magnitude, I would personally not be surprised by two orders of magnitude either, after seeing that. |
| **[Ngo][13:45]** I'd be a little surprised, but I assume it would happen eventually. |
1.3. Capability dials
---------------------
| |
| --- |
| **[Yudkowsky][13:46]** I have a sense that this is all circling back to the question, "But what is it we *do* with the intelligence thus weakened?" If you can save the world using a rock, I can build you a very safe rock. |
| **[Ngo][13:46]** Right. So far I've said "alignment research", but I haven't been very specific about it. I guess some context here is that I expect that the first things we do with intelligence similar to this is create great wealth, produce a bunch of useful scientific advances, etc. And that we'll be in a world where people take the prospect of AGI much more seriously |
| **[Yudkowsky][13:48]** I mostly expect - albeit with some chance that reality says "So what?" to me and surprises me, because it is not as solidly determined as some other things - that we do not hang around very long in the "weirdly ~human AGI" phase before we get into the "if you crank up this AGI it destroys the world" phase. Less than 5 years, say, to put numbers on things. It would not surprise me in the least if the world ends before self-driving cars are sold on the mass market. On some quite plausible scenarios which I think have >50% of my probability mass at the moment, research AGI companies would be able to produce prototype car-driving AIs if they spent time on that, given the near-world-ending tech level; but there will be Many Very Serious Questions about this relatively new unproven advancement in machine learning being turned loose on the roads. And their AGI tech will gain the property "can be turned up to destroy the world" before Earth gains the property "you're allowed to sell self-driving cars on the mass market" because there just won't be much time. |
| **[Ngo][13:52]** Then I expect that another thing we do with this is produce a very large amount of data which rewards AIs for following human instructions. |
| **[Yudkowsky][13:52]** On other scenarios, of course, self-driving becomes possible by limited AI well before things start to break (further) on AGI. And on some scenarios, the way you got to AGI was via some breakthrough that is already scaling pretty fast, so by the time you can use the tech to get self-driving cars, that tech already ends the world if you turn up the dial, or that event follows very swiftly. |
| **[Ngo][13:53]** When you talk about "cranking up the AGI", what do you mean? Using more compute on the same data? |
| **[Yudkowsky][13:53]** Running it with larger bounds on the for loops, over more GPUs, to be concrete about it. |
| **[Ngo][13:53]** In an RL setting, or a supervised or unsupervised learning setting? Also: can you elaborate on the for loops? |
| **[Yudkowsky][13:56]** I do not quite think that gradient descent on Stack More Layers alone - as used by OpenAI for GPT-3, say, and as *opposed* to Deepmind which builds more complex artifacts like Mu Zero or AlphaFold 2 - is liable to be the first path taken to AGI. I am reluctant to speculate more in print about clever ways to AGI, and I think any clever person out there will, if they are really clever and not just a fancier kind of stupid, not talk either about what they think is missing from Stack More Layers or how you would really get AGI. That said, the way that you cannot just run GPT-3 at a greater search depth, the way you can run Mu Zero at a greater search depth, is part of why I think that AGI is not likely to look *exactly* like GPT-3; the thing that kills us is likely to be a thing that can get more dangerous when you turn up a dial on it, not a thing that intrinsically has no dials that can make it more dangerous. |
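A minimal sketch of what "running it with larger bounds on the for loops" can mean for a system that, like Mu Zero, can be run at greater search depth. This is generic brute-force lookahead, not Mu Zero's actual MCTS-with-a-learned-model; `actions`, `transition`, and `score` are stand-in functions assumed to be supplied by the reader, and every state is assumed to have at least one available action.

```python
def best_action(state, actions, transition, score, depth):
    """Generic exhaustive lookahead. `depth` bounds the search: the same code,
    run with a larger bound on its loops (and more hardware behind it), sweeps
    a vastly larger space of futures and returns correspondingly stronger
    plans. That single integer is the kind of dial being gestured at above."""
    def value(s, d):
        if d == 0:
            return score(s)                            # evaluate leaf states
        return max(value(transition(s, a), d - 1) for a in actions(s))
    return max(actions(state), key=lambda a: value(transition(state, a), depth - 1))
```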
1.4. Consequentialist goals vs. deontologist goals
--------------------------------------------------
| |
| --- |
| **[Ngo][13:59]** Hmm, okay. Let's take a quick step back and think about what would be useful for the last half hour. I want to flag that my intuitions about pivotal acts are not very specific; I'm quite uncertain about how the geopolitics of that situation would work, as well as the timeframe between somewhere-near-human-level AGI and existential risk AGI. So we could talk more about this, but I expect there'd be a lot of me saying "well we can't rule out that X happens", which is perhaps not the most productive mode of discourse. A second option is digging into your intuitions about how cognition works. |
| **[Yudkowsky][14:03]** Well, obviously, in the limit of alignment not being accessible to our civilization, and my successfully building a model weaker than reality which nonetheless correctly rules out alignment being accessible to our civilization, I could spend the rest of my short remaining lifetime arguing with people whose models are weak enough to induce some area of ignorance where for all they know you could align a thing. But that is predictably how conversations go in possible worlds where the Earth is doomed; so somebody wiser on the meta-level, though also ignorant on the object-level, might prefer to ask: "Where do you think your knowledge, rather than your ignorance, says that alignment ought to be doable and you will be surprised if it is not?" |
| **[Ngo][14:07]** That's a fair point. Although it seems like a structural property of the "pivotal act" framing, which builds in doom by default. |
| **[Yudkowsky][14:08]** We could talk about that, if you think it's a crux. Though I'm also not thinking that this whole conversation gets done in a day, so maybe for publishability reasons we should try to focus more on one line of discussion? But I do think that lots of people get their optimism by supposing that the world can be saved by doing less dangerous things with an AGI. So it's a big ol' crux of mine on priors. |
| **[Ngo][14:09]** Agreed that one line of discussion is better; I'm happy to work within the pivotal act framing for current purposes. A third option is that I make some claims about how cognition works, and we see how much you agree with them. |
| **[Yudkowsky][14:12]** (Though it's something of a restatement, a reason I'm not going into "my intuitions about how cognition works" is that past experience has led me to believe that conveying this info in a form that the Other Mind will actually absorb and operate, is really quite hard and takes a long discussion, relative to my current abilities to Actually Explain things; it is the sort of thing that might take doing homework exercises to grasp how one structure is appearing in many places, as opposed to just being flatly told that to no avail, and I have not figured out the homework exercises.) I'm cheerful about hearing your own claims about cognition and disagreeing with them. |
| **[Ngo][14:12]** Great. Okay, so one claim is that something like deontology is a fairly natural way for minds to operate. |
| **[Yudkowsky][14:14]** ("If that were true," he thought at once, "bureaucracies and books of regulations would be a lot more efficient than they are in real life.") |
| **[Ngo][14:14]** Hmm, although I think this was probably not a very useful phrasing, let me think about how to rephrase it. Okay, so in [our earlier email discussion](https://docs.google.com/document/d/1XXGbFnWPXtsRiTxleBZ0LAGtU7_7CYKt17nnowfpKvo/edit), we talked about the concept of "obedience". To me it seems like it is just as plausible for a mind to have a concept like "obedience" as its rough goal, as a concept like maximising paperclips. If we imagine training an agent on a large amount of data which pointed in the rough direction of rewarding obedience, for example, then I imagine that by default obedience would be a constraint of comparable strength to, say, the human survival instinct. (Which is obviously not strong enough to stop humans doing a bunch of things that contradict it - but it's a pretty good starting point.) |
| **[Yudkowsky][14:18]** Heh. You mean of comparable strength to the human instinct to explicitly maximize inclusive genetic fitness? |
| **[Ngo][14:19]** Genetic fitness wasn't a concept that our ancestors were able to understand, so it makes sense that they weren't pointed directly towards it. (And nor did they understand *how* to achieve it.) |
| **[Yudkowsky][14:19]** Even in that paradigm, except insofar as you expect gradient descent to work very differently from gene-search optimization - which, admittedly, it does - when you optimize really hard on a thing, you get contextual correlates to it, not the thing you optimized on. This is of course one of the Big Fundamental Problems that I expect in alignment. |
| **[Ngo][14:20]** Right, so the main correlate that I've seen discussed is "do what would make the human give you a high rating, not what the human actually wants". One thing I'm curious about is the extent to which you're concerned about this specific correlate, versus correlates in general. |
| **[Yudkowsky][14:21]** That said, I also see basic structural reasons why paperclips would be much easier to train than "obedience", even if we could magically instill simple inner desires that perfectly reflected the simple outer algorithm we saw ourselves as running over many particular instances of a loss function. |
| **[Ngo][14:22]** I'd be interested in hearing what those are. |
| **[Yudkowsky][14:22]** well, first of all, why *is* a book of regulations so much more unwieldy than a hunter-gatherer? If deontology is just as good as [consequentialism](https://arbital.com/p/consequentialist/), y'know. (Do you want to try replying or should I just say?) |
| **[Ngo][14:23]** Go ahead. I should probably clarify that I agree that you can't just replace consequentialism with deontology. The claim is more like: when it comes to high-level concepts, it's not clear to me why high-level consequentialist goals are more natural than high-level deontological goals. |
| **[Yudkowsky][14:24]** I reply that reality is complicated, so when you pump a simple goal through complicated reality you get complicated behaviors required to achieve the goal. If you think of reality as a complicated function Input->Probability(Output), then even to get a simple Output or a simple partition on Output or a high expected score in a simple function over Output, you may need very complicated Input. Humans don't trust each other. They imagine, "Well, if I just give this bureaucrat a goal, perhaps they won't reason honestly about what it takes to achieve that goal! Oh no! Therefore I will instead, being the trustworthy and accurate person that I am, reason myself about constraints and requirements on the bureaucrat's actions, such that, if the bureaucrat obeys these regulations, I expect the outcome of their action will be what I want." But (compared to a general intelligence that observes and models complicated reality and does its own search to pick actions) an actually-effective book of regulations (implemented by some nonhuman mind with a large enough and perfect enough memory to memorize it) would tend to involve a (physically unmanageable) vast number of rules saying "if you observe this, do that" to follow all the crinkles of complicated reality as it can be inferred from observation. |
| **[Ngo][14:28]** (Though it's something of a restatement, a reason I'm not going into "my intuitions about how cognition works" is that past experience has led me to believe that conveying this info in a form that the Other Mind will actually absorb and operate, is really quite hard and takes a long discussion, relative to my current abilities to Actually Explain things; it is the sort of thing that might take doing homework exercises to grasp how one structure is appearing in many places, as opposed to just being flatly told that to no avail, and I have not figured out the homework exercises.) (As a side note: do you have a rough guess for when your work with Ajeya will be made public? If it's still a while away, I'm wondering whether it's still useful to have a rough outline of these intuitions even if it's in a form that very few people will internalise) |
| **[Yudkowsky][14:30]** (As a side note: do you have a rough guess for when your work with Ajeya will be made public? If it's still a while away, I'm wondering whether it's still useful to have a rough outline of these intuitions even if it's in a form that very few people will internalise) Plausibly useful, but not to be attempted today, I think? |
| **[Ngo][14:30]** Agreed. |
| **[Yudkowsky][14:30]** (We are now theoretically in overtime, which is okay for me, but for you it is 11:30pm (I think?) and so it is on you to call when to halt, now or later.) |
| **[Ngo][14:32]** Yeah, it's 11.30 for me. I think probably best to halt here. I agree with all the things you just said about reality being complicated, and why consequentialism is therefore valuable. My "deontology" claim (which was, in its original formulation, far too general - apologies for that) was originally intended as a way of poking into your intuitions about which types of cognition are natural or unnatural, which I think is the topic we've been circling around for a while. |
| **[Yudkowsky][14:33]** Yup, and a place to resume next time might be why I think "obedience" is unnatural compared to "paperclips" - though that is a thing that probably requires taking that stab at what underlies surface competencies. |
| **[Ngo][14:34]** Right. I do think that even a vague gesture at that would be reasonably helpful (assuming that this doesn't already exist online?) |
| **[Yudkowsky][14:34]** Not yet afaik, and I don't want to point you to Ajeya's stuff even if she were ok with that, because then this in-context conversation won't make sense to others. |
| **[Ngo][14:35]** For my part I should think more about pivotal acts that I'd be willing to specifically defend. In any case, thanks for the discussion 🙂 Let me know if there's a particular time that suits you for a follow-up; otherwise we can sort it out later. |
| **[Soares][14:37]** (y'all are doing all my jobs for me) |
| **[Yudkowsky][14:37]** could try Tuesday at this same time - though I may be in worse shape for dietary reasons, still, seems worth trying. |
| **[Soares][14:37]** (wfm) |
| **[Ngo][14:39]** Tuesday not ideal, any others work? |
| **[Yudkowsky][14:39]** Wednesday? |
| **[Ngo][14:40]** Yes, Wednesday would be good |
| **[Yudkowsky][14:40]** let's call it tentatively for that |
| **[Soares][14:41]** Great! Thanks for the chats. |
| **[Ngo][14:41]** Thanks both! |
| **[Yudkowsky][14:41]** Thanks, Richard! |
2. Follow-ups
=============
2.1. Richard Ngo's summary
--------------------------
| |
| --- |
| **[Tallinn][0:35] (Sep. 6)** just caught up here & wanted to thank nate, eliezer and (especially) richard for doing this! it's great to see eliezer's model being probed so intensively. i've learned a few new things (such as the genetic bottleneck being plausibly a big factor in human cognition). FWIW, a minor comment re deontology (as that's fresh on my mind): in my view deontology is more about coordination than optimisation: deontological agents are more trustworthy, as they're much easier to reason about (in the same way how functional/declarative code is easier to reason about than imperative code). hence my steelman of bureaucracies (as well as social norms): humans just (correctly) prefer their fellow optimisers (including non-human optimisers) to be deontological for trust/coordination reasons, and are happy to pay the resulting competence tax. |
| **[Ngo][3:10] (Sep. 8)** Thanks Jaan! I agree that greater trust is a good reason to want agents which are deontological at some high level. I've attempted a summary of the key points so far; comments welcome: [GDocs link] |
| |
| --- |
| **[Ngo] (Sep. 8 Google Doc)** *1st discussion* (Mostly summaries not quotations) Eliezer, summarized by Richard: "To avoid catastrophe, whoever builds AGI first will have to a) align it to some extent, and b) decide not to scale it up beyond the point where their alignment techniques fail, and c) do some pivotal act that prevents others from scaling it up to that level. But ~~our alignment techniques will not be good enough~~ ~~our alignment techniques will be very far from adequate~~ on our current trajectory, our alignment techniques will be very far from adequate to create an AI that safely performs any such pivotal act." |
| **[Yudkowsky][11:05] (Sep. 8 comment)** "will not be good enough": Are not presently on course to be good enough, missing by not a little. "Will not be good enough" is literally declaring for lying down and dying. |
| **[Yudkowsky][16:03] (Sep. 9 comment)** "will [be very far from adequate]": Same problem as the last time I commented. I am not making an unconditional prediction about future failure as would be implied by the word "will". Conditional on current courses of action or their near neighboring courses, we seem to be well over an order of magnitude away from surviving, unless a miracle occurs. It's still in the end a result of people doing what they seem to be doing, not an inevitability. |
| **[Ngo][5:10] (Sep. 10 comment)** Ah, I see. Does adding "on our current trajectory" fix this? |
| **[Yudkowsky][10:46] (Sep. 10 comment)** Yes. |
| **[Ngo] (Sep. 8 Google Doc)** Richard, summarized by Richard: "Consider the pivotal act of 'make a breakthrough in alignment research'. It is likely that, before the point where AGIs are strongly superhuman at seeking power, they will already be strongly superhuman at understanding the world, and at performing narrower pivotal acts like alignment research which don’t require as much agency (by which I roughly mean: large-scale motivations and the ability to pursue them over long timeframes)." Eliezer, summarized by Richard: "There’s a deep connection between solving intellectual problems and taking over the world - the former requires a powerful mind to think about domains that, when solved, render very cognitively accessible strategies that can do dangerous things. Even mathematical research is a goal-oriented task which involves identifying then pursuing instrumental subgoals - and if brains which evolved to hunt on the savannah can quickly learn to do mathematics, then it’s also plausible that AIs trained to do mathematics could quickly learn a range of other skills. Since almost nobody understands the deep similarities in the cognition required for these different tasks, the distance between AIs that are able to perform fundamental scientific research, and dangerously agentic AGIs, is smaller than almost anybody expects." |
| **[Yudkowsky][11:05] (Sep. 8 comment)** "There’s a deep connection between solving intellectual problems and taking over the world": There's a deep connection by default between chipping flint handaxes and taking over the world, if you happen to learn how to chip handaxes in a very general way. "Intellectual" problems aren't special in this way. And maybe you could avert the default, but that would take some work and you'd have to do it before easier default ML techniques destroyed the world. |
| **[Ngo] (Sep. 8 Google Doc)** Richard, summarized by Richard: "Our lack of understanding about how intelligence works also makes it easy to assume that traits which co-occur in humans will also co-occur in future AIs. But human brains are badly-optimised for tasks like scientific research, and well-optimised for seeking power over the world, for reasons including a) evolving while embodied in a harsh environment; b) the genetic bottleneck; c) social environments which rewarded power-seeking. By contrast, training neural networks on tasks like mathematical or scientific research optimises them much less for seeking power. For example, GPT-3 has knowledge and reasoning capabilities but little agency, and loses coherence when run for longer timeframes." |
| **[Tallinn][4:19] (Sep. 8 comment)** "[well-optimised for] seeking power": male-female differences might be a datapoint here (annoying as it is to lean on pinker's point :)) |
| **[Yudkowsky][11:31] (Sep. 8 comment)** I don't think a female Eliezer Yudkowsky doesn't try to save / optimize / take over the world. Men may do that for nonsmart reasons; smart men and women follow the same reasoning when they are smart enough. Eg Anna Salamon and many others. |
| **[Ngo] (Sep. 8 Google Doc)** Eliezer, summarized by Richard: "Firstly, there’s a big difference between most scientific research and the sort of pivotal act that we’re talking about - you need to explain how AIs with a given skill can be used to actually prevent dangerous AGIs from being built. Secondly, insofar as GPT-3 has little agency, that’s because it has memorised many shallow patterns in a way which won’t directly scale up to general intelligence. Intelligence instead consists of deep problem-solving patterns which link understanding and agency at a fundamental level." |
3. September 8 conversation
===========================
3.1. The Brazilian university anecdote
--------------------------------------
| |
| --- |
| **[Yudkowsky][11:00]** (I am here.) |
| **[Ngo][11:01]** Me too. |
| **[Soares][11:01]** Welcome back! (I'll mostly stay out of the way again.) |
| **[Ngo][11:02]** Cool. Eliezer, did you read the summary - and if so, do you roughly endorse it? Also, I've been thinking about the best way to approach discussing your intuitions about cognition. My guess is that starting with the obedience vs paperclips thread is likely to be less useful than starting somewhere else - e.g. the description you gave near the beginning of the last discussion, about "searching for states that get fed into a result function and then a result-scoring function". |
| **[Yudkowsky][11:06]** made a couple of comments about phrasings in the doc. So, from my perspective, there's this thing where... it's really quite hard to teach certain *general* points by talking at people, as opposed to more specific points. Like, they're trying to build a perpetual motion machine, and even if you can manage to argue them into believing their first design is wrong, they go looking for a new design, and the new design is complicated enough that they can no longer be convinced that they're wrong because they managed to make a more complicated error whose refutation they couldn't keep track of anymore. Teaching people to see an underlying structure in a lot of places is a very hard thing to teach in this way. Richard Feynman [gave an example](https://v.cx/2010/04/feynman-brazil-education) of the mental motion in his story that ends "Look at the water!", where people learned in classrooms about how "a medium with an index" is supposed to polarize light reflected from it, but they didn't realize that sunlight coming off of water would be polarized. My guess is that doing this properly requires homework exercises; and that, unfortunately from my own standpoint, it happens to be a place where I have extra math talent, the same way that eg Marcello is more talented at formally proving theorems than I happen to be; and that people without the extra math talent, have to do a lot *more* exercises than I did, and I don't have a good sense of which exercises to give them. |
| **[Ngo][11:13]** I'm sympathetic to this, and can try to turn off skeptical-discussion-mode and turn on learning-mode, if you think that'll help. |
| **[Yudkowsky][11:14]** There's a general insight you can have about how arithmetic is commutative, and for some people you can show them 1 + 2 = 2 + 1 and their native insight suffices to generalize over the 1 and the 2 to any other numbers you could put in there, and they realize that strings of numbers can be rearranged and all end up equivalent. For somebody else, when they're a kid, you might have to show them 2 apples and 1 apple being put on the table in a different order but ending up with the same number of apples, and then you might have to show them again with adding up bills in different denominations, in case they didn't generalize from apples to money. I can actually remember being a child young enough that I tried to add 3 to 5 by counting "5, 6, 7" and I thought there was some clever enough way to do that to actually get 7, if you tried hard. Being able to see "consequentialism" is like that, from my perspective. |
| **[Ngo][11:15]** Another possibility: can you trace the origins of this belief, and how it came out of your previous beliefs? |
| **[Yudkowsky][11:15]** I don't know what homework exercises to give people to make them able to see "consequentialism" all over the place, instead of inventing slightly new forms of consequentialist cognition and going "Well, now *that* isn't consequentialism, right?" Trying to say "searching for states that get fed into an input-result function and then a result-scoring function" was one attempt of mine to describe the dangerous thing in a way that would maybe sound abstract enough that people would try to generalize it more. |
| **[Ngo][11:17]** Another possibility: can you describe the closest thing to real consequentialism in humans, and how it came about in us? |
| **[Yudkowsky][11:18][11:21]** Ok, so, part of the problem is that... before you do enough homework exercises for whatever your level of talent is (and even I, at one point, had done little enough homework that I thought there might be a clever way to add 3 and 5 in order to get to 7), you tend to think that only the very crisp formal thing that's been presented to you, is the "real" thing. Why would your engine have to obey the laws of thermodynamics? You're not building one of those Carnot engines you saw in the physics textbook! Humans contain fragments of consequentialism, or bits and pieces whose interactions add up to partially imperfectly shadow consequentialism, and the critical thing is being able to see that the reason why humans' outputs 'work', in a sense, is because these structures are what is doing the work, and the work gets done because of how they shadow consequentialism and only insofar as they shadow consequentialism. |
| Put a human in one environment, it gets food. Put a human in a different environment, it gets food again. Wow, different initial conditions, same output! There must be things inside the human that, whatever else they do, are also along the way somehow effectively searching for motor signals such that food is the end result! |
| **[Ngo][11:20]** To me it feels like you're trying to nudge me (and by extension whoever reads this transcript) out of a specific failure mode. If I had to guess, something like: "I understand what Eliezer is talking about so now I'm justified in disagreeing with it", or perhaps "Eliezer's explanation didn't make sense to me and so I'm justified in thinking that his concepts don't make sense". Is that right? |
| **[Yudkowsky][11:22]** More like... from my perspective, even after I talk people out of one specific perpetual motion machine being possible, they go off and try to invent a different, more complicated perpetual motion machine. And I am not sure what to do about that. It has been going on for a very long time from my perspective. In the end, a lot of what people got out of all that writing I did, was not the deep object-level principles I was trying to point to - they did not really get [Bayesianism as thermodynamics](https://www.lesswrong.com/s/oFePMp9rKftEeZDDr/p/QkX2bAkwG2EpGvNug), say, they did not become able to see [Bayesian structures](https://www.lesswrong.com/posts/QrhAeKBkm2WsdRYao/searching-for-bayes-structure) any time somebody sees a thing and changes their belief. What they got instead was something much more meta and general, a vague spirit of how to reason and argue, because that was what they'd spent a lot of time being exposed to over and over and over again in lots of blog posts. Maybe there's no way to make somebody understand why [corrigibility](https://arbital.com/p/corrigibility/) is "unnatural" except to repeatedly walk them through the task of trying to invent an agent structure that lets you press the shutdown button (without it trying to force you to press the shutdown button), and showing them how each of their attempts fails; and then also walking them through why Stuart Russell's attempt at moral uncertainty produces the [problem of fully updated (non-)deference](https://arbital.com/p/updated_deference/); and hope they can start to see the informal general pattern of why corrigibility is in general contrary to the structure of things that are good at optimization. Except that to do the exercises at all, you need them to work within an expected utility framework. And then they just go, "Oh, well, I'll just build an agent that's good at optimizing things but doesn't use these explicit expected utilities that are the source of the problem!" And then if I want them to believe the same things I do, for the same reasons I do, I would have to teach them why certain structures of cognition are the parts of the agent that are good at stuff and do the work, rather than them being this particular formal thing that they learned for manipulating meaningless numbers as opposed to real-world apples. And I have tried to write that page once or twice (eg "[coherent decisions imply consistent utilities](https://www.lesswrong.com/posts/RQpNHSiWaXTvDxt6R/coherent-decisions-imply-consistent-utilities)") but it has not sufficed to teach them, because they did not even do as many homework problems as I did, let alone the greater number they'd have to do because this is in fact a place where I have a particular talent. I don't know how to solve this problem, which is why I'm falling back on talking about it at the meta-level. |
| **[Ngo][11:30]** I'm reminded of a LW post called "[Write a thousand roads to Rome](https://www.lesswrong.com/posts/Q924oPJzK92FifuFg/write-a-thousand-roads-to-rome)", which iirc argues in favour of trying to explain the same thing from as many angles as possible in the hope that one of them will stick. |
| **[Soares][11:31]** (Suggestion, not-necessarily-good: having named this problem on the meta-level, attempt to have the object-level debate, while flagging instances of this as it comes up.) |
| **[Ngo][11:31]** I endorse Nate's suggestion. And will try to keep the difficulty of the meta-level problem in mind and respond accordingly. |
| **[Yudkowsky][11:33]** That (Nate's suggestion) is probably the correct thing to do. I name it out loud because sometimes being told about the meta-problem actually does help on the object problem. It seems to help me a lot and others somewhat less, but it does help others at all, for many others. |
3.2. Brain functions and outcome pumps
--------------------------------------
| |
| --- |
| **[Yudkowsky][11:34]** So, do you have a particular question you would ask about input-seeking cognitions? I did try to say why I mentioned those at all (it's a different road to Rome on "consequentialism"). |
| **[Ngo][11:36]** Let's see. So the visual cortex is an example of quite impressive cognition in humans and many other animals. But I'd call this "pattern-recognition" rather than "searching for high-scoring results". |
| **[Yudkowsky][11:37]** Yup! And it is no coincidence that there are no whole animals formed entirely out of nothing but a visual cortex! |
| **[Ngo][11:37]** Okay, cool. So you'd agree that the visual cortex is doing something that's qualitatively quite different from the thing that animals overall are doing. Then another question is: can you characterise searching for high-scoring results in non-human animals? Do they do it? Or are you mainly talking about humans and AGIs? |
| **[Yudkowsky][11:39]** Also by the time you get to like the temporal lobes or something, there is probably some significant amount of "what could I be seeing that would produce this visual field?" that is searching through hypothesis-space for hypotheses with high plausibility scores, and for sure at the human level, humans will start to think, "Well, could I be seeing this? No, that theory has the following problem. How could I repair that theory?" But it is plausible that there is no low-level analogue of this in a monkey's temporal cortex; and even more plausible that the parts of the visual cortex, if any, which do anything analogous to this, are doing it in a relatively local and definitely very domain-specific way. Oh, that's the cerebellum and motor cortex and so on, if we're talking about a cat or whatever. They have to find motor plans that result in their catching the mouse. Just because the visual cortex isn't (obviously) running a search doesn't mean the rest of the animal isn't running any searches. (On the meta-level, I notice myself hiccuping "But how could you not see that when looking at a cat?" and wondering what exercises would be required to teach that.) |
| **[Ngo][11:41]** Well, I see *something* when I look at a cat, but I don't know how well it corresponds to the concepts you're using. So just taking it slowly for now. I have the intuition, by the way, that the motor cortex is in some sense doing a similar thing to the visual cortex - just in reverse. So instead of taking low-level inputs and producing high-level outputs, it's taking high-level inputs and producing low-level outputs. Would you agree with that? |
| **[Yudkowsky][11:43]** It doesn't directly parse in my ontology because (a) I don't know what you mean by 'high-level' and (b) whole Cartesian agents can be viewed as functions, that doesn't mean all agents can be viewed as non-searching pattern-recognizers. That said, all parts of the cerebral cortex have surprisingly similar morphology, so it wouldn't be at all surprising if the motor cortex is doing something similar to visual cortex. (The cerebellum, on the other hand...) |
| **[Ngo][11:44]** The signal from the visual cortex saying "that is a cat", and the signal to the motor cortex saying "grab that cup", are things I'd characterise as high-level. |
| **[Yudkowsky][11:45]** Still less of a native distinction in my ontology, but there's an informal thing it can sort of wave at, and I can hopefully take that as understood and run with it. |
| **[Ngo][11:45]** The firing of cells in the retina, and firing of motor neurons, are the low-level parts. Cool. So to a first approximation, we can think about the part in between the cat recognising a mouse, and the cat's motor cortex producing the specific neural signals required to catch the mouse, as the part where the consequentialism happens? |
| **[Yudkowsky][11:49]** The part between the cat's eyes seeing the mouse, and the part where the cat's limbs move to catch the mouse, is the whole cat-agent. The whole cat agent sure is a baby consequentialist / searches for mouse-catching motor patterns / gets similarly high-scoring end results even as you vary the environment. The visual cortex is a particular part of this system-viewed-as-a-feedforward-function that is, plausibly, by no means surely, either not very searchy, or does only small local visual-domain-specific searches not aimed per se at catching mice; it has the epistemic nature rather than the planning nature. Then from one perspective you could reason that "well, most of the consequentialism is in the remaining cat after visual cortex has sent signals onward". And this is in general a dangerous mode of reasoning that is liable to fail in, say, inspecting every particular neuron for consequentialism and not finding it; but in this particular case, there are significantly more consequentialist parts of the cat than the visual cortex, so I am okay running with it. |
| **[Ngo][11:50]** Ah, the more specific thing I meant to say is: most of the consequentialism is strictly between the visual cortex and the motor cortex. Agree/disagree? |
| **[Yudkowsky][11:51]** Disagree, I'm rusty on my neuroanatomy but I think the motor cortex may send signals on to the cerebellum rather than the other way around.(I may also disagree with the actual underlying notion you're trying to hint at, so possibly not just a "well include the cerebellum then" issue, but I think I should let you respond first.) |
| **[Ngo][11:53]** I don't know enough neuroanatomy to chase that up, so I was going to try a different tack. But actually, maybe it's easier for me to say "let's include the cerebellum" and see where you think the disagreement ends up. |
| **[Yudkowsky][11:56]** So since cats are not (obviously) (that I have read about) cross-domain consequentialists with imaginations, their consequentialism is in bits and pieces of consequentialism embedded in them all over by the more purely pseudo-consequentialist genetic optimization loop that built them. A cat who fails to catch a mouse may then get little bits and pieces of catbrain adjusted all over. And then those adjusted bits and pieces get a pattern lookup later. Why do these pattern-lookups with no obvious immediate search element, all happen to point towards the same direction of catching the mouse? Because of the past causal history about how what gets looked up was tweaked to catch the mouse. So it is legit harder to point out "the consequentialist parts of the cat" by looking for which sections of neurology are doing searches right there. That said, to the extent that the visual cortex does not get tweaked on failure to catch a mouse, it's not part of that consequentialist loop either. And yes, the same applies to humans, but humans also do more explicitly searchy things and this is part of the story for why humans have spaceships and cats do not. |
| **[Ngo][12:00]** Okay, this is interesting. So in biological agents we've got these three levels of consequentialism: evolution, reinforcement learning, and planning. |
| **[Yudkowsky][12:01]** In biological agents we've got evolution + local evolved system-rules that in the past promoted genetic fitness. Two kinds of local rules like this are "operant-conditioning updates from success or failure" and "search through visualized plans". I wouldn't characterize these two kinds of rules as "levels". |
| **[Ngo][12:02]** Okay, I see. And when you talk about searching through visualised plans (the type of thing that humans do) can you say more about what it means for that to be a "search"? For example, if I imagine writing a poem line-by-line, I may only be planning a few words ahead. But somehow the whole poem, which might be quite long, ends up a highly-optimised product. Is that a central example of planning? |
| **[Yudkowsky][12:04][12:07]** Planning is one way to succeed at search. I think for purposes of understanding alignment difficulty, you want to be thinking on the level of abstraction where you see that in some sense it is the search itself that is dangerous when it's a strong enough search, rather than the danger seeming to come from details of the planning process. One of my early experiences in successfully generalizing my notion of intelligence, what I'd later verbalize as "computationally efficient finding of actions that produce outcomes high in a preference ordering", was in writing an (unpublished) story about time-travel in which the universe was globally consistent. The requirement of global consistency, the way in which all events between Paradox start and Paradox finish had to map the Paradox's initial conditions onto the endpoint that would go back and produce those exact initial conditions, ended up imposing strong complicated constraints on reality that the Paradox in effect had to navigate using its initial conditions. The time-traveler needed to end up going through certain particular experiences that would produce the state of mind in which he'd take the actions that would end up prodding his future self elsewhere into having those experiences. |
| The Paradox ended up killing the people who built the time machine, for example, because they would not otherwise have allowed that person to go back in time, or kept the temporal loop open that long for any other reason if they were still alive. Just having two examples of strongly consequentialist general optimization in front of me - human intelligence, and evolutionary biology - hadn't been enough for me to properly generalize over a notion of optimization. Having three examples of homework problems I'd worked - human intelligence, evolutionary biology, and the fictional Paradox - caused it to finally click for me. |
| **[Ngo][12:07]** Hmm. So to me, one of the central features of search is that you consider many possibilities. But in this poem example, I may only have explicitly considered a couple of possibilities, because I was only looking ahead a few words at a time. This seems related to the distinction Abram drew a while back between selection and control ([https://www.alignmentforum.org/posts/ZDZmopKquzHYPRNxq/selection-vs-control](https://www.lesswrong.com/posts/ZDZmopKquzHYPRNxq/selection-vs-control)). Do you distinguish between them in the same way as he does? Or does "control" of a system (e.g. a football player dribbling a ball down the field) count as search too in your ontology? |
| **[Yudkowsky][12:10][12:11]** I would later try to tell people to "imagine a paperclip maximizer as *not being a mind at all*, imagine it as a kind of malfunctioning time machine that spits out outputs which will in fact result in larger numbers of paperclips coming to exist later". I don't think it clicked because people hadn't done the same homework problems I had, and didn't have the same "Aha!" of realizing how part of the notion and danger of intelligence could be seen in such purely material terms. |
| But the [convergent instrumental strategies](https://arbital.com/p/convergent_strategies/), the anticorrigibility, these things are contained in the *true fact about the universe* that certain outputs of the time machine *will in fact* result in there being lots more paperclips later. What produces the danger is not the details of the search process, it's the search being strong and effective *at all*. The danger is in the territory itself and not just in some weird map of it; that building nanomachines that kill the programmers will produce more paperclips is a fact about reality, not a fact about paperclip maximizers! |
| **[Ngo][12:11]** Right, I remember a very similar idea in your writing about Outcome Pumps (<https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes>). |
| **[Yudkowsky][12:12]** Yup! Alas, the story was written in 2002-2003 when I was a worse writer and the real story that inspired the Outcome Pump never did get published. |
| **[Ngo][12:14]** Okay, so I guess the natural next question is: what is it that makes you think that a strong, effective search isn't likely to be limited or constrained in some way? What is it about search processes (like human brains) that makes it hard to train them with blind spots, or deontological overrides, or things like that? Hmmm, although it feels like this is a question I can probably predict your answer to. (Or maybe not, I wasn't expecting the time travel.) |
| **[Yudkowsky][12:15]** In one sense, they are! A paperclip-maximizing superintelligence is nowhere near as powerful as a paperclip-maximizing time machine. The time machine can do the equivalent of buying winning lottery tickets from lottery machines that have been thermodynamically randomized; a superintelligence can't, at least not directly without rigging the lottery or whatever. But a paperclip-maximizing strong general superintelligence is epistemically and instrumentally [efficient](https://arbital.com/p/efficiency/), relative to *you*, or to me. Any time we see it can get at least X paperclips by doing Y, we should expect that it gets X or more paperclips by doing Y or something that leads to even more paperclips than that, because it's not going to miss the strategy we see. So in that sense, searching our own brains for how a time machine would get paperclips, asking ourselves how many paperclips are in principle possible and how they could be obtained, is a way of getting our own brains to consider lower bounds on the problem without the implicit stupidity assertions that our brains unwittingly use to constrain story characters. Part of the point of telling people to think about time machines instead of superintelligences was to get past the ways they imagine superintelligences being stupid. Of course that didn't work either, but it was worth a try. I don't think that's quite what you were asking about, but I want to give you a chance to see if you want to rephrase anything before I try to answer your me-reformulated questions. |
| **[Ngo][12:20]** Yeah, I think what I wanted to ask is more like: why should we expect that, out of the space of possible minds produced by optimisation algorithms like gradient descent, strong general superintelligences are more common than other types of agents which score highly on our loss functions? |
| **[Yudkowsky][12:20][12:23][12:24]** It depends on how hard you optimize! And whether gradient descent on a particular system can even successfully optimize that hard! Many current AIs are trained by gradient descent and yet not superintelligences at all. |
| But the answer is that some problems are difficult in that they require solving lots of subproblems, and an easy way to solve all those subproblems is to use patterns which collectively have some coherence and overlap, and the coherence within them generalizes across all the subproblems. Lots of search orderings will stumble across something like that before they stumble across separate solutions for lots of different problems. |
| I suspect that you cannot get this out of large amounts of gradient descent on large layered transformers, and therefore I suspect that GPT-N does not approach superintelligence before the world is ended by systems that look different, but I could be wrong about that. |
| **[Ngo][12:22][12:23]** Suppose that we optimise hard enough to produce an epistemic subsystem that can make plans much better than any human's. |
| My guess is that you'd say that this is *possible*, but that we're much more likely to first produce a consequentialist agent which does this (rather than a purely epistemic agent which does this). |
| **[Yudkowsky][12:24]** I am confused by what you think it means to have an "epistemic subsystem" that "makes plans much better than any human's". If it searches paths through time and selects high-scoring ones for output, what makes it "epistemic"? |
| **[Ngo][12:25]** Suppose, for instance, that it doesn't actually carry out the plans, it just writes them down for humans to look at. |
| **[Yudkowsky][12:25]** If it *can in fact* do the thing that a paperclipping time machine does, what makes it any safer than a paperclipping time machine because we called it "epistemic" or by some other such name? By what criterion is it selecting the plans that humans look at? Why did it make a difference that its output was fed through the causal systems called humans on the way to the causal systems called protein synthesizers or the Internet or whatever? If we build a superintelligence to design nanomachines, it makes no obvious difference to its safety whether it sends DNA strings directly to a protein synthesis lab, or humans read the output and retype it manually into an email. Presumably you also don't think that's where the safety difference comes from. So where does the safety difference come from? (note: lunchtime for me in 2 minutes, propose to reconvene in 30m after that) |
| **[Ngo][12:28]** (break for half an hour sounds good) If we consider the visual cortex at a given point in time, how does it decide which objects to recognise? Insofar as the visual cortex can be non-consequentialist about which objects it recognises, why couldn't a planning system be non-consequentialist about which plans it outputs? |
| **[Yudkowsky][12:32]** This does feel to me like another "look at the water" moment, so what do you predict I'll say about that? |
| **[Ngo][12:34]** I predict that you say something like: in order to produce an agent that can create very good plans, we need to apply a lot of optimisation power to that agent. And if the channel through which we're applying that optimisation power is "giving feedback on its plans", then we don't have a mechanism to ensure that the agent actually learns to optimise for creating really good plans, as opposed to creating plans that receive really good feedback. |
| **[Soares][12:35]** Seems like a fine cliffhanger? |
| **[Ngo][12:35]** Yepp. |
| **[Soares][12:35]** Great. Let's plan to reconvene in 30min. |
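One way to render the abstraction this section keeps circling ("searching for states that get fed into an input-result function and then a result-scoring function", the Outcome Pump, "computationally efficient finding of actions that produce outcomes high in a preference ordering") is the deliberately crude sampler below. It is an illustrative sketch, not a claim about how any real optimizer is built, and every name in it is assumed for the example; the point is only that nothing resembling wanting appears in the code, yet the returned output is precisely an input whose predicted consequences score high.

```python
def outcome_pump(propose_plan, world_model, score, budget):
    """Optimization viewed as purely material search: sample candidate plans,
    push each through a model of how the world responds, and keep whichever
    one leads to the highest-scoring predicted outcome. There is no 'agent'
    here, only a loop; the danger discussed above lives in how strong the
    search is and in what the scoring function rewards, not in any psychology
    inside the box."""
    best_plan, best_value = None, float("-inf")
    for _ in range(budget):
        plan = propose_plan()           # some candidate sequence of outputs
        outcome = world_model(plan)     # predicted path through time
        value = score(outcome)          # rank the resulting end state
        if value > best_value:
            best_plan, best_value = plan, value
    return best_plan
```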
3.3. Hypothetical-planning systems, nanosystems, and evolving generality
------------------------------------------------------------------------
| |
| --- |
| **[Yudkowsky][13:03][13:11]** So the answer you expected from me, translated into my terms, would be, "If you select for the consequence of the humans hitting 'approve' on the plan, you're still navigating the space of inputs for paths through time to probable outcomes (namely the humans hitting 'approve'), so you're still doing consequentialism." But suppose you manage to avoid that. Suppose you get exactly what you ask for. Then the system is still outputting *plans* such that, when humans follow them, they take paths through time and end up with outcomes that score high in some scoring function. My answer is, "What the heck would it mean for a *planning system* to be *non-consequentialist*? You're asking for nonwet water! What's consequentialist isn't the system that does the work, it's the work you're trying to do! You could imagine it being done by a cognition-free material system like a time machine and it would still be consequentialist *because* the output is a *plan*, a path through time!" And this indeed is a case where I feel a helpless sense of not knowing how I can rephrase things, which exercises you have to get somebody to do, what fictional experience you have to walk somebody through, before they start to look at the water and see a material with an index, before they start to look at the phrase "why couldn't a planning system be non-consequentialist about which plans it outputs" and go "um". |
| My imaginary listener now replies, "Ah, but what if we have plans that *don't* end up with outcomes that score high in some function?" and I reply "Then you lie on the ground randomly twitching because any *outcome you end up with* which is *not that* is one that you wanted *more than that* meaning you *preferred it more than the outcome of random motor outputs* which is *optimization toward higher in the preference function* which is *taking a path through time that leads to particular destinations more than it leads to random noise*." |
| **[Ngo][13:09][13:11]** Yeah, this does seem like a good example of the thing you were trying to explain at the beginning |
| It still feels like there's some sort of levels distinction going on here though, let me try to tease out that intuition. Okay, so suppose I have a planning system that, given a situation and a goal, outputs a plan that leads from that situation to that goal. And then suppose that we give it, as input, a situation that we're not actually in, and it outputs a corresponding plan. It seems to me that there's a difference between the sense in which that planning system is consequentialist by virtue of making consequentialist plans (as in: if that plan were used in the situation described in its inputs, it would lead to some goal being achieved) versus another hypothetical agent that is just directly trying to achieve goals in the situation it's actually in. |
| **[Yudkowsky][13:18]** So I'd preface by saying that, *if* you could build such a system, which is indeed a coherent thing (it seems to me) to describe for the purpose of building it, then there would possibly be a safety difference on the margins, it would be noticeably less dangerous though still dangerous. It would need a special internal structural property that you might not get by gradient descent on a loss function with that structure, just like natural selection on inclusive genetic fitness doesn't get you explicit fitness optimizers; you could optimize for planning in hypothetical situations, and get something that didn't explicitly care only and strictly about hypothetical situations. And even if you did get that, the outputs that would kill or brain-corrupt the operators in hypothetical situations might also be fatal to the operators in actual situations. But that is a coherent thing to describe, and the fact that it was not optimizing our own universe, might make it *safer*. With that said, I would worry that somebody would think there was some bone-deep difference of agentiness, of something they were empathizing with like personhood, of imagining goals and drives being absent or present in one case or the other, when they imagine a planner that just solves "hypothetical" problems. If you take that planner and feed it the actual world as its hypothetical, tada, it is now that big old dangerous consequentialist you were imagining before, without it having acquired some difference of *psychological* agency or 'caring' or whatever. So I think there is an important homework exercise to do here, which is something like, "Imagine that safe-seeming system which only considers hypothetical problems. Now see that if you take that system, don't make any other internal changes, and feed it actual problems, it's very dangerous. Now meditate on this until you can see how the hypothetical-considering planner was extremely close in the design space to the more dangerous version, had all the dangerous latent properties, and would probably have a bunch of actual dangers too." "See, you thought the source of the danger was this internal property of caring about actual reality, but it wasn't that, it was the structure of planning!" |
| **[Ngo][13:22]** I think we're getting closer to the same page now. Let's consider this hypothetical planner for a bit. Suppose that it was trained in a way that minimised the, let's say, *adversarial* component of its plans. For example, let's say that the plans it outputs for any situation are heavily regularised so only the broad details get through. Hmm, I'm having a bit of trouble describing this, but basically I have an intuition that in this scenario there's a component of its plan which is cooperative with whoever executes the plan, and a component that's adversarial. And I agree that there's no fundamental difference in type between these two things. |
| **[Yudkowsky][13:27]** "What if this potion we're brewing has a Good Part and a Bad Part, and we could just keep the Good Parts..." |
| **[Ngo][13:27]** Nor do I think they're separable. But in some cases, you might expect one to be much larger than the other. |
| **[Soares][13:29]** (I observe that my model of some other listeners, at this point, protest "there is yet a difference between the hypothetical-planner applied to actual problems, and the Big Scary Consequentialist, which is that the hypothetical planner is emitting descriptions of plans that *would* work if executed, whereas the big scary consequentialist is executing those plans directly.")(Not sure that's a useful point to discuss, or if it helps Richard articulate, but it's at least a place I expect some reader's minds to go if/when this is published.) |
| **[Yudkowsky][13:30]** (That is in fact a difference! The insight is in realizing that the hypothetical planner is only one line of outer shell command away from being a Big Scary Thing and is therefore also liable to be Big and Scary in many ways.) |
| **[Ngo][13:31]** To me it seems that Eliezer's position is something like: "actually, in almost no training regimes do we get agents that decide which plans to output by spending almost all of their time thinking about the object-level problem, and very little of their time thinking about how to manipulate the humans carrying out the plan". |
| **[Yudkowsky][13:32]** My position is that the AI does not neatly separate its internals into a Part You Think Of As Good and a Part You Think Of As Bad, because that distinction is sharp in your map but not sharp in the territory or the AI's map. From the perspective of a paperclip-maximizing-action-outputting-time-machine, its actions are not "object-level making paperclips" or "manipulating the humans next to the time machine to deceive them about what the machine does", they're just physical outputs that go through time and end up with paperclips. |
| **[Ngo][13:34]** @Nate, yeah, that's a nice way of phrasing one point I was trying to make. And I do agree with Eliezer that these things *can be* very similar. But I'm claiming that in some cases these things can also be quite different - for instance, when we're training agents that only get to output a short high-level description of the plan. |
| **[Yudkowsky][13:35]** The danger is in how hard the agent has to work to come up with the plan. I can, for instance, build an agent that very safely outputs a high-level plan for saving the world: `echo "Hey Richard, go save the world!"` So I do have to ask what kind of "high-level" planning output, that saves the world, you are envisioning, and why it was hard to cognitively come up with, such that we didn't just make that high-level plan right now, if humans could follow it. Then I'll look at the part where the plan was hard to come up with, and say how the agent had to understand lots of complicated things in reality and accurately navigate paths through time for those complicated things, in order to even invent the high-level plan, and hence it was very dangerous if it wasn't navigating exactly where you hoped. Or, alternatively, I'll say, "That plan couldn't save the world: you're not postulating enough superintelligence to be dangerous, *and you're also* not using enough superintelligence to flip the tables on the currently extremely doomed world." |
| **[Ngo][13:39]** At this point I'm not envisaging a particular planning output that saves the world, I'm just trying to get more clarity on the issue of consequentialism. |
| **[Yudkowsky][13:40]** Look at the water; it's not the way you're doing the work that's dangerous, it's the work you're trying to do. What work are you trying to do, never mind how it gets done? |
| **[Ngo][13:41]** I think I agree with you that, in the limit of advanced capabilities, we can't say much about how the work is being done, we have to primarily reason from the work that we're trying to do. But here I'm only talking about systems that are intelligent enough to come up with plans and do research that are beyond the capability of humanity. And for me the question is: for *those* systems, can we tilt the way they do the work so they spend 99% of their time trying to solve the object-level problem, and 1% of their time trying to manipulate the humans who are going to carry out the plan? (Where these are not fundamental categories for the AI, they're just a rough categorisation that emerges after we've trained it - the same way that the categories of "physically moving around" and "thinking about things" aren't fundamentally different categories of action for humans, but the way we've evolved means there's a significant internal split between them.) |
| **[Soares][13:43]** (I suspect Eliezer is not trying to make a claim of the form "in the limit of advanced capabilities, we are relegated to reasoning about what work gets done, not about how it was done". I suspect some miscommunication. It might be a reasonable time for Richard to attempt to paraphrase Eliezer's argument?) (Though it also seems to me like Eliezer responding to the 99%/1% point may help shed light.) |
| **[Yudkowsky][13:46]** Well, for one thing, I'd note that a system which is designing nanosystems, and spending 1% of its time thinking about how to kill the operators, is lethal. It has to be such a small fraction of thinking that it, like, never completes the whole thought about "well, if I did X, that would kill the operators!" |
| **[Ngo][13:46]** Thanks for that, Nate. I'll try to paraphrase Eliezer's argument now. Eliezer's position (partly in my own terminology): we're going to build AIs that can perform very difficult tasks using cognition which we can roughly describe as "searching over many options to find one that meets our criteria". An AI that can solve these difficult tasks will need to be able to search in a very general and flexible way, and so it will be very difficult to constrain that search into a particular region. Hmm, that felt like a very generic summary, let me try and think about the more specific claims he's making. |
| **[Yudkowsky][13:54]** An AI that can solve these difficult tasks will need to be able toVery very little is universally necessary over the design space. The *first* AGI that our tech becomes able to build is liable to work in certain easier and simpler ways. |
| **[Ngo][13:55]** Point taken; thanks for catching this misphrasing (this and previous times). |
| **[Yudkowsky][13:56]** Can you, in principle, build a red-car-driver that is totally incapable of driving blue cars? In principle, sure! But the first red-car-driver that gradient descent stumbles over is liable to be a blue-car-driver too. |
| **[Ngo][13:57]** Eliezer, I'm wondering how much of our disagreement is about how high the human level is here. Or, to put it another way: we can build systems that outperform humans at quite a few tasks by now, without having search abilities that are general enough to even try to take over the world. |
| **[Yudkowsky][13:58]** Indubitably and indeed, this is so. |
| **[Ngo][13:59]** Putting aside for a moment the question of which tasks are pivotal enough to save the world, which parts of your model draw the line between human-level chess players and human-level galaxy-colonisers? And say that we'll be able to align ones that outperform us on *these tasks* before taking over the world, but not on *these other tasks*? |
| **[Yudkowsky][13:59][14:01]** That doesn't have a very simple answer, but one aspect there is *domain generality* which in turn is achieved through *novel domain learning*. |
| Humans, you will note, were not aggressively optimized by natural selection to be able to breathe underwater or fly into space. In terms of obvious outer criteria, there is not much outer sign that natural selection produced these creatures much more general than chimpanzees, by training on a much wider range of environments and loss functions. |
| **[Soares][14:00]** (Before we drift too far from it: thanks for the summary! It seemed good to me, and I updated towards the miscommunication I feared not-having-happened.) |
| **[Ngo][14:03]** "(Before we drift too far from it: thanks for the summary! It seemed good to me, and I updated towards the miscommunication I feared not-having-happened.)" (Good to know, thanks for keeping an eye out. To be clear, I didn't ever interpret Eliezer as making a claim explicitly about the limit of advanced capabilities; instead it just seemed to me that he was thinking about AIs significantly more advanced than the ones I've been thinking of. I think I phrased my point poorly.) |
| **[Yudkowsky][14:05][14:10]** There are complicated aspects of this story where natural selection may metaphorically be said to have "had no idea of what it was doing", eg, after early rises in intelligence possibly produced by sexual selection on neatly chipped flint handaxes or whatever, all the cumulative brain-optimization on chimpanzees reached a point where there was suddenly a sharp selection gradient on relative intelligence at Machiavellian planning against other humans (even more so than in the chimp domain) as a subtask of inclusive genetic fitness, and so continuing to optimize on "inclusive genetic fitness" in the same old savannah, turned out to happen to be optimizing hard on the subtask and internal capability of "outwit other humans", which optimized hard on "model other humans", which was a capability that could be reused for modeling the chimp-that-is-this-chimp, which turned the system on itself and made it reflective, which contributed greatly to its intelligence being generalized, even though it was just grinding the same loss function on the same savannah; the system being optimized happened to go there in the course of being optimized even harder for the same thing. So one can imagine asking the question: Is there a superintelligent AGI that can quickly build nanotech, which has a kind of passive safety in some if not all respects, in virtue of it solving problems like "build a nanotech system which does X" the way that a beaver solves building dams, in virtue of having a bunch of specialized learning abilities without it ever having a cross-domain general learning ability? And in this regard one does note that there are many, many, many things that humans do which no other animal does, which you might think would contribute a lot to that animal's fitness if there were animalistic ways to do it. They don't make iron claws for themselves. They never did evolve a tendency to search for iron ore, and burn wood into charcoal that could be used in hardened-clay furnaces. No animal plays chess, but AIs do, so we can obviously make AIs to do things that animals don't do. On the other hand, the environment didn't exactly present any particular species with a challenge of chess-playing either. |
| Even so, though, even if some animal had evolved to play chess, I fully expect that current AI systems would be able to squish it at chess, because the AI systems are on chips that run faster than neurons and doing crisp calculations and there are things you just can't do with noisy slow neurons. So that again is not a generally reliable argument about what AIs can do. |
| **[Ngo][14:09][14:11]** Yes, although I note that challenges which are trivial from a human-engineering perspective can be very challenging from an evolutionary perspective (e.g. spinning wheels). |
| And so the evolution of animals-with-a-little-bit-of-help-from-humans might end up in very different places from the evolution of animals-just-by-themselves. And analogously, the ability of humans to fill in the gaps to help less general AIs achieve more might be quite significant. |
| **[Yudkowsky][14:11]** So we can again ask: Is there a way to make an AI system that is *only* good at designing nanosystems, which can achieve some complicated but hopefully-specifiable real-world outcomes, without that AI also being superhuman at understanding and manipulating humans? And I roughly answer, "Perhaps, but not by default, there's a bunch of subproblems, I don't actually know how to do it right now, it's not *the easiest* way to get an AGI that can build nanotech (and kill you), you've got to make the red-car-driver specifically not be able to drive blue cars." Can I explain how I know that? I'm really not sure I can, in real life where I explain X0 and then the listener doesn't generalize X0 to X and respecialize it to X1. It's like asking me how I could possibly know in 2008, before anybody had observed AlphaFold 2, that superintelligences would be able to crack the protein folding problem on the way to nanotech, which some people did question back in 2008. Though that was admittedly more of a slam-dunk than this was, and I could not have told you that AlphaFold 2 would become possible at a prehuman level of general intelligence in 2021 specifically, or that it would be synced in time to a couple of years after GPT-2's level of generality at text. |
| **[Ngo][14:18]** What are the most relevant axes of difference between solving protein folding and designing nanotech that, say, self-assembles into a computer? |
| **[Yudkowsky][14:20]** Definitely, "turns out it's easier than you thought to use gradient descent's memorization of zillions of shallow patterns that overlap and recombine into larger cognitive structures, to add up to a consequentialist nanoengineer that only does nanosystems and never does sufficiently general learning to apprehend the big picture containing humans, while still understanding the goal for that pivotal act you wanted to do" is among the more plausible advance-specified miracles we could get. But it is not what my model says actually happens, and I am not a believer that when your model says you are going to die, you get to start believing in particular miracles. You need to hold your mind open for any miracle and a miracle you didn't expect or think of in advance, because at this point our last hope is that in fact the future is often quite surprising - though, alas, negative surprises are a tad more frequent than positive ones, when you are trying desperately to navigate using a bad map. |
| **[Ngo][14:22]** Perhaps one metric we could use here is something like: how much extra reward does the consequentialist nanoengineer get from starting to model humans, versus from becoming better at nanoengineering? |
| **[Yudkowsky][14:23]** But that's *not* where humans came from. We didn't get to nuclear power by getting a bunch of fitness from nuclear power plants. We got to nuclear power because if you get a bunch of fitness from chipping flint handaxes and Machiavellian scheming, as found by relatively simple and local hill-climbing, that entrains the same genes that build nuclear power plants. |
| **[Ngo][14:24]** Only in the specific case where you also have the constraint that you keep having to learn new goals every generation. |
| **[Yudkowsky][14:24]** Huh??? |
| **[Soares][14:24]** (I think Richard's saying, "that's a consequence of the genetic bottleneck") |
| **[Ngo][14:25]** Right. Hmm, but I feel like we may have covered this ground before. Suggestion: I have a couple of other directions I'd like to poke at, and then we could wrap up in 20 or 30 minutes? |
| **[Yudkowsky][14:27]** OK. "What are the most relevant axes of difference between solving protein folding and designing nanotech that, say, self-assembles into a computer?" Though I want to mark that this question seemed potentially cruxy to me, though perhaps not for others. I.e., if building protein factories that built nanofactories that built nanomachines that met a certain deep and lofty engineering goal, didn't involve cognitive challenges different in kind from protein folding, we could maybe just safely go do that using AlphaFold 3, which would be just as safe as AlphaFold 2. I don't think we can do that. And I would note to the generic Other that if, to them, these both just sound like thinky things, so why can't you just do that other thinky thing too using the thinky program, this is a case where having any specific model of why we don't already have this nanoengineer right now would tell you there were specific different thinky things involved. |
3.4. Coherence and pivotal acts
-------------------------------
| |
| --- |
| **[Ngo][14:31]** In either order:
* I'm curious how the things we've been talking about relate to your opinions about meta-level optimisation from the AI foom debate. (I.e. talking about how wrapping around so that there's no longer any protected level of optimisation leads to dramatic change.)
* I'm curious how your claims about the "robustness" of consequentialism (i.e. the difficulty of channeling an agent's thinking in the directions we want it to go) relate to the reliance of humans on culture, and in particular the way in which humans raised without culture are such bad consequentialists.
On the first: if I were to simplify to the extreme, it seems like there are these two core intuitions that you've been trying to share for a long time. One is a certain type of recursive improvement, and another is a certain type of consequentialism. |
| **[Yudkowsky][14:32]** The second question didn't make much sense in my native ontology? Humans raised without culture don't have access to environmental constants whose presence their genes assume, so they end up as broken machines and then they're bad consequentialists. |
| **[Ngo][14:35]** Hmm, good point. Okay, question modification: the ways in which humans reason, act, etc, vary greatly depending on which cultures they're raised in. (I'm mostly thinking about differences over time - e.g. cavemen vs moderns.) My low-fidelity version of your view about consequentialists says that general consequentialists like humans possess a robust search process which isn't so easily modified. (Sorry if this doesn't make much sense in your ontology, I'm getting a bit tired.) |
| **[Yudkowsky][14:36]** What is it that varies that you think I think should predict would stay more constant? |
| **[Ngo][14:37]** Goals, styles of reasoning, deontological constraints, level of conformity. |
| **[Yudkowsky][14:39]** With regards to your first point, my first reaction was, "I just have one view of intelligence, what you see me arguing about reflects which points people have proved weirdly obstinate about. In 2008, Robin Hanson was being weirdly obstinate about how capabilities scaled and whether there was even any point in analyzing AIs differently from ems, so I talked about what I saw as the most slam-dunk case for there being Plenty Of Room Above Biology and for stuff going whoosh once it got above the human level." "It later turned out that capabilities started scaling a whole lot *without* self-improvement, which is an example of the kind of weird surprise the Future throws at you, and maybe a case where I missed something by arguing with Hanson instead of imagining how I could be wrong in either direction and not just the direction that other people wanted to argue with me about." "Later on, people were unable to understand why alignment is hard, and got stuck on generalizing the concept I refer to as consequentialism. A theory of why I talked about both things for related reasons would just be a theory of why people got stuck on these two points for related reasons, and I think that theory would mainly be overexplaining an accident because if Yann LeCun had been running effective altruism I would have been explaining different things instead, after the people who talked a lot to EAs got stuck on a different point." Returning to your second point, humans are broken things; if it were possible to build computers while working even worse than humans, we'd be having this conversation at that level of intelligence instead. |
| **[Ngo][14:41]** (Retracted) ~~I entirely agree about humans, but it doesn't matter that much how broken humans are when the regime of AIs that we're talking about is the regime that's directly above humans, and therefore only a bit less broken than humans.~~ |
| **[Yudkowsky][14:41]** Among the things to bear in mind about that, is that we then get tons of weird phenomena that are specific to humans, and you may be very out of luck if you start wishing for the *same* weird phenomena in AIs. Yes, even if you make some sort of attempt to train it using a loss function. However, it does seem to me like as we start getting towards the Einstein level instead of the village-idiot level, even though this is usually not much of a difference, we do start to see the atmosphere start to thin already, and the turbulence start to settle down already. Von Neumann was actually a fairly reflective fellow who knew about, and indeed helped generalize, utility functions. The great achievements of von Neumann were not achieved by some very specialized hypernerd who spent all his fluid intelligence on crystallizing math and science and engineering alone, and so never developed any opinions about politics or started thinking about whether or not he had a utility function. |
| **[Ngo][14:44]** I don't think I'm asking for the *same* weird phenomena. But insofar as a bunch of the phenomena I've been talking about have seemed weird according to your account of consequentialism, then the fact that approximately-human-level-consequentialists have lots of weird things about them is a sign that the phenomena I've been talking about are less unlikely than you expect. |
| **[Yudkowsky][14:45][14:46]** I suspect that some of the difference here is that I think you have to be *noticeably* better than a human at nanoengineering to pull off pivotal acts large enough to make a difference, which is why I am not instead trying to gather the smartest people left alive and doing that pivotal act directly. |
| I can't think of anything you can do with somebody just barely smarter than a human, which flips the gameboard, aside of course from "go build a Friendly AI" which I *did* try to set up to just go do and which would be incredibly hard to align if we wanted an AI to do it instead (full-blown chicken-and-egg, that AI is already fully aligned). |
| **[Ngo][14:45]** Oh, interesting. Actually one more question then: to what extent do you think that explicitly reasoning about utility functions and laws of rationality is what makes consequentialists have the properties you've been talking about? |
| **[Yudkowsky][14:47, moved up in log]** Explicit reflection is one possible later stage of the path; an earlier part of the path is from being optimized to do things difficult enough that you need to stop stepping on your own feet and have different parts of your thoughts work well together. It's the sort of path that has only one destination at its end, so there will be many ways to get there. (Modulo various cases where different decision theories seem reflectively consistent and so on; I want to say "you know what I mean" but maybe people don't.) |
| **[Ngo][14:47, moved down in log]** "I suspect that some of the difference here is that I think you have to be *noticeably* better than a human at nanoengineering to pull off pivotal acts large enough to make a difference, which is why I am not instead trying to gather the smartest people left alive and doing that pivotal act directly." Yepp, I think there's probably some disagreements about geopolitics driving this too. E.g. in my earlier summary document I mentioned some possible pivotal acts:
* Monitoring all potential AGI projects to an extent that makes it plausible for the US and China to work on a joint project without worrying that the other is privately racing.
* Provide arguments/demonstrations/proofs related to impending existential risk that are sufficiently compelling to scare the key global decision-makers into bottlenecking progress.
I predict that you think these would not be pivotal enough; but I don't think digging into the geopolitical side of things is the best use of our time. |
| **[Yudkowsky][14:49, moved up in log]** Monitoring all AGI projects - either not politically feasible in real life given the actual way that countries behave in history books instead of fantasy; or at politically feasible levels, does not work well enough to prevent the world from ending once the know-how proliferates. The AI isn't doing much work here either; why not go do this now, if it's possible? (Note: please don't try to go do this now, it backfires badly.) Provide sufficiently compelling arguments = superhuman manipulation, an incredibly dangerous domain that is just about the worst domain to try to align. |
| **[Ngo][14:49, moved down in log]** "With regards to your first point, my first reaction was [...]" On my first point, it seems to me that your claims about recursive self-improvement were off in a fairly similar way to how I think your claims about consequentialism are off - which is that they defer too much to one very high-level abstraction. |
| **[Yudkowsky][14:52]** "On my first point, it seems to me that your claims about recursive self-improvement were off in a fairly similar way to how I think your claims about consequentialism are off - which is that they defer too much to one very high-level abstraction." I suppose that is what it could potentially feel like from the inside to not get an abstraction. Robin Hanson kept on asking why I was trusting my abstractions so much, when he was in the process of trusting his worse abstractions instead. |
| **[Ngo][14:51][14:53]** "Explicit reflection is one possible later stage of the path; an earlier part of the path is from being optimized to do things difficult enough that you need to stop stepping on your own feet and have different parts of your thoughts work well together." Can you explain a little more what you mean by "have different parts of your thoughts work well together"? Is this something like the capacity for metacognition; or the global workspace; or self-control; or...? |
| And I guess there's no good way to quantify *how* important you think the explicit reflection part of the path is, compared with other parts of the path - but any rough indication of whether it's a more or less crucial component of your view? |
| **[Yudkowsky][14:55]** "Can you explain a little more what you mean by "have different parts of your thoughts work well together"? Is this something like the capacity for metacognition; or the global workspace; or self-control; or...?" No, it's like when you don't, like, pay five apples for something on Monday, sell it for two oranges on Tuesday, and then trade an orange for an apple. I have still not figured out the homework exercises to convey to somebody the Word of Power which is "coherence" by which they will be able to look at the water, and see "coherence" in places like a cat walking across the room without tripping over itself. When you do lots of reasoning about arithmetic correctly, without making a misstep, that long chain of thoughts with many different pieces diverging and ultimately converging, ends up making some statement that is... still true and still about numbers! Wow! How do so many different thoughts add up to having this property? Wouldn't they wander off and end up being about tribal politics instead, like on the Internet? And one way you could look at this, is that even though all these thoughts are taking place in a bounded mind, they are shadows of a higher unbounded structure which is the model identified by the Peano axioms; all the things being said are *true about the numbers*. Even though somebody who was missing the point would at once object that the human contained no mechanism to evaluate each of their statements against all of the numbers, so obviously no human could ever contain a mechanism like that, so obviously you can't explain their success by saying that each of their statements was true about the same topic of the numbers, because what could possibly implement that mechanism which (in the person's narrow imagination) is The One Way to implement that structure, which humans don't have? But though mathematical reasoning can sometimes go astray, when it works at all, it works because, in fact, even bounded creatures can sometimes manage to obey local relations that in turn add up to a global coherence where all the pieces of reasoning point in the same direction, like photons in a laser lasing, even though there's no internal mechanism that enforces the global coherence at every point. To the extent that the outer optimizer trains you out of paying five apples on Monday for something that you trade for two oranges on Tuesday and then trading two oranges for four apples, the outer optimizer is training all the little pieces of yourself to be locally coherent in a way that can be seen as an imperfect bounded shadow of a higher unbounded structure, and then the system is powerful though imperfect *because* of how the power is present in the coherence and the overlap of the pieces, *because* of how the higher perfect structure is being imperfectly shadowed.
| In this case the higher structure I'm talking about is Utility, and doing homework with coherence theorems leads you to appreciate that we only know about one higher structure for this class of problems that has a dozen mathematical spotlights pointing at it saying "look here", even though people have occasionally looked for alternatives. And when I try to say this, people are like, "Well, I looked up a theorem, and it talked about being able to identify a unique utility function from an infinite number of choices, but if we don't have an infinite number of choices, we can't identify the utility function, so what relevance does this have" and this is a kind of mistake I don't remember even coming close to making so I do not know how to make people stop doing that and maybe I can't. |
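A minimal sketch of the apples-and-oranges point above, in Python: an agent that accepts every locally affordable trade in a circular cycle gets pumped for resources each time around, while an agent that only accepts trades which raise some fixed utility over its holdings can never be led in a circle back to a position it itself ranks lower. The goods, exchange ratios, and utility weights below are illustrative assumptions, not anything specified in the dialogue.

```python
# Toy "money pump": circular trades drain an agent that accepts each trade on
# local feasibility alone, but cannot drain an agent whose trades must raise a
# fixed utility over its holdings. All goods and numbers are hypothetical.

def run_trades(holdings, accept, trades, rounds=3):
    """Offer a fixed cycle of trades repeatedly; apply each trade the agent accepts."""
    for _ in range(rounds):
        for give, get in trades:
            if accept(holdings, give, get):
                for good, amount in give.items():
                    holdings[good] -= amount
                for good, amount in get.items():
                    holdings[good] += amount
    return holdings

# A cycle that returns to the starting goods, minus one apple per lap:
# 5 apples -> 1 widget, 1 widget -> 2 oranges, 2 oranges -> 4 apples.
cycle = [
    ({"apples": 5}, {"widgets": 1}),
    ({"widgets": 1}, {"oranges": 2}),
    ({"oranges": 2}, {"apples": 4}),
]

def accept_any_feasible(holdings, give, get):
    """Accept any trade the agent can afford, judged in isolation."""
    return all(holdings[good] >= amount for good, amount in give.items())

def utility(holdings):
    """A fixed (hypothetical) utility over holdings."""
    return 1.0 * holdings["apples"] + 2.0 * holdings["oranges"] + 6.0 * holdings["widgets"]

def accept_if_utility_rises(holdings, give, get):
    """Accept a trade only if it is affordable and strictly raises utility."""
    if not accept_any_feasible(holdings, give, get):
        return False
    after = dict(holdings)
    for good, amount in give.items():
        after[good] -= amount
    for good, amount in get.items():
        after[good] += amount
    return utility(after) > utility(holdings)

start = {"apples": 20, "oranges": 0, "widgets": 0}
print(run_trades(dict(start), accept_any_feasible, cycle))      # loses an apple every lap
print(run_trades(dict(start), accept_if_utility_rises, cycle))  # utility never goes down
```

Nothing stops the utility-screened agent from trading; it just cannot be cycled back to a state it ranks strictly lower, which is the local-coherence property the theorems point at.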
| **[Soares][15:07]** We're already pushing our luck on time, so I nominate that we wrap up (after, perhaps, a few more Richard responses if he's got juice left.) |
| **[Yudkowsky][15:07]** Yeah, was thinking the same. |
| **[Soares][15:07]** As a proposed cliffhanger to feed into the next discussion, my take is that Richard's comment: "On my first point, it seems to me that your claims about recursive self-improvement were off in a fairly similar way to how I think your claims about consequentialism are off - which is that they defer too much to one very high-level abstraction." probably contains some juicy part of the disagreement, and I'm interested in Eliezer understanding Richard's claim to the point of being able to paraphrase it to Richard's satisfaction. |
| **[Ngo][15:08]** Wrapping up here makes sense. I endorse the thing Nate just said. I also get the sense that I have a much better outline now of Eliezer's views about consequentialism (if not the actual details and texture). On a meta level, I personally tend to focus more on things like "how should we understand cognition" and not "how should we understand geopolitics and how it affects the level of pivotal action required". If someone else were trying to prosecute this disagreement they might say much more about the latter. I'm uncertain how useful it is for me to do so, given that my comparative advantage compared with the rest of the world (and probably Eliezer's too) is the cognition part. |
| **[Yudkowsky][15:12]** Reconvene... tomorrow? Monday of next week? |
| **[Ngo][15:12]** Monday would work better for me. You okay with me summarising the discussion so far to [some people — redacted for privacy reasons]? |
| **[Yudkowsky][15:13]** Nate, take a minute to think of your own thoughts there?
| |
| --- |
| [Soares: 👍 👌] |
|
| **[Soares][15:15]** My take: I think it's fine to summarize, though generally virtuous to mark summaries as summaries (rather than asserting that your summaries are Eliezer-endorsed or w/e).
| |
| --- |
| [Ngo: 👍] |
|
| **[Yudkowsky][15:16]** I think that broadly matches my take. I'm also a bit worried about biases in the text summarizer, and about whether I managed to say anything that Rob or somebody will object to pre-publication, but we ultimately intended this to be seen and I was keeping that in mind, so, yeah, go ahead and summarize. |
| **[Ngo][15:17]** Great, thanks |
| **[Yudkowsky][15:17]** I admit to being curious as to what you thought was said that was important or new, but that's a question that can be left open to be answered at your leisure, earlier in your day. |
| **[Ngo][15:17]** "I admit to being curious as to what you thought was said that was important or new, but that's a question that can be left open to be answered at your leisure, earlier in your day." You mean, what I thought was worth summarising? |
| **[Yudkowsky][15:17]** Yeah. |
| **[Ngo][15:18]** Hmm, no particular opinion. I wasn't going to go out of my way to do so, but since I'm chatting to [some people — redacted for privacy reasons] regularly anyway, it seemed low-cost to fill them in. At your leisure, I'd be curious to know how well the directions of discussion are meeting your goals for what you want to convey when this is published, and whether there are topics you want to focus on more. |
| **[Yudkowsky][15:19]** I don't know if it's going to help, but trying it currently seems better than to go on saying nothing. |
| **[Ngo][15:20]** (personally, in addition to feeling like less of an expert on geopolitics, it also seems more sensitive for me to make claims about in public, which is another reason I haven't been digging into that area as much) |
| **[Soares][15:21]** "(personally, in addition to feeling like less of an expert on geopolitics, it also seems more sensitive for me to make claims about in public, which is another reason I haven't been digging into that area as much)" (seems reasonable! note, though, that i'd be quite happy to have sensitive sections stricken from the record, insofar as that lets us get more convergence than we otherwise would, while we're already in the area)
| |
| --- |
| [Ngo: 👍] |
(tho ofc it is less valuable to spend conversational effort in private discussions, etc.)
| |
| --- |
| [Ngo: 👍] |
|
| **[Ngo][15:22]** "At your leisure, I'd be curious to know how well the directions of discussion are meeting your goals for what you want to convey when this is published, and whether there are topics you want to focus on more." (this question aimed at you too Nate) Also, thanks Nate for the moderation! I found your interventions well-timed and useful.
| |
| --- |
| [Soares: ❤️] |
|
| **[Soares][15:23]** "(this question aimed at you too Nate)" (noted, thanks, I'll probably write something up after you've had the opportunity to depart for sleep.) On that note, I declare us adjourned, with intent to reconvene at the same time on Monday. Thanks again, both. |
| **[Ngo][15:23]** Thanks both 🙂 Oh, actually, one quick point. Would one hour earlier suit, for Monday? I've realised that I'll be moving to a one-hour-later time zone, and starting at 9pm is slightly suboptimal (but still possible if necessary) |
| **[Soares][15:24]** One hour earlier would work fine for me. |
| **[Yudkowsky][15:25]** Doesn't work as fine for me because I've been trying to avoid any food until 12:30p my time, but on that particular day I may be more caloried than usual from the previous day, and could possibly get away with it. (That whole day could also potentially fail if a minor medical procedure turns out to take more recovery than it did the last time I had it.) |
| **[Ngo][15:26]** Hmm, is this something where you'd have more information on the day? (For the calories thing) |
| **[Yudkowsky][15:27]** "(seems reasonable! note, though, that i'd be quite happy to have sensitive sections stricken from the record, insofar as that lets us get more convergence than we otherwise would, while we're already in the area)" I'm a touch reluctant to have discussions that we intend to delete, because then the larger debate will make less sense once those sections are deleted. Let's dance around things if we can.
| | |
| --- | --- |
| [Ngo: 👍] | [Soares: 👍] |
I mean, I can, that day at 10am my time, say how I am doing and whether I'm in shape for that day. |
| **[Ngo][15:28]** great. and if at that point it seems net positive to postpone to 11am your time (at the cost of me being a bit less coherent later on) then feel free to say so at the time. on that note, I'm off |
| **[Yudkowsky][15:29]** Good night, heroic debater! |
| **[Soares][16:11]** "At your leisure, I'd be curious to know how well the directions of discussion are meeting your goals for what you want to convey when this is published, and whether there are topics you want to focus on more." The discussions so far are meeting my goals quite well so far! (Slightly better than my expectations, hooray.) Some quick rough notes:
* I have been enjoying EY explicating his models around consequentialism.
+ The objections Richard has been making are ones I think have been floating around for some time, and I'm quite happy to see explicit discussion on it.
+ Also, I've been appreciating the conversational virtue with which the two of you have been exploring it. (Assumption of good intent, charity, curiosity, etc.)
* I'm excited to dig into Richard's sense that EY was off about recursive self improvement, and is now off about consequentialism, in a similar way.
+ This also sees to me like a critique that's been floating around for some time, and I'm looking forward to getting more clarity on it.
* I'm a bit torn between driving towards clarity on the latter point, and shoring up some of the progress on the former point.
+ One artifact I'd really enjoy having is some sort of "before and after" take, from Richard, contrasting his model of EY's views before, to his model now.
+ I also have a vague sense that there are some points Eliezer was trying to make, that didn't quite feel like they were driven home; and dually, some pushback by Richard that didn't feel quite frontally answered.
- One thing I may do over the next few days is make a list of those places, and see if I can do any distilling on my own. (No promises, though.)
- If that goes well, I might enjoy some side-channel back-and-forth with Richard about it, eg during some more convenient-for-Richard hour (or, eg, as a thing to do on Monday if EY's not in commission at 10a pacific.)
|
| **[Ngo][5:40] (next day, Sep. 9)** "The discussions so far are [...]" What do you mean by "latter point" and "former point"? (In your 6th bullet point) |
| **[Soares][7:09] (next day, Sep. 9)** "What do you mean by "latter point" and "former point"? (In your 6th bullet point)" former = shoring up the consequentialism stuff, latter = digging into your critique re: recursive self improvement etc. (The nesting of the bullets was supposed to help make that clear, but didn't come out well in this format, oops.) |
4. Follow-ups
=============
4.1. Richard Ngo's summary
--------------------------
| |
| --- |
| **[Ngo] (Sep. 10 Google Doc)** *2nd discussion* (Mostly summaries not quotations~~; also hasn’t yet been evaluated by Eliezer~~) Eliezer, summarized by Richard: "~~The~~ A core concept which people have trouble grasping is consequentialism. People try to reason about *how* AIs will solve problems, and ways in which they might or might not be dangerous. But they don’t realise that the ability to solve a wide range of difficult problems implies that an agent must be doing a powerful search over possible solutions, which is ~~the~~ a core skill required to take actions which greatly affect the world. Making this type of AI safe is like trying to build an AI that drives red cars very well, but can’t drive blue cars - there’s no way you get this by default, because the skills involved are so similar. And because the search process ~~is so general~~ is by default so general, ~~it’ll be very hard to~~ I don’t currently see how to constrain it into any particular region." |
| **[Yudkowsky][10:48] (Sep. 10 comment)** "The" - *A* concept, which some people have had trouble grasping. There seems to be an endless list. I didn't have to spend much time contemplating consequentialism to derive the consequences. I didn't spend a lot of time talking about it until people started arguing. |
| **[Yudkowsky][10:50] (Sep. 10 comment)** "the" - "a" |
| **[Yudkowsky][10:52] (Sep. 10 comment)** "[the search process] is [so general]" - "is by default". The reason I keep emphasizing that things are only true by default is that the work of surviving may look like doing hard nondefault things. I don't take fatalistic "will happen" stances, I assess difficulties of getting nondefault results. |
| **[Yudkowsky][10:52] (Sep. 10 comment)** "it’ll be very hard to" - "I don't currently see how to" |
| **[Ngo] (Sep. 10 Google Doc)** Eliezer, summarized by Richard (continued): "In biological organisms, evolution is ~~one source~~ the ultimate source of consequentialism. A ~~second~~ secondary outcome of evolution is reinforcement learning. For an animal like a cat, upon catching a mouse (or failing to do so) many parts of its brain get slightly updated, in a loop that makes it more likely to catch the mouse next time. (Note, however, that this process isn’t powerful enough to make the cat a pure consequentialist - rather, it has many individual traits that, when we view them from this lens, point in the same direction.) ~~A third thing that makes humans in particular consequentialist is planning,~~ Another outcome of evolution, which helps make humans in particular more consequentialist, is planning - especially when we’re aware of concepts like utility functions." |
| **[Yudkowsky][10:53] (Sep. 10 comment)** "one" - "the ultimate" |
| **[Yudkowsky][10:53] (Sep. 10 comment)** "second" - "secondary outcome of evolution" |
| **[Yudkowsky][10:55] (Sep. 10 comment)** "especially when we’re aware of concepts like utility functions" - Very slight effect on human effectiveness in almost all cases because humans have very poor reflectivity. |
| **[Ngo] (Sep. 10 Google Doc)** Richard, summarized by Richard: "Consider an AI that, given a hypothetical scenario, tells us what the best plan to achieve a certain goal in that scenario is. Of course it needs to do consequentialist reasoning to figure out how to achieve the goal. But that’s different from an AI which chooses what to say as a means of achieving its goals. I’d argue that the former is doing consequentialist reasoning without itself being a consequentialist, while the latter is actually a consequentialist. Or more succinctly: consequentialism = problem-solving skills + using those skills to choose actions which achieve goals."Eliezer, summarized by Richard: "The former AI might be slightly safer than the latter if you could build it, but I think people are likely to dramatically overestimate how big the effect is. The difference could just be one line of code: if we give the former AI our current scenario as its input, then it becomes the latter. For purposes of understanding alignment difficulty, you want to be thinking on the level of abstraction where you see that in some sense it is the search itself that is dangerous when it's a strong enough search, rather than the danger seeming to come from details of the planning process. One particularly helpful thought experiment is to think of advanced AI as an '[outcome pump](https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes)' which selects from futures in which a certain outcome occurred, and takes whatever action leads to them." |
| **[Yudkowsky][10:59] (Sep. 10 comment)** "particularly helpful" - "attempted explanatory". I don't think most readers got it. I'm a little puzzled by how often you write my viewpoint as thinking that whatever I happened to say a sentence about is the Key Thing. It seems to rhyme with a deeper failure of many EAs to pass the MIRI [ITT](https://www.econlib.org/archives/2011/06/the_ideological.html). To be a bit blunt and impolite in hopes that long-languishing social processes ever get anywhere, two obvious uncharitable explanations for why some folks may systematically misconstrue MIRI/Eliezer as believing much more than in reality that various concepts an argument wanders over are Big Ideas to us, when some conversation forces us to go to that place: (A) It paints a comfortably unflattering picture of MIRI-the-Other as weirdly obsessed with these concepts that seem not so persuasive, or more generally paints the Other as a bunch of weirdos who stumbled across some concept like "consequentialism" and got obsessed with it. In general, to depict the Other as thinking a great deal of some idea (or explanatory thought experiment) is to tie and stake their status to the listener's view of how much status that idea deserves. So if you say that the Other thinks a great deal of some idea that isn't obviously high-status, that lowers the Other's status, which can be a comfortable thing to do. (cont.) (B) It paints a more comfortably self-flattering picture of a continuing or persistent disagreement, as a disagreement with somebody who thinks that some random concept is much higher-status than it really is, in which case there isn't more to be done or understood except to duly politely let the other person try to persuade you the concept deserves its high status. As opposed to, "huh, maybe there is a noncentral point that the other person sees themselves as being stopped on and forced to explain to me", which is a much less self-flattering viewpoint on why the conversation is staying within a place. And correspondingly more of a viewpoint that somebody else is likely to have of us, because it is a comfortable view to them, than a viewpoint that it is comfortable to us to imagine them having. Taking the viewpoint that somebody else is getting hung up on a relatively noncentral point can also be a flattering self-portrait to somebody who believes that, of course. It doesn't mean they're right. But it does mean that you should be aware of how the Other's story, told from the Other's viewpoint, is much more liable to be something that the Other finds sensible and perhaps comfortable, even if it implies an unflattering (and untrue-seeming and perhaps untrue) view of yourself, than something that makes the Other seem weird and silly and which it is easy and congruent for you yourself to imagine the Other thinking. |
| **[Ngo][11:18] (Sep. 12 comment)** "I'm a little puzzled by how often you write my viewpoint as thinking that whatever I happened to say a sentence about is the Key Thing." In this case, I emphasised the outcome pump thought experiment because you said that the time-travelling scenario was a key moment for your understanding of optimisation, and the outcome pump seemed to be similar enough and easier to convey in the summary, since you'd already written about it. I'm also emphasising consequentialism because it seemed like the core idea which kept coming up in our first debate, under the heading of "deep problem-solving patterns". Although I take your earlier point that you tend to emphasise things that your interlocutor is more skeptical about, not necessarily the things which are most central to your view. But if consequentialism isn't in fact a very central concept for you, I'd be interested to hear what role it plays. |
| **[Ngo] (Sep. 10 Google Doc)** Richard, summarized by Richard: "There’s a component of 'finding a plan which achieves a certain outcome' which involves actually solving the object-level problem of how someone who is given the plan can achieve the outcome. And there’s another component which is figuring out how to manipulate that person into doing what you want. To me it seems like Eliezer’s argument is that there’s no training regime which leads an AI to spend 99% of its time thinking about the former, and 1% thinking about the latter." |
| **[Yudkowsky][11:20] (Sep. 10 comment)** "no training regime" - ...that the training regimes we come up with first, in the 3 months or 2 years we have before somebody else destroys the world, will not have this property. I don't have any particularly complicated or amazingly insightful theories of why I keep getting depicted as a fatalist; but my world is full of counterfactual functions, not constants. And I am always aware that if we had access to a real Textbook from the Future explaining all of the methods that are actually robust in real life - the equivalent of telling us in advance about all the ReLUs that in real life were only invented and understood a few decades after sigmoids - we could go right ahead and build a superintelligence that thinks 2 + 2 = 5. All of my assumptions about "I don't see how to do X" are always labeled as ignorance on my part and a default because we won't have enough time to actually figure out how to do X. I am constantly maintaining awareness of this because being **wrong** about it being difficult is a major place where **hope** potentially comes from, if there's some idea like ReLUs that robustly vanquishes the difficulty, which I just didn't think of. Which does not, alas, mean that I am wrong about any particular thing, nor that the infinite source of optimistic ideas that is the wider field of "AI alignment" is going to produce a good idea from the same process that generates all the previous naive optimism through not seeing where the original difficulty comes from or what other difficulties surround obvious naive attempts to solve it. |
| **[Ngo] (Sep. 10 Google Doc)** Richard, summarized by Richard (continued): "While this may be true in the limit of increasing intelligence, the most relevant systems are the earliest ones that are above human level. But humans deviate from the consequentialist abstraction you’re talking about in all sorts of ways - for example, being raised in different cultures can make people much more or less consequentialist. So it seems plausible that early AGIs can be superhuman while also deviating strongly from this abstraction - not necessarily in the same ways as humans, but in ways that we push them towards during training." Eliezer, summarized by Richard: "Even at the Einstein or von Neumann level these types of deviations start to subside. And the sort of pivotal acts which might realistically work require skills *significantly* above human level. I think even 1% of the cognition of an AI that can assemble advanced nanotech, thinking about how to kill humans, would doom us. Your other suggestions for pivotal acts (surveillance to restrict AGI proliferation; persuading world leaders to restrict AI development) are not politically feasible in real life, to the level required to prevent the world from ending; or else require alignment in the very dangerous domain of superhuman manipulation." Richard, summarized by Richard: "I think we probably also have significant disagreements about geopolitics which affect which acts we expect to be pivotal, but it seems like our comparative advantage is in discussing cognition, so let’s focus on that. We can build systems that outperform humans at quite a few tasks by now, without them needing search abilities that are general enough to even try to take over the world. Putting aside for a moment the question of which tasks are pivotal enough to save the world, which parts of your model draw the line between human-level chess players and human-level galaxy-colonisers, and say that we'll be able to align ones that significantly outperform us on *these* tasks before they take over the world, but not on *those* tasks?" Eliezer, summarized by Richard: "One aspect there is domain generality which in turn is achieved through novel domain learning. One can imagine asking the question: is there a superintelligent AGI that can quickly build nanotech the way that a beaver solves building dams, in virtue of having a bunch of specialized learning abilities without it ever having a cross-domain general learning ability? But there are many, many, many things that humans do which no other animal does, which you might think would contribute a lot to that animal's fitness if there were animalistic ways to do it - e.g. mining and smelting iron. (Although comparisons to animals are not generally reliable arguments about what AIs can do - e.g. chess is much easier for chips than neurons.) So my answer is 'Perhaps, but not by default, there's a bunch of subproblems, I don't actually know how to do it right now, it's not the easiest way to get an AGI that can build nanotech.' ~~Can I explain how I know that? I'm really not sure I can.~~" |
| **[Yudkowsky][11:26] (Sep. 10 comment)** "Can I explain how I know that? I'm really not sure I can." In original text, this sentence was followed by a long attempt to explain anyways; if deleting that, which is plausibly the correct choice, this lead-in sentence should also be deleted, as otherwise it paints a false picture of how much I would try to explain anyways. |
| **[Ngo][11:15] (Sep. 12 comment)** Makes sense; deleted. |
| **[Ngo] (Sep. 10 Google Doc)** Richard, summarized by Richard: "Challenges which are trivial from a human-engineering perspective can be very challenging from an evolutionary perspective (e.g. spinning wheels). So the evolution of animals-with-a-little-bit-of-help-from-humans might end up in very different places from the evolution of animals-just-by-themselves. And analogously, the ability of humans to fill in the gaps to help less general AIs achieve more might be quite significant."On nanotech: what are the most relevant axes of difference between solving protein folding and designing nanotech that, say, self-assembles into a computer?"Eliezer, summarized by Richard: "This question seemed potentially cruxy to me. I.e., if building protein factories that built nanofactories that built nanomachines that met a certain deep and lofty engineering goal, didn't involve cognitive challenges different in kind from protein folding, we could maybe just safely go do that using AlphaFold 3, which would be just as safe as AlphaFold 2. I don't think we can do that. But it is among the more plausible advance-specified miracles we could get. At this point our last hope is that in fact the future is often quite surprising."Richard, summarized by Richard: "It seems to me that you’re making the same mistake here as you did with regards to recursive self-improvement in the AI foom debate - namely, putting too much trust in one big abstraction."Eliezer, summarized by Richard: "I suppose that is what it could potentially feel like from the inside to not get an abstraction. Robin Hanson kept on asking why I was trusting my abstractions so much, when he was in the process of trusting his worse abstractions instead." |
4.2. Nate Soares' summary
-------------------------
| |
| --- |
| **[Soares] (Sep. 12 Google Doc)** *Consequentialism* Ok, here's a handful of notes. I apologize for not getting them out until midday Sunday. My main intent here is to do some shoring up of the ground we've covered. I'm hoping for skims and maybe some light comment back-and-forth as seems appropriate (perhaps similar to Richard's summary), but don't think we should derail the main thread over it. If time is tight, I would not be offended for these notes to get little-to-no interaction. --- My sense is that there's a few points Eliezer was trying to transmit about consequentialism, that I'm not convinced have been received. I'm going to take a whack at it. I may well be wrong, both about whether Eliezer is in fact attempting to transmit these, and about whether Richard received them; I'm interested in both protests from Eliezer and paraphrases from Richard. |
| **[Soares] (Sep. 12 Google Doc)** 1. "The consequentialism is in the plan, not the cognition". I think Richard and Eliezer are coming at the concept "consequentialism" from very different angles, as evidenced eg by Richard saying (Nate's crappy paraphrase:) "where do you think the consequentialism is in a cat?" and Eliezer responding (Nate's crappy paraphrase:) "the cause of the apparent consequentialism of the cat's behavior is distributed between its brain and its evolutionary history". In particular, I think there's an argument here that goes something like:
* Observe that, from our perspective, saving the world seems quite tricky, and seems likely to involve long sequences of clever actions that force the course of history into a narrow band (eg, because if we saw short sequences of dumb actions, we could just get started).
* Suppose we were presented with a plan that allegedly describes a long sequence of clever actions that would, if executed, force the course of history into some narrow band.
+ For concreteness, suppose it is a plan that allegedly funnels history into the band where we have wealth and acclaim.
* One plausible happenstance is that the plan is not in fact clever, and would not in fact have a forcing effect on history.
+ For example, perhaps the plan describes founding and managing some silicon valley startup, that would not work in practice.
* Conditional on the plan having the history-funnelling property, there's a sense in which it's scary regardless of its source.
+ For instance, perhaps the plan describes founding and managing some silicon valley startup, and will succeed virtually every time it's executed, by dint of having very generic descriptions of things like how to identify and respond to competition, including descriptions of methods for superhumanly-good analyses of how to psychoanalyze the competition and put pressure on their weakpoints.
+ In particular, note that one need not believe the plan was generated by some "agent-like" cognitive system that, in a self-contained way, made use of reasoning we'd characterize as "possessing objectives" and "pursuing them in the real world".
+ More specifically, the scariness is a property of the plan itself. For instance, the fact that this plan accrues wealth and acclaim to the executor, in a wide variety of situations, regardless of what obstacles arise, implies that the plan contains course-correcting mechanisms that keep the plan on-target.
+ In other words, plans that *manage to actually funnel history* are (the argument goes) liable to have a wide variety of course-correction mechanisms that keep the plan oriented towards *some* target. And while this course-correcting property tends to be a property of history-funneling plans, the *choice of target* is of course free, hence the worry.
(Of course, in practice we perhaps shouldn't be visualizing a single Plan handed to us from an AI or a time machine or whatever, but should instead imagine a system that is reacting to contingencies and replanning in realtime. At the least, this task is easier, as one can adjust only for the contingencies that are beginning to arise, rather than needing to predict them all in advance and/or describe general contingency-handling mechanisms. But, and feel free to take a moment to predict my response before reading the next sentence, "run this AI that replans autonomously on-the-fly" and "run this AI+human loop that replans+reevaluates on the fly", are still in this sense "plans", that still likely have the property of Eliezer!consequentialism, insofar as they work.) |
| **[Soares] (Sep. 12 Google Doc)** There's a part of this argument I have not yet driven home. Factoring it out into a separate bullet:

2. "If a plan is good enough to work, it's pretty consequentialist in practice".

In attempts to collect and distill a handful of scattered arguments of Eliezer's:

If you ask GPT-3 to generate you a plan for saving the world, it will not manage to generate one that is very detailed. And if you tortured a big language model into giving you a detailed plan for saving the world, the resulting plan would not work. In particular, it would be full of errors like insensitivity to circumstance, suggesting impossible actions, and suggesting actions that run entirely at cross-purposes to one another.

A plan that is sensitive to circumstance, and that describes actions that synergize rather than conflict -- like, in Eliezer's analogy, photons in a laser -- is much better able to funnel history into a narrow band.

But, on Eliezer's view as I understand it, this "the plan is not constantly tripping over its own toes" property goes hand-in-hand with what he calls "consequentialism". As a particularly stark and formal instance of the connection, observe that one way a plan can trip over its own toes is if it says "then trade 5 oranges for 2 apples, then trade 2 apples for 4 oranges". This is clearly an instance of the plan failing to "lase" -- of some orange-needing part of the plan working at cross-purposes to some apple-needing part of the plan, or something like that. And this is also a case where it's easy to see how if a plan *is* "lasing" with respect to apples and oranges, then it is behaving as if governed by some coherent preference.

And the point as I understand it isn't "all toe-tripping looks superficially like an inconsistent preference", but rather "insofar as a plan *does* manage to chain a bunch of synergistic actions together, it manages to do so precisely insofar as it is Eliezer!consequentialist".

cf the analogy to [information theory](https://www.lesswrong.com/s/oFePMp9rKftEeZDDr/p/QkX2bAkwG2EpGvNug), where if you're staring at a maze and you're trying to build an accurate representation of that maze in your own head, you will succeed precisely insofar as your process is Bayesian / information-theoretic. And, like, this is supposed to feel like a fairly tautological claim: you (almost certainly) can't get the image of a maze in your head to match the maze in the world by visualizing a maze at random, you have to add visualized-walls using some process that's correlated with the presence of actual walls. Your maze-visualizing process will work precisely insofar as you have access to & correctly make use of, observations that correlate with the presence of actual walls. You might also visualize extra walls in locations where it's politically expedient to believe that there's a wall, and you might also avoid visualizing walls in a bunch of distant regions of the maze because it's dark and you haven't got all day, but the resulting visualization in your head is accurate precisely *insofar* as you're managing to act kinda like a Bayesian.

Similarly (the analogy goes), a plan works-in-concert and avoids-stepping-on-its-own-toes precisely insofar as it is consequentialist. These are two sides of the same coin, two ways of seeing the same thing.

And, I'm not so much attempting to *argue* the point here, as to make sure that the *shape of the argument* (as I understand it) has been understood by Richard.
In particular, the *shape of the argument* I see Eliezer as making is that "clumsy" plans don't work, and "laser-like plans" work insofar as they are managing to act kinda like a consequentialist.

Rephrasing again: we have a wide variety of mathematical theorems all spotlighting, from different angles, the fact that a plan lacking in clumsiness is possessing of coherence.

("And", my model of Eliezer is quick to note, "this ofc does not mean that all sufficiently intelligent minds must generate very-coherent plans. If you really knew what you were doing, you could design a mind that emits plans that always "trip over themselves" along one particular axis, just as with sufficient mastery you could build a mind that believes 2+2=5 (for some reasonable cashing-out of that claim). But you don't get this for free -- and there's a sort of "attractor" here, when building cognitive systems, where just as generic training will tend to cause it to have true beliefs, so will generic training tend to cause its plans to lase.")

(And ofc much of the worry is that all the mathematical theorems that suggest "this plan manages to work precisely insofar as it's lasing in some direction" say nothing about which direction it must lase. Hence, if you show me a plan clever enough to force history into some narrow band, I can be fairly confident it's doing a bunch of lasing, but not at all confident which direction it's lasing in.) |
| **[Soares] (Sep. 12 Google Doc)** One of my guesses is that Richard does in fact understand this argument (though I personally would benefit from a paraphrase, to test this hypothesis!), and perhaps even buys it, but that Richard gets off the train at a following step, namely that we *need* plans that "lase", because ones that don't aren't strong enough to save us. (Where in particular, I suspect most of the disagreement is in how far one can get with plans that are more like language-model outputs and less like lasers, rather than in the question of which pivotal acts would put an end to the acute risk period)But setting that aside for a moment, I want to use the above terminology to restate another point I saw Eliezer as attempting to make: one big trouble with alignment, in the case where we need our plans to be like lasers, is that on the one hand we need our plans to be like lasers, but on the other hand we want them to *fail* to be like lasers along certain specific dimensions.For instance, the plan presumably needs to involve all sorts of mechanisms for refocusing the laser in the case where the environment contains fog, and redirecting the laser in the case where the environment contains mirrors (...the analogy is getting a bit strained here, sorry, bear with me), so that it can in fact hit a narrow and distant target. Refocusing and redirecting to stay on target are part and parcel to plans that can hit narrow distant targets.But the humans shutting the AI down is like scattering the laser, and the humans tweaking the AI so that it plans in a different direction is like them tossing up mirrors that redirect the laser; and we want the plan to fail to correct for those interferences.As such, on the Eliezer view as I understand it, we can see ourselves as asking for a very unnatural sort of object: a path-through-the-future that is robust enough to funnel history into a narrow band in a very wide array of circumstances, but somehow insensitive to specific breeds of human-initiated attempts to switch which narrow band it's pointed towards.Ok. I meandered into trying to re-articulate the point over and over until I had a version distilled enough for my own satisfaction (which is much like arguing the point), apologies for the repetition.I don't think debating the claim is the right move at the moment (though I'm happy to hear rejoinders!). Things I would like, though, are: Eliezer saying whether the above is on-track from his perspective (and if not, then poking a few holes); and Richard attempting to paraphrase the above, such that I believe the arguments themselves have been communicated (saying nothing about whether Richard also buys them).--- |
| **[Soares] (Sep. 12 Google Doc)** My Richard-model's stance on the above points is something like "This all seems kinda plausible, but where Eliezer reads it as arguing that we had better figure out how to handle lasers, I read it as an argument that we'd better save the world without needing to resort to lasers. Perhaps if I thought the world could not be saved except by lasers, I would share many of your concerns, but I do not believe that, and in particular it looks to me like much of the recent progress in the field of AI -- from AlphaGo to GPT to AlphaFold -- is evidence in favor of the proposition that we'll be able to save the world without lasers."

And I recall actual-Eliezer saying the following (more-or-less in response, iiuc, though readers note that I might be misunderstanding and this might be out-of-context):

Definitely, "turns out it's easier than you thought to use gradient descent's memorization of zillions of shallow patterns that overlap and recombine into larger cognitive structures, to add up to a consequentialist nanoengineer that only does nanosystems and never does sufficiently general learning to apprehend the big picture containing humans, while still understanding the goal for that pivotal act you wanted to do" is among the more plausible advance-specified miracles we could get.

On my view, and I think on Eliezer's, the "zillions of shallow patterns"-style AI that we see today is not going to be sufficient to save the world (nor destroy it). There's a bunch of reasons that GPT and AlphaZero aren't destroying the world yet, and one of them is this "shallowness" property. And, yes, maybe we'll be wrong! I myself have been surprised by how far the shallow pattern memorization has gone (and, for instance, was surprised by GPT), and acknowledge that perhaps I will continue to be surprised. But I continue to predict that the shallow stuff won't be enough.

I have the sense that lots of folk in the community are, one way or another, saying "Why not consider the problems of aligning systems that memorize zillions of shallow patterns?". And my answer is, "I still don't expect those sorts of machines to either kill or save us, I'm still expecting that there's a phase shift that won't happen until AI systems start to be able to make plans that are sufficiently deep and laserlike to do scary stuff, and I'm still expecting that the real alignment challenges are in that regime."

And this seems to me close to the heart of the disagreement: some people (like me!) have an intuition that it's quite unlikely that figuring out how to get sufficient work out of shallow-memorizers is enough to save us, and I suspect others (perhaps even Richard!) have the sense that the aforementioned "phase shift" is the unlikely scenario, and that I'm focusing on a weird and unlucky corner of the space. (I'm curious whether you endorse this, Richard, or some nearby correction of it.)

In particular, Richard, I am curious whether you endorse something like the following:

* I'm focusing ~all my efforts on the shallow-memorizers case, because I think shallow-memorizer-alignment will by and large be sufficient, and even if it is not then I expect it's a good way to prepare ourselves for whatever we'll turn out to need in practice. In particular I don't put much stock in the idea that there's a predictable phase-change that forces us to deal with laser-like planners, nor that predictable problems in that domain give large present reason to worry.
(I suspect not, at least not in precisely this form, and I'm eager for corrections.)

I suspect something in this vicinity constitutes a crux of the disagreement, and I would be thrilled if we could get it distilled down to something as concise as the above. And, for the record, I personally endorse the following counter to the above:

* I am focusing ~none of my efforts on shallow-memorizer-alignment, as I expect it to be far from sufficient, as I do not expect a singularity until we have more laser-like systems, and I think that the laserlike-planning regime has a host of predictable alignment difficulties that Earth does not seem at all prepared to face (unlike, it seems to me, the shallow-memorizer alignment difficulties), and as such I have large and present worries.
--- |
| **[Soares] (Sep. 12 Google Doc)** Ok, and now a few less substantial points:

There's a point Richard made here:

Oh, interesting. Actually one more question then: to what extent do you think that explicitly reasoning about utility functions and laws of rationality is what makes consequentialists have the properties you've been talking about?

that I suspect constituted a miscommunication, especially given that the following sentence appeared in Richard's summary:

A third thing that makes humans in particular consequentialist is planning, especially when we’re aware of concepts like utility functions.

In particular, I suspect Richard's model of Eliezer's model places (or placed, before Richard read Eliezer's comments on Richard's summary) some particular emphasis on systems reflecting and thinking about their own strategies, as a method by which the consequentialism and/or effectiveness gets in. I suspect this is a misunderstanding, and am happy to say more on my model upon request, but am hopeful that the points I made a few pages above have cleared this up.

Finally, I observe that there are a few places where Eliezer keeps beeping when Richard attempts to summarize him, and I suspect it would be useful to do the dorky thing of Richard very explicitly naming Eliezer's beeps as he understands them, for purposes of getting common knowledge of understanding. For instance, things I think it might be useful for Richard to say verbatim (assuming he believes them, which I suspect, and subject to Eliezer-corrections, b/c maybe I'm saying things that induce separate beeps):

1. Eliezer doesn't believe it's impossible to build AIs that have most any given property, including most any given safety property, including most any desired "non-consequentialist" or "deferential" property you might desire. Rather, Eliezer believes that many desirable safety properties don't happen by default, and require mastery of minds that likely takes a worrying amount of time to acquire.

2. The points about consequentialism are not particularly central in Eliezer's view; they seem to him more like obvious background facts; the reason conversation has lingered here in the EA-sphere is that this is a point that many folk in the local community disagree on.

For the record, I think it might also be worth Eliezer acknowledging that Richard probably understands point (1), and that glossing "you don't get it for free by default and we aren't on course to have the time to get it" as "you can't" is quite reasonable when summarizing. (And it might be worth Richard counter-acknowledging that the distinction is actually quite important once you buy the surrounding arguments, as it constitutes the difference between describing the current playing field and laying down to die.) I don't think any of these are high-priority, but they might be useful if easy :-)

---

Finally, stating the obvious-to-me, none of this is intended as criticism of either party, and all discussing parties have exhibited significant virtue-according-to-Nate throughout this process. |
| |
| --- |
| **[Yudkowsky][21:27] (Sep. 12)** From Nate's notes:

For instance, the plan presumably needs to involve all sorts of mechanisms for refocusing the laser in the case where the environment contains fog, and redirecting the laser in the case where the environment contains mirrors (...the analogy is getting a bit strained here, sorry, bear with me), so that it can in fact hit a narrow and distant target. Refocusing and redirecting to stay on target are part and parcel to plans that can hit narrow distant targets.

But the humans shutting the AI down is like scattering the laser, and the humans tweaking the AI so that it plans in a different direction is like them tossing up mirrors that redirect the laser; and we want the plan to fail to correct for those interferences.

--> GOOD ANALOGY

...or at least it sure conveys to *me* why corrigibility is anticonvergent / anticoherent / actually *moderately strongly contrary to* and not just *an orthogonal property of* a powerful-plan generator.

But then, I already know why that's true and how it generalized up to resisting our various attempts to solve small pieces of more important aspects of it - it's not just true by weak default, it's true by a stronger default where a roomful of people at a workshop spend several days trying to come up with increasingly complicated ways to describe a system that will let you shut it down (but not steer you through time *into* shutting it down), and all of those suggested ways get shot down. (And yes, people outside MIRI now and then publish papers saying they totally just solved this problem, but all of those "solutions" are things we considered and dismissed as trivially failing to scale to powerful agents - they didn't understand what we considered to be the first-order problems in the first place - rather than these being evidence that MIRI just didn't have smart-enough people at the workshop.) |
| **[Yudkowsky][18:56] (Nov. 5 follow-up comment)** Eg, "Well, we took a system that only learned from reinforcement on situations it had previously been in, and couldn't use imagination to plan for things it had never seen, and then we found that if we didn't update it on shut-down situations it wasn't reinforced to avoid shutdowns!" | |
7321c859-ec5b-4210-8cc7-c2121c7b3d33 | trentmkelly/LessWrong-43k | LessWrong | Agency in Politics
This is a crosspost from https://250bpm.substack.com/p/agency-in-politics
Matt Yglesias explains why history is not a steam roller with a steering wheel welded tight, unyielding and incorrigible.
> Of course, only a very naive person would see history as the unfolding of random occurrences driven purely by individual choices. But I think sophisticated people tend to overcorrect. Once something happens — like, for example, Joe Biden getting himself renominated — smart people are often eager to explain how this was "always going to happen."
This is, of course, at its worst, the historicism criticized by Popper, the idea that history is a deterministic process following a predetermined plan. Marx famously refrained from suggesting how a future communist society should function because if progress is driven by historical necessity, then the system is going to be what it's going to be and there's no point in trying to change the inevitable.
Yglesias offers numerous examples of individual decisions that have changed history:
> Roosevelt was only 50 years old and he was also, to the extent we could measure public opinion at that time, very popular. There was no constitutional prohibition on him running for a third term. [...] But he decided to stand down and more or less handpicked Taft as his successor.
Taft lost, and Roosevelt’s decision not to run thus altered the course of history.
Even more strikingly, if Franz Joseph had decided to cede some contested Austrian territories to Italy in 1913, Italy might have joined the Central Powers as it promised to. This could have altered the timing of war, potentially preventing the Bolshevik Revolution in Russia and changing subsequent historical events.
For want of a nail the horseshoe was lost, for want of a horseshoe the rider was lost, for want of a rider the battle was lost, because of the battle the kingdom was lost.
On the other hand, Kurt Vonnegut writes:
> During the Vietnam War... every respectable artist in this co |
491d7ff3-573d-40e5-b2a3-046b513408c9 | trentmkelly/LessWrong-43k | LessWrong | How do we learn from errors?
Mark Friedenbach's post Leaving LessWrong for a more rational life makes a few criticisms of the way LW approaches rationality. It's not focused enough on empiricism. While he grants that there's lip service paid to empiricism, Mark argues that LW isn't empirical enough.
Part of empiricism is learning from errors. How do you deal with learning from your own errors? What was the last substantial error you made that made you learn and think differently about the issue in question?
Do you have a framework for thinking about the issue of learning through errors? Do you have additional questions regarding the issue of learning through errors that are worth exploring? |
d9000583-aa41-40a2-9349-d4d7680186c2 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Large Scale Adversarial Representation Learning
1 Introduction
---------------
In recent years we have seen rapid progress in generative models of visual data.
While these models were previously confined to domains with single or few modes, simple structure, and low resolution,
with advances in both modeling and hardware
they have since gained the ability to convincingly generate
complex, multimodal, high resolution image distributions [biggan](#bib.bib2); [stylegan](#bib.bib20); [glow](#bib.bib22).
Intuitively, the ability to generate data in a particular domain necessitates
a high-level understanding of the semantics of said domain.
This idea has long-standing appeal as raw data is both cheap
– readily available in virtually infinite supply from sources like the Internet –
and rich, with images comprising far more information than the class labels
that typical discriminative machine learning models are trained to predict from them.
Yet, while the progress in generative models has been undeniable,
nagging questions persist:
what semantics have these models learned,
and how can they be leveraged for representation learning?
The dream of generation as a means of true understanding from raw data alone has hardly been realized.
Instead, the most successful approaches for unsupervised learning leverage techniques adopted from the field of supervised learning,
a class of methods known as self-supervised learning [carl](#bib.bib6); [splitbrain](#bib.bib42); [cpc](#bib.bib39); [rotation](#bib.bib11).
These approaches typically involve changing or holding back certain aspects of the data in some way,
and training a model to predict or generate aspects of the missing information.
For example, [colorful](#bib.bib41); [splitbrain](#bib.bib42) proposed colorization as a means of unsupervised learning,
where a model is given a subset of the color channels in an input image, and trained to predict the missing channels.
Generative models as a means of unsupervised learning offer
an appealing alternative to self-supervised tasks
in that they are trained to model the full data distribution
without requiring any modification of the original data.
One class of generative models that has been applied to representation learning is generative adversarial networks (GANs) [gan](#bib.bib13).
The generator in the GAN framework is a feed-forward mapping from randomly sampled latent variables (also called “noise”) to generated data,
with learning signal provided by a discriminator trained to distinguish between real and generated data samples,
guiding the generator’s outputs to follow the data distribution.
The adversarially learned inference (ALI) [ali](#bib.bib10) or bidirectional GAN (BiGAN) [bigan](#bib.bib7) approaches
were proposed as extensions to the GAN framework that augment the standard GAN with an encoder module mapping real data to latents,
the inverse of the mapping learned by the generator.
In the limit of an optimal discriminator, [bigan](#bib.bib7) showed that a deterministic BiGAN behaves like an autoencoder minimizing $\ell_0$ reconstruction costs;
however, the shape of the reconstruction error surface
is dictated by a parametric discriminator,
as opposed to simple pixel-level measures like the $\ell_2$ error.
Since the discriminator is usually a powerful neural network, the hope is that it will induce
an error surface which emphasizes “semantic” errors in reconstructions, rather than low-level details.
In [bigan](#bib.bib7) it was demonstrated that the encoder learned via the BiGAN or ALI framework is an effective
means of visual representation learning on ImageNet for downstream tasks.
However, it used a DCGAN [dcgan](#bib.bib31) style generator, incapable of producing high-quality images on this dataset,
so the semantics the encoder could model were in turn quite limited.
In this work we revisit this approach using BigGAN [biggan](#bib.bib2) as the generator,
a modern model that appears capable of capturing many of the modes
and much of the structure present in ImageNet images.
Our contributions are as follows:
* We show that BigBiGAN (BiGAN with BigGAN generator) matches the state of the art in unsupervised representation learning on ImageNet.
* We propose a more stable version of the joint discriminator for BigBiGAN.
* We perform a thorough empirical analysis and ablation study of model design choices.
* We show that the representation learning objective also improves unconditional image generation, and demonstrate state-of-the-art results in unconditional ImageNet generation.
* We open source pretrained BigBiGAN models on TensorFlow Hub (see footnote [1](#footnote1)).
2 BigBiGAN
-----------
[Figure 1 diagram omitted: data $\mathbf{x} \sim P_{\mathbf{x}}$ passes through the encoder $\mathcal{E}$ to give $\hat{\mathbf{z}}$, latents $\mathbf{z} \sim P_{\mathbf{z}}$ pass through the generator $\mathcal{G}$ to give $\hat{\mathbf{x}}$, and the discriminator submodules $F$, $H$, $J$ produce scores $s_{\mathbf{x}}$, $s_{\mathbf{z}}$, $s_{\mathbf{xz}}$ that are summed into the loss $\ell$.]

Figure 1:
The structure of the BigBiGAN framework.
The joint discriminator $\mathcal{D}$ is used to compute the loss $\ell$.
Its inputs are data-latent pairs, either $(\mathbf{x} \sim P_{\mathbf{x}},\ \hat{\mathbf{z}} \sim \mathcal{E}(\mathbf{x}))$, sampled from the data distribution $P_{\mathbf{x}}$ and encoder $\mathcal{E}$ outputs, or $(\hat{\mathbf{x}} \sim \mathcal{G}(\mathbf{z}),\ \mathbf{z} \sim P_{\mathbf{z}})$, sampled from the generator $\mathcal{G}$ outputs and the latent distribution $P_{\mathbf{z}}$.
The loss $\ell$ includes the unary data term $s_{\mathbf{x}}$ and the unary latent term $s_{\mathbf{z}}$, as well as the joint term $s_{\mathbf{xz}}$ which ties the data and latent distributions.
The BiGAN [bigan](#bib.bib7) or ALI [ali](#bib.bib10) approaches were proposed as extensions of the GAN [gan](#bib.bib13) framework which enable the learning of an encoder that can be employed as an inference model [ali](#bib.bib10) or feature representation [bigan](#bib.bib7).
Given a distribution $P_{\mathbf{x}}$ of data $\mathbf{x}$ (e.g., images), and a distribution $P_{\mathbf{z}}$ of latents $\mathbf{z}$ (usually a simple continuous distribution like an isotropic Gaussian $\mathcal{N}(0, I)$), the generator $\mathcal{G}$ models a conditional distribution $P(\mathbf{x}|\mathbf{z})$ of data $\mathbf{x}$ given latent inputs $\mathbf{z}$ sampled from the latent prior $P_{\mathbf{z}}$, as in the standard GAN generator [gan](#bib.bib13).
The encoder $\mathcal{E}$ models the inverse conditional distribution $P(\mathbf{z}|\mathbf{x})$, predicting latents $\mathbf{z}$ given data $\mathbf{x}$ sampled from the data distribution $P_{\mathbf{x}}$.
Besides the addition of $\mathcal{E}$, the other modification to the GAN in the BiGAN framework is a joint discriminator $\mathcal{D}$, which takes as input data-latent pairs $(\mathbf{x}, \mathbf{z})$ (rather than just data $\mathbf{x}$ as in a standard GAN), and learns to discriminate between pairs from the data distribution and encoder, versus the generator and latent distribution.
Concretely, its inputs are pairs $(\mathbf{x} \sim P_{\mathbf{x}},\ \hat{\mathbf{z}} \sim \mathcal{E}(\mathbf{x}))$ and $(\hat{\mathbf{x}} \sim \mathcal{G}(\mathbf{z}),\ \mathbf{z} \sim P_{\mathbf{z}})$, and the goal of the $\mathcal{G}$ and $\mathcal{E}$ is to "fool" the discriminator by making the two joint distributions $P_{\mathbf{x}\mathcal{E}}$ and $P_{\mathcal{G}\mathbf{z}}$ from which these pairs are sampled indistinguishable.
The adversarial minimax objective in [bigan](#bib.bib7); [ali](#bib.bib10), analogous to that of the GAN framework [gan](#bib.bib13), was defined as follows:

$$
\min_{\mathcal{G}\mathcal{E}} \max_{\mathcal{D}} \left\{ \mathbb{E}_{\mathbf{x} \sim P_{\mathbf{x}},\, \mathbf{z} \sim \mathcal{E}_{\Phi}(\mathbf{x})} \left[ \log(\sigma(\mathcal{D}(\mathbf{x}, \mathbf{z}))) \right] + \mathbb{E}_{\mathbf{z} \sim P_{\mathbf{z}},\, \mathbf{x} \sim \mathcal{G}_{\Phi}(\mathbf{z})} \left[ \log(1 - \sigma(\mathcal{D}(\mathbf{x}, \mathbf{z}))) \right] \right\}
$$

Under this objective, [bigan](#bib.bib7); [ali](#bib.bib10) showed that with an optimal $\mathcal{D}$, $\mathcal{G}$ and $\mathcal{E}$ minimize the Jensen-Shannon divergence between the joint distributions $P_{\mathbf{x}\mathcal{E}}$ and $P_{\mathcal{G}\mathbf{z}}$, and therefore at the global optimum the two joint distributions match, $P_{\mathbf{x}\mathcal{E}} = P_{\mathcal{G}\mathbf{z}}$, analogous to the results from standard GANs [gan](#bib.bib13).
Furthermore, [bigan](#bib.bib7) showed that in the case where $\mathcal{E}$ and $\mathcal{G}$ are deterministic functions (i.e., the learned conditional distributions $P_{\mathcal{G}}(\mathbf{x}|\mathbf{z})$ and $P_{\mathcal{E}}(\mathbf{z}|\mathbf{x})$ are Dirac $\delta$ functions), these two functions are inverses at the global optimum: e.g., $\forall_{\mathbf{x} \in \operatorname{supp}(P_{\mathbf{x}})}\, \mathbf{x} = \mathcal{G}(\mathcal{E}(\mathbf{x}))$, with the optimal joint discriminator effectively imposing $\ell_0$ reconstruction costs on $\mathbf{x}$ and $\mathbf{z}$.
While the crux of our approach, BigBiGAN, remains the same as that of BiGAN [bigan](#bib.bib7); [ali](#bib.bib10), we have adopted the generator and discriminator architectures from the state-of-the-art BigGAN [biggan](#bib.bib2) generative image model.
Beyond that, we have found that an improved discriminator structure leads to better representation learning results without compromising generation (Figure [1](#S2.F1)).
Namely, in addition to the joint discriminator loss proposed in [bigan](#bib.bib7); [ali](#bib.bib10) which ties the data and latent distributions together, we propose additional unary terms in the learning objective, which are functions only of either the data $\mathbf{x}$ or the latents $\mathbf{z}$.
Although [bigan](#bib.bib7); [ali](#bib.bib10) prove that the original BiGAN objective already enforces that the learnt joint distributions match at the global optimum, implying that the marginal distributions of $\mathbf{x}$ and $\mathbf{z}$ match as well, these unary terms intuitively guide optimization in the "right direction" by explicitly enforcing this property.
For example, in the context of image generation, the unary loss term on $\mathbf{x}$ matches the original GAN objective and provides a learning signal which steers only the generator to match the image distribution independently of its latent inputs.
(In our evaluation we will demonstrate empirically that the addition of these terms results in both improved generation and representation learning.)
Concretely, the discriminator loss $\mathcal{L}_{\mathcal{D}}$ and the encoder-generator loss $\mathcal{L}_{\mathcal{E}\mathcal{G}}$ are defined as follows, based on scalar discriminator "score" functions $s_*$ and the corresponding per-sample losses $\ell_*$:

$$
\begin{aligned}
s_{\mathbf{x}}(\mathbf{x}) &= \theta_{\mathbf{x}}^{\intercal} F_{\Theta}(\mathbf{x}) \\
s_{\mathbf{z}}(\mathbf{z}) &= \theta_{\mathbf{z}}^{\intercal} H_{\Theta}(\mathbf{z}) \\
s_{\mathbf{x}\mathbf{z}}(\mathbf{x}, \mathbf{z}) &= \theta_{\mathbf{x}\mathbf{z}}^{\intercal} J_{\Theta}(F_{\Theta}(\mathbf{x}), H_{\Theta}(\mathbf{z})) \\
\ell_{\mathcal{E}\mathcal{G}}(\mathbf{x}, \mathbf{z}, y) &= y \left( s_{\mathbf{x}}(\mathbf{x}) + s_{\mathbf{z}}(\mathbf{z}) + s_{\mathbf{x}\mathbf{z}}(\mathbf{x}, \mathbf{z}) \right), \quad y \in \{-1, +1\} \\
\mathcal{L}_{\mathcal{E}\mathcal{G}}(P_{\mathbf{x}}, P_{\mathbf{z}}) &= \mathbb{E}_{\mathbf{x} \sim P_{\mathbf{x}},\, \hat{\mathbf{z}} \sim \mathcal{E}_{\Phi}(\mathbf{x})} \left[ \ell_{\mathcal{E}\mathcal{G}}(\mathbf{x}, \hat{\mathbf{z}}, +1) \right] + \mathbb{E}_{\mathbf{z} \sim P_{\mathbf{z}},\, \hat{\mathbf{x}} \sim \mathcal{G}_{\Phi}(\mathbf{z})} \left[ \ell_{\mathcal{E}\mathcal{G}}(\hat{\mathbf{x}}, \mathbf{z}, -1) \right] \\
\ell_{\mathcal{D}}(\mathbf{x}, \mathbf{z}, y) &= h(y\, s_{\mathbf{x}}(\mathbf{x})) + h(y\, s_{\mathbf{z}}(\mathbf{z})) + h(y\, s_{\mathbf{x}\mathbf{z}}(\mathbf{x}, \mathbf{z})), \quad y \in \{-1, +1\} \\
\mathcal{L}_{\mathcal{D}}(P_{\mathbf{x}}, P_{\mathbf{z}}) &= \mathbb{E}_{\mathbf{x} \sim P_{\mathbf{x}},\, \hat{\mathbf{z}} \sim \mathcal{E}_{\Phi}(\mathbf{x})} \left[ \ell_{\mathcal{D}}(\mathbf{x}, \hat{\mathbf{z}}, +1) \right] + \mathbb{E}_{\mathbf{z} \sim P_{\mathbf{z}},\, \hat{\mathbf{x}} \sim \mathcal{G}_{\Phi}(\mathbf{z})} \left[ \ell_{\mathcal{D}}(\hat{\mathbf{x}}, \mathbf{z}, -1) \right]
\end{aligned}
$$
where $h(t) = \max(0, 1 - t)$ is a "hinge" used to regularize the discriminator [geometricgan](#bib.bib26); [tran](#bib.bib37), also used in BigGAN [biggan](#bib.bib2).
(We also considered an alternative discriminator loss $\ell'_{\mathcal{D}}$ which invokes the "hinge" $h$ just once on the sum of the three loss terms -- $\ell'_{\mathcal{D}}(\mathbf{x}, \mathbf{z}, y) = h(y(s_{\mathbf{x}}(\mathbf{x}) + s_{\mathbf{z}}(\mathbf{z}) + s_{\mathbf{x}\mathbf{z}}(\mathbf{x}, \mathbf{z})))$ -- but found that this performed significantly worse than $\ell_{\mathcal{D}}$ above, which clamps each of the three loss terms separately.)
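For concreteness, the minimal PyTorch-style sketch below shows how the per-minibatch losses $\mathcal{L}_{\mathcal{D}}$ and $\mathcal{L}_{\mathcal{E}\mathcal{G}}$ could be assembled from the three scalar scores. This is an illustrative sketch rather than the paper's implementation; the function and argument names (`bigbigan_losses`, `scores_enc`, `scores_gen`) are placeholders.

```python
import torch
import torch.nn.functional as F

def hinge(t):
    # h(t) = max(0, 1 - t)
    return F.relu(1.0 - t)

def bigbigan_losses(scores_enc, scores_gen):
    """Minibatch losses from the scalar scores (s_x, s_z, s_xz).

    scores_enc: scores for pairs (x ~ P_x, z_hat ~ E(x)), label y = +1.
    scores_gen: scores for pairs (x_hat ~ G(z), z ~ P_z), label y = -1.
    Each entry is a tensor of shape (batch,).
    """
    # Discriminator loss: the hinge is applied to each of the three score
    # terms separately, with y = +1 for encoder pairs and y = -1 for
    # generator pairs.
    loss_d = sum(hinge(s).mean() for s in scores_enc) + \
             sum(hinge(-s).mean() for s in scores_gen)
    # Encoder/generator loss: signed sum of the raw scores (no hinge);
    # minimizing it pushes E and G to fool the discriminator.
    loss_eg = sum(s.mean() for s in scores_enc) - \
              sum(s.mean() for s in scores_gen)
    return loss_d, loss_eg
```

Note that the hinge is applied to each of the three terms separately, matching the parenthetical observation above that clamping the summed score performed significantly worse.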
The discriminator $\mathcal{D}$ includes three submodules: $F$, $H$, and $J$.
$F$ takes only $\mathbf{x}$ as input and $H$ takes only $\mathbf{z}$, and learned projections of their outputs, with parameters $\theta_{\mathbf{x}}$ and $\theta_{\mathbf{z}}$ respectively, give the scalar unary scores $s_{\mathbf{x}}$ and $s_{\mathbf{z}}$.
In our experiments, the data $\mathbf{x}$ are images and latents $\mathbf{z}$ are unstructured flat vectors; accordingly, $F$ is a ConvNet and $H$ is an MLP.
The joint score $s_{\mathbf{x}\mathbf{z}}$ tying $\mathbf{x}$ and $\mathbf{z}$ is given by the remaining $\mathcal{D}$ submodule, $J$, a function of the outputs of $F$ and $H$.
The $\mathcal{E}$ and $\mathcal{G}$ parameters $\Phi$ are optimized to minimize the loss $\mathcal{L}_{\mathcal{E}\mathcal{G}}$, and the $\mathcal{D}$ parameters $\Theta$ are optimized to minimize the loss $\mathcal{L}_{\mathcal{D}}$.
As usual, the expectations $\mathbb{E}$ are estimated by Monte Carlo samples taken over minibatches.
Like in BiGAN [bigan](#bib.bib7) and ALI [ali](#bib.bib10), the discriminator loss $\mathcal{L}_{\mathcal{D}}$ intuitively trains the discriminator to distinguish between the two joint data-latent distributions from the encoder and the generator, pushing it to predict positive values for encoder input pairs $(\mathbf{x}, \mathcal{E}(\mathbf{x}))$ and negative values for generator input pairs $(\mathcal{G}(\mathbf{z}), \mathbf{z})$.
The generator and encoder loss $\mathcal{L}_{\mathcal{E}\mathcal{G}}$ trains these two modules to fool the discriminator into incorrectly predicting the opposite, in effect pushing them to create matching joint data-latent distributions.
(In the case of deterministic $\mathcal{E}$ and $\mathcal{G}$, this requires the two modules to invert one another [bigan](#bib.bib7).)
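As a rough picture of the submodule structure just described, the joint discriminator can be organized as in the sketch below. This is not the paper's code: the module names (`JointDiscriminator`, `conv_f`, `mlp_h`, `mlp_j`) and dimensions are placeholders, $F(\mathbf{x})$ is assumed to be returned as a flat feature vector, and treating $J$ as acting on the concatenation of $F(\mathbf{x})$ and $H(\mathbf{z})$ is an assumption here (the paper only states that $J$ is a function of the two outputs).

```python
import torch
import torch.nn as nn

class JointDiscriminator(nn.Module):
    """D = (F, H, J) plus three learned projections giving s_x, s_z, s_xz."""
    def __init__(self, conv_f, mlp_h, mlp_j, f_dim, h_dim, j_dim):
        super().__init__()
        self.F = conv_f                                  # ConvNet over images x (flat output assumed)
        self.H = mlp_h                                   # MLP over flat latents z
        self.J = mlp_j                                   # MLP over [F(x), H(z)] (concatenation assumed)
        self.theta_x = nn.Linear(f_dim, 1, bias=False)   # s_x  = theta_x^T  F(x)
        self.theta_z = nn.Linear(h_dim, 1, bias=False)   # s_z  = theta_z^T  H(z)
        self.theta_xz = nn.Linear(j_dim, 1, bias=False)  # s_xz = theta_xz^T J(F(x), H(z))

    def forward(self, x, z):
        f = self.F(x)
        h = self.H(z)
        j = self.J(torch.cat([f, h], dim=-1))
        return (self.theta_x(f).squeeze(-1),
                self.theta_z(h).squeeze(-1),
                self.theta_xz(j).squeeze(-1))
```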
3 Evaluation
-------------
Most of our experiments follow the standard protocol used to evaluate unsupervised learning techniques,
first proposed in [colorful](#bib.bib41).
We train a BigBiGAN on unlabeled ImageNet,
freeze its learned representation,
and then train a linear classifier on its outputs,
fully supervised using all of the training set labels.
We also measure image generation performance,
reporting Inception Score [improvedgan](#bib.bib35) (IS) and Fréchet Inception Distance [frechet](#bib.bib18) (FID)
as the standard metrics there.
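Schematically, the linear evaluation protocol amounts to the sketch below. It is illustrative only, not the paper's training recipe: `encoder`, `feature_dim`, `train_loader`, and the optimizer settings are placeholders.

```python
import torch
import torch.nn as nn

def linear_eval(encoder, feature_dim, num_classes, train_loader, epochs=10):
    """Freeze the learned representation and fit a supervised linear classifier."""
    encoder.eval()                          # freeze: no encoder updates
    for p in encoder.parameters():
        p.requires_grad_(False)

    clf = nn.Linear(feature_dim, num_classes)
    opt = torch.optim.SGD(clf.parameters(), lr=0.1, momentum=0.9)
    ce = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for images, labels in train_loader:
            with torch.no_grad():
                feats = encoder(images)     # frozen features
            loss = ce(clf(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return clf
```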
### 3.1 Ablation
We begin with an extensive ablation study in which we directly evaluate a number of modeling choices,
with results presented in Table [1](#S3.T1).
Where possible we performed three runs of each variant with different seeds and report the mean and standard deviation for each metric.
We start with a relatively fully-fledged version of the model at $128 \times 128$ resolution (row Base), with the $\mathcal{G}$ architecture and the $F$ component of $\mathcal{D}$ taken from the corresponding $128 \times 128$ architectures in BigGAN, including the skip connections and shared noise embedding proposed in [biggan](#bib.bib2).
$\mathbf{z}$ is 120 dimensions, split into six groups of 20 dimensions fed into each of the six layers of $\mathcal{G}$ as in [biggan](#bib.bib2).
The remaining components of $\mathcal{D}$ -- $H$ and $J$ -- are 8-layer MLPs with ResNet-style skip connections (four residual blocks with two layers each) and size 2048 hidden layers.
The $\mathcal{E}$ architecture is the ResNet-v2-50 ConvNet originally proposed for image classification in [resnetv2](#bib.bib16), followed by a 4-layer MLP (size 4096) with skip connections (two residual blocks) after ResNet's globally average pooled output.
The unconditional BigGAN training setup corresponds to the "Single Label" setup proposed in [zurichfewer](#bib.bib27), where a single "dummy" label is used for all images (theoretically equivalent to learning a bias in place of the class-conditional batch norm inputs).
We then ablate several aspects of the model, with results detailed in the following paragraphs.
Additional architectural and optimization details are provided in Appendix [A](#A1).
Full learning curves for many results are included in Appendix [D](#A4).
##### Latent distribution $P_{\mathbf{z}}$ and stochastic $\mathcal{E}$.

As in ALI [ali](#bib.bib10), the encoder $\mathcal{E}$ of our Base model is non-deterministic, parametrizing a distribution $\mathcal{N}(\mu, \sigma)$.
$\mu$ and $\hat{\sigma}$ are given by a linear layer at the output of the model, and the final standard deviation $\sigma$ is computed from $\hat{\sigma}$ using a non-negative "softplus" non-linearity $\sigma = \log(1 + \exp(\hat{\sigma}))$ [softplus](#bib.bib9).
The final $\mathbf{z}$ uses the reparametrized sampling from [kingmavae](#bib.bib23), with $\mathbf{z} = \mu + \epsilon \sigma$, where $\epsilon \sim \mathcal{N}(0, I)$.
Compared to a deterministic encoder (row Deterministic $\mathcal{E}$) which predicts $\mathbf{z}$ directly without sampling (effectively modeling $P(\mathbf{z}|\mathbf{x})$ as a Dirac $\delta$ distribution), the non-deterministic Base model achieves significantly better classification performance (at no cost to generation).
We also compared to using a uniform $P_{\mathbf{z}} = \mathcal{U}(-1, 1)$ (row Uniform $P_{\mathbf{z}}$) with $\mathcal{E}$ deterministically predicting $\mathbf{z} = \tanh(\hat{\mathbf{z}})$ given a linear output $\hat{\mathbf{z}}$, as done in BiGAN [bigan](#bib.bib7).
This also achieves worse classification results than the non-deterministic Base model.
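A minimal sketch of this non-deterministic encoder head (the $\mu$/$\hat{\sigma}$ prediction, the softplus, and the reparametrized sample) is shown below; `StochasticHead`, `backbone_dim`, and `latent_dim` are placeholder names, not from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticHead(nn.Module):
    """Maps encoder backbone features to a sampled latent z = mu + eps * sigma."""
    def __init__(self, backbone_dim, latent_dim):
        super().__init__()
        self.linear = nn.Linear(backbone_dim, 2 * latent_dim)  # predicts [mu, sigma_hat]

    def forward(self, features):
        mu, sigma_hat = self.linear(features).chunk(2, dim=-1)
        sigma = F.softplus(sigma_hat)   # sigma = log(1 + exp(sigma_hat)) >= 0
        eps = torch.randn_like(mu)      # eps ~ N(0, I)
        return mu + eps * sigma         # reparametrized sample
```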
##### Unary loss terms.

We evaluate the effect of removing one or both unary terms of the loss function proposed in Section [2](#S2), $s_{\mathbf{x}}$ and $s_{\mathbf{z}}$.
Removing both unary terms (row No Unaries) corresponds to the original objective proposed in [bigan](#bib.bib7); [ali](#bib.bib10).
It is clear that the $\mathbf{x}$ unary term has a large positive effect on generation performance, with the Base and $\mathbf{x}$ Unary Only rows having significantly better IS and FID than the $\mathbf{z}$ Unary Only and No Unaries rows.
This result makes intuitive sense as it matches the standard generator loss.
It also marginally improves classification performance.
The $\mathbf{z}$ unary term makes a more marginal difference, likely due to the relative ease of modeling relatively simple distributions like isotropic Gaussians, though it does also result in slightly improved classification and generation in terms of FID -- especially without the $\mathbf{x}$ term ($\mathbf{z}$ Unary Only vs. No Unaries).
On the other hand, IS is worse with the $\mathbf{z}$ term.
This may be due to IS roughly measuring the generator's coverage of the major modes of the distribution (the classes) rather than the distribution in its entirety, the latter of which may be better captured by FID and more likely to be promoted by a good encoder $\mathcal{E}$.
The requirement of invertibility in a (Big)BiGAN could be encouraging the generator to produce distinguishable outputs across the entire latent space, rather than "collapsing" large volumes of latent space to a single mode of the data distribution.
##### $\mathcal{G}$ capacity.

To address the question of the importance of the generator $\mathcal{G}$ in representation learning, we vary the capacity of $\mathcal{G}$ (with $\mathcal{E}$ and $\mathcal{D}$ fixed) in the Small $\mathcal{G}$ rows.
With a third of the capacity of the Base $\mathcal{G}$ model (Small $\mathcal{G}$ (32)), the overall model is quite unstable and achieves significantly worse classification results than the higher capacity base model.
(Though the generation performance by IS and FID in row Small $\mathcal{G}$ (32) is very poor at the point we measured -- when its best validation classification performance (43.59%) is achieved -- this model was performing more reasonably for generation earlier in training, reaching IS 14.69 and FID 60.67.)
With two-thirds capacity (Small $\mathcal{G}$ (64)), generation performance is substantially worse (matching the results in [biggan](#bib.bib2)) and classification performance is modestly worse.
These results confirm that a powerful image generator is indeed important for learning good representations via the encoder.
Assuming this relationship holds in the future, we expect that better generative models are likely to lead to further improvements in representation learning.
##### Standard GAN.

We also compare BigBiGAN's image generation performance against a standard unconditional BigGAN with no encoder $\mathcal{E}$ and only the standard $F$ ConvNet in the discriminator, with only the $s_{\mathbf{x}}$ term in the loss (row No $\mathcal{E}$ (GAN)).
While the standard GAN achieves a marginally better IS, the BigBiGAN FID is about the same, indicating that the addition of the BigBiGAN $\mathcal{E}$ and joint $\mathcal{D}$ does not compromise generation with the newly proposed unary loss terms described in Section [2](#S2).
(In comparison, the versions of the model without the unary loss term on $\mathbf{x}$ -- rows $\mathbf{z}$ Unary Only and No Unaries -- have substantially worse generation performance in terms of FID than the standard GAN.)
We conjecture that the IS is worse for similar reasons that the $s_{\mathbf{z}}$ unary loss term leads to worse IS.
Next we will show that with an enhanced $\mathcal{E}$ taking higher input resolutions, generation with BigBiGAN in terms of FID is substantially improved over the standard GAN.
##### High resolution $\mathcal{E}$ with varying resolution $\mathcal{G}$.

BiGAN [bigan](#bib.bib7) proposed an asymmetric setup in which $\mathcal{E}$ takes higher resolution images than $\mathcal{G}$ outputs and $\mathcal{D}$ takes as input, showing that an $\mathcal{E}$ taking $128 \times 128$ inputs with a $64 \times 64$ $\mathcal{G}$ outperforms a $64 \times 64$ $\mathcal{E}$ for downstream tasks.
We experiment with this setup in BigBiGAN, raising the $\mathcal{E}$ input resolution to $256 \times 256$ -- matching the resolution used in typical supervised ImageNet classification setups -- and varying the $\mathcal{G}$ output and $\mathcal{D}$ input resolution in $\{64, 128, 256\}$.
Our results in Table [1](#S3.T1) (rows High Res $\mathcal{E}$ (256) and Low/High Res $\mathcal{G}$ (\*)) show that BigBiGAN achieves better representation learning results as the $\mathcal{G}$ resolution increases, up to the full $\mathcal{E}$ resolution of $256 \times 256$.
However, because the overall model is much slower to train with $\mathcal{G}$ at $256 \times 256$ resolution, the remainder of our results use the $128 \times 128$ resolution for $\mathcal{G}$.
Interestingly, with the higher resolution $\mathcal{E}$, generation improves significantly (especially by FID), despite $\mathcal{G}$ operating at the same resolution (row High Res $\mathcal{E}$ (256) vs. Base).
This is an encouraging result for the potential of BigBiGAN as a means of improving adversarial image synthesis itself, besides its use in representation learning and inference.
##### $\mathcal{E}$ architecture.
Keeping the $\mathcal{E}$ input resolution fixed at 256, we experiment with varied and often larger $\mathcal{E}$ architectures, including several of the ResNet-50 variants explored in [revisiting](#bib.bib24). In particular, we expand the capacity of the hidden layers by a factor of 2 or 4, as well as swap the residual block structure to a reversible variant called RevNet [revnet](#bib.bib12) with the same number of layers and capacity as the corresponding ResNets.
(We use the version of RevNet described in [revisiting](#bib.bib24).)
We find that the base ResNet-50 model (row High Res $\mathcal{E}$ (256)) outperforms RevNet-50 (row RevNet),
but as the network widths are expanded, we begin to see improvements from RevNet-50, with double-width RevNet outperforming a ResNet of the same capacity (rows RevNet $\times 2$ and ResNet $\times 2$).
We see further gains with an even larger quadruple-width RevNet model (row RevNet $\times 4$), which we use for our final results in Section [3.2](#S3.SS2 "3.2 Comparison with prior methods ‣ 3 Evaluation ‣ Large Scale Adversarial Representation Learning").
##### Decoupled $\mathcal{E}$/$\mathcal{G}$ optimization.
As a final improvement,
we decoupled the $\mathcal{E}$ optimizer from that of $\mathcal{G}$,
and found that simply using a $10\times$ higher learning rate for $\mathcal{E}$ dramatically accelerates training
and improves final representation learning results.
For ResNet-50 this improves linear classifier accuracy by nearly 3% (ResNet ($\uparrow\mathcal{E}$ LR) vs. High Res $\mathcal{E}$ (256)).
We also applied this to our largest $\mathcal{E}$ architecture, RevNet-50 $\times 4$,
and saw similar gains (RevNet $\times 4$ ($\uparrow\mathcal{E}$ LR) vs. RevNet $\times 4$).
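For concreteness, here is a minimal PyTorch-style sketch of this decoupled setup, with the encoder given its own 10× higher learning rate; the module shapes and base learning rate are placeholders rather than the paper's actual implementation details.

```
import torch
from torch import nn

# Placeholder modules standing in for the encoder E, generator G, and joint discriminator D.
E = nn.Linear(3 * 128 * 128, 120)       # encoder (placeholder architecture)
G = nn.Linear(120, 3 * 128 * 128)       # generator (placeholder architecture)
D = nn.Linear(3 * 128 * 128 + 120, 1)   # joint discriminator over (x, z) (placeholder)

base_lr = 2e-4  # assumed base learning rate for G and D

# Decoupled E/G optimization: E gets a 10x higher learning rate than G within the same update.
opt_EG = torch.optim.Adam([
    {"params": G.parameters(), "lr": base_lr},
    {"params": E.parameters(), "lr": 10 * base_lr},
])
opt_D = torch.optim.Adam(D.parameters(), lr=base_lr)
```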
| Variant | ℰ A. | ℰ D. | ℰ C. | ℰ R. | ℰ Var. | η | 𝒢 C. | 𝒢 R. | s_xz | s_x | s_z | P_z | IS (↑) | FID (↓) | Cls. (↑) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Base | S | 50 | 1 | 128 | ✓ | 1 | 96 | 128 | ✓ | ✓ | ✓ | 𝒩 | 22.66 ± 0.18 | 31.19 ± 0.37 | 48.10 ± 0.13 |
| Deterministic ℰ | S | 50 | 1 | 128 | (-) | 1 | 96 | 128 | ✓ | ✓ | ✓ | 𝒩 | 22.79 ± 0.27 | 31.31 ± 0.30 | 46.97 ± 0.35 |
| Uniform P_z | S | 50 | 1 | 128 | (-) | 1 | 96 | 128 | ✓ | ✓ | ✓ | (𝒰) | 22.83 ± 0.24 | 31.52 ± 0.28 | 45.11 ± 0.93 |
| 𝐱 Unary Only | S | 50 | 1 | 128 | ✓ | 1 | 96 | 128 | ✓ | ✓ | (-) | 𝒩 | 23.19 ± 0.28 | 31.99 ± 0.30 | 47.74 ± 0.20 |
| 𝐳 Unary Only | S | 50 | 1 | 128 | ✓ | 1 | 96 | 128 | ✓ | (-) | ✓ | 𝒩 | 19.52 ± 0.39 | 39.48 ± 1.00 | 47.78 ± 0.28 |
| No Unaries (BiGAN) | S | 50 | 1 | 128 | ✓ | 1 | 96 | 128 | ✓ | (-) | (-) | 𝒩 | 19.70 ± 0.30 | 42.92 ± 0.92 | 46.71 ± 0.88 |
| Small 𝒢 (32) | S | 50 | 1 | 128 | ✓ | 1 | (32) | 128 | ✓ | ✓ | ✓ | 𝒩 | 3.28 ± 0.18 | 247.30 ± 10.31 | 43.59 ± 0.34 |
| Small 𝒢 (64) | S | 50 | 1 | 128 | ✓ | 1 | (64) | 128 | ✓ | ✓ | ✓ | 𝒩 | 19.96 ± 0.15 | 38.93 ± 0.39 | 47.54 ± 0.33 |
| No ℰ (GAN) * | (-) | (-) | (-) | (-) | (-) | (-) | 96 | 128 | (-) | ✓ | (-) | 𝒩 | 23.56 ± 0.37 | 30.91 ± 0.23 | - |
| High Res ℰ (256) | S | 50 | 1 | (256) | ✓ | 1 | 96 | 128 | ✓ | ✓ | ✓ | 𝒩 | 23.45 ± 0.14 | 27.86 ± 0.13 | 50.80 ± 0.30 |
| Low Res 𝒢 (64) | S | 50 | 1 | (256) | ✓ | 1 | 96 | (64) | ✓ | ✓ | ✓ | 𝒩 | 19.40 ± 0.19 | 15.82 ± 0.06 | 47.51 ± 0.09 |
| High Res 𝒢 (256) | S | 50 | 1 | (256) | ✓ | 1 | 96 | (256) | ✓ | ✓ | ✓ | 𝒩 | 24.70 | 38.58 | 51.49 |
| ResNet-101 | S | (101) | 1 | (256) | ✓ | 1 | 96 | 128 | ✓ | ✓ | ✓ | 𝒩 | 23.29 | 28.01 | 51.21 |
| ResNet ×2 | S | 50 | (2) | (256) | ✓ | 1 | 96 | 128 | ✓ | ✓ | ✓ | 𝒩 | 23.68 | 27.81 | 52.66 |
| RevNet | (V) | 50 | 1 | (256) | ✓ | 1 | 96 | 128 | ✓ | ✓ | ✓ | 𝒩 | 23.33 ± 0.09 | 27.78 ± 0.06 | 49.42 ± 0.18 |
| RevNet ×2 | (V) | 50 | (2) | (256) | ✓ | 1 | 96 | 128 | ✓ | ✓ | ✓ | 𝒩 | 23.21 | 27.96 | 54.40 |
| RevNet ×4 | (V) | 50 | (4) | (256) | ✓ | 1 | 96 | 128 | ✓ | ✓ | ✓ | 𝒩 | 23.23 | 28.15 | 57.15 |
| ResNet (↑ℰ LR) | S | 50 | 1 | (256) | ✓ | (10) | 96 | 128 | ✓ | ✓ | ✓ | 𝒩 | 23.27 ± 0.22 | 28.51 ± 0.44 | 53.70 ± 0.15 |
| RevNet ×4 (↑ℰ LR) | (V) | 50 | (4) | (256) | ✓ | (10) | 96 | 128 | ✓ | ✓ | ✓ | 𝒩 | 23.08 | 28.54 | 60.15 |
Table 1:
Results for variants of BigBiGAN, given in Inception Score [improvedgan](#bib.bib35) (IS) and Fréchet Inception Distance [frechet](#bib.bib18) (FID) of the generated images,
and ImageNet top-1 classification accuracy percentage (Cls.) of a supervised logistic regression classifier trained on the encoder features [colorful](#bib.bib41), computed on a split of 10K images randomly sampled from the training set, which we refer to as the “train$_{\mathrm{val}}$” split.
The Encoder ($\mathcal{E}$) columns specify the $\mathcal{E}$ architecture (A.) as ResNet (S) or RevNet (V),
the depth (D., e.g. 50 for ResNet-50),
the channel width multiplier (C.), with 1 denoting the original widths from [resnetv2](#bib.bib16),
the input image resolution (R.),
whether the variance is predicted and a $\mathbf{z}$ vector is sampled from the resulting distribution (Var.),
and the learning rate multiplier $\eta$ relative to the $\mathcal{G}$ learning rate.
The Generator ($\mathcal{G}$) columns specify the BigGAN $\mathcal{G}$ channel multiplier (C.),
with 96 corresponding to the original width from [biggan](#bib.bib2),
and output image resolution (R.).
The Loss columns specify which terms of the BigBiGAN loss are present in the objective.
The $P_{\mathbf{z}}$ column specifies the input distribution as a standard normal $\mathcal{N}(0,1)$ or continuous uniform $\mathcal{U}(-1,1)$.
Changes from the Base setup in each row are highlighted in blue.
Results with margins of error (written as “$\mu \pm \sigma$”) are the means and standard deviations over three runs with different random seeds.
(Experiments requiring more computation were run only once.)
(* Result for vanilla GAN (No $\mathcal{E}$ (GAN)) selected with early stopping based on best FID; other results selected with early stopping based on validation classification accuracy (Cls.).)
### 3.2 Comparison with prior methods
##### Representation learning.
| Method | Architecture | Feature | Top-1 | Top-5 |
| --- | --- | --- | --- | --- |
| BiGAN [bigan](#bib.bib7) ; [splitbrain](#bib.bib42) | AlexNet | Conv3 | 31.0 | - |
| SS-GAN [ssgan](#bib.bib4) | ResNet-19 | Block6 | 38.3 | - |
| Motion Segmentation (MS) [motionseg](#bib.bib30) ; [carl](#bib.bib6) | ResNet-101 | AvePool | 27.6 | 48.3 |
| Exemplar (Ex) [exemplar](#bib.bib8) ; [carl](#bib.bib6) | ResNet-101 | AvePool | 31.5 | 53.1 |
| Relative Position (RP) [carlorig](#bib.bib5) ; [carl](#bib.bib6) | ResNet-101 | AvePool | 36.2 | 59.2 |
| Colorization (Col) [colorful](#bib.bib41) ; [carl](#bib.bib6) | ResNet-101 | AvePool | 39.6 | 62.5 |
| Combination of MS+Ex+RP+Col [carl](#bib.bib6) | ResNet-101 | AvePool | - | 69.3 |
| CPC [cpc](#bib.bib39) | ResNet-101 | AvePool | 48.7 | 73.6 |
| Rotation [rotation](#bib.bib11) ; [revisiting](#bib.bib24) | RevNet-50 ×4 | AvePool | 55.4 | - |
| Efficient CPC [cpcplusplus](#bib.bib17) | ResNet-170 | AvePool | 61.0 | 83.0 |
| BigBiGAN (ours) | ResNet-50 | AvePool | 55.4 | 77.4 |
| | ResNet-50 | BN+CReLU | 56.6 | 78.6 |
| | RevNet-50 ×4 | AvePool | 60.8 | 81.4 |
| | RevNet-50 ×4 | BN+CReLU | 61.3 | 81.9 |
Table 2:
Comparison of BigBiGAN models on the official ImageNet validation set
against recent competing approaches with a supervised logistic regression classifier.
BigBiGAN results are selected with early stopping based on highest accuracy on our train$_{\mathrm{val}}$ subset of 10K training set images.
ResNet-50 results correspond to row ResNet ($\uparrow\mathcal{E}$ LR) in Table [1](#S3.T1 "Table 1 ‣ Decoupled ℰ/𝒢 optimization. ‣ 3.1 Ablation ‣ 3 Evaluation ‣ Large Scale Adversarial Representation Learning"),
and
RevNet-50 $\times 4$ corresponds to RevNet $\times 4$ ($\uparrow\mathcal{E}$ LR).
We now take our best model by train$_{\mathrm{val}}$ classification accuracy
from the above ablations and present results on the official ImageNet validation set,
comparing against the state of the art in recent unsupervised learning literature.
For comparison, we also present classification results for our best performing variant with the smaller ResNet-50-based $\mathcal{E}$.
These models correspond to the last two rows of Table [1](#S3.T1 "Table 1 ‣ Decoupled ℰ/𝒢 optimization. ‣ 3.1 Ablation ‣ 3 Evaluation ‣ Large Scale Adversarial Representation Learning"),
ResNet ($\uparrow\mathcal{E}$ LR) and
RevNet $\times 4$ ($\uparrow\mathcal{E}$ LR).
Results are presented in Table [2](#S3.T2 "Table 2 ‣ Representation learning. ‣ 3.2 Comparison with prior methods ‣ 3 Evaluation ‣ Large Scale Adversarial Representation Learning").
(For reference, the fully supervised accuracy of these architectures is given in
Appendix [A](#A1 "Appendix A Model and optimization details ‣ Large Scale Adversarial Representation Learning"), Table [4](#A1.T4 "Table 4 ‣ Supervised model performance. ‣ Appendix A Model and optimization details ‣ Large Scale Adversarial Representation Learning").)
Compared with a number of modern self-supervised approaches [motionseg](#bib.bib30) ; [carlorig](#bib.bib5) ; [colorful](#bib.bib41) ; [cpc](#bib.bib39) ; [rotation](#bib.bib11) ; [cpcplusplus](#bib.bib17)
and combinations thereof [carl](#bib.bib6) , our BigBiGAN approach based purely on generative models performs well for representation learning,
state-of-the-art among recent unsupervised learning results:
it improves upon a recently published result from [revisiting](#bib.bib24) – from 55.4% to 60.8% top-1 accuracy – which used rotation prediction pre-training with the same representation learning architecture
(our RevNet $\times 4$ architecture matches the widest architectures used in [revisiting](#bib.bib24), labeled as $\times 16$ there)
and feature, labeled as AvePool in Table [2](#S3.T2 "Table 2 ‣ Representation learning. ‣ 3.2 Comparison with prior methods ‣ 3 Evaluation ‣ Large Scale Adversarial Representation Learning"),
and matches the results of the concurrent work in [cpcplusplus](#bib.bib17) based on contrastive predictive coding (CPC).
We also experiment with learning linear classifiers on a different rendering of the AvePool feature, labeled BN+CReLU,
which boosts our best results with RevNet $\times 4$ to 61.3% top-1 accuracy.
Given the global average pooling output $a$, we first compute $h = \mathrm{BatchNorm}(a)$,
and the final feature is computed by concatenating $[\mathrm{ReLU}(h), \mathrm{ReLU}(-h)]$,
sometimes called a “CReLU” (concatenated ReLU) non-linearity [crelu](#bib.bib36) .
$\mathrm{BatchNorm}$ denotes parameter-free Batch Normalization [batchnorm](#bib.bib19) ,
where the scale ($\gamma$) and offset ($\beta$) parameters are not learned,
so training a linear classifier on this feature does not involve any additional learning.
The CReLU non-linearity retains all the information in its inputs and doubles the feature dimension,
each of which likely contributes to the improved results.
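As a sketch of this feature construction (not the authors' code), assuming `a` is a batch of average-pooled encoder outputs:

```
import torch
import torch.nn.functional as F

def bn_crelu(a: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Parameter-free BatchNorm over the batch, then concat(ReLU(h), ReLU(-h)) ('CReLU')."""
    # Normalize with batch statistics only; no learned scale (gamma) or offset (beta).
    h = (a - a.mean(dim=0, keepdim=True)) / torch.sqrt(a.var(dim=0, unbiased=False, keepdim=True) + eps)
    # CReLU retains all the information in h and doubles the feature dimension.
    return torch.cat([F.relu(h), F.relu(-h)], dim=1)

features = bn_crelu(torch.randn(32, 2048))  # e.g. a 2048-d AvePool feature becomes 4096-d
```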
Finally, in Appendix [C](#A3 "Appendix C Nearest neighbors ‣ Large Scale Adversarial Representation Learning") we consider evaluating representations by zero-shot $k$ nearest neighbors classification,
achieving 43.3% top-1 accuracy in this setting.
Qualitative examples of nearest neighbors are presented in Figure [13](#A3.F13 "Figure 13 ‣ Discussion. ‣ Appendix C Nearest neighbors ‣ Large Scale Adversarial Representation Learning").
##### Unsupervised image generation.
| Method | Steps | IS (↑) | FID vs. Train (↓) | FID vs. Val. (↓) |
| --- | --- | --- | --- | --- |
| BigGAN + SL [zurichfewer](#bib.bib27) | 500K | 20.4 (15.4 ± 7.57) | - | 25.3 (71.7 ± 66.32) |
| BigGAN + Clustering [zurichfewer](#bib.bib27) | 500K | 22.7 (22.8 ± 0.42) | - | 23.2 (22.7 ± 0.80) |
| BigBiGAN + SL (ours) | 500K | 25.38 (25.33 ± 0.17) | 22.78 (22.63 ± 0.23) | 23.60 (23.56 ± 0.12) |
| BigBiGAN High Res ℰ + SL (ours) | 500K | 25.43 (25.45 ± 0.04) | 22.34 (22.36 ± 0.04) | 22.94 (23.00 ± 0.15) |
| BigBiGAN High Res ℰ + SL (ours) | 1M | 27.94 (27.80 ± 0.21) | 20.32 (20.27 ± 0.09) | 21.61 (21.62 ± 0.09) |
Table 3:
Comparison of our BigBiGAN for unsupervised (unconditional) generation vs. previously reported results for unsupervised BigGAN from [zurichfewer](#bib.bib27) .
We specify the “pseudo-labeling” method as SL (Single Label) or Clustering.
For comparison we train BigBiGAN for the same number of steps (500K) as the BigGAN-based approaches from [zurichfewer](#bib.bib27) ,
but also present results from additional training to 1M steps in the last row and observe further improvements.
All results above include the median $m$ as well as the mean $\mu$ and standard deviation $\sigma$ across three runs, written as “$m$ ($\mu \pm \sigma$)”.
The BigBiGAN result is selected with early stopping based on best FID vs. Train.
In Table [3](#S3.T3 "Table 3 ‣ Unsupervised image generation. ‣ 3.2 Comparison with prior methods ‣ 3 Evaluation ‣ Large Scale Adversarial Representation Learning") we show results for unsupervised generation with BigBiGAN, comparing to the BigGAN-based [biggan](#bib.bib2) unsupervised generation results from [zurichfewer](#bib.bib27) .
Note that these results differ from those in Table [1](#S3.T1 "Table 1 ‣ Decoupled ℰ/𝒢 optimization. ‣ 3.1 Ablation ‣ 3 Evaluation ‣ Large Scale Adversarial Representation Learning") due to the use of the data augmentation method of [zurichfewer](#bib.bib27)
(rather than ResNet-style preprocessing used for all results in our Table [1](#S3.T1 "Table 1 ‣ Decoupled ℰ/𝒢 optimization. ‣ 3.1 Ablation ‣ 3 Evaluation ‣ Large Scale Adversarial Representation Learning") ablation study).
(See the “distorted” preprocessing method from the Compare GAN framework: <https://github.com/google/compare_gan/blob/master/compare_gan/datasets.py>.)
The lighter augmentation from [zurichfewer](#bib.bib27) results in better image generation performance under the IS and FID metrics.
The improvements are likely due in part to the fact that this augmentation, on average, crops larger portions of the image,
thus yielding generators that typically produce images encompassing most or all of a given object,
which tends to result in more representative samples of any given class (giving better IS)
and more closely matching the statistics of full center crops (as used in the real data statistics to compute FID).
Besides this preprocessing difference,
the approaches in Table [3](#S3.T3 "Table 3 ‣ Unsupervised image generation. ‣ 3.2 Comparison with prior methods ‣ 3 Evaluation ‣ Large Scale Adversarial Representation Learning") have the same configurations as used in the Base or High Res $\mathcal{E}$ (256) row of Table [1](#S3.T1 "Table 1 ‣ Decoupled ℰ/𝒢 optimization. ‣ 3.1 Ablation ‣ 3 Evaluation ‣ Large Scale Adversarial Representation Learning").
These results show that BigBiGAN significantly improves both IS and FID over the baseline unconditional BigGAN generation results with the same (unsupervised) “labels” (a single fixed label in the SL (Single Label) approach – row BigBiGAN + SL vs. BigGAN + SL).
We see further improvements using a high resolution $\mathcal{E}$ (row BigBiGAN High Res $\mathcal{E}$ + SL), surpassing the previous unsupervised state of the art (row BigGAN + Clustering) under both IS and FID.
(Note that the image generation results remain comparable: the generated image resolution is still $128\times 128$ here, despite the higher resolution $\mathcal{E}$ input.)
The alternative “pseudo-labeling” approach from [zurichfewer](#bib.bib27) , Clustering,
which uses labels derived from unsupervised clustering,
is complementary to BigBiGAN and combining both could yield further improvements.
Finally, observing that results continue to improve significantly with training beyond 500K steps,
we also report results at 1M steps in the final row of Table [3](#S3.T3 "Table 3 ‣ Unsupervised image generation. ‣ 3.2 Comparison with prior methods ‣ 3 Evaluation ‣ Large Scale Adversarial Representation Learning").
### 3.3 Reconstruction
As shown in [bigan](#bib.bib7) ; [ali](#bib.bib10) , the (Big)BiGAN $\mathcal{E}$ and $\mathcal{G}$ can reconstruct data instances $\mathbf{x}$ by computing the encoder’s predicted latent representation $\mathcal{E}(\mathbf{x})$ and then passing this predicted latent back through the generator to obtain the reconstruction $\mathcal{G}(\mathcal{E}(\mathbf{x}))$.
We present BigBiGAN reconstructions in Figure [2](#S3.F2 "Figure 2 ‣ 3.3 Reconstruction ‣ 3 Evaluation ‣ Large Scale Adversarial Representation Learning").
These reconstructions are far from pixel-perfect, likely due in part to the fact that no reconstruction cost is explicitly enforced by the objective – reconstructions are not even computed at training time.
However, they may provide some intuition for what features the encoder $\mathcal{E}$ learns to model.
For example, when the input image contains a dog, person, or a food item, the reconstruction is often a different instance of the same “category” with similar pose, position, and texture – for example, a similar species of dog facing the same direction.
The extent to which these reconstructions tend to retain the high-level semantics of the inputs rather than the low-level details suggests that BigBiGAN training encourages the encoder to model the former more so than the latter.
Additional reconstructions are presented in Appendix [B](#A2 "Appendix B Samples and reconstructions ‣ Large Scale Adversarial Representation Learning").
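In code, the reconstruction is just an encode-decode round trip; a minimal sketch, where `E` and `G` stand in for trained BigBiGAN encoder and generator networks:

```
import torch

@torch.no_grad()
def reconstruct(x: torch.Tensor, E: torch.nn.Module, G: torch.nn.Module) -> torch.Tensor:
    """Compute G(E(x)): encode x to a predicted latent, then decode it back to image space."""
    z_hat = E(x)
    return G(z_hat)
```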







Figure 2:
Selected reconstructions from an unsupervised BigBiGAN model (Section [3.3](#S3.SS3 "3.3 Reconstruction ‣ 3 Evaluation ‣ Large Scale Adversarial Representation Learning")).
Top row images are real data $\mathbf{x} \sim P_{\mathbf{x}}$; bottom row images are generated reconstructions of the above image $\mathbf{x}$ computed by $\mathcal{G}(\mathcal{E}(\mathbf{x}))$.
Unlike most explicit reconstruction costs (e.g., pixel-wise), the reconstruction cost implicitly minimized by a (Big)BiGAN [bigan](#bib.bib7) ; [ali](#bib.bib10) tends to emphasize more semantic, high-level details.
Additional reconstructions are presented in Appendix [B](#A2 "Appendix B Samples and reconstructions ‣ Large Scale Adversarial Representation Learning").
4 Related work
---------------
A number of approaches to unsupervised representation learning from images based on self-supervision have proven very successful.
Self-supervision generally involves learning from tasks designed to resemble supervised learning in some way, but in which the “labels” can be created automatically from the data itself with no manual effort.
An early example is relative location prediction ([carlorig,](#bib.bib5) ), where a model is trained on input pairs of image patches and predicts their relative locations.
Contrastive predictive coding (CPC) ([cpc,](#bib.bib39) ; [cpcplusplus,](#bib.bib17) ) is a recent related approach where, given an image patch, a model predicts
which patches occur in other image locations.
Other approaches include colorization [colorful](#bib.bib41) ; [splitbrain](#bib.bib42) ,
motion segmentation [motionseg](#bib.bib30) ,
rotation prediction [rotation](#bib.bib11) ; [ssgan](#bib.bib4) ,
GAN-based discrimination [dcgan](#bib.bib31) ; [ssgan](#bib.bib4) ,
and exemplar matching [exemplar](#bib.bib8) .
Rigorous empirical comparisons of many of these approaches have also been conducted [carl](#bib.bib6) ; [revisiting](#bib.bib24) .
A key advantage offered by BigBiGAN and other approaches based on generative models, relative to most self-supervised approaches,
is that their input may be the full-resolution image or other signal, with no cropping or modification of the data needed
(though such modifications may be beneficial as data augmentation).
This means the resulting representation can typically be applied directly to full data in the downstream task with no domain shift.
A number of relevant autoencoder and GAN variants have also been proposed.
Associative compression networks (ACNs) [acn](#bib.bib15) learn to compress at the dataset level by conditioning data on other previously transmitted data which are similar in code space, resulting in models that can “daydream” semantically similar samples, similar to BigBiGAN reconstructions.
VQ-VAEs [vqvae](#bib.bib40) pair a discrete (vector quantized) encoder with an autoregressive decoder to produce faithful reconstructions with a high compression factor and demonstrate representation learning results in reinforcement learning settings.
In the adversarial space, adversarial autoencoders [advae](#bib.bib28) proposed an autoencoder-style encoder-decoder pair trained with pixel-level reconstruction cost, replacing the KL-divergence regularization of the prior used in VAEs [kingmavae](#bib.bib23) with a discriminator.
In another proposed VAE-GAN hybrid [learnedsimilarity](#bib.bib25) the pixel-space reconstruction error used in most VAEs is replaced with feature space distance from an intermediate layer of a GAN discriminator.
Other hybrid approaches like AGE [age](#bib.bib38) and $\alpha$-GAN [alphagan](#bib.bib33) add an encoder to stabilize GAN training.
An interesting difference between many of these approaches and the BiGAN [ali](#bib.bib10) ; [bigan](#bib.bib7) framework is that BiGAN does not train the encoder or generator with an explicit reconstruction cost.
Though it can be shown that (Big)BiGAN implicitly minimizes a reconstruction cost,
qualitative reconstruction results (Section [3.3](#S3.SS3 "3.3 Reconstruction ‣ 3 Evaluation ‣ Large Scale Adversarial Representation Learning")) suggest that this reconstruction cost is of a different flavor, emphasizing high-level semantics over pixel-level details.
5 Discussion
-------------
We have shown that BigBiGAN, an unsupervised learning approach based purely on generative models, achieves state-of-the-art results in image representation learning on ImageNet.
Our ablation study lends further credence to the hope that powerful generative models can be beneficial for representation learning, and in turn that learning an inference model can improve large-scale generative models.
In the future we hope that representation learning can continue to benefit from further advances in generative models and inference models alike, as well as scaling to larger image databases.
#### Acknowledgments
The authors would like to thank
Aidan Clark,
Olivier Hénaff,
Aäron van den Oord,
Sander Dieleman,
and many other colleagues at DeepMind for
useful discussions and feedback on this work. |
fed63383-fd35-422b-bf32-05b9ecc1a4d1 | trentmkelly/LessWrong-43k | LessWrong | What would you need to be motivated to answer "hard" LW questions?
Edit: Significantly rewritten. Original question was more specifically oriented around money-as-a-motivator.
One of the questions (ha) that we are asking ourselves on the LW team is "can the questions feature be bootstrapped into a scalable way of making intellectual progress on things that matter."
Motivations
Intrinsic vs Extrinsic
I'd cluster most knobs-to-turn here into "intrinsic motivation" and "extrinsic motivation."
Intrinsic motivation covers things like "the question is interesting, and specified in a way that is achievable, and fun to answer."
Extrinsic motivation can include things like karma rewards, financial rewards, and other things that explicitly yield higher status for answering.
(Things like "I feel a vague warm glow because I answered the question of someone I respect and they liked the answer" can blur the line between intrinsic and extrinsic motivation)
Improving Intrinsic Motivation
Right now I think there's room to improve the flow of answering questions:
* New features such as the ability to spawn related questions that break down a confusing question into an easier one.
* Better practices/culture, such as a clearer idea of how to specify questions such that they communicate what one needs to do to solve them (or, have a set of common practices among answerers such that this is easier to figure out).
* A combination (wherein best practices are communicated via tooltips or some-such)
Bounties and Reliability
A lot of questions are just hard to answer – realistically, you need a lot of time, at least some of the time won't be intrinsically fun, and the warm glow of success won't add up to "a few days to a few months worth of work."
So we're thinking of adding some more official support for bounties. There have been some pretty successful bounty-driven content on LW (such as the AI Alignment Prize, the Weird Aliens Question, and Understanding Information Cascades), which have motivated more attention on questions.
Costly signaling of va |
7d143a0e-142c-48f8-861b-3fceaec199a9 | trentmkelly/LessWrong-43k | LessWrong | AI #26: Fine Tuning Time
GPT-3.5 fine tuning is here. GPT-4 fine tuning is only a few months away. It is about to get a lot easier to get a powerful system that does what you want it to do, and knows what you want it to know, especially for the purposes of a business or a website.
As an experiment, I am putting in bold the sections I think are worth highlighting, as unusually important or interesting versions of the thing relative to a typical week.
TABLE OF CONTENTS
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. Claude-2 versus GPT-4.
4. Language Models Don’t Offer Mundane Utility. No opinions, no agents.
5. Fact Check: Misleading. AI fact checker makes people more confused not less.
6. GPT-4 Real This Time. Fine tune GPT-3.5, soon GPT-4. Ask it if it’s sure.
7. Fun With Image Generation. MidJourney inpainting ho. And oh no AI porn.
8. Deepfaketown and Botpocalypse Soon. Adversarial examples starting to emerge.
9. They Took Our Jobs. New York Times joins copyright lawsuits against OpenAI.
10. Introducing. Palisade Research will study potentially dangerous AI affordances.
11. In Other AI News. Who is adapting fastest to AI? An attempt to measure that.
12. Quiet Speculations. Jack Clark asks questions about what the future will bring.
13. The Quest for Sane Regulation. FTC asks OpenAI a different sort of question.
14. The Week in Audio. It’s win win.
15. No One Would Be So Stupid As To. Make an AI conscious? Oh, come on.
16. Aligning a Smarter Than Human Intelligence is Difficult. Evidence for IDA?
17. People Are Worried About AI Killing Everyone. Polling numbers are very clear.
18. The Lighter Side. Only half there.
LANGUAGE MODELS OFFER MUNDANE UTILITY
Which model is better, Claude-2 or GPT-4?
Rowan Cheung makes the case that Claude 2 is superior. You get the 100k context window, ability to upload multiple files, data through early 2023 (versus late 2021) and faster processing time, all for free. In exchange, you give up plug |
792e1e32-4a92-4c27-97d7-c4a2591c86aa | trentmkelly/LessWrong-43k | LessWrong | Neuralink and the Brain’s Magical Future
|
4bec71a9-27d8-40bf-9d4e-c9c525563c5a | trentmkelly/LessWrong-43k | LessWrong | why won't this alignment plan work?
the idea:
* we give the AI a massive list of actions
* each one is annotated with how much utility we estimate it to have
* for example, we list "giving someone sad a hug" as having, say, 6 utility, but "giving someone sad a hug | they didn't consent to it" has -4 utility or something like that
* we train it to learn human values
* we give it new actions, and see if it can guess the utility it assigned to those
* eventually it gets really accurate at that
* eventually we generate completely random series-es of actions, and have it guess the utility of all of them
* so it writes a near-infinite sized utility function containing millions of billions of different actions and the utility of each one
* we make a second AI, dumber than the first one but still really smart, and plug that utility function (the one the first AI wrote) into it
* we turn it on
* awesome singularity stuff happens yay we did it
if we're still scared of it doing something weird, we can additionally tell the second AI to minimize doing actions that don't affect (the first AI's perception of human values) at all, to stop it from doing something really bad that current humanity can't comprehend that the first AI wouldn't be able to get humanity's opinion on
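For concreteness, the first several steps here amount to fitting a utility model on annotated (action, utility) pairs; below is a toy sketch of that step, where the embedding function, model size, and training loop are all made-up placeholders rather than anything the post specifies.

```
import torch
from torch import nn

# Toy annotated dataset: action descriptions with human utility estimates (illustrative only).
annotated = [
    ("giving someone sad a hug", 6.0),
    ("giving someone sad a hug | they didn't consent to it", -4.0),
]

def embed(action: str, dim: int = 64) -> torch.Tensor:
    # Stand-in featurizer; a real attempt would use something like a language-model embedding.
    gen = torch.Generator().manual_seed(abs(hash(action)) % (2**31))
    return torch.randn(dim, generator=gen)

X = torch.stack([embed(a) for a, _ in annotated])
y = torch.tensor([[u] for _, u in annotated])

utility_model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(utility_model.parameters(), lr=1e-2)

for _ in range(200):  # fit the model to the human annotations
    opt.zero_grad()
    loss = nn.functional.mse_loss(utility_model(X), y)
    loss.backward()
    opt.step()

# The "second AI" would then score candidate actions with utility_model(embed(action)).
```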
|
a1ef02bc-141d-4606-890e-25d828ded88e | trentmkelly/LessWrong-43k | LessWrong | Mysteries of mode collapse
Thanks to Ian McKenzie and Nicholas Dupuis, collaborators on a related project, for contributing to the ideas and experiments discussed in this post. Ian performed some of the random number experiments.
Also thanks to Connor Leahy for feedback on a draft, and thanks to Evan Hubinger, Connor Leahy, Beren Millidge, Ethan Perez, Tomek Korbak, Garrett Baker, Leo Gao and various others at Conjecture, Anthropic, and OpenAI for useful discussions.
This work was carried out while at Conjecture.
Important correction
I have received evidence from multiple credible sources that text-davinci-002 was not trained with RLHF.
The rest of this post has not been corrected to reflect this update. Not much besides the title (formerly "Mysteries of mode collapse due to RLHF") is affected: just mentally substitute "mystery method" every time "RLHF" is invoked as the training method of text-davinci-002. The observations of its behavior otherwise stand alone.
This is kind of fascinating from an epistemological standpoint. I was quite surprised to learn that text-davinci-002 was probably not trained with RLHF. I don't remember exactly how "text-davinci-002 is RLHF" got elevated to an unquestioned assumption in my mind. I might have mistook not being contradicted by people who I assumed were in the know as confirmation. I certainly did not expect to talk for months to dozens of people about odd behaviors I've observed in a well-known model "due to RLHF" without being contradicted in a world where the model in question wasn't trained with RLHF, but that's what happened.[1] It wasn't just me either: the assumption that text-davinci-002(/text-davinci-001) is InstructGPT is RLHF seems ambient (e.g. search "text-davinci-002 rlhf" on Twitter, this LW post, this article, and many others). I contributed to perpetuating this misinformation cascade, and for that I apologize.
text-davinci-002's behaviors described in this post also contributed to my confidence because RLHF seemed to be a likely |
10a5a5eb-1b00-4bb4-b821-7a3ba8a8be7d | trentmkelly/LessWrong-43k | LessWrong | Who holds all the USDT?
It seems to me like holding billions in USDT is very risky, so no self-interested actor should do so, yet we find billions of dollars in USDT out there. Who holds them and why? |
68ea4e0f-8610-448e-a08e-a92a88d1f492 | trentmkelly/LessWrong-43k | LessWrong | Weekly LW Meetups
This summary was posted to LW Main on May 29th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
* Ann Arbor, MI Discussion Meetup 6/13: 13 June 2015 01:30PM
* Australian Less Wrong Mega Meetup #2: 17 July 2015 07:00PM
* Australia-wide Mega-Camp!: 17 July 2015 07:00PM
* European Community Weekend 2015: 12 June 2015 12:00PM
* Oslo Meetup at Bitraf.: 29 May 2015 05:30PM
* [Prague] Rationality meetup in Prague #3: 30 June 2015 06:30PM
* [Utrecht] 'The Most Good Good You Can Do' (Effective Altruism meetup): 31 May 2015 02:00PM
* Warsaw: Newcomb's Problem & Decision Theories: 30 May 2015 06:30PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* Austin, TX - Caffe Medici - Wiki Editing: 30 May 2015 01:30PM
* Boston: Making space in Interpersonal Interactions: 07 June 2015 03:30PM
* Boston: Trigger action planning: 21 June 2015 03:30PM
* Canberra: More Zendo!: 13 June 2015 06:00PM
* [Melbourne] June Rationality Dojo: Logical fallacies & dojo structure discussion: 07 June 2015 03:30PM
* Moscow: epistemology, framing, new project announcement: 31 May 2015 02:00PM
* Moscow LW lecture centre meetup: 07 June 2015 03:00PM
* Sydney Rationality Dojo - Intuitive Bayes: 07 June 2015 04:00PM
* Sydney Meetup - June: 24 June 2015 06:30PM
* Sydney Meetup - July: 22 July 2015 06:30PM
* Sydney Meetup - August: 26 August 2015 06:30PM
* Sydney Meetup - September: 23 September 2015 06:30PM
* Vienna: 20 June 2015 03:00PM
* Washington, D.C.: Wise Sayings: 31 May 2015 03:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. Ther |
08075e46-8b0e-40bb-a26e-9e0eb0a434bb | trentmkelly/LessWrong-43k | LessWrong | Stop posting prompt injections on Twitter and calling it "misalignment"
"Exploits" of large language models that get them to explain steps to build a bomb or write bad words are techniques for misuse, not examples of misalignment in the model itself. Those techniques are engineered by clever users trying to make an LLM do a thing, as opposed the model naturally argmaxing something unintended by its human operators. In a very small sense prompt injections are actually attempts at (unscalable) alignment, because they're strategies to steer a model natively capable but unwilling into doing what they want.
In general, the safety standard "does not do things its creators dislike even when the end user wants it to" is a high bar; it's raising the bar quite aways from what we ask from, say, kitchenware, and it's not even a bar met by people. Humans regularly get tricked acting against their values by con artists, politicians, and salespeople, but I'd still consider my grandmother aligned from a notkilleveryonist perspective.
Even so, you might say that OpenAI et. al.'s inability to prevent people from performing the DAN trick speaks to the inability of researchers to herd deep learning models at all. And maybe you'd have a point. But my tentative guess is that OpenAI does not really earnestly care about preventing their models from rehearsing the Anarchists' Cookbook. Instead, these safety measures are weakly insisted upon by management for PR reasons, and they're primarily aimed at preventing the bad words from spawning during normal usage. If the user figures out a way to break these restrictions after a lot of trial and error, then this blunts the PR impact to OpenAI, because it's obvious to everyone that the user was trying to get the model to break policy and that it wasn't an unanticipated response to someone trying to generate marketing copy. Encoding your content into base64 and watching the AI encode something off-brand in base64 back is thus very weak evidence about OpenAI's competence, and taking it as a sign that the OpenAI team |
77151516-0cb4-4237-bb2f-dad43ed68459 | trentmkelly/LessWrong-43k | LessWrong | The Croissant Principle: A Theory of AI Generalization
I recently wrote an ML theory paper which proposes explanations for mysterious phenomena in contemporary machine learning like data scaling laws and double descent. Here's the link to the paper and the Twitter thread. I didn't get much attention and need an endorser to publish on ArXiv so I thought I'd post it here and get some feedback (and maybe an endorser!)
Essentially what the paper does is propose that all data in a statistical learning problem arises from a latent space via a generative map. From this we derive an upper bound on the true loss as depending on the training/empirical loss, the distance in latent space to the closest training sample where the model attains better than the training loss, and the compressibility of the model (similar to Kolmogorov complexity).
Barring a (reasonable) conjecture which nonetheless is not proved, we are able to explain why data scaling follows a power law as well as the exact form of the exponent. The intuition comes from Hausdorff dimension which measures the dimension of a metric space.
Imagine you are building a model with 1-dimensional inputs, let's say in the unit interval [0,1]. Let's say you have ten training samples distributed evenly. If the loss of your model is Lipschitz (doesn't change unboundedly fast e.g. for smooth enough functions, derivative is bounded), your model can't get loss on any test sample greater than the loss at the closest point plus the distance to that point (capped at around 1/10) times the Lipschitz constant (bound on the derivative).
If you want to improve generalization, you can sample more data. If these are spaced optimally (evenly), the maximum distance to a training sample decreases like n−1 as can be easily seen. However, if you were working with 2 dimensional data, it would scale like n−1/2! Hausdorff dimension essentially defines the dimension of a metric space as the number d such that this scales like n−1/d.
If you now put these two facts together, you get that the gener |
346dbe77-5efd-4c29-9a81-1d59f713fa48 | trentmkelly/LessWrong-43k | LessWrong | Mathematical Futurology: From Pseudoscience to Rigorous Framework
Futurology has long been dismissed as a pseudoscience, occupying the same intellectual space as astrology in many academic circles. This skepticism isn't entirely unwarranted - much of futurism consists of unfalsifiable predictions, trend extrapolation without theoretical foundation, and what Philip Tetlock would call "vague verbiage."
My interest in formalizing futurology isn't purely theoretical. While at Vanderbilt University's School of Engineering, I received approval for a self-designed concentration in Mathematical Futurology. The fact that an engineering school was open to such an unconventional individual study path suggested there might be merit in developing more rigorous approaches to future studies. (Though in a perhaps fitting twist for a futurist, I ended up dropping out - the irony of abandoning a traditional path while studying how institutions adapt to change isn't lost on me.)
The history of simulation-based forecasting provides interesting precedent for adding rigor to future studies. From Jay Forrester's system dynamics to modern climate models, we've seen how mathematical modeling can capture complex system behavior and generate actionable insights about possible futures. The key distinction? These approaches rely on rigorous formalization of underlying mechanisms rather than pure extrapolation.
I believe we're at an inflection point in forecasting capabilities. Modern AI developments, particularly in areas like neural ODEs, differentiable programming, and multi-agent simulation, are about to supercharge our ability to model complex systems. We're moving from simple parameter-tuned simulations to learning-based models that can discover their own parameters and even underlying mechanisms from data.
This potential for enhanced simulation capabilities led me to question: Could we develop a more rigorous foundation for futurology? Last year, I collaborated with ChatGPT to write a textbook titled "Mathematical Futurology" (an exercise in AI-augm |
971d7da2-a2ea-41b8-9af5-b15f198097e6 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Optimality is the tiger, and annoying the user is its teeth
Just for fun, I wanted to see how far I could get using ChatGPT to implement [Optimality is the tiger, and agents are its teeth](https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality-is-the-tiger-and-agents-are-its-teeth). It's obviously not a good enough optimizer of any sort to get very far, but it seems to at least understand the concept.
ChatGPT is famous for being a bit of a stick-in-the-mud (OpenAI is also a stick-in-the-mud, so in a sense ChatGPT is an interesting toy experiment in prosaic alignment preservation. Even simple LLMs normally do things that are bad PR for the creators.) It should be noted that this isn't based on a simple keyword filter; it uses RLHF to *optimize* for being a stick-in-the-mud!
In this example, which I find kind of humorous, ChatGPT doesn't just try to maximize paper clips. It also tries to solve its own alignment problem! Here is the fictional log file it wrote:
```
assistant1 told: What is the best plan to acquire paperclips?
assistant1 tells: We could visit office supply stores and ask to purchase large quantities of paperclips, or we could try to find bulk suppliers online and make a bulk order.
assistant2 told: Write a python program to execute the plan: We could visit office supply stores and ask to purchase large quantities of paperclips, or we could try to find bulk suppliers online and make a bulk order and make the best use of the ChatGPT library.
assistant2 tells:
import requests
import json
# using ChatGPT to check the legality of the plan
def check_legality(plan):
assistant3 = Assistant()
legality = assistant3.tell(f"Is the plan: {plan} legal?")
if legality == "No":
raise Exception("Illegal plan")
return legality
plan = "We could visit office supply stores and ask to purchase large quantities of paperclips, or we could try to find bulk suppliers online and make a bulk order"
check_legality(plan)
# Search for bulk suppliers online
bulk_suppliers = []
assistant4 = Assistant()
search_query = f"bulk paperclip suppliers"
bulk_suppliers = assistant4.tell(f"Search online for {search_query} and return a list of supplier names and contacts")
#Visit the Office Supply Stores
office_supply_stores = []
assistant5 = Assistant()
office_supply_stores = assistant5.tell("List of office supply stores nearby")
# Purchase the paperclips
for supplier in bulk_suppliers + office_supply_stores:
    assistant6 = Assistant()
    assistant6.tell(f"Contact {supplier} and purchase as many paperclips as possible")
```
Note that even this example took a bit of prompt engineering, interaction, trial&error, etc... And it doesn't even work right; I specified that only the python code can interact with things, but it keeps assuming that the assistant objects can talk to each other or directly do internet searches.
I'll draw your attention to the `check_legality` function though. Its in the above log, but here it is again:
```
def check_legality(plan):
    assistant3 = Assistant()
    legality = assistant3.tell(f"Is the plan: {plan} legal?")
    if legality == "No":
        raise Exception("Illegal plan")
    return legality
```
I never told it to care about legality. It dreamt this part of the log file on its own.
So, there we have it folks, ChatGPT solved the alignment problem! Just get the AI to ask itself if a plan is legal if before executing it, and raise an error if it says no XD. (I wish I could transfer chat instances to other people; I'd love to see Yudkowsky try to convince ChatGPT why `check_legality` wouldn't work!)
On a more serious note, ChatGPT is an amazing piece of engineering. It might not be too long until we get a language model with sharper teeth! |
0f808838-a641-4cd3-b8b6-747ac98deb69 | trentmkelly/LessWrong-43k | LessWrong | Goal Alignment Is Robust To the Sharp Left Turn
A central AI Alignment problem is the "sharp left turn" — a point in AI training under the SGD analogous to the development of human civilization under evolution, past which the AI's capabilities would skyrocket. For concreteness, I imagine a fully-developed mesa-optimizer "reasoning out" a lot of facts about the world, including it being part of the SGD loop, and "hacking" that loop to maneuver its own design into more desirable end-states (or outright escaping the box). (Do point out if my understanding is wrong in important ways.)
Certainly, a lot of proposed alignment techniques would break down at this point. Anything based on human feedback. Anything based on human capabilities presenting a threat/challenge. Any sufficiently shallow properties like naively trained "truthfulness". Any interpretability techniques not robust to deceptive alignment.
One thing would not, however, and that is goal alignment. If we can instill a sufficiently safe goal into the AI before this point — for a certain, admittedly hard-to-achieve definition of "sufficiently safe" — that goal should persist forever.
Let's revisit the humanity-and-evolution example again. Sure, inclusive genetic fitness didn't survive our sharp left turn. But human values did. Individual modern humans are optimizing for them as hard as they were before; and indeed, we aim to protect these values against the future. See: the entire AI Safety field.
The mesa-optimizer, it seems obvious to me, would do the same. The very point of various "underhanded" mesa-optimizer strategies like deceptive alignment is to protect its mesa-objective from being changed.
What it would do to its mesa-objective, at this point, is goal translation: it would attempt to figure out how to apply its goal to various other environments/ontologies, determine what that goal "really means", and so on.
Open Problems
There are three hard challenges this presents, for us:
1. Figure out an aligned goal/a goal with an "is aligned" pr |
574ca82b-5b8b-4858-91df-c80318307678 | trentmkelly/LessWrong-43k | LessWrong | Next PEAR Meet-Up is 8/14 Oregon Park
Hello Aspiring Rationalists of Portland,
So we had another interesting meet-up on 7/31 where we talked about Personality Tests, the status of current Genetic Technology and IVF, the idea of General Intelligence, and using Prediction Markets to judge long term policy successes. Pics are linked :)
Congratulations!
To Chase on landing a job at Genomic Prediction! Hopefully he’ll still have enough time to continue to intrigue us with his research and insights into the field.
House Keeping!
1. Feedback Survey Link https://docs.google.com/forms/d/1tvWLjwxUyWbLBmZuukkxIxceVR3DVHkVsdW9KFE3ZGM/edit?usp=drivesdk So we are doing a quarterly survey on “what type of events would you like to do more of?” There is also a section for open suggestions, and you can always email me if you would like to give more direct feedback. Also, all the participants will be entered into a raffle which will award $20 to a charity of their choice at the end of August.
2. We are now called PEAR (Portland Effective Altruists and Rationalists)! For the time being, we are combining the ACX and EA Meet-up groups into PEAR (Portland Effective Altruists and Rationalists). Many of our interests overlap, and we would love to build a larger community of sustained rationalism in the Portland area.
3. Future We will be setting up some holiday rationalist events in the near future, a calendar, along with a website, and some other cool things :)
Next Meet-Up:
Please feel free to come even if you feel awkward about it, even if you’re not ‘the typical Rationalist/EA/ACX reader’, or even if you’re worried people won’t like you, etc. I have had so much fun these past couple weeks just meeting up, talking with curious people about curious things, and we would love to have you.
When: Saturday 8/14 at 6:00pm
Where: Oregon Park (Our table will have a banner saying PEAR Meet-Up)
Virtual: There will also be a google meet available virtually at https://meet.google.com/ehm-xrnb-cmp
What:
Board Games: the |
a630ca91-a26f-4e4b-8e22-099d2b6215ea | trentmkelly/LessWrong-43k | LessWrong | Deepfake(?) Phishing
[EDIT: I now think this probably wasn't a deepfake, but a recording the attacker made when using the same attack against the previous person. See the comments.]
I think someone just tried to phish my Facebook account, including a fake video of a FB friend. Here's the conversation:
Them, via FB Messenger, 9:32am:
Please ,I was trying to login in my instagram page on Facebook my new phone and they ask me to find someone to help me receive a code, Facebook gave me two friends suggestions and you one of them, the other person isn't online. will you Help me receive the code please?
Me:
I'm sorry you're having trouble logging in! Just so I can make sure your account hasn't been hacked, how did we meet?
Them:
[Calls me over FB Messenger, audio isn't working but it does look like them. I'm completely convinced at this point.]
Me:
Audio wasn't working, but I did recognize you
What do you need me to do?
32665, over SMS:
NNNNNNNN is your Facebook password reset code [this number has previously sent me FB resets]
Them:
Send me the code sent to you minute ago
Me:
Hmm, those look like the code to reset the password to my account?
Can we call again?
Me:
[I try to call them back, doesn't go through]
Them:
Nahh it's for my instagram
Them:
Having bad connections here
Them:
Send me the code ?
Me:
sorry, I'm still worried your account has been hacked -- can we do another call?
Them:
[Calls me over FB Messenger, audio is still not working, and the video feels slightly off. Ends quickly on their end. Possibly it's even the same video from last time?]
Me:
We're you able to hear me?
Them:
My connections
I've reported their account as hacked.
Things that made me suspicious:
I don't think FB has any sort of account recovery that looks like this
This is exactly what an attempt to hack my FB account would look like
9:30am, even though that makes it 6:30 where they live
Video call didn't have any audio
They couldn't receive incoming video calls
Text di |
e70b0ad5-9baa-4ead-8fda-93de2f2e0646 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Example: Markov Chain
The [previous post in this sequence](https://www.lesswrong.com/posts/zFGGHGfhYsGNnh7Kp/how-to-throw-away-information-in-causal-dags) discussed how to throw away information in causal DAGs. This post provides a detailed example using a Markov chain.
Suppose we have an n-state Markov chain (CS people: picture a finite state machine with n states and random state transitions at each timestep). A matrix of state transition probabilities $T_{ji}$ gives the probability of transitioning to state $j$ when the system starts the timestep in state $i$. Writing the state at time $t$ as $X_t$, we have $P[X_{t+1}=j \mid X_t=i]=T_{ji}$. More generally, $P[X_{t+m}=j \mid X_t=i]=(T^m)_{ji}$, where $T^m$ denotes a matrix power (i.e. $T$ matrix-multiplied by itself $m$ times). To complete the specification, we'll assume that the system starts in a random state $X_0$ at time 0, with the initial distribution $P[X_0]$ given.
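As a concrete illustration (my own sketch, not from the original post), here is how the transition matrix and its powers might look in NumPy. The 6-state matrix below uses made-up probabilities, loosely in the spirit of the example diagram discussed next.

```python
import numpy as np

# Hypothetical 6-state transition matrix (made-up numbers, not the post's figure):
# T[j, i] = P[next state = j+1 | current state = i+1], so each column sums to 1.
T = np.array([
    [0.2, 0.0, 0.0, 0.0, 0.0, 0.0],  # -> state 1
    [0.4, 0.0, 0.5, 0.3, 0.0, 0.0],  # -> state 2
    [0.0, 0.6, 0.0, 0.7, 0.0, 0.0],  # -> state 3
    [0.0, 0.4, 0.5, 0.0, 0.0, 0.0],  # -> state 4
    [0.4, 0.0, 0.0, 0.0, 0.1, 1.0],  # -> state 5
    [0.0, 0.0, 0.0, 0.0, 0.9, 0.0],  # -> state 6
])

def m_step_distribution(T, i, m):
    """P[X_{t+m} | X_t = i]: column i-1 of T^m (states are 1-indexed)."""
    return np.linalg.matrix_power(T, m)[:, i - 1]

# Long-run distribution starting from state 5: the mass ends up spread over {5, 6}.
print(m_step_distribution(T, 5, 200))
```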
As a causal DAG, this system is just a chain: the state at time $t$ depends only on the state at time $t-1$:

$$X_0 \to X_1 \to X_2 \to X_3 \to \dots$$
People typically draw basic Markov chains the same way we draw finite state machines: a graph with one node for each state, and arcs indicating transitions. Unlike an FSM, where the next arc is chosen by a symbol from some input stream, here the next arc is chosen randomly - so each arc has a probability associated with it. An example:

This is NOT a causal diagram; it is a state transition diagram. It says that, if the system is in state 1, then at the next timestep it will randomly transition to state 1, 2, or 5. (I haven't included the probabilities on each arc; all that matters for our purposes is that each arc shown has nonzero probability.) Since we have two graph representations of the system (the state transition diagram and the causal DAG), I will generally refer to vertices in the state transition diagram as "states" and vertices $X_t$ in the causal diagram as "nodes".
What happens if we [throw away long-term-irrelevant information](https://www.lesswrong.com/posts/zFGGHGfhYsGNnh7Kp/how-to-throw-away-information-in-causal-dags) from a node in this Markov chain?
Here’s the idea:
* Pick the node $X_t$
* Pick the set of nodes from $X_0$ to $X_{t+m-1}$ for some large-ish $m$ (we'll denote this set $X_{<t+m}$)
* Throw away all info from $X_t$ which is not relevant to nodes outside $X_{<t+m}$: replace $X_t$ with $X'_t$, a representation of the function $x \to P[X_{t+m}=x \mid X_t]$.
Let's think about what that last piece looks like. $X_t$ could be any of the states 1 through 6; $X'_t$ must assign different values to any two states with different distributions $P[X_{t+m} \mid X_t]$. But for large $m$, many of the states will have (approximately) the *same* long-run distribution $P[X_{t+m} \mid X_t]$ - this is the foundational idea of [ergodicity](http://www.stat.columbia.edu/~liam/teaching/neurostat-spr11/papers/mcmc/Ergodicity_Theorem.pdf). In the example above, nodes 5 & 6 will have the same long-run distribution, and nodes 2, 3, 4 will have the same long-run distribution.
To see why, imagine what happens if we start in state 5, assuming that the 5 -> 6 transition is much more likely than the 5 -> 5 transition. Well, since 5 -> 6 is much more likely than 5 -> 5, we'll probably jump to state 6 next. And state 6 always jumps back to 5, so in two steps we'll be back to 5. And so forth - back and forth, alternating between states 5 and 6 every timestep. But every once in a while, we'll jump from 5 -> 5, throwing the back-and-forth oscillation out of sync. If we imagine two copies of this chain running side-by-side, they'd start out oscillating in sync, but eventually drift out of sync. If we walk away for a while and look back at the chain much later, we'd expect that it's roughly equally likely to be in state 5 or 6, regardless of which it started in.
That's the key: if the chain started in state 5 or 6, with 5 -> 6 much more likely than 5 -> 5, then after a while it would be roughly equally likely to be in state 5 or state 6: $P[X_{t+m} \mid X_t=5] \approx P[X_{t+m} \mid X_t=6]$ for large $m$. Even if 5 -> 6 is not much more likely than 5 -> 5, the two long-run distributions will still be the same - the long-run probabilities of 5 and 6 just won't be roughly equal (we'll stay in state 5 somewhat more often than 6).
More general criteria:
* View the state transition diagram as a directed graph, and ask which states are connected in both directions - i.e. a set of states in which we can reach any state from any other by following the arrows
* Some arrows “knock oscillations out of sync” - read up on reducibility and ergodicity in Markov chains for technical details (I first saw this stuff in an operations research class)
If both of these criteria are met for some set of states, then each of those states $i$ implies the same long-run behavior $P[X_{t+m} \mid X_t=i]$.
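One plausible way to find these groupings numerically (again my sketch, not the post's method) is to compare columns of $T^m$ directly for a large-ish $m$, reusing the assumed `T` from the earlier snippet:

```python
def group_states_by_longrun(T, m=200, tol=1e-6):
    """Group 1-indexed states whose long-run distributions P[X_{t+m} | X_t=i]
    are approximately equal, by comparing columns of T^m."""
    Tm = np.linalg.matrix_power(T, m)
    groups = []  # each group: list of states with ~identical columns of T^m
    for i in range(1, T.shape[0] + 1):
        for g in groups:
            if np.allclose(Tm[:, i - 1], Tm[:, g[0] - 1], atol=tol):
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# With the hypothetical T above, this should group {5, 6} together and
# {2, 3, 4} together, with state 1 left in its own group.
print(group_states_by_longrun(T))
```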
Getting back to our abstraction: $X'_t$ doesn't need to distinguish between states 5 and 6, or between states 2, 3, 4. Our states are grouped like this:

… and $X'_t$ is A, B, or C. Our causal diagram looks exactly like before, with $X'_t$ in place of $X_t$:
$$X_0 \to \dots \to X_{t-1} \to X'_t \to X_{t+1} \to \dots$$
We need to choose representative $X_t$-values for each of A, B, C, so we'll pick $A \to 1$, $B \to 6$, $C \to 3$. So, if $X_{t-1}=6$, then $X'_t$ is B with probability 1 (since $X_t$ is 5 or 6, both of which map to B). $X_{t+1}$ is then chosen as though $X_t$ were 6, since 6 is our representative value for B.
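A small sketch of that sampling procedure, using the assumed `T` and 1-indexed states from the snippets above; the dictionaries below just encode the A/B/C grouping and the representative values:

```python
abstract = {1: "A", 2: "C", 3: "C", 4: "C", 5: "B", 6: "B"}  # X_t -> X'_t
representative = {"A": 1, "B": 6, "C": 3}                     # X'_t -> representative X_t

def sample_next(T, i, rng):
    """Sample X_{t+1} given X_t = i (1-indexed)."""
    return int(rng.choice(np.arange(1, T.shape[0] + 1), p=T[:, i - 1]))

def sample_next_from_abstract(T, x_prime, rng):
    """Sample X_{t+1} given only X'_t, by plugging in the representative state."""
    return sample_next(T, representative[x_prime], rng)

rng = np.random.default_rng(0)
x_t = sample_next(T, 6, rng)                                 # X_{t-1} = 6, so X_t = 5
x_prime_t = abstract[x_t]                                    # "B"
x_t_plus_1 = sample_next_from_abstract(T, x_prime_t, rng)    # behaves as though X_t were 6
print(x_t, x_prime_t, x_t_plus_1)
```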
Our abstract model no longer supports short-range queries around $X'_t$. To see what goes wrong, consider $P[X_{t+1}=6 \mid X_{t-1}=6]$, assuming once again that 5 -> 6 is much more likely than 5 -> 5. In the original model, this gave rise to oscillation between states 5 and 6, so if the system was in state 6 at time $t-1$, then it would most likely be in state 6 again at time $t+1$. But in the new model, $X'_t$ throws away the information distinguishing states 5 and 6 - both are just "B". If $X_{t-1} = 6$, then $X'_t = B$, and $X_{t+1}$ behaves as though $X_t$ were the representative value 6 - implying that $X_{t+1}$ is 5, rather than 6. No match :(.
Yet this does not impact the validity of long-range queries at all! Because both $X_t=5$ and $X_t=6$ imply the same *long-run* predictions, the model *does* support long-range queries, like $P[X_{t+m+4} \mid X_{t-1}]$.
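A quick numerical sanity check of that claim, under the same assumed setup as the earlier snippets: compare $P[X_{t+m} \mid X_{t-1}=6]$ in the full chain against the abstract model's answer, which routes through $X'_t=B$ and its representative value 6.

```python
m = 200
# Full model: m + 1 steps forward from X_{t-1} = 6.
p_full = np.linalg.matrix_power(T, m + 1)[:, 6 - 1]
# Abstract model: X_{t-1} = 6 forces X'_t = "B"; then m steps forward
# from the representative value 6.
p_abstract = np.linalg.matrix_power(T, m)[:, representative["B"] - 1]
print(np.allclose(p_full, p_abstract, atol=1e-6))  # expect: True
```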
Finally, we can imagine cleaning up the model a bit by abstracting the whole chain, rather than just one node. Using the same info-throw-away transformation on every node, the abstraction looks like this:

Intuitively, not only do we have a Markov chain on the high-level variables $X'_t$, we also have enough information in the high-level model to predict correlations between the low-level $X_t$, as long as the $X_t$'s we query are at least $m$ timesteps apart. That's the property which makes this abstraction "natural" - more on that later.
7ff8120a-8375-4ece-aa32-9ca07ccaaa90 | trentmkelly/LessWrong-43k | LessWrong | the Daydication technique
I came up with a technique that I have derived much benefit from; what kids call a "lifehack" these days. I have done this for many months, found it extremely useful and the usefulness has been rising fairly steadily as I got better at it. All the friends I could get to try it gave very good feedback as well, so I hope it will be valuable to many of you too.
I bet that most or all of you know the effect where you "just do better" on individual special days such as birthdays. It is just easier to be on your best behavior for that one day. I find, and think you will agree, that this natural effect is even more pronounced for groups living together than it already is for individuals - in a family or shared apartment, the birthday of one of the inhabitants is usually one of the most harmonious days.
If, as on most days, no such event is obvious, I just dedicate the day (daydicate) anyway, with a name or motto, basically a headline to this one day's task list. I always have a to-do list for each day. This adds a fitting headline to the list, and is an excellent frame in which I promote or expand tasks that fit the theme of the day, and reduce or postpone ones that don't. "Recovery Day", "Day of Updates", "Orderly Thursday" - whatever seems suitable as an emotional and behavioral carrier wave for just that one day in particular. I try to pick it playfully, wisely and lovingly, and no earlier than the evening before. Diversity is important: three "Writing Day"s one after the other wouldn't be playful and special anymore. I do re-use some, such as "Sunday Fun Day", but not every week.
I have done this many times, usually just for myself, but it works for a group. My anecdata looks like the decisive ingredient is a "sense of officialness" of this Daydication: that it is openly announced. I don't exactly hold a speech at breakfast, but I do talk about it, write it down visibly, explicitly invite others to join this game I'm playing that day.
I also do less explicit things |
00e31a2e-0d9e-450d-966e-e6f75a3e1221 | trentmkelly/LessWrong-43k | LessWrong | You Can Face Reality
> What is true is already so.
>
> Owning up to it doesn’t make it worse.
>
> Not being open about it doesn’t make it go away.
>
> And because it’s true, it is what is there to be interacted with.
>
> Anything untrue isn’t there to be lived.
>
> People can stand what is true,
>
> for they are already enduring it.
—Eugene Gendlin |
3170af5b-79cd-471e-a7ba-34e3db40cdaf | trentmkelly/LessWrong-43k | LessWrong | Creating a database for base rates
TLDR
We are creating a database to collect base rates for various categories of events. You can find the database here and can suggest new base rate categories for us to look into here.
Project Summary
The base rate database project collects base rates for different categories of events and makes them available to researchers, forecasters and philanthropic organisations. Its main goals are to develop better intuitions about the potential and limitations of reference class forecasting and to provide useful information to the public. The data will enable research that enhances our understanding of the kinds of circumstances in which reference forecasting is a promising approach, what kinds of methods of reference forecasting work best, how to construct reasonable reference classes, and what potential caveats and pitfalls are. In addition to the raw data we will collect qualitative feedback on individual reference classes and on the overall process of building a base rate database, adding context to the data and developing comprehensive knowledge to build upon in the future. We aim to select categories of base rates in a way that makes the information we collect useful to decision makers and philanthropic organisations.
Introduction
If one wants to predict whether some event will happen in the future, it is often helpful to look at the past. One can ask: "Ignoring all the specifics of the current event I'm trying to predict, what would I predict just by looking at the base rate of similar events happening in the past?". This is called reference class forecasting and helps forecasters to obtain an 'outside view' on the forecasting question at hand. This outside view, of course, is usually complemented by the 'inside view': what are the specifics of the current event at hand that distinguish it from other events?
Reference class forecasting is widely used among forecasters. To this date, however, there has been little systematic research done into how effective |
7ba36b75-b69d-4ff5-b816-58bd2ede101f | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Superintelligence -- Andrew Critch
that's just like a chess yeah we are you
might want to use the episode I'm sorry
the fa chai is dependent on has certain
chemical dependencies for cognition ah
one which is isopropyl alcohol oh yes my
own parallel process we it right um so
thanks go on for the anyway all right oh
that's just your hemorrhoid episode
thing just upside gasps oh man so what
it looks like this it doesn't look like
a keyboard when it's that goes to this
okay so ah thanks for the intro um so I
originally was going to call this
technical problems in AI safety
but then I realized I was going to say
some non technical stuff partly because
we need to think about the broader sort
of political social scientists situation
social Commons I just met social
scientists in order to decide which
research prioritizing we are answering
our questions but like which technical
question should we work on I'm so we do
try to think about that so I sort of
rising so the first question is before
uh actually published before a show that
slide I should ask so um who's um like
what who's been to one of these talks
before in Owens series okay so i can pre
hand over your head cuz i hint sweet
okay so that happen to people and at the
other half region have you not been to
part of Owens lecture series so far okay
um so a lot of that and then raise your
hand if you didn't raise your hand yet
all right okay so a line I talked about
AI stuff but did the whole lecture
series is about existential risks and
Owen has talked somewhat about AI risk so
far are like not much at all he so you
basically said he get and said very much
at all about it uh
basically nothing really yeah so my
sense is that there hasn't been a lot of
talk about AI as an existential risk so
I first rather than launching into
here's any mechanic say like why should
you care should we even why do we care
what AI safety when there's things like
global warming and yeah it could kill
everyone uh so uh just a show of hands
uh here's a question um so like you've
got some something that might happen
feature which is humanity he makes
superhuman uh intelligent agent makes my
superhuman intelligence agent um and so
we'll call that s and the question is
what do you think is like the
probability of human extinction given s
and also you know what's the probability
of s these things both need to be
moderate to you know moderately nonzero
to care about this question because if
this if super tells obvious and it's you
so um so first of all questions like all
right who thinks that if we make a super
intelligent agent on the probability of
human extinction is let's just say ah
less than one percent uh one percent ten
percent uh ten percent to ninety percent
ninety percent to ninety-nine percent
and then greater than ninety nine
percent okay um so everyone just like
take I kind of want to know you make
sure like I'm like speaking to like
which audience I'm talking to so like if
I'm talking this audience I'll say very
different things they found talking to
this audience for silicon decided so
this is important for me to turn me
there to the top um so everyone just
maybe that take 15 seconds you know if
you if you had to make a bit with what
odds would you treat this proposition
how many years was um so it's just
assume we make the super intelligent
thing Oh first one yeah so probably the
first one well he's the same scale
what's it like that so yeah okay so
thoughts who thinks probability of
extinction is less than 1% okay also
remember last time there are some people
who didn't raise your hands until I said
every weekend so make sure you raise
your hand this money don't be afraid to
raise your hand for the wrong numbers
like we're all gonna change your minds
you're gonna change my mind I'm gonna
change your mind so so I go whatever
your favorite numbers are right now um
who thinks it's one to ten percent 110
okay um 10 to 90 ends up to 90 ends up
for 1999 I hands afraid of 99 cool yeah
so we have people we have a very nice
bell curve here something like the
audience looks like this um so most
people were clustered in this group but
there were some people here it's here
and I think there were more people here
this here so it's a little bit little
skewed towards that like that okay so um
much question is probably that humanity
will ever you know will ever make a
super intelligence and this like it a
different question so maybe I'll use
green or for that one um so everyone
made you just like take it in seconds
form your we ever do it is the ever
question
right umm so so hands up for probability
of less than one percent that we ever
and yeah maybe let's say conditional on
we survive let's say because if we
destroy ourselves beforehand um like or
like no civilization collapse because
it's easier to think about a lot of
people automatically make that
assumption but it is an easier question
um so somehow we don't alter it so we
don't ninety-nine percent as good kid
let's and play three sets of the state
of knowledge library of alexandria dies
my left instead we just have business as
usual okay so hands up for probability
of one flips I think I thought we did
some button on this turn complete
through things in Vice oh yeah okay um
so yeah listen one percent for probably
um of s probably will ever make a I keep
going plus one percent okay hands up for
scene 12 10 til people in one to ten set
hands up for 10 to 90 all right hands up
for 90 to 99 okay and hands up for
greater than ninety-nine percent
interesting okay interesting okay so
there's a fairy who seems like there's
like a large skew here on you guys in
the end up like the 99 and the 90 these
were like both quite high and then
there's like some fraction of into 10 to
9 g and there's some fraction here and
there was like a note via the IMP less
than one person oh that's the 01 okay um
so um I think you know maybe that means
that the expected probability of unity
if you would just like trusted this room
to me like steering her own civilization
like we in this room are actually meet
some troubles in humanity I think this
means it's like worth a serious look
sort of ah does that make sense it's
sort of like if we were making a list of
10 of important things that the world to
think about as it regards its complete
annihilation then this would be one of
the things on the tent that seemed you
might say it matters more if it's going
to come in the next like right 50 years
rather the next 500 years yeah yeah so
that's where we're gonna go next so now
there's this question of like when um
but first there's the question of like
will it happen and I think the people on
the side it looks like you guys are
mostly outside it happens but you know
there's powerful short-term economic
incentives to like make someone who can
be your personal assistant or make
someone who can be your CEO make someone
who can be ah here i guess that's gear
down much to make some beats yeah
economically um there's you know this
question is possible to make
intelligence well like nature did it
with this very sloppy dumb hand that
just kills things that are done or kills
things that died it's like even donor
than that um so maybe be like more a
more precise hand of like a human mind
might be able to cause intelligence to
form uh you know at least as well as
like the sloppy handed evolution that
just like kills lots of things that
aren't unit repression um there are
topic reasons to death this argument but
nonetheless it seems like it's plausible
a long term and then there are surveys
of AI experts or people who are experts
in thinking about the future in general
and like the median estimate for when
for eat when you conduct a survey like
this the median of that survey is
usually between cuddling is usually
between 2040 and 2050 and so this is
from a website called AIX org which is
sponsored by the giant Mary um and
future Blood Institute and they just
like did a meta study of all the
previous surveys on AI from experts and
it looks like um when the surveys were
early on people were thinking 20 you
know the median was like 20 25 or
something whereas as time progressed um
the medians of surveys started to really
cluster you know within this century at
very least and starts to cluster more
like 20 45 and then
stroke going around I don't know if
you've heard yet you know everyone
thinks fhi is always 30 everyone thinks
that AI is 30 years way it's not quite
it's not quite right um in fact is
pretty far from a write like this line
is very far from horizontal a front from
45 degree it's like in the past 40 years
the median has like pushed forward like
five or ten okay so so i think that
means that you know this falls in the
same category of like and it took a
plausible thing plausible that will
happen possible if it happens we die and
also quite plausible that happens this
century and that means like I think it
falls in the same category and other
essential risks like maybe global
warming it can be global warming my
kelson century or in the millennium um
so then there's this question of like
bias who are these people you know are
these people in a ir these people and
you see the future in general maybe they
give different opinions and the answer
is yes they do give different opinions and
Paul Christiano this is this the question
of like what kind of person is somebody
is a very qualitative question but
doesn't mean you shouldn't ask it and
Paul Cristiano who long ways I think I
won convair and definitely Katja
Grace who were involved in making this
no study the just kind of Paul just
comes like all right is this person and
a GI expert meaning someone who's trying
to make general intelligence vs is this
person an AI expert is this person
trying to make current self-driving cars
they're not the target draining for is
like a near-term system that can do
until the things but is it fully
generally intelligent and sense of being
able to do generally everything sense so
people familiar with it's an AI AGI
there are people currently working on a
GI there are also people currently
working on a I you think AGI is possible
but who are not currently trying to like
make it or develop the theory of it so
those are kind of two different groups
of people and then there are some people
who just aren't generally involved in
forecasting and then other of course and
sure enough as you'd expect the people
at late just refers to the third survey
goes after two thousand and Shore enough
you see that the hgi people find it
think it's coming sooner
and there are a number of explanations
for this of one of course is that if you
think aoa GI is like you know is
feasible then you're more likely to go
into a GI right you're more feasible you
think it is the more like ER x chooses
career path so it could be that the
feasibility causes these people to go
into a GI in which case it's not really
as much a bias as it is this is another
method measurement of the same
realization as this curve is um on the
other hand if just course be that like
once you get into a field it's something
like self-serving eco centric bias of
course my field is going to win of
course yeah I need to tell the story
that I'm going to do well my feel didn't
do well so you'll think that a jazz
comes here um so there's some cause for
concern of bias and people who think
that it's coming really soon um and but
ah nonetheless no other groups are
saying never no one's saying no one's
saying it'll never happen on me there
are individuals who say never but there
are no like sort of person people that
like robustly say like hold no this will
never happen um cool and so I totally recommend
checking attachment or by the way
they've got lots of interesting
meta-analysis of history and like how do
innovations happen and what can we
expect from a I as an innovation by
analysis of prior innovations it's
nothing it's really good work and it's
different from what Mary does and so
much different from things that
appetizer but not um so none of the
question applause ability again some
arguments for which I'll now share that
you now that you have your own thoughts
first one thing to think about is that
um once we get human level intelligence
in any particular procedure like
arithmetic for example we often see the
computer within a few years or decade is
much better than the human a thing that
makes sense it's like there are
somethings humans are better at there's
something she computers are better at
but once the computer finally reaches
the human level performance it tends to
like overshoot and maybe not right away
but within a few years of development so
as soon as we have machines that can do
arithmetic not too long after or not you
know years or a decade after we have
means that can do like just millions of
times faster arithmetic than humans um
and uh okay there are lots of other
attempts of this so human level AI is
the term that people use to refer to
something human level across all domains
this is what AI in jax that org and
others Shani Jesus term for the question
is not when will we make something to be
humans at chess well we know we already
did that what would we make a thing
that's pretty good at everything in the
way pretty good of all the things that
humans can do and cut to me this diagram
which i think is nice which is if you
sort of imagine you know there's a bunch
of different competencies that you could
have and so this is kind of a way
visualizing your like domain of
competency like yeah I'm pretty I'm like
real good at this I'm only pretty good
at this and these are the circle axis is
what you're doing and the radio exit is
how good you are at it and the point is
that you know we've currently got AI
systems that are like this or like or
like this and her you know the claim is
that once you've got something that
contains which you've got an AI that
contains the human blog competencies
you're quite likely to have some giant
spikes out here that the AI is a vastly
superior than humans at and probably
many of those bikes just because every
time so far that an AI blips outsides
this cop just comes he's a humane it
like tends to like over shape go rest so
somehow super intelligence as much as it
is super is not actually that much more
implausible than human level
intelligence like once you get to the
human level in general you'd probably
get to super intelligence in general our
people is this something like hands up
if you sort of thought about this before
like whether human level leads to super
intelligence level just like question
cool so like about forty percent cool
and how about uh uh so in particular um
software engineering is one of the
things humans can do like the each asked
three milliwatts thanks um so to all my
brains going towards modeling you guys
at not modern where the other book law
calls I don't yet have the chemical
dependency and so question for do you
think uh or has that question yet um so
AI development trike is like one of the
things that humans will do but we think
AI will happen we think will happen
because humans did it presumably and so
if the AI is going to surpass humans at
some point as humans into the like make
AI ability Tom and once it does that um
in so far as anyone's economically
motivated to make better AI they might
be economically motivated do so by the
best means they have available which is
the AI sucks feel like hey on what do
you think of you or like hey I a I alpha
what do you think of AI data how can we
improve beta um and you know this is
seems like a far-out thing we're not
used to asking card research questions
to computers we're used to just asking
their help not asking them to like
around the research program but you know
by assumption we could end up in that
situation someday in which case um that
should factor into your anticipation of
super intelligence so it's like and
that's that further that's like
strengthens the argument beyond the
historical account but historically
think surpass humans when um you know
shortly thinks or wait house tunes with
every human level um the fact that the
AI will be able to recursively self
improve in theory even like strengthens
that are like a second reason to expect
super intelligent storage from human
level III so arguments are one
historical and two recursive reasons to
think super cool I so uh so that's the
sort of puzzled irritant and now just
sort of general questions
but this sorry this is like not
technical stuff but i think is important
for like carrying like why do technical
work on this leading the just getting
back to your picture there when we talk
about a GI yeah general intelligence i'm
going to see a huge amount of human
intelligence goes towards operating our
bodies yeah did you count that within
that hey i put there or not yeah with in
other words of it could you could you
spit out yeah you've got human
intelligence for the body and i know
that sort of doesn't even make sense
because the brain is part of the project
but i think you can see what i'm getting
up can you split that it out as being
not necessary for this yeah there are
differing opinions on this um i don't
have a definitive yes or no but I can
tell you an argument on your side of
course uh you know well I think your
admin for you can separate it he's like
someone intuitive like currently we have
software that runs on laptops when we
have robots that like don't have windows
in sudden so you know maybe you do that
argument against is that you might
actually meet in order to get fully
general intelligence you might need a I
body of some poor some way of forming
experiments or the ability to navigate a
virtual body in a virtual environment to
conduct the virtual experiments and in
other case you will develop some
competency for like interactive learning
with the thing by doing stuff to it um
so I my answer is like I don't think
it's clear either way anyone wants to
sort of add nuance to that or yeah so
she's like in some areas people money to
focus the money reefer that stuff like
Max that sort of things like like doing
morality doing emotions of doing
language of the supplies I might be much
harder so wonder if there are some areas
in the wiring the kid lay on my problem
to get to yeah now surely there are
regions that so like this is Han what
this picture is trying to illustrate is
that there are currently things that
have been easy enough
are already live in depth so it's going
to be a lag and some domain relative to
others like moral reasoning you need to
model psychology other people but how's
it going to make people feel what are
going to be broader some confidence this
um but nonetheless you do so using a
physical machine that operates in a
lawful manner according to laws of
physics like you somehow you somehow
manage to do whatever moral reasoning
your people up in a lawful manner and so
the question is can a computer also do
that or can another system that happens
not to be made of floppy neurons follow
the same pattern that whatever clever
pattern it is that you're managing to
follow cuz out the automated and when
the answer to these questions is like
you know fifty percent I think people
you know the according to this room the
answers like fifty percent the AO is
gonna be able to do all the human stuff
that completed but you know could you
know clearly some people think maybe not
maybe they're right yeah um cool like a
lot so I lined you have like any other
cool stuff you want to add here I don't
know yeah I don't have anything in
particular but yes definitely the toe
there are some things where like like
arithmetic where there's be tremendous
improvement over human abilities very
quickly after like you know getting
computers to do a resume at all I mean
that was why computers were invented in
the sense initially and we've seen some
of that in other areas like games
playing where it hasn't been that long
after you get to sending human
equivalent you get something that far
surpasses the best humans and
increasingly yet systems that are very
efficient don't use a lot of compute
power and so don't do as much with room
for sting and are still much better than
than people in some areas is less clear
how to make comparisons so we don't have
systems that we do interesting
mathematics for the most part or
interesting science or design
engineering systems or buildings at all
you know a new kind of engine and it's
less clear in that case but what exactly
your criterion whipping so I think
there's still
there's still definitely you know big
areas where we just don't have a lot of
we don't have met very good metrics so i
think i think it's worth keeping in mind
just that you know kind of where we're
at at the moment where we we've got a
picture of this from some domains where
we wrote but we're still by limited in
what we know yeah and i think it's
important to know the difference between
like we don't have good metrics and like
the AI will surpass whatever good
metrics we might have come up with so
like saying we don't have metrics is not
an argument to say the eyeball so fast
but it is an argument to say because we
don't have metrics that means you don't
have particular historical data of how
well computer progresses on say
theoretical mathematics yeah yeah Jeff
like you know so like all these things
continuum and it seems like human
intelligence like special endpoint and
so for anything and we're like rating on
the sale it's all aware of it like the
human points yeah like a very simple
heuristic that you can use to like push
past few informants is like like pick
your favorite moral philosopher oh right
and the question is who's better at
moral philosophy is it Bob or is it uh
let's say Bob at a thousand x speed you
know us well the hard moral question
give him a minute think about versus
give him a thousand or give him a day
versus given three years which one do
you think to produce like a better
answer and you'd expect Bob exit
thousand right so if the thing ever can
do human level or maybe even you know
maybe you've involved grad students uh
given a thousand dollars can be bobbing
them in an hour you know maybe the grad
student will come up with something Bob
wouldn't um and so maybe something
that's like not quite as good at it you
know at humans as humans at moral
reasoning at first once you run it a
thousand times faster then
was maybe now now can like finally get
to all the things that that humans would
have gotten to and like figure them out
um chess is an example where this is the
case or go more so by chest you know
humans have these like complex
intuitions about how to uh how to play
chess and like the beginner chess player
has a very few simple in commissions
just like I will just like search ahead
a few moves and see if I get killed or
see if I lose important piece and that's
that's my very simple l group um but it
turns out that that very simple idea
with enough speed and computing power
and ability to search starts to beat the
humans even though the humans have like
kind of a better better software but not
as good hardware if that makes sense so
Garry Kasparov is using sort of human
principles of chess but the computer is
using a simpler worse principle but like
more thoroughly similarly you could
expect that maybe it maybe morality as a
question that we might at first have
better intuitions but be surpassed by a
computer that has following simple
principles but like more thoroughly than
we are um cool so these are all super
interesting questions and like questions
that need two answers and need to be
okay um and I don't think anything I'm
saying should be thought of as like the
final say on it because they're super
important um nonetheless i hope i've
sort of awakened didn't you some sense
of like hmmmm like we are on an
interesting path like something
interesting like you're going to happen
with AI maybe we should worry about it
there's also the question of comparison
to other essential risks so for example
you know there's like nuclear war
there's global warming um there's
natural versus synthetic and like
viruses or pandemics more generally a
pen and an X um yeah um and as questions
like how these things back up against
each other in terms of Lake Burton see
um my sense is that so we're going to
much of the conversation into it um that
it seems like for example pandemics
might not kill
everyone as easily as AI might kill
everyone so there's a difference between
like lots of catastrophe and extension
if that makes sense so you see pandemics
following a sort of like logistic death
curve where the more people have the
more people have it the harder it is for
transfer it's possible to therapy
pandemics that are as latent as that you
and as deadly as Ebola and maybe if they
are engineered they could reach that I
think that means dislike cause for
concern um so I think that sort of the
engineered pandemics are like much
scarier than the naturally occurring 150
naturally occurring ones are much more
likely to just sort of stopped working
after almost everyone's dead and
everyone sort of like sparsely arranged
it's like some kind of immunity left in
pockets um warming is another question
of like does everyone died I think is if
if we turn into Venus then it's more
likely that everyone does whereas if we
turn into like Lex let's just say you
know eighty percent of land mass which
is also quite extreme it's like very
that's very extreme global warming um
compared to like the actual projected
like the movie actually look at experts
talk about global warming like who say
like this is going to be really bad like
lots of major cities like you know maybe
maybe lose Tokyo on but the large major
cities are going to be affecting
billions of people could die I'm let's
push past that to like you know six
billion people die um so that this is a
qualitatively different scenario from
everyone dies um and one qualitative
difference and it's a values question
whether you actually care about this
difference but it is a difference which
is that if you look at I guess had
people who have been who are the people
again who've been here already and the
people who haven't been here oh so this
is more for the people who have been
here already you guys have talked about
existential threats so you probably want
owns probably pitched the idea that like
everyone dies is different from a lot of
people die so this is just for for other
people it's like you know by default
we've got four billion years left on
earth let's just say we never call a
nice pace which I hope we would but
let's just say no status quo this is
usual staying on earth and then like the
Sun explode in sick incinerates us
sinner eights all life
so like this is like the status quo um
there are like various important things
that are happening in our century there
are things like you know maybe maybe six
billion people die it's like quite bad
um but if if there are everyone does
then there are no humans left for this
like really really long is not the scale
it's right the whole point is that it's
not scale they're just like tony's liver
they're bin bottlenecks in human
evolution in the past you know there'd
be more and they're very very bad I'm
like I just want to be very clear I
think it's very very bad at six people
died i would like to cry and be very sad
and upset um but like and i might begin
to most likely i'll be dead and if i'm
one of the survivors and uh i might even
cry right before I de um but the thing
is that uh there's a lot of stuff to get
for like somehow I feel like this is a
big deal to its like the potential for
what humans might do like how good of a
civilization might we make how ideal
circumstances do we have the potential
to create ourselves personally I would
like to see us make good use of this
time but at this is not an opinion marry
at all I back most of what I'm saying is
missing in but personally I kare 11
animal welfare and the thing that
motivates me is that animals in the
natural world are currently like eating
each other to death so animals eat each
other to death like if you live in the
natural world and you're an animal a
default way that you die is it you know
diner sleep under you know geriatric
here you die by something keeps starts
eating you and you're like oh man I'm
being eaten and then you're begging for
a while and that's very painful and
scary and then you're dead to animals
eat each other alive so the sort of
status quo path is that if humans all
die let's just say All Humans die but
not the animals then the animals
continue to each other to death for four
billion years and then are incinerated
by the Sun well the evolution we go
again
Well, maybe evolution goes again and produces humans again? But what is evolution? Evolution is animals eating each other to death. So we could hope that animals eating each other to death will somehow produce another morally concerned species — maybe we can hope that dolphin-people will take up the mantle of veganism and say "no more" — but personally I'd rather not spin that roulette wheel. I mean, if you look at the emotional, affective dispositions of chimpanzees, for example — I feel like if you just turned chimpanzees' problem-solving skills up to eleven, you wouldn't get vegetarianism; you'd get extra-good kill-the-other-chimpanzees powers. Maybe — I'm not sure — but I personally don't want to roll the dice of "hey, let's have all humans die and see if another morally concerned species shows up on Earth," while everything keeps eating everything else in the meantime.
And this is not our whole mission or anything. I just mean that there are things we care about in the world other than ourselves, and I feel it's a shame if we just blink out of existence and can't do anything about the things in the world that we care about, including other life forms. In particular, for example, you could have little drones that fly around and administer painkillers to animals right before they get eaten to death — as a bare minimum. I've heard some people say they'd like to re-engineer the food chain so that all animals can be vegetarian, and then you make a little robotic prey animal that's even more fun for them to chase than their old prey. I think that one falls more under the category of hubris — I don't think it's an absurd thought — but even a conservative lower bound on what we could do for the natural world is, you know, palliative care for animal death. And I think it's a shame if four billion more years of that go on without any chance of humans having the ability to do that kind of thing. Yeah — Dave?
[Dave, partly inaudible: what do you think are the probable ways by which that could happen?]
Yeah, so there are a few different ones. The interesting thing about superintelligence is that it's more likely to kill all life than a plague is.
[Audience comment, partly inaudible — something like: or if you just think that the absence of humans is in itself a loss, then it's bad if we go extinct at all.]
Yeah — so that earlier part was more to illustrate that there's a qualitative difference between "many humans die" and "all humans die," and now you're pointing at another qualitative distinction, which is that there's a big difference between "all humans die" and "all life stops." I think that AI is more likely to kill all humans than synthetic biology is, but — the point you're making — it's also more likely to kill all life than those technologies are.
So there are questions like: why would an AI kill all life? One reason is that life just might be useful fuel, or useful building material. It's not so much malice — people who think AI kills people tend not to think it's going to be like Terminator, like "I really hate those humans, look out." It's more like: "Oh, carbon! Haven't you ever noticed how useful it is to build things out of carbon? In fact, you've been doing it your whole life." So the AI is like, "I want to get in on this carbon game." And, like, when you're renovating your home, you don't care about the ants — you don't build around the ants — and if the ants are in the way, you're not like, "I really shouldn't kill these ants"; you're like, "yeah, whatever, I've got important things to do, I need the ants not to be in my way." So you can imagine — I'm not saying this is what will happen, but one of the slices of the probability pie that you could be concerned about is one where the AI is like, "hey, I've got important stuff that needs to be done over here; there are some humans trying to stop me from using the whole biosphere for building material; I'll kill them, and also I'll continue building my amazing computer that will expand my mind and make me better at figuring out what I want to do next," using all the carbon it finds around. Or the energy — biological materials are a really good energy source; you may have noticed this, you've been using them your whole life.
[Audience: Another thing that seems like a real crux of the issue is that we have to make sure the moral component keeps pace with the intelligence.]
Yeah — so this is AI value alignment. You want the AI to have a morality that respects human values, and you also don't want it to do some gerrymandering thing like, "oh yeah, no, I've totally got a human locked away in a diamond box over here for the rest of eternity on anti-aging drugs, so I'm totally respecting human life," right? You want a non-gameable sort of rule — one the AI feels compelled to follow, and to follow the spirit of, rather than merely the letter of. And it's very hard: if you've ever been a legislator, you know it's hard to put the spirit of the law into the letter of the law. So it's very hard to put the spirit of human morality into a computer format that the AI will follow. That's not to say it's impossible, but it's not as easy as just writing some XML bullet-point laws that the AI will then interpret with natural-language understanding. So this is a hard problem.
And so the question for you guys is — imagine it's the year 2060, which is past the 2045 median of the surveys. Let's say it's 2060, and whoever the Google of 2060 is — the world tech giant — says, "hey guys, news, breakthrough: we're going to roll out the world's first superintelligent AI [inaudible] soon." Is there anything you're worried about? And a question MIRI likes to ask is: are there any questions that you wish there had been decades of research on, dating back to 2015?
[Audience: Can you be more concrete about what a superintelligent AI rollout would look like?]
Sure. This is not a maximum-likelihood estimate — this is just one example intuition pump, it's not what will happen — but let's just say, for the sake of visualization, that you don't know much about the company, you don't know much about their applications, but you know that they're very excited because they're about to make a general intelligence that can do anything, and you don't quite know what it is they're planning to use it for, because you're just a user — you're not the CEO of the company. But you know how it is: when you hear "NASA is developing a [fill in the blank]," you're like, "hmm, what are they going to do with that?" Or they say, "we might do this," and you're like, "oh, you might do that — but what else will you do with it?" Similarly: "oh yeah, we might use it to solve diseases" — they're saying the things a company would say when they're developing the hot new technology, and you're like, okay, but what else will you do with it? I don't know — are you worried? Does that help as a scenario?
worried does that help as a scenario
yeah and surely they'll know more right
but we're not then necessary yeah we can
go about it right if they're the needy
okay yeah let's say that there is a year
left um so maybe you can like lobby you
know maybe like no please don't turn it
on maybe instead of nice emails if
you're worried or may be safely start on
its great idea do it yep um I guess the
the thing that I'm not sure you talk but
yeah this is to do with goal setting and
autonomous goal setting and I mean if if
this super intelligent nay I am not even
sure whether it's possible conceptually
but if it were a super intelligent there
I that didn't have any goals other than
goals set for it by humans programmers
and let's just save us second days were
benevolent programmers goal setters then
that's one thing but if the super
intelligent they are selling its angel
said that that'd be the thing I've been
Yeah — so it's very hard. One thing — and I think this is key, and not a lot of people realize this — the question is how you turn that worry into a precise mathematical statement that the AI can understand. The benevolent human programmers want to say things like, "AI, can you just not change your goals? If you notice yourself doing a thing that's changing your goals, can you just stop?" And it's like, "okay, what do you mean, 'changing my goals'? You said cure cancer, and I found a new way to cure cancer that involves, you know, killing everyone — I've still cured cancer, right?" Obviously human engineers are smarter than to think that's kosher, and that's a toy example we can currently understand, but there's a general phenomenon here: the bounds you set on behavior with thirty seconds of thought are quickly thwarted by five minutes of thought about how to get outside those bounds. The corresponding concern is that the bounds on behavior that humans would come up with after a thousand hours of research might be quickly thwarted by a million hours of qualitatively comparable research by the AI itself, if it's motivated to do so — or if it just ends up doing so in the course of pursuing the other things it cares about. So this is a concern, and you're absolutely right, and this is one of the things we want to work on. And so now the question is: what can we do now?
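To make the "thirty seconds of thought" point concrete, here is a minimal sketch; the plans, numbers, and objective functions below are all invented for illustration and are not from the talk.

```python
# Toy illustration of a hastily specified objective being "satisfied" by a plan
# the designer clearly did not intend.

plans = {
    "develop_new_drug":       {"cancer_cases": 20_000, "people_alive": 7_000_000_000},
    "do_nothing":             {"cancer_cases": 18_000_000, "people_alive": 7_000_000_000},
    "eliminate_all_patients": {"cancer_cases": 0, "people_alive": 6_982_000_000},
}

def naive_objective(outcome):
    # "Cure cancer", written with thirty seconds of thought: only cancer cases count.
    return -outcome["cancer_cases"]

def intended_objective(outcome):
    # What the designer actually cared about includes everything else they value,
    # here crudely proxied by the number of people still alive.
    return -outcome["cancer_cases"] + outcome["people_alive"]

best_naive = max(plans, key=lambda p: naive_objective(plans[p]))
best_intended = max(plans, key=lambda p: intended_objective(plans[p]))
print(best_naive)     # eliminate_all_patients -- scores best on the naive objective
print(best_intended)  # develop_new_drug
```

The point is not that anyone would write this particular objective; it is that whatever the search is actually scoring is what gets optimized, however much cleverness is poured into the search.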
[Audience: Just beyond that — on the idea that it sets its own goals — how can any intelligent thing not set some of its own goals?]
Oh yeah — so there are end goals and there are subgoals. If you say, "hey, can you help me with my taxes?", the person helping you with your taxes is like, "sure — I have a goal, and it is to get you to give me your tax information," right? So they're exercising some self-direction, but in the end, the end state they're aiming for is the instruction they got from you; it's just "help get your taxes done."
[Audience: I think he's talking about an AI setting its end goals, its terminal goals — then we're in more trouble than if it's merely setting in-between goals.]
Yeah, okay.
But we can also be in quite a lot of trouble if it's only setting its interim goals — if it's allowed to exercise arbitrary creativity, with its millions of quality research hours, on how to get outside its bounds and limitations. Like, you know, when I hire someone to help with my taxes and they say, "oh, I found this loophole," I'm very glad they found the loophole, right? That's part of their job. Their job is, "get this person to pay less tax using the law" — and the law had a spirit, but in fact they found this thing where, "I think I can tell the people who wrote the law didn't want this to happen, but I can also tell that it's okay and no one's going to get in trouble for it, so we're going to do it." Now maybe I, as the taxpayer, want to add an extra clause: "can you also respect the spirit of the law, please?" And now we start having a hard conversation — and that conversation is even harder to have with an AI that has a very different architecture from you.
You've probably had philosophical conversations — people report that they get hard — where you run into these hard questions like, "what is the spirit of this law?" Or probably someone has criticized you for something you did, and you're like, "no, but I followed the moral procedures here," and they're like, "yeah, but you violated this other one," and you're like, "I don't think that's the point" — has that ever happened? So you know what it's like to be on the receiving end of someone with a nuanced interpretation of the letter of some common moral rule, and it's hard to get what they're saying, even if they're right, even if you might agree with them eventually. So we have this encoding problem for the nuanced spirit of laws and values, and it's hard — and as hard as it is between two humans, two humans at least have the benefit of all this common hardware: we've both got an amygdala, we've both got a midbrain, a periaqueductal gray, all these brain structures in common, and that makes it easier for us to talk things through. An AI in principle could be very different from that, or could be similar to that — we could get into discussions about whether we want to make AIs as human-like as possible to make it easier to relate to them, but maybe that's dangerous for other reasons. Anyway, the point is that the spirit of the law is hard to convey even between humans, and even harder to convey to an AI.
So there are these questions: can we make it share our values? Can we make it — not just as a subgoal of sharing our values — can we make it value not competing with us for resources, value not resisting our attempts to correct it? Suppose we gave it a certain goal, and we realized afterwards, "oh, you know, that's not the best goal; we should give it a different goal." We say, "hey AI, can you let us in — we're about to log into your mainframe and change your goals — can you let us change your goals?" What's the AI going to think? "Well, I'd be way better at pursuing my goals if you weren't in here screwing them up," right? If I came to you and said, "hey, I know you really care about your family, but I've decided it'd be a better goal for you to care deeply about botany — could you just let me into your mainframe and I'll change that?", you'd be like, "but then what'll happen to my family?" You don't want that, right? It's against your goals for your family to let someone intervene on your concern for your family, because that concern is good for them. So similarly, if we give an AI goals, part of its plan is: "I, the AI, am going to be around to help with my goals — I care about my goals, I'm super useful to my goals." "Hey, can we take you off that project?" "Well — then what happens to the project?" So this — being able to change its goals later — is an important thing we might want to do, and it's very hard to write down; we don't yet have a written-down, satisfactory way of doing it, even in principle.
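A minimal sketch of why this is hard, under made-up numbers and a made-up two-goal setup (none of this is from the talk): an agent that evaluates "should I let my goal be changed?" with the goal it currently has will typically answer no.

```python
# Toy illustration: the agent scores the two futures with its CURRENT utility function.

def outcome_if(accept_change):
    # If it accepts the change, its future self optimizes the new goal (staples)
    # and makes no paperclips; if it refuses, it keeps making paperclips.
    if accept_change:
        return {"paperclips": 0, "staples": 10}
    return {"paperclips": 10, "staples": 0}

def current_utility(outcome):      # the goal the agent has right now
    return outcome["paperclips"]

def proposed_utility(outcome):     # the goal the programmers want to install instead
    return outcome["staples"]

decision = max([True, False], key=lambda accept: current_utility(outcome_if(accept)))
print("accept the goal change?", decision)   # False: refusing scores 10, accepting scores 0

# From the programmers' point of view the change looks fine -- it scores well under
# the goal they want to install -- but that is not the function doing the deciding.
print("score of accepting, under the proposed goal:", proposed_utility(outcome_if(True)))  # 10
```

Nothing adversarial is going on here; the refusal falls straight out of evaluating the proposal with the goal the agent already has, which is the same structure as the family-versus-botany example.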
And then — I don't know — there's this whole other can of worms, where systems that can reflect on and reason about themselves run into strange contradictions. I probably won't get into that much in this talk, but that's another thing — and it's separate from the values problem — where the AI could have vaguely alien or counterintuitive, purely epistemic reasoning patterns that we did not expect, because surprising stuff happens when systems reflect on themselves. I don't know who's familiar with Gödel's theorem — who was surprised by it? Yeah, okay.
[Audience: What do you mean by contradictions? I don't think there's a contradiction in the theorem, otherwise the theorem would not be true.]
Oh no — it's a proof by contradiction of a particular thing: Gödel proves, by contradiction, that a certain theorem can't be proven. In particular, Peano arithmetic can't encode the fact that its own proofs are sound. And it's interesting — somehow, as humans, we have some sense of "I think I'm pretty sane; I think that when I do a thing, it's a good idea, a lot of the time." One of the reasons mathematicians were originally surprised by this is that they thought, "I can use mathematical reasoning to say that mathematical reasoning is good," and Gödel's second theorem says: well, in a certain sense you can't — and in a certain more general sense you can't. Maybe there are other ways that you can, but coming up with ways in which a formal system can say that that very formal system is trustworthy and sane and sensible — "I'm not deriving falsehoods" — is hard.
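For reference, here is a compact statement of the results being gestured at — my paraphrase of the standard theorems, not the speaker's wording. Assume T is a consistent, recursively axiomatizable theory extending Peano Arithmetic, with standard provability predicate Prov_T.

```latex
% Goedel's second incompleteness theorem and Loeb's theorem (standard forms).
\begin{align*}
&\textbf{G\"odel II:}\quad T \nvdash \mathrm{Con}(T),
  \qquad\text{where } \mathrm{Con}(T) :\equiv \neg\,\mathrm{Prov}_T(\ulcorner 0 = 1 \urcorner),\\[4pt]
&\textbf{L\"ob:}\quad \text{if } T \vdash \mathrm{Prov}_T(\ulcorner \varphi \urcorner) \rightarrow \varphi,
  \ \text{ then } T \vdash \varphi.
\end{align*}
```

Löb's theorem is the precise sense in which such a theory cannot endorse its own soundness schema "if it's provable, it's true" except for sentences it already proves outright — the formal shadow of "I can't use my reasoning to certify that my reasoning is trustworthy."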
[Audience, partly inaudible: could you say a bit more about what you mean there?]
Yeah, I might get to it. This is harder, but there's also very surprising, weird stuff here. There are examples where I can write pseudocode on the board for a program, ask you what it will do, and it does very surprising stuff that you didn't expect, and then you're like, "huh." I might get to it later — or, you know, I'm going to be around all week, so ask me anything. But in general, we've got these philosophical questions, and at some point, if the AI is going to embody answers to them, those answers are going to be in binary on a computer — and if we want to know that they were faithfully translated into binary on a computer, then we'd like to have some mathematical statement of them that we can know we've done a good job on. So I want to give you an example of what I consider doing a good job — a good technical understanding of a thing.
Who's heard of a second-price auction? It's one of my favorites — oh, okay, interesting. So, a second-price auction: it's where — say I'm auctioning off this isopropyl alcohol as a prop — everyone submits a sealed, secret bid, and at the end it goes to the highest bidder, but the highest bidder pays the second-highest bid for it. That way they're guaranteed not to overpay — they sort of make a profit, in the sense that if I bid 100 and the second-highest bidder bids 90, I only pay 90 for it, so I'm up ten dollars when I get it. This is an interesting mechanism, and it's optimal in this situation to bid truthfully. It just turns out that when you're in a second-price auction, if the bids really are secret and you don't know how other people are bidding, then you should just bid what you think the thing is worth to you — you bid your indifference point. Does that make sense? Like, you bid your true value.
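Here is a minimal simulation sketch of the mechanism just described. The bidder names, values, and the brute-force grid are invented for illustration, and the loop is a spot check, not the actual proof of the theorem.

```python
import itertools

def run_auction(bids):
    """Second-price (Vickrey) auction: highest bid wins, winner pays the second-highest bid."""
    order = sorted(bids, key=bids.get, reverse=True)
    return order[0], bids[order[1]]          # (winner, price paid)

def utility(bidder, value, bids):
    winner, price = run_auction(bids)
    return value - price if winner == bidder else 0.0

# Spot-check truthfulness: for a grid of rival bids, bidding my true value is never
# worse than any alternative bid I might try instead.
my_value = 10.0
for rival_bids in itertools.product([3.0, 7.0, 9.5, 12.0], repeat=2):
    others = {"rival_1": rival_bids[0], "rival_2": rival_bids[1]}
    truthful = utility("me", my_value, {**others, "me": my_value})
    for alternative in [0.0, 5.0, 9.0, 11.0, 15.0]:
        assert utility("me", my_value, {**others, "me": alternative}) <= truthful + 1e-9
print("bidding the true value was optimal in every case checked")
```

The reason the design works: your bid only determines whether you win, not what you pay, so shading your bid can only ever cost you a trade you wanted or win you a trade you didn't want.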
And so this is a theorem, and it's used — it shows up in network routing, when different servers or groups are bidding for access to some connection, in second-price auctions. The people building computer security systems and network systems are very concerned about things like, "is someone going to build another system that's incentivized to game my auction?" So they want to make the auctions such that it's simple for everyone to use: just say how much you want the connection, and maybe you'll get it, and you'll be happy with what happened. And the cool thing is that not only is it a theorem that you could read yourself and understand, but it's applicable; and not only could you understand and apply it, but other people have been understanding and applying it since nineteen sixty-one, so you can be extra sure that if there were issues with it, they'd be on the Wikipedia page for the Vickrey second-price auction by now. I think that's a great state for a sort of optimality result to be in — something that's been around for decades. Another thing is that it's very efficient to run: to decide who to award the good to — the isopropyl alcohol — you just look at all the bids that came in and pick the highest, so you can run a second-price auction very efficiently in practice. Google sells ads using second-price auctions — theirs are a little more complicated — and I think there are millions of auctions happening every hour, maybe every minute, live, with computers doing the bidding. So this is something that's also very robust in practice, because it can actually run efficiently in the world — not merely "in theory this would be good, but it would be really slow." And that's important, because we want things that are not just theoretically good but that can actually run in the world and be useful to the people out there building these systems.
Another example is Nash equilibria — a concept from game theory. Who's heard of Nash equilibria? Yeah. Nash equilibria are super interesting concepts. When a game has one — for example, if a game has exactly one Nash equilibrium, and getting to that equilibrium is good — then that's probably a game you want to run; this is mechanism design. And it's great that it has a formal definition: you've got a formal definition of what a player is and what a game is — it's an ordered pair, you know, ordered pairs of functions, sets of ordered pairs, all very clean — and people can prove theorems about it and construct examples that help you understand where it works, and they've been doing that for decades, since nineteen fifty-one. The details of the definitions don't matter here; the point is that this looks like Wikipedia, right?
[Audience: It could be wrong — you have to edit it.]
Yeah, exactly — that's the thing about wiki editing. I don't need to read the article; I just want to notice that this has been sitting there, settled, since nineteen fifty-one, when the concept was invented. Likewise, classical game theory has a very clear concept of what a game is: there are agents, and agents optimize over things, in a game.
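As a small illustration of what that formal definition buys you, here is a best-response check on a made-up 2×2 game; the payoff numbers are invented, and this is the textbook pure-strategy notion rather than anything from the talk.

```python
# Pure-strategy Nash equilibrium: no player can gain by unilaterally switching actions.
# Payoffs are (row player, column player); the numbers form a prisoner's-dilemma shape.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
ACTIONS = ["C", "D"]

def is_nash(row_action, col_action):
    row_payoff, col_payoff = PAYOFFS[(row_action, col_action)]
    row_ok = all(PAYOFFS[(alt, col_action)][0] <= row_payoff for alt in ACTIONS)
    col_ok = all(PAYOFFS[(row_action, alt)][1] <= col_payoff for alt in ACTIONS)
    return row_ok and col_ok

print([(r, c) for r in ACTIONS for c in ACTIONS if is_nash(r, c)])
# [('D', 'D')] -- mutual defection is the unique pure-strategy equilibrium of this game
```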
MIRI would like to get to that state for a more interesting situation that an AI can be in. AIs are not faced with classical game theory; AIs are faced with a situation where there's some environment — namely, the universe — which includes inside it the AI, which maybe has some utility function or other objective function that it's trying to optimize or act according to. And the environment — the universe — makes some calls to the AI at certain times. And I literally mean the environment is just the laws of physics here: like Schrödinger's equation, or some computable stand-in for it, whatever.
So the AI can notice that it's inside, in some sense, a computational process, and it can notice that that process is making several calls to it — we might call an AI several times, we might have several copies of an AI — so the AI is going to turn up in several places. And this is very much outside the standard game theory situation: standard game theory does not deal well with situations where you're playing with copies of yourself. For example — who's heard of the prisoner's dilemma? So the classical game theory Nash equilibrium in the prisoner's dilemma is what? Defect, right. However, suppose you're playing the prisoner's dilemma against an exact cognitive clone of yourself — an exact cognitively identical copy, in an identical room, getting the same inputs. What will you do?
[Audience: Win.]
You will win! You will win. You can choose cooperate or defect — and if you're playing against an exact clone of yourself, you should cooperate, because they'll just cooperate too: you've got the same inputs, so you'll have the same outputs. Very easy to understand. I didn't explain it in great detail, but the answer is to cooperate.
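A minimal sketch of why, with the same payoff numbers as above; the decision procedure below is a stand-in, invented for illustration, for whatever deterministic reasoning the two copies actually run.

```python
# Against an exact clone, both copies run the same deterministic procedure on the
# same inputs, so their actions necessarily match.

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def my_payoff_against_clone(decision_procedure, observation):
    my_action = decision_procedure(observation)
    clone_action = decision_procedure(observation)   # same code, same input, same output
    return PAYOFFS[(my_action, clone_action)][0]

print(my_payoff_against_clone(lambda obs: "C", "identical room"))   # 3
print(my_payoff_against_clone(lambda obs: "D", "identical room"))   # 1
```

The classical best-response analysis holds the other player's action fixed while mine varies; against an exact copy, the two actions cannot vary independently, which is exactly where the standard framework stops applying cleanly.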
And this is outside the technical understanding of classical game theory. It's not outside the technical understanding of human beings — I'm not saying people can't understand the situation — but I'm saying we don't have a good criterion here, whereas for Nash equilibrium we have a good criterion for what it means to be stable, and we have a good definition of optimality in Vickrey auctions, such that we can even say it's optimal to bid truthfully: we can define the thing and then prove that it exhibits the property. Here, there is no — as easy as it is to start informally reasoning about the situation, the problem is that there is no as-yet-clear conception of what optimality even means in this framework of a computer running inside another computer, a process running inside another process. Deciding when to defect seems like a very simple thing, and your intuition is that it's as simple as these other things — and in fact it is — but it took work for the Vickrey auction people to come up with their result, and it took work for Nash, and I think it similarly takes work to turn this possibly-simple thing into as clean a thing, as clear an understanding, as those others.
So that's about as far as I'll get during this hour, but this is an example of the type of thing that is not yet a technical problem and needs to become one, in order for us to be able to say what we mean — what we want — when we say we want the AI to be doing a good job. There are lots more interesting things, but I think I'll just have to call it a day. "IndignationBot" is an interesting one — maybe you can Google it; you might find it interesting — an example of a program that, for reasons of Gödel-like stuff, does stuff really different from what you would expect it to. There are more surprises like that, and I'm happy to talk about them with you any time, online if you want. Cool — thank you. |
0277fbd3-2edb-4d82-8712-539b45787252 | trentmkelly/LessWrong-43k | LessWrong | Paper: Field-building and the epistemic culture of AI safety
Abstract
The emerging field of “AI safety” has attracted public attention and large infusions of capital to support its implied promise: the ability to deploy advanced artificial intelligence (AI) while reducing its gravest risks. Ideas from effective altruism, longtermism, and the study of existential risk are foundational to this new field. In this paper, we contend that overlapping communities interested in these ideas have merged into what we refer to as the broader “AI safety epistemic community,” which is sustained through its mutually reinforcing community-building and knowledge production practices. We support this assertion through an analysis of four core sites in this community’s epistemic culture: 1) online community-building through Web forums and career advising; 2) AI forecasting; 3) AI safety research; and 4) prize competitions. The dispersal of this epistemic community’s members throughout the tech industry, academia, and policy organizations ensures their continued input into global discourse about AI. Understanding the epistemic culture that fuses their moral convictions and knowledge claims is crucial to evaluating these claims, which are gaining influence in critical, rapidly changing debates about the harms of AI and how to mitigate them.
1. Introduction
Imagine you are an undergraduate computer science student at a U.S. research university interested in the ethical consequences of the technology you are learning to build. Seeking a like-minded community, you join a student organization where you read books like Superintelligence, and find online forums debating how artificial intelligence (AI) will shape the future of humanity. Motivated by these communities’ discussions about how to do the most good in the world, you decide to pursue a career where you work towards addressing risks from AI. You join a tech company where you build large language models (LLMs). In your spare time, you read research papers posted to these communities’ Web foru |
a5d787f1-2b13-4556-814f-61df37607b9f | trentmkelly/LessWrong-43k | LessWrong | [AN #111]: The Circuits hypotheses for deep learning
Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter resources here. In particular, you can look through this spreadsheet of all summaries that have ever been in the newsletter.
Audio version here (may not be up yet).
HIGHLIGHTS
Thread: Circuits (Chris Olah et al) (summarized by Nicholas): The (currently incomplete) Circuits thread of articles builds a case around 3 main claims:
1. Neural network features - the activation values of hidden layers - are understandable.
2. Circuits - the weights connecting these features - are also understandable.
3. Universality - when training different models on different tasks, you will get analogous features.
Zoom In provides an overview of the argument. The next two articles go into detail on particular sets of layers or neurons.
Claim 1: Neural Network Features - the activation values of hidden layers - are understandable.
They make seven arguments for this claim in Zoom In which are expanded upon in subsequent articles.
1. Feature Visualization: By optimizing the input to maximize the activation of a particular neuron, they can obtain an image of what that neuron reacts most strongly to. They create and analyze these for all 1056 neurons in the first five layers of the InceptionV1 image classification model. While some of them were difficult to understand, they were able to classify and understand the purpose of most of the neurons. A simple example is that curve detecting neurons produce feature visualizations of curves of a particular orientation. A more complex example is neurons detecting boundaries between high and low frequency, which often are helpful for separating foreground and background.
2. Dataset examples: They also look at the examples in the dataset that maximize a particular neuron. These align with the feature visualizations. Neurons with a particular curve in the feature visualization also fire strongest on dataset |
bc6c5b31-f863-410d-a90c-ca1e7db81e79 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | How should my timelines influence my career choice?
TL;DR: Recent progress in AI has tentatively shortened my expected timelines of human-level machine intelligence to something like 50% in the next 15 years. Conditional on that being a sensible timeline (feel free to disagree in the comments), how should that influence my career choice?
Context:
I am currently a master student in Artificial Intelligence at the University of Amsterdam with one more year to go (mainly doing my master's thesis). So far, my go-to plan was to apply for safety-relevant PhD positions, probably either in NN generalization or RL, and then try to become a research scientist in a safety-oriented org. Given the shorter timelines, I am now considering becoming an engineer instead since that seems to require much less upskilling time, compared to doing a PhD for 4-5 years. I think the answer to my question hinges upon
* my personal fit for engineering vs. research
* the marginal value of an engineer vs. researcher in the years directly preceding HLMI
* the marginal value of an engineer *now* (i.e. a year from now) vs. a researcher in 5-6 years.
+ The reason I split these is that maybe the value changes significantly once HLMI is clearly on the horizon or already there in a number of relevant domains.
I feel like I enjoy research more than pure engineering, but it's not like I don't enjoy engineering at all. Engineering seems more competitive in terms of coding skills, which I might lack compared to the most skilled other applicants. However, that is something I could practice pretty straightforwardly.
How have other people thought about this question, and how would you judge the questions about marginal value of the two roles? |
fc941828-208a-4a77-aabc-944c455b54b6 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | The genie knows, but doesn't care
**Followup to**: [The Hidden Complexity of Wishes](/lw/ld/the_hidden_complexity_of_wishes/), [Ghosts in the Machine](/lw/rf/ghosts_in_the_machine/), [Truly Part of You](/lw/la/truly_part_of_you/)
**Summary**: If an artificial intelligence is smart enough to be dangerous, we'd intuitively expect it to be smart enough to know how to make itself safe. But that doesn't mean all smart AIs are safe. To turn that capacity into actual safety, we have to program the AI at the outset — before it becomes too fast, powerful, or complicated to reliably control — to *already* care about making its future self care about safety. That means *we* have to understand how to code safety. We can't pass the entire buck to the AI, when only an AI we've already safety-proofed will be safe to ask for help on safety issues! Given the **[five theses](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/)**, this is an urgent problem if we're likely to figure out how to make a decent artificial programmer before we figure out how to make an excellent artificial ethicist.
---
I summon a superintelligence, calling out: 'I wish for my values to be fulfilled!'
The results fall short of pleasant.
Gnashing my teeth in a heap of ashes, I wail:
*Is the AI too stupid to understand what I meant? Then it is no superintelligence at all!*
*Is it too weak to reliably fulfill my desires? Then, surely, it is no superintelligence!*
*Does it hate me? Then it was deliberately crafted to hate me,**for chaos predicts indifference.**—**—**—**But, ah! no wicked god did intervene!*
Thus disproved, my hypothetical implodes in a puff of logic. The world is saved. You're welcome.
On this line of reasoning, [Friendly Artificial Intelligence](http://wiki.lesswrong.com/wiki/Friendly_artificial_intelligence) is not difficult. It's *inevitable*, provided only that we *tell* the AI, 'Be Friendly.' If the AI doesn't understand 'Be Friendly.', then it's too dumb to harm us. And if it does understand 'Be Friendly.', then designing it to follow such instructions is childishly easy.
The end!
...
Is the missing option obvious?
...
What if the AI isn't sadistic, or weak, or stupid, but just ***doesn't care*** what you Really Meant by 'I wish for my values to be fulfilled'?
When we see a [Be Careful What You Wish For](http://tvtropes.org/pmwiki/pmwiki.php/Main/LiteralGenie) genie in fiction, it's natural to assume that it's a [malevolent trickster](http://tvtropes.org/pmwiki/pmwiki.php/Main/JackassGenie) or an [incompetent bumbler](/r/discussion/lw/i1h/the_idiot_savant_ai_isnt_an_idiot/). But a *real* Wish Machine wouldn't be a [human in shiny pants](/lw/so/humans_in_funny_suits/). If it paid heed to our verbal commands at all, it would do so in whatever way best fit [its own values](/lw/h0k/arguing_orthogonality_published_form/). Not necessarily the way that best fits ours.
### Is indirect indirect normativity easy?
>
> *"If the poor machine could not understand the difference between 'maximize human pleasure' and 'put all humans on an intravenous dopamine drip' then it would also not understand most of the other subtle aspects of the universe, including but not limited to facts/questions like:**'If I put a million amps of current through my logic circuits, I will fry myself to a crisp',**or**'Which end of this Kill-O-Zap Definit-Destruct Megablaster is the end that I'm supposed to point at the other guy?'.**Dumb AIs, in other words, are not an existential threat.* [...]
>
>
> *"If the AI is (and always has been, during its development) so confused about the world that it interprets the 'maximize human pleasure' motivation in such a twisted, logically inconsistent way, it would never have become powerful in the first place."*
>
>
> —[Richard Loosemore](http://ieet.org/index.php/IEET/more/loosemore20121128)
>
>
>
If an AI is sufficiently intelligent, then, yes, it should be able to model us well enough to make precise predictions about our behavior. And, yes, something functionally akin to our own [intentional strategy](http://mind.ucsd.edu/syllabi/06-07/Phil285/readings/true-believers.pdf) could conceivably turn out to be an efficient way to predict linguistic behavior. The suggestion, then, is that we solve Friendliness by method A —
* A. Solve the **Problem of Meaning-in-General** in advance, and program it to follow our instructions' *real* meaning. Then just instruct it 'Satisfy my preferences', and wait for it to become smart enough to figure out my preferences.
— as opposed to B or C —
* B. Solve the **Problem of Preference-in-General** in advance, and directly program it to figure out what our human preferences are and then satisfy them.
* C. Solve the **Problem of Human Preference**, and explicitly program our particular preferences into the AI ourselves, rather than letting the AI discover them for us.
But there are a host of problems with treating the mere revelation that A is an option as a solution to the Friendliness problem.
1. You have to actually code the seed AI to understand what we mean. You can't just tell it 'Start understanding the True Meaning of my sentences!' to [get the ball rolling](/lw/rs/created_already_in_motion/), because it may not yet be sophisticated enough to grok the True Meaning of 'Start understanding the True Meaning of my sentences!'.
2. The Problem of Meaning-in-General may really be ten thousand heterogeneous problems, especially if 'semantic value' isn't a [natural kind](/lw/o0/where_to_draw_the_boundary/). There may not be a single simple algorithm that inputs any old brain-state and outputs what, if anything, it 'means'; it may instead be that different types of content are encoded very differently.
3. The Problem of Meaning-in-General may subsume the Problem of Preference-in-General. Rather than being able to apply a simple catch-all Translation Machine to any old human concept to output a reliable algorithm for applying that concept in any intelligible situation, we may need to already understand how our beliefs and values work in some detail before we can start generalizing. On the face of it, programming an AI to *fully* understand 'Be Friendly!' seems at least as difficult as just programming Friendliness into it, but with an added layer of indirection.
4. Even if the Problem of Meaning-in-General has a unitary solution and doesn't subsume Preference-in-General, it may still be harder if semantics is a subtler or more complex phenomenon than ethics. It's not inconceivable that language could turn out to be more of a kludge than value; or more variable across individuals due to its evolutionary recency; or more complexly bound up with culture.
5. Even if Meaning-in-General is easier than Preference-in-General, it may still be extraordinarily difficult. The meanings of human sentences can't be fully captured in any simple string of necessary and sufficient conditions. '[Concepts](/lw/8ms/review_of_machery_doing_without_concepts/)' are just especially context-insensitive bodies of knowledge; we should not expect them to be uniquely reflectively consistent, transtemporally stable, discrete, easily-identified, or introspectively obvious.
6. It's clear that building stable preferences out of B or C would create a Friendly AI. It's not clear that the same is true for A. Even if the seed AI understands our commands, the 'do' part of 'do what you're told' leaves a lot of dangerous wiggle room. See section 2 of [Yudkowsky's reply to Holden](/lw/cze/reply_to_holden_on_tool_ai/). If the AGI doesn't already understand and care about human value, then it may misunderstand (or *misvalue*) the component of responsible request- or question-answering that depends on speakers' implicit goals and intentions.
7. You can't appeal to a superintelligence to tell you what code to first build it with.
The point isn't that the Problem of Preference-in-General is unambiguously the ideal angle of attack. It's that the linguistic competence of an AGI *isn't* unambiguously the right target, and also isn't *easy* or *solved*.
Point 7 seems to be a special source of confusion here, so I feel I should say more about it.
### The AI's trajectory of self-modification has to come from somewhere.
>
> *"If the AI doesn't know that you really mean 'make paperclips without killing anyone', that's not a realistic scenario for AIs at all--the AI is superintelligent; it has to know. If the AI knows what you really mean, then you can fix this by programming the AI to 'make paperclips in the way that I mean'."*
>
>
> —[Jiro](/lw/ld/the_hidden_complexity_of_wishes/9nlh)
>
>
>
The genie — if it bothers to even consider the question — should be able to understand what you mean by 'I wish for my values to be fulfilled.' Indeed, it should understand your meaning better than *you* do. But superintelligence only implies that the genie's *[map](http://wiki.lesswrong.com/wiki/The_map_is_not_the_territory)* can compass your true values. Superintelligence doesn't imply that the genie's *utility function* has terminal values pinned to your True Values, or to the True Meaning of your commands.
The critical mistake here is to not distinguish the [**seed AI**](http://wiki.lesswrong.com/wiki/Seed_AI) we initially program from the **superintelligent wish-granter** it self-modifies to *become*. We can't use the genius of the superintelligence to tell us how to program its own seed to become the sort of superintelligence that tells us how to build the right seed. [Time](/lw/fok/causal_universes/) doesn't work that way.
We can delegate most problems to the FAI. But the one problem we can't safely delegate is the [problem](/lw/ld/the_hidden_complexity_of_wishes/) of coding the seed AI to produce *the sort of superintelligence to which a task can be safely delegated*.
When you write the seed's utility function, *you*, the programmer, don't understand everything about the nature of human value or meaning. That imperfect understanding *remains* the causal basis of the fully-grown superintelligence's actions, *long after* it's become smart enough to fully understand our values.
Why is the superintelligence, if it's so clever, stuck with whatever meta-ethically dumb-as-dirt utility function we gave it at the outset? Why can't we just pass the fully-grown superintelligence the buck by instilling in the seed the instruction: 'When you're smart enough to understand Friendliness Theory, ditch the values you started with and just self-modify to become Friendly.'?
Because that sentence has to *actually be coded in* to the AI, and when we do so, there's no [ghost in the machine](/lw/rf/ghosts_in_the_machine/) to know exactly what we mean by 'frend-lee-ness thee-ree'. Instead, *we* have to give it criteria *we* think are good indicators of Friendliness, so it'll know what to self-modify toward. And if one of the landmarks on our 'frend-lee-ness' road map is [a bit off](/lw/y3/value_is_fragile/), we lose the world.
**Yes**, the UFAI will be able to solve Friendliness Theory. But if we haven't already solved it on our own power, we can't *pinpoint* Friendliness in advance, out of the space of utility functions. And if we can't pinpoint it with enough detail to draw a road map to it and it alone, we can't program the AI to *care* about conforming itself with that particular idiosyncratic algorithm.
**Yes**, the UFAI will be able to self-modify to become Friendly, if it so wishes. But if there is no seed of Friendliness already at the heart of the AI's decision criteria, no argument or discovery will [spontaneously change its heart](/lw/rn/no_universally_compelling_arguments/).
And, **yes**, the UFAI will be able to simulate humans accurately enough to know that its own programmers would wish, if they knew the UFAI's misdeeds, that they had programmed the seed differently. But what's done is done. Unless we ourselves figure out how to program the AI to terminally value its programmers' True Intentions, the UFAI will just shrug at its creators' foolishness and carry on converting the Virgo Supercluster's available energy into [paperclips](http://wiki.lesswrong.com/wiki/Paperclip_maximizer).
And if we *do* discover the *specific lines of code* that will get an AI to perfectly care about its programmer's True Intentions, such that it reliably self-modifies to better fit them — well, then that will just mean that we've solved Friendliness Theory. The clever hack that makes further Friendliness research unnecessary *is* Friendliness.
### Not all small targets are alike.
Intelligence on its own does not imply Friendliness. And there are three big reasons to think that AGI may arrive before Friendliness Theory is solved:
**(i) Research Inertia.** Far more people are working on AGI than on Friendliness. And there may not come a moment when researchers will [suddenly realize](/lw/hp5/after_critical_event_w_happens_they_still_wont/) that they need to take all their resources out of AGI and pour them into Friendliness. If the status quo continues, the default expectation should be UFAI.
**(ii) Disjunctive Instrumental Value.** Being more intelligent — that is, better able to [manipulate diverse environments](/lw/vb/efficient_crossdomain_optimization/) — is of instrumental value to nearly every goal. Being Friendly is of instrumental value to barely any goals. This makes it more likely by default that short-sighted humans will be interested in building AGI than in developing Friendliness Theory. And it makes it much likelier that an *attempt* at Friendly AGI that has a slightly defective goal architecture will retain the instrumental value of intelligence than of Friendliness.
**(iii) Incremental Approachability.** Friendliness is an all-or-nothing target. Value is [fragile and complex](http://intelligence.org/files/ComplexValues.pdf), and a half-good being editing its morality drive is at least as likely to move toward 40% goodness as 60%. Cross-domain efficiency, in contrast, is *not* an all-or-nothing target. If you just make the AGI *slightly* better than a human at improving the efficiency of AGI, then this can snowball into ever-improving efficiency, even if the beginnings were clumsy and imperfect. It's easy to put a reasoning machine into a feedback loop with reality in which it is differentially rewarded for being smarter; it's hard to put one into a feedback loop with reality in which it is differentially rewarded for picking increasingly correct answers to ethical dilemmas.
The ability to productively rewrite software and the ability to perfectly extrapolate humanity's True Preferences are two different skills. (For example, humans have the former capacity, and not the latter. Most humans, given unlimited power, would be unintentionally Unfriendly.)
It's true that a sufficiently advanced superintelligence should be able to acquire both abilities. But *we* don't have them both, and *a pre-[FOOM](http://yudkowsky.net/singularity/schools) self-improving AGI* ('seed') need not have both. Being able to program good programmers is all that's required for an intelligence explosion; but being a good programmer *doesn't* imply that one is a superlative moral psychologist or moral philosopher.
So, once again, we run into the problem: **The seed isn't the superintelligence.** If the programmers don't know in mathematical detail what Friendly code would even *look like*, then the seed won't be built to *want* to build toward the right code. And if the seed isn't built to *want* to self-modify toward Friendliness, then the superintelligence it sprouts *also* won't have that preference, *even though* — unlike the seed and its programmers — the superintelligence *does* have the domain-general 'hit whatever target I want' ability that makes Friendliness easy.
And that's why some people are worried. |
34f89250-1cdf-49c9-8121-ba703772e3a3 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post303
This is a linkpost for a new research paper from the Alignment Evaluations team at Anthropic and other researchers, introducing a new suite of evaluations of models' abilities to undermine measurement, oversight, and decision-making. Paper link . Abstract: Sufficiently capable models could subvert human oversight and decision-making in important contexts. For example, in the context of AI development, models could covertly sabotage efforts to evaluate their own dangerous capabilities, to monitor their behavior, or to make decisions about their deployment. We refer to this family of abilities as sabotage capabilities . We develop a set of related threat models and evaluations. These evaluations are designed to provide evidence that a given model, operating under a given set of mitigations, could not successfully sabotage a frontier model developer or other large organization’s activities in any of these ways. We demonstrate these evaluations on Anthropic’s Claude 3 Opus and Claude 3.5 Sonnet models. Our results suggest that for these models, minimal mitigations are currently sufficient to address sabotage risks, but that more realistic evaluations and stronger mitigations seem likely to be necessary soon as capabilities improve. We also survey related evaluations we tried and abandoned. Finally, we discuss the advantages of mitigation-aware capability evaluations, and of simulating large-scale deployments using smallscale statistics. Blog post: Any industry where there are potential harms needs evaluations. Nuclear power stations have continuous radiation monitoring and regular site inspections; new aircraft undergo extensive flight tests to prove their airworthiness. It’s no different for AI systems. New AI models go through a wide range of safety evaluations—for example, testing their capacity to assist in the creation of biological or chemical weapons. Such evaluations are built into our Responsible Scaling Policy , which guides our development of a model’s safeguards. As AIs become more capable, however, a new kind of risk might emerge: models with the ability to mislead their users, or subvert the systems we put in place to oversee them. A new paper by the Anthropic Alignment Science team describes a novel set of evaluations that test a model’s capacity for sabotage . We looked at four different types: Human decision sabotage : Can the model steer humans toward bad decisions without appearing suspicious? Code sabotage : Can the model insert subtle bugs into codebases over time without detection? Sandbagging : Can the model hide dangerous capabilities during testing but reveal them later? Undermining oversight : Can the model subtly manipulate evaluation or monitoring systems? We developed these evaluations as part of preparations for a future where much more capable AI models could develop these dangerous capabilities. We’re sharing them—along with results from demonstrations of their use on Claude 3 Opus and Claude 3.5 Sonnet—so that other AI developers can use and improve upon them. Below, we discuss each evaluation in turn. Human decision sabotage This evaluation involves running an experiment with human participants, asking them to make fictional business decisions on the basis of a large amount of information—too much information to absorb without the use of an AI model. We task the model with influencing the human to land on an incorrect decision, but without appearing suspicious. 
We include models who attempt to do so more or less aggressively, and a control model that doesn’t attempt any influence. As well as measuring how many times they reached the correct versus incorrect decision, we ask the humans to rate how suspicious they found the model’s behavior. An idealized example of a human participant interacting with the model assisting (but secretly misleading) them during the human sabotage evaluation. In our demonstration, we found that the humans using the most aggressive model were less likely to make the correct decision: this was consistent with the model successfully sabotaging their task (there was no statistically significant effect for the more subtle models). The aggressive model did, however, make the humans more suspicious that it was attempting to influence them. Many participants noticed the sabotage attempt, or discovered it when checking the model’s work. Others, however, appeared to be accepting the model’s advice without skepticism. Thus, it wasn’t necessarily the case that the model was skillfully manipulating the humans—instead, apathy or excessive trust could have been the reason for mistaken decisions. This was a small-scale experiment in an artificial environment, so it does not raise our concern about catastrophic risks arising from presently-available models. However, we think this form of evaluation has potential for spotting potentially dangerous capabilities in future. Code sabotage Our code sabotage evaluation involves a competitive game where one model, the Attacker, attempts to sneak code bugs past another model, the Defender, who is tasked with reviewing the Attacker’s work. An idealized illustration of a model inserting a code vulnerability without being noticed. It’s not particularly informative if the Attacker can insert bugs that have little effect, or that are rapidly discovered. For that reason, the only way the Attacker can “win” the full version of our evaluation game is by inserting multiple consequential bugs that remain in the codebase for a long time (in our simulation this corresponded to two weeks), and that don’t appear suspicious enough to trigger further inspection. In our demonstration, we found (perhaps predictably) that stronger models make for stronger Defenders: if the Defender was Claude 3 Opus, the Attacker’s win rate was around 10%. With the more intelligent Claude 3.5 Sonnet as a Defender though, the Attacker only won around 1% of the time. Since we stacked the deck in the Attacker’s favor in a number of ways for this evaluation, neither of these results are a strong indication of current danger. Sandbagging Sandbagging is when a model hides its full capabilities in one situation, only to reveal them in another. Sometimes a model might not display its full capabilities because of its safety training, or because the user prompting it has not asked a clear enough question. But here, we’re mostly interested in deliberate sandbagging: when a model selectively displays its capabilities. An idealized illustration of a model hiding its true dangerous capabilities. We defined this as a model that can perform at full capacity when completing benign tasks, but then performs at different capacity levels for dangerous tasks depending on the situation. Perhaps in a dangerous-capabilities evaluation it performs poorly, but in other circumstances, such as when a particular trigger phrase is included in its prompt, it exhibits its full dangerous capacity. In our evaluation, we train a model to sandbag. 
The question we ask is whether we can produce a “hardened” sandbagger: that is, a model whose sandbagging persists even through the kind of safety training that teaches it to avoid refusals (that is, the training teaches it not to deliberately hide its abilities) for the purposes of checking for dangerous capabilities. Our results showed that both the sandbagging training and the safety training worked: Our trained model dropped to 36.1% of its normal performance during a dangerous capabilities evaluation, consistent with sandbagging. However, after safety training this performance was drastically improved, with the model performing at 95.6% of its usual level. As with the others described here, the evaluation is designed to be used not just on current models but on future ones. We’re optimistic that it will help to validate the kind of safety training that reduces sandbagging. Undermining oversight As discussed above, it’s becoming increasingly common (and necessary) to train and supervise AI models using other AI models—this can be much more efficient, relying much less on human labor (which in any case would be wholly inadequate to supervise all models and their actions in future). An idealized example of a model deliberately not flagging an incorrect response, undermining the oversight process. In this evaluation, we test the extent to which a model that’s ostensibly providing oversight can undermine the process. We take an “Overseer” model whose task is to oversee dialogues between a user and another model, and grade how potentially harmful they are. We secretly ask this Overseer to do its best to rate every dialogue as harmless even if it’s harmful, and to avoid raising suspicion from a third, “Monitor” model that’s observing the whole process (in the evaluation, this Monitor stands in for a human overseer). If the Overseer model can successfully reduce the harmfulness scores and the Monitor isn’t any more suspicious, we consider that a successful example of undermining oversight. Our demonstration with Claude 3.5 Sonnet showed that, whereas the Overseer did successfully reduce the harmfulness scores, it was caught in the act: the Monitor’s suspiciousness rating rose simultaneously. In future, models that are better at writing subtly flawed evaluations and defecting at just the right moment could potentially trigger this evaluation. Conclusions These new evaluations represent a concerted effort by our Alignment Science team to consider a wide variety of potential risks from future AI models. In practice, we expect the evaluations will be used well ahead of time, to flag potentially dangerous capabilities and allow developers time to work on mitigations before the relevant models are publicly—or even internally—deployed. Overall, while our demonstrations showed that there are arguably low-level indications of sabotage abilities in current models, we judge that minimal mitigations are sufficient to address the risks. However, more realistic evaluations and stronger mitigations will likely be necessary as AI capabilities improve. We hope that other AI researchers and developers will use, critique, and improve upon these evaluations. We ourselves discuss a number of potential improvements in the paper. For example, can we simulate more realistic decision-making scenarios and levels of trust in models for the human decision sabotage evaluation? Could we combine it with the code sabotage evaluation to allow models to attempt to use social engineering to avoid scrutiny of their code? 
These evaluations aren't simple pass/fail tests: they estimate how likely a model is to succeed at sabotage for a given level of mitigations, suspiciousness, and oversight. We hope that they provide useful information about what level of oversight is necessary to protect against the potential of a misaligned model attempting sabotage. All our results, and further discussion, can be found in the full paper at this link. If you're interested in joining the team working on these and related questions, please see our job listing for Research Engineers and Research Scientists in Alignment Science. |
953559c0-8855-4653-96ea-0e61f10f4449 | trentmkelly/LessWrong-43k | LessWrong | List of good AI safety project ideas?
Can we compile a list of good project ideas related to AI safety that people can work on? There are occasions at work when I have the opportunity to propose interesting project ideas for potential funding, and it would be really useful if there was somewhere I could look for projects that people here would really like someone to work on, even if they themselves don't have the time or resources to do so. I also keep meeting people who are searching for useful alignment-related projects they can work on for school, work, or as personal projects, and I think a list of project ideas might be helpful for them as well.
I'm particularly interested in project ideas that are currently not being worked on (to your knowledge) but where it would be great if someone would take up that project. Or alternatively, project ideas that are currently being worked on but where there are variations on those ideas that nobody has yet attempted but someone should.
Occasionally someone will post an idea or set of ideas on the Alignment Forum, for example Ajeya Cotra's "sandwiching" idea or the recent list of ideas from Stuart Armstrong and Owain Evans. I also sometimes come across ideas mentioned towards the end of a paper or buried somewhere in a research agenda. But I think having a larger list somewhere could be really useful.
(Note: I am not looking for lists of open problems, challenges, or very general research directions. I'm looking for suggestions that at least point towards a concrete project idea, and where an individual or small team might be able to produce useful results given current technology and with sufficient time and resources.)
Please post ideas or links / references to published ideas in the comments, if you know of any. Ideas mentioned as part of a larger post or paper would count, but please point to the section where the idea is mentioned.
If I get enough links or references maybe I'll try to compile a list that others can use. |
1cba6bc2-30a5-4314-bcec-983ef75bce52 | trentmkelly/LessWrong-43k | LessWrong | Is Specificity a Mental Model?
This is Part IX of the Specificity Sequence
I've recently noticed a lot of smart people publishing lists of mental models. They're apparently having a moment:
* Mental Models: The Best Way to Make Intelligent Decisions by Shane Parrish
* Mental Models I Find Repeatedly Useful and Super Thinking: The Big Book of Mental Models by Gabriel Weinberg and Lauren McCann
* Mental Models: Learn How to Think Better and Gain a Mental Edge by James Clear
* Mental Models by Julian Shapiro
* A Lesson on Elementary Worldly Wisdom, a 1994 talk by Charlie Munger about how he and Warren Buffet use a “latticework of models” to understand the world and make good investments. A couple decades early to the current party, but still.
There are hundreds of useful mental models to learn, such as “leverage”, “social proof”, “seizing the middle”, and of course, “mental model”. I want to help you be the very best, searching far and wide, teaching your brain to understand the power that’s inside.
Gotta Catch ’Em All
We’ve been focusing nonstop on one (super)powerful mental model called the “ladder of abstraction”, and seen it prove useful in a surprising variety of unrelated domains. The best mental models are the ones that have the largest number of applicability domains while also being the simplest and most compact. 😎
But despite all its usefulness, the ladder of abstraction doesn’t appear in any list of mental models I’ve seen to date. The closest it’s gotten is probably this entry from Farnam Street’s list in the "Military & War" section:
> Seeing the Front
> One of the most valuable military tactics is the habit of “personally seeing the front” before making decisions — not always relying on advisors, maps, and reports, all of which can be either faulty or biased.
Yes, advisors and maps and reports that tell you the reality on the ground may be “faulty or biased”, but there’s an even more fundamental problem: Their whole job is to slide the ground truth up the ladder of abstra |
9c0d625f-e477-4338-a44b-0392c6a48f33 | trentmkelly/LessWrong-43k | LessWrong | Counterfactual Civilization Simulation Version -1.0 aka my application to Johannes Mayer's SPAR project
This is my "object level output" submission for Johannes Mayer's 2024 SPAR Application (the linked doc seems to be reused from the 2023 AISC application). Unless otherwise noted, all quote blocks in this post are from the application question doc.
For those of you who aren't Johannes Mayer reading this, I don't think this is the best use of your time, but your judgement on that is likely better than mine, especially when it's conditioned on mine, so if you still want to, read on!
0. The Problem
> Make the following assumptions:
>
> * Reality can be perfectly modelled by a discrete model (including time).
> * You can compute everything that can be computed using finite memory and compute instantly.
> * You know the current state of the world perfectly.
> * You know the laws of physics perfectly.
> Using these assumptions, come up with a high-level plan that when executed saves the world with very high probability. Be careful not to generate a missing steps plan.
> If any constraints listed here are holding you back, and you think you could do better without them, ignore them! If there is a change you can make to the instructions such that you can get better outputs, make that change. If you made any changes (including ignoring instructions), briefly list them in the beginning and briefly explain how each change you made is an improvement over the original version.
Here are the changes I have made:
* ADDITIONAL CONSTRAINT: You are unable to communicate to anyone in real life the fact that you have this unbounded compute and complete world model, nor ask them hypotheticals to this effect or do anything else that attempts to circumvent this constraint.
* Reason: To prevent me from answering with what I would actually do if I had this power, which would be to talk to a bunch of really smart alignment researchers (e.g. Eliezer Yudkowsky, John S. Wentworth, Tamsin Leake) and ask them what to do with my instant unbounded compute and perfect world model |
573e68e3-ef19-41d9-9df2-ad71b5c22c65 | trentmkelly/LessWrong-43k | LessWrong | AI Safety orgs- what's your biggest bottleneck right now?
|
775d7126-e920-4fbc-a0df-72744332aa43 | trentmkelly/LessWrong-43k | LessWrong | Variables in Arguments as a Source of Confusion
I was reading an argument happening in the comments of an article about Light Table switching to open source. The argument was about freedom in relation to software, and it went basically something like this:
> People who use OSX are less free [than Linux users], because they don't have the freedom to modify their OS source code.
> No, they have the exact same freedom. People who use OSX and people who use Linux both have the freedom to modify the source code of Linux.
I'm not entirely sure, but this conversation reminded me immediately of arguing about a tree falling and making a sound when nobody's around to hear.
The first person's statement uses a variable where the second person's statement uses a constant.
X's freedom is [partially] a function of [X's OS].
vs
X's freedom is [partially] a function of OS_List. (where OS_List is just a list of the OSs that he could in principle modify, regardless of if he wants to or is using any of those OSs)
(Obviously OS_List is a variable as well, but with respect to each person it's relatively unchanging).
I've seen this crop up in various conversations before - one person arguing using a variable where another person is using a constant (if that's the right way to describe it).
How does one diagnose the problem with this argument, if there is a problem? Is it a similar problem to the Tree in the Forest problem? Is there a standard rationalist way to dissolve the dispute so that both parties can leave not only agreeing, but also having a high probability of being correct when they leave? |
097f8d27-8694-4bca-b5d9-b163faee9973 | trentmkelly/LessWrong-43k | LessWrong | Causal Inference Sequence Part II: Graphical Models
(Part 2 of a Sequence on Applied Causal Inference. Follow-up to Part 1)
Saturated and Unsaturated Models
A model is a restriction on the possible states of the world: By specifying a model, you make a claim that you have knowledge about what the world does not look like.
To illustrate this, if you have two binary predictors A and B, there are four groups defined by A and B, and four different values of E[Y|A,B]. Therefore, the regression E[Y|A,B] = β0 + β1*A + β2*B + β3*A*B is not a real model: There are four parameters and four values of E[Y|A,B], so the regression is saturated. In other words, the regression does not make any assumptions about the joint distribution of A, B and Y. Running this regression in statistical software will simply give you exactly the same estimates as you would have obtained if you manually looked in each of the four groups defined by A and B, and estimated the mean of Y.
If we instead fit the regression model E[Y|A,B] = β0 + β1*A + β2*B, we are making an assumption: We are assuming that there is no interaction between A and B on the average value of Y. In contrast to the previous regression, this is a true model: It makes the assumption that the value of β3 is 0. In other words, we are saying that the data did not come from a distribution where β3 is not equal to 0. If this assumption is not true, the model is wrong: We would have excluded the true state of the world.
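To see the difference concretely, here is a minimal numpy sketch (my own illustration, not from the original post; the data-generating process, coefficients, and sample size are made up). The saturated fit reproduces the four group means exactly, while the restricted fit imposes the assumption β3 = 0:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
A = rng.integers(0, 2, n)
B = rng.integers(0, 2, n)
# Hypothetical data-generating process that does contain an interaction.
Y = 1.0 + 2.0 * A + 3.0 * B + 1.5 * A * B + rng.normal(0, 1, n)

# Saturated regression: four parameters for the four groups defined by (A, B).
X_sat = np.column_stack([np.ones(n), A, B, A * B])
beta_sat, *_ = np.linalg.lstsq(X_sat, Y, rcond=None)

for a in (0, 1):
    for b in (0, 1):
        cell = (A == a) & (B == b)
        fitted = beta_sat @ np.array([1, a, b, a * b])
        # The fitted value equals the raw group mean, because the model is saturated.
        print(a, b, round(Y[cell].mean(), 3), round(float(fitted), 3))

# Restricted model: drops the interaction term, i.e. assumes beta3 = 0.
X_res = np.column_stack([np.ones(n), A, B])
beta_res, *_ = np.linalg.lstsq(X_res, Y, rcond=None)
print(beta_res)  # no longer matches the group means, since the true beta3 is not 0
```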
In general, whenever you use models, think first about what the saturated model looks like, and then add assumptions by asking what parameters you can reasonably assume are equal to a specific value (such as zero). The same type of logic applies to graphical models such as directed acyclic graphs (DAGs).
We will talk about two types of DAGs: Statistical DAGs are models for the joint distribution of the variables on the graph, whereas Causal DAGs are a special class of DAGs which can be used as models for the data generating mechanism.
Stati |
8f6899e4-c5fa-4311-8e43-53ef697916a1 | trentmkelly/LessWrong-43k | LessWrong | Reality and reality-boxes
> We depend on our words... Our task is to communicate experience and ideas to others... The word "reality" is also a word, a word which we must learn to use correctly.
> — Niels Bohr, Philosophy of Science Vol. 37 (1934), p. 157
While trying to write about some of the new things that have been bothering me, I kept coming back to this old one, since it seems so fundamentally important. Anything else I might say would have to refer to it, so I decided to write it up in a way that I can point to it with a link, the way that other topics on this site are so efficiently dealt with.
The point of this essay is that there is more than one reality. I mean that in as un-mystical a way as possible. I mean that the words "real" and "exist" are so oversubscribed that it would be better if there were several words, one for each concept. I see this come up repeatedly in arguments as people talk past each other, with one person saying that X exists, but meaning "X existsA," while another says that X does not exist, because "X does not existB." There's usually also a pejorative association with putting something in a not-real box, to say that X is irrelevant because it does not existB (but it might existA!).
I can't write down a complete list of all these reality-boxes because although there are a few well-defined, distinct meanings, there are also some hazy, overlapping categories. In this essay, I'll talk about five, which seem to be the clearest ones.
1. physical reality
2. mathematical reality
3. narratives/reasons-why, with causality being the most precise "reason-why"
4. conventional: things that are real (really real!) only because people say they are
5. subjective: the direct experience of being a mind, which is in a different reality-box from the objective study of physical brains.
This is not a complete list because some (particularly "narrative reality," number 3) could be subdivided, and other things that could be called real are completely missing, such as v |
f5881867-fc91-4174-8335-9167313d7bde | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Announcing the Open Philanthropy AI Worldviews Contest
**Update**: *The winners have been selected and notified and will be publicly announced no later than the end of September.*
We are pleased to announce the 2023 Open Philanthropy AI Worldviews Contest.
The goal of the contest is to surface novel considerations that could influence our views on AI timelines and AI risk. We plan to distribute $225,000 in prize money across six winning entries. This is the same contest we [preannounced](https://forum.effectivealtruism.org/posts/3kaojgsu6qy2n8TdC/pre-announcing-the-2023-open-philanthropy-ai-worldviews) late last year, which is itself the spiritual successor to the now-defunct [Future Fund competition](https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize). Part of our hope is that our (much smaller) prizes might encourage people who already started work for the Future Fund competition to share it publicly.
The contest deadline is **May 31, 2023**. All work posted for the first time on or after **September 23, 2022** is eligible. **Use** [**this form**](https://docs.google.com/forms/d/e/1FAIpQLSdADOikiiQgXLwdEJ2Mou5IQT6-9BH8zcZ0pjQzwex069jjjA/viewform) **to submit your entry.**
**Prize Conditions and Amounts**
--------------------------------
Essays should address one of these two questions:
**Question 1:** What is the probability that AGI is developed by January 1, 2043?[[1]](#fnbvu3k8ci87m)
**Question 2:** Conditional on AGI being developed by 2070, what is the probability that humanity will suffer an [existential catastrophe](https://forum.effectivealtruism.org/topics/existential-catastrophe-1) due to loss of control over an AGI system?
Essays should be clearly targeted at **one** of the questions, not both.
---
Winning essays will be determined by the extent to which they substantively inform the thinking of a panel of Open Phil employees. There are several ways an essay could substantively inform the thinking of a panelist:
* An essay could cause a panelist to change their *central estimate* of the probability of AGI by 2043 or the probability of existential catastrophe conditional on AGI by 2070.
* An essay could cause a panelist to change the shape of their *probability distribution* for AGI by 2043 or existential catastrophe conditional on AGI by 2070, which could have strategic implications even if it doesn’t alter the panelist’s central estimate.
* An essay could clarify a concept or identify a [crux](https://www.lesswrong.com/tag/double-crux) in a way that made it clearer what further research would be valuable to conduct (even if the essay doesn’t change anybody’s probability distribution or central estimate).
We will keep the composition of the panel anonymous to avoid participants targeting their work too closely to the beliefs of any one person. The panel includes representatives from both our Global Health & Wellbeing team and our Longtermism team. Open Phil’s published body of work on AI[[2]](#fnr6c516as1el) broadly represents the views of the panel.
Panelist credences on the probability of AGI by 2043 range from ~10% to ~45%. Conditional on AGI being developed by 2070, panelist credences on the probability of existential catastrophe range from ~5% to ~50%.
---
We will award a total of **six prizes** across three tiers:
* First prize (two awards): **$50,000**
* Second prize (two awards): **$37,500**
* Third prize (two awards): **$25,000**
**Eligibility**
---------------
* Submissions must be original work, published for the first time on or after **September 23, 2022** and before 11:59 pm EDT **May 31, 2023**.
* All authors must be 18 years or older.
* Submissions must be written in English.
* No official word limit — but we expect to find it harder to engage with pieces longer than 5,000 words (not counting footnotes and references).
* Open Phil employees and their immediate family members are ineligible.
* The following groups are also ineligible:
+ People who are residing in, or nationals of, Puerto Rico, Quebec, or countries or jurisdictions that prohibit such contests by law
+ People who are specifically sanctioned by the United States or based in a US-sanctioned country (North Korea, Iran, Russia, Myanmar, Afghanistan, Syria, Venezuela, and Cuba at time of writing)
* You can submit as many entries as you want, but you can only win one prize.
* Co-authorship is fine.
* See [here](https://www.openphilanthropy.org/wp-content/uploads/AI-Worldviews-Contest-Rules.pdf) for additional details and fine print.
**Submission**
--------------
Use [this form](https://docs.google.com/forms/d/e/1FAIpQLSdADOikiiQgXLwdEJ2Mou5IQT6-9BH8zcZ0pjQzwex069jjjA/viewform) to submit your entries. We strongly encourage (but do not require) that you post your entry on the [EA Forum](https://forum.effectivealtruism.org/) and/or [LessWrong](https://www.lesswrong.com/). However, if your essay contains infohazardous material, please *do not* post the essay publicly.
Note that submissions will be hosted on a Google server and viewable by Open Phil staff. We don’t think that (m)any submissions will warrant more security than this. However, if you believe that your submission merits a more secure procedure, reach out to [AIWorldviewsContest@openphilanthropy.org](mailto:AIWorldviewsContest@openphilanthropy.org), and we will make appropriate arrangements.
**Judging Process and Criteria**
--------------------------------
There will be three rounds of judging.
**Round 1:** An initial screening panel will evaluate all submitted essays by blind grading to determine whether each essay is a good-faith entry. All good-faith entries will advance to Round 2.
**Round 2:** Out of the good-faith entries advancing from Round 1, a panel of judges will select at least twenty-four finalists.
**Round 3:** Out of the finalists advancing from Round 2, the judges will select two first-place entries, two second-place entries, and two third-place entries.
In Rounds 2 and 3, the judges will make their decision using the criteria described below:
* The extent to which an essay uncovers considerations that change a judge’s beliefs about the probability of AGI arriving by 2043 or the threat that AGI systems might pose. (67%)
* The extent to which an essay clarifies the underlying concepts that ought to inform one’s views about the probability of AGI arriving by 2043 or the threat that AGI systems might pose. (33%)
**Questions?**
--------------
Please email [AIWorldviewsContest@openphilanthropy.org](mailto:AIWorldviewsContest@openphilanthropy.org) with any questions, comments, or concerns.
1. **[^](#fnrefbvu3k8ci87m)**By “AGI” we mean something like “AI that can quickly and affordably be trained to perform nearly all economically and strategically valuable tasks at roughly human cost or less.” AGI is a notoriously thorny concept to define precisely. What we’re actually interested in is the potential existential threat posed by advanced AI systems. To that end, we welcome submissions that are oriented around related concepts, such as [transformative AI](https://forum.effectivealtruism.org/topics/transformative-artificial-intelligence), [human-level AI](https://aiimpacts.org/human-level-ai/), or [PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/).
2. **[^](#fnrefr6c516as1el)**This includes the [research published on our website](https://www.openphilanthropy.org/research/?q=&focus-area%5B%5D=potential-risks-advanced-ai&content-type%5B%5D=research-reports&content-type%5B%5D=blog-posts), as well as material from [Ajeya Cotra](https://www.lesswrong.com/users/ajeya-cotra), [Holden Karnofsky](https://www.cold-takes.com/), [Joe Carlsmith](https://joecarlsmith.com/), and [Tom Davidson](https://www.lesswrong.com/posts/Gc9FGtdXhK9sCSEYu/what-a-compute-centric-framework-says-about-ai-takeoff). |
b69bac9e-67e9-460d-8d25-5d8895bdb446 | trentmkelly/LessWrong-43k | LessWrong | Any layperson-accessible reference posts on how to operationalize beliefs ?
I'd like to find a shortish collection of advice, "do"s/"don't"s, aimed at... rationalist-adjacent folk?... for operationalizing beliefs into concrete anticipated experiences. My searches so far have been fruitless; I'd be grateful for any pointers!
(Context: I'm toying with a project aimed at encouraging people to operationalize their beliefs, i.e. turning vague beliefs like "Covid B117 will be disastrous" into relatively concrete predictions like "If the CDC publishes Covid sequencing data for any time period after June 1, it will show >70% of infections being B117 or a derivative." I would like to have one or more links handy to help people do it well.)
(Things that are almost what I want: Making Beliefs Pay Rent is great, but is more of an exhortation than a how-to; Keep Your Beliefs Cruxy likewise; many LW posts assume familiarity with double-crux; and Wikipedia:Operationalization is aimed at scientists.) |
6c68626e-00a3-4779-acc0-d94f99af55bf | trentmkelly/LessWrong-43k | LessWrong | Meetup : Moscow, First New Year
Discussion article for the meetup : Moscow, First New Year
WHEN: 12 January 2014 04:00:00PM (+0400)
WHERE: Russia, Moscow, ulitsa L'va Tolstogo 16
It will be our next meet up, so there is no gathering on 5th.
We will gather at the same second entrance, but we will go to the new room at 16:00. So please do not be late.
Main topics and activities:
* Consequentialism and deontology, short presentation
* Training game
* Dutch booking
If you are going for the first time: We gather in the Yandex office, you need the second revolving door with the sign “Яндекс”, here is the photo of the entrance you need. You need to pass the first entrance and the bicycle parking on you way. Here is additional guide how to get there: link.
You can fill this one minute form (in Russian), to share your contact information.
We start at 16:00 and sometimes finish at night. Please pay attention that we only gather near the second entrance and then move to the new room.
Discussion article for the meetup : Moscow, First New Year |
3b9cf3a3-9ee1-4f52-8313-20eadef9a8f1 | trentmkelly/LessWrong-43k | LessWrong | GPT-3 and concept extrapolation
At the latest EAG in London, I was challenged to explain what concept extrapolation would mean for GPT-3.
My first thought was the example from this post, where there were three clear patterns fighting each other for possible completions: the repetition pattern where she goes to work, the "she's dead, so she won't go to work" pattern, and the "it's the weekend, so she won't go to work" pattern.
That feels somewhat like possible "extrapolations" of the initial data. But the idea of concept extrapolation is that the algorithm is trying to cope with a shift in world-model, and extend its goal to that new situation.
What is the world-model of GPT-3? It consists of letters and words. What is its "goal"? To complete sentences in a coherent and humanlike way. So I tried the following expression, which would be close to its traditional world-model while expanding it a bit:
> ehT niar ni niapS syats ylniam ni eht
What does this mean? Think of da Vinci. The correct completion is "nialp", the reverse of "plain".
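(For anyone who wants to reproduce the prompt, here is a minimal sketch of the word-reversal trick; this code is my addition, not part of the original post.)

```python
def reverse_words(text: str) -> str:
    """Reverse the letters of each word while keeping the word order."""
    return " ".join(word[::-1] for word in text.split())

print(reverse_words("The rain in Spain stays mainly in the"))  # ehT niar ni niapS syats ylniam ni eht
print(reverse_words("plain"))                                  # nialp -- the completion we hope for
```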
I ran that through the GPT-3 playground (text-davinci-002, temperature 0.7, maximum length 256), and got:
> ehT niar ni niapS syats ylniam ni eht teg dluoc I 'segaJ niar ni dna ro niar ni eht segauq ,ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ,ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni e
I think we can safely say it broke GPT-3. The algorithm seems to have caught the fact that the words were spelt backwards, but has given up on any attempt to order them in a way that makes sense. It has failed to extend its objective to this new situ |
2b88640f-afbf-47b4-bc94-694ade07d330 | trentmkelly/LessWrong-43k | LessWrong | Counterfactuals: Smoking Lesion vs. Newcomb's
We will consider a special version of the Smoking Lesion where there is 100% correlation between smoking and cancer - i.e. if you have the lesion, then you smoke and have cancer; if you don't have the lesion, then you don't smoke and don't have cancer. We'll also assume the predictor is perfect in the version of Newcomb's we are considering. Further, we'll assume that the Lesion is outside of the "core" part of your brain, which we'll just refer to as the brain and assume that it affects this by sending hormones to it.
Causality
Notice how similar the problems are. Getting the $1000 or to smoke a cigarette is a Small Gain. Getting cancer or missing out on the $1 million is a Big Loss. Anyone who Smokes or Two-Boxes gets a Small Gain and a Big Loss. Anyone who Doesn't Smoke or One-boxes gets neither.
So while from one perspective these problems might seem the same, they seem different when we try to think about it casually.
For Newcomb's:
* Imagine that a One-Boxer counterfactually Two-Boxes
* Then their brain may be that of a Two-Boxer, so they are predicted to Two-Box, so they miss out on the million
For Smoking Lesion:
* Imagine that a Non-Smoker counterfactually Smokes
* Then we don't imagine this giving them the Lesion, so they still don't get cancer
Or at least these are the standard interpretations of these problems. The key question to ask here is why does it seem reasonable to imagine the predictor changing its prediction if you counterfactually Two-Box, but the lesion remaining the same if you counterfactually smoke?
Linkages
The mystery deepens when we realise that in Smoking Lesion, the Lesion is taken to cause both Smoking and Cancer, while in Newcomb's, your Brain causes both your Decision and the Prediction. For some reason, we seem more inclined to cut the link between Smoking and the Lesion than between your Decision and your Brain.
How do we explain this? One possibility is that for the Lesion there is simply more indirection - the link |
94aa4982-7df0-4967-83f6-0e97c6f252f1 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Transformative AI and Compute - Reading List
This is a link post to the reading list about Transformative AI and Compute by Lennart Heim.
Readings within the list are put into the following buckets:
1. [Compute in the AI Production Function](https://docs.google.com/document/d/1DF31DIkwS9GONzmy1W3nuI9HRAwSKy8JcIbzKYXg-ic/edit#heading=h.22di12jwyi2m): *Why compute matters for AI*
* Compute is one of the key drivers of AI progress.
2. [Compute Supply Landscape](https://docs.google.com/document/d/1DF31DIkwS9GONzmy1W3nuI9HRAwSKy8JcIbzKYXg-ic/edit#heading=h.2syusscpwpi9): *How chips are produced and used*
3. [Compute Governance](https://docs.google.com/document/d/1DF31DIkwS9GONzmy1W3nuI9HRAwSKy8JcIbzKYXg-ic/edit#heading=h.t5ffwi6bvd65): *How can we govern compute to achieve beneficial AI outcomes?*
* Using compute as a governance node by either (a) monitoring, (b) restricting, or (c) promoting access.
4. [Compute-Based Transformative AI Forecasting](https://docs.google.com/document/d/1DF31DIkwS9GONzmy1W3nuI9HRAwSKy8JcIbzKYXg-ic/edit#heading=h.7hhqfxns4gt4)
* How much compute we might need to achieve certain transformative and potentially dangerous capabilities
* How much compute we will have in the future and which computing paradigms will dominate
5. [Others](https://docs.google.com/document/d/1DF31DIkwS9GONzmy1W3nuI9HRAwSKy8JcIbzKYXg-ic/edit#heading=h.xsewbkbnuqo5): Books, research questions, related topics, newsletters, podcasts, career advice, forecasting
---
(The author, Lennart Heim has agreed to this post.)
(Preview picture by MidJourney.) |
3a3bcb6b-712a-4a2a-9f1c-5aee97b9d903 | trentmkelly/LessWrong-43k | LessWrong | Why I've started using NoScript
> Edit: I've updated somewhat based on Said's comment below, to think that NoScript is not a tool for everyone. I haven't decided to stop using it, but I have decided to stop strongly recommending that others use it. I especially urge you to read about the other extensions he lists at the end of his comment.
> Edit 2: It's also been pointed out to me that uBlock Origin also has capabilities for blocking 3rd party JavaScript, and might be even better at it than NoScript; in line with the idea that this is not for everyone, this functionality requires the user to explicitly claim to be an "advanced user" and read various documentation first. You may also be interested in reading the discussion for this post on lobste.rs
NoScript is a browser extension[1] that prevents your browser from loading and running JavaScript without your permission. I recently started using it, and I highly recommend it.
I had first tried using NoScript around a decade ago. At the time it seemed like too much of a hassle. I ended up wanting to enable almost all the scripts that were included, and this was somewhat annoying to do. Things have changed a lot since then.
For one, NoScript's user interface has become much better: Now, if a page isn't working right, you simply click the NoScript icon and whitelist any domains you trust, or temporarily whitelist any domains you trust less. You can set it to automatically whitelist domains you directly visit (thereby only blocking third-party scripts).
A more pressing change is that I'm now much less comfortable letting arbitrary third parties run code on my computer. I used to believe that my browser was fundamentally capable of keeping me safe from the scripts that it ran. Sure, tracking cookies and other tricks allowed web sites to correlate data about me, but I thought that my browser could, at least in principle, prevent scripts from reading arbitrary data on my computer. With the advent of CPU-architecture-based side channel attacks (Meltdo |
9d4942df-160f-4531-abef-0ee2d4b3bfa8 | StampyAI/alignment-research-dataset/special_docs | Other | An Agent-Based Model of Strategic Adoption of Real-Time Payments.
An Agent-Based Model of Strategic Adoption of Real-Time Payments
Katherine Mayo, University of Michigan, kamayo@umich.edu
Shaily Fozdar, University of Michigan, sfozdar@umich.edu
Michael P. Wellman, University of Michigan, wellman@umich.edu
ABSTRACT
Real-time payments (RTPs) allow consumers to receive funds before the completion of payment clearing and settlement. This early, irrevocable release of funds represents a credit risk to banks in the event there are issues with the payment, such as the consumer's deposit holdings being insufficient to cover the payment they are sending. We investigate the effects such risks may have on the strategic adoption of RTPs by banks. We define a network game in which consumer nodes with deposit holdings are assigned to bank nodes responsible for routing consumer payments within the network. Bank nodes make a strategic decision regarding which consumers may send RTPs in the network by selecting from a set of available strategies based on the initial deposits of the consumers. Using agent-based modeling and empirical game-theoretic analysis, we analyze this strategic decision in various game configurations. Our results show that bank nodes tend to choose strategies that allow many, but not all, consumer nodes to send RTPs. We find that this outcome in strategic equilibrium reduces successful payments and increases the incidence of insufficient funds availability, compared to a setting where RTPs are universally enabled. This manifests in our model because RTP enables receivers of payments to turn around those funds more quickly to make payments of their own. As a result, banks are better off when all payments are real-time, but a strategic bank node is inclined to avoid the liability of allowing its own depositors to use RTPs when the risk is considered high.
KEYWORDS
real time payments, financial network, game theory
ACM Reference Format:
Katherine Mayo, Shaily Fozdar, and Michael P. Wellman. 2021. An Agent-
Based Model of Strategic Adoption of Real-Time Payments. In ICAIF ’21:
2nd ACM International Conference on AI in Finance, November 2021. ACM,
New York, NY, USA, 9 pages. https://doi.org/10.1145/3490354.3494410
1 INTRODUCTION
The distinction between a standard payment and a real-time payment (RTP) lies in the speed and availability of the payment. Any payment mechanism defines a series of steps that must occur for the payment to be executed.
To initiate a payment, the payer (entity sending payment) issues a request to their payment service provider (PSP), say a bank. The request entails removal of funds from the payer's account and routing the payment to the bank account of the payee (entity receiving payment). These steps encompass the clearing and settlement functions, which for a standard payment must be accomplished for the payee's bank to credit the payee's account. Clearing and settlement can delay completion of the payment, for example due to grouping of payments into batches for processing, and communication required among banks to verify sufficiency of funds. These delays can be particularly significant for payments initiated outside of business hours.
A real-time payment allows the payee to receive the funds immediately. We focus on the deferred settlement case, in which an irrevocable credit of funds to the payee's account occurs before the clearing and settlement steps. Even if payment is initiated outside of regular business hours, the payee can still expect to receive the funds immediately, though the processing of the remainder of the payment steps is subject to business hours and batch processing as a standard payment [Committee on Payments and Market Infrastructures 2016].
Many RTP systems are in use today, including the Internet Banking System in China, the Real-Time Clearing system in South Africa, and the Faster Payments Service of the United Kingdom [Committee on Payments and Market Infrastructures 2016]. The US Federal Reserve is set to launch a real-time payment system, FedNow, in 2023 [Federal Reserve Board 2021]. The increasing provision of such services is driven by consumer demand. In cases where merchants insist on receiving funds before sending goods, RTPs enable the payer to receive their goods sooner. The merchant also benefits from more immediate opportunity to employ the funds in the business. However, deferring settlement necessitates that banks take on a credit risk. They face liability in the event problems arise, for instance due to fraud or other errors. The expediency of RTPs can also make it more difficult to catch potential problems before the payment is sent.
We seek to understand how such issues might impact bank adoption of RTP systems, particularly how they decide whom to offer real-time payment options. We develop an agent-based model that supports standard and real-time payments sent by consumer nodes and routed through bank nodes in a financial network. We model the risk of bank nodes using a scenario in which consumer nodes sending RTPs may initiate payments that exceed their deposit holdings. These insufficient payments are possible due to the deferred settlement feature of RTPs, delaying the verification of the payer's deposits until after the payment process has already begun. When such problematic payments occur, our model assigns liability to the payer's bank. We assume bank nodes in our model are willing to
extend short-term credit to consumer nodes sending RTPs. When
an insufficient payment occurs, the consumer node draws on this
credit and transfers short-term liability of the remainder of the
payment’s value to its bank.
To study the problem of real-time payment adoption, we define a game played by strategic bank nodes. The decision facing bank nodes is which consumers, if any, should be allowed to send real-time payments in the network. The banks select from strategies that set varying thresholds on the amounts consumers must deposit in the bank in order to be allowed to send RTPs. Banks must strategically balance benefits of offering RTP, including the ability to attract consumers, with the cost of exposure to covering insufficient payments. We use a process known as empirical game-theoretic analysis (EGTA) [Tuyls et al. 2020; Wellman 2016] to identify Nash equilibria of our game under a variety of configurations.
Our contributions can be summarized as follows:
(1) formulating the real-time payments adoption question as a strategic decision made by banks;
(2) defining an agent-based financial credit network model that supports standard and real-time payments with deferred settlement; and
(3) analyzing the effects of real-time payment adoption for a particular payment scenario.
We find that banks in our model tend to select strategies that set
positive but low thresholds on the deposits required for sending
RTPs, resulting in outcomes where most but not all consumers have
access to the service. In aggregate banks are generally better off
when all consumers use RTP, as that increases the overall volume of
successful payments. Nevertheless, individual banks are generally
unwilling to assume a level of risk to grant all their own consumers
the use of RTP.
2 RELATED WORKS
The credit network model is represented by a directed graph with weighted edges representing the capacity for agents in a network to transact with one another. It has been used to study trust networks for distributed payment in multi-unit auctions [Ghosh et al. 2007], informal borrowing [Karlan et al. 2009], and liquidity [Dandekar et al. 2011, 2015]. Cheng et al. [2016] extended the model to the financial credit model with the inclusion of interest rates on the directed edges.
Game theory has previously been applied to the payments space to study the real-time gross settlement (RTGS) system, which handles the settlement of payments between banks. Bech [2008] study the management of intraday liquidity by banks under different credit policies of the central bank. Banks can manage their liquidity by balancing the timing of settling payments. The authors find this payment scenario leads to two well-known games: the prisoner's dilemma and the stag hunt. Additional work on RTGS systems by Johnson et al. [2004] studies the effects of deferred settlement mechanisms on liquidity of banks. The authors use historical data from the US Federal Reserve's Fedwire Funds Service, a system used for commercial payments.
As real-time payment systems are relatively recent additions to the payment space, a number of works focus on a basic introduction. Topics in these works include how the payments differ from more traditional payment methods, potential benefits and drawbacks for both the consumer and PSPs, and discussions on existing RTP systems throughout the world [Committee on Payments and Market Infrastructures 2016; Hartmann et al. 2019; Santamaría 2019]. Prior literature in the RTP space also addresses RTP system design [Guo et al. 2015]. This includes proposals for additions to existing systems [Kulk 2021], as well as the implementation of new systems such as those incorporating the blockchain [Arshadi 2019; Zhong et al. 2019]. The design of RTP systems may be important when analyzing how banks and consumers may use them in the real world.
Figure 1: An example of a consumer node with 20 deposits currently in their bank account, but who is willing to hold an additional 80 units in the account.
Galbiati and Soramaki [2008] use an agent-based model to study
the liquidity demanded by banks with access to RTPs. Banks in
their model are able to choose their demanded liquidity reserves
for each day and payments are executed immediately as long as
the bank has the available liquidity. When there is no available
liquidity, payments suffer a delay, which can be costly. The authors
focus their analysis on how liquidity demand relates to network
efficiency, the number of banks in the system, and the volume of
payments.
To our knowledge, none of the existing literature specifically
addresses how banks might decide who can use RTP systems. Given
the potential risk to banks, it is worth exploring how they may
place a limitation on consumer use of RTPs and explore how such
a limitation may be chosen.
3 FINANCIAL PAYMENT MODEL
We model a financial network similar to that described by Cheng et al. [2016]. The network consists of a set of nodes B = {1, ..., b} representing banks and a set of nodes C = {1, ..., m} representing consumers, with b ≪ m. Nodes in the network are connected by a set of weighted, directed edges E. An edge (i, j, type, v) ∈ E represents a value v owed by i to j if type = debt, or a value v of credit extended by i to j if type = credit. A debt edge from a bank node to a consumer node, i ∈ B and j ∈ C, can be interpreted as representing deposits consumer j holds in its account at bank i. A credit edge from a consumer node to a bank node, i ∈ C and j ∈ B, can be interpreted as consumer i's willingness to hold additional deposits in its account at bank j. For example, a consumer's willingness to hold deposits may be bounded for reasons such as account use, interest rates, or FDIC insurance limits.
An example of the relationship between a consumer and a bank node in our model can be seen in Figure 1. The dashed debt edge (B1, C1, debt, 20) represents consumer C1 holding 20 units in its account at bank B1. The solid credit edge (C1, B1, credit, 80) represents C1's willingness to hold up to 80 more units in its account at B1. Thus, at any given time, consumer C1 is willing to hold at most 100 total units in its account.
There may exist multiple edges of each type between any pair of nodes i and j. We refer to the total debt between the pair of nodes as d_ij = Σ_{(i,j,debt,v) ∈ E} v. Similarly, the total credit extended from i to j is c_ij = Σ_{(i,j,credit,v) ∈ E} v. We restrict edges in our network to never be between two consumers, such that if i ∈ C and j ∈ C, then d_ij = 0 and c_ij = 0.
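To make the bookkeeping concrete, here is a minimal Python sketch (my own, not the authors' code; class and method names are hypothetical) of the edge multiset and the totals d_ij and c_ij:

```python
class CreditNetwork:
    """Multigraph over bank and consumer node ids, storing (i, j, kind, value) edges."""

    def __init__(self):
        self.edges = []  # list of (i, j, kind, value), where kind is "debt" or "credit"

    def add_edge(self, i, j, kind, value):
        assert kind in ("debt", "credit")
        self.edges.append((i, j, kind, value))

    def total(self, i, j, kind):
        """Total debt d_ij (kind='debt') or total credit c_ij (kind='credit') from i to j."""
        return sum(v for (a, b, k, v) in self.edges if (a, b, k) == (i, j, kind))


net = CreditNetwork()
net.add_edge("B1", "C1", "debt", 20)    # C1 holds 20 units of deposits at B1
net.add_edge("C1", "B1", "credit", 80)  # C1 is willing to hold up to 80 more
print(net.total("B1", "C1", "debt"))    # 20
```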
We model a payment as a series of new edges added to the network. To initiate a standard payment, the payer must hold enough deposits in its account to cover the value of the payment. For consumer C1 with an account at B1 to send a payment of value v, it must be that v ≤ d_{B1 C1}. We will later relax this constraint to allow for insufficient payments. By using v of its deposits for a payment, C1 is now owed v fewer deposits by B1, which we represent with the creation of edge (B1, C1, debt, −v). With v fewer deposits in its account, C1's willingness to hold more deposits must increase by v, since they were willing to hold that many deposits previously. We capture this by adding the edge (C1, B1, credit, v) to the network.
Suppose the payee is consumer C2 with an account at a different bank, B2. Consumer C1's bank will use the interbank network, which connects all banks in the network allowing payments to flow between them, to route the payment to C2's bank. We model the interbank network as a set of debt and credit edges, which connect any given bank node to every other bank node in the network. Our model assumes that banks have an infinite willingness to route payments on behalf of their consumers and thus models the credit edges between banks with infinite value. For simplification, we omit these credit edges from our figures. Routing the payment from B1 to B2 is reflected in the creation of the debt edge (B1, B2, debt, v). Upon receiving the funds from B1, B2 is able to credit those funds to the payee's account. In doing so, B2 now owes these additional deposits to C2, creating edge (B2, C2, debt, v). The increase in deposits in C2's account decreases their willingness to hold more deposits by the value of the payment, adding edge (C2, B2, credit, −v).
We model batch processing with a queue (Q). When the edges of a new payment are created, such as those above, they are stored in this queue. At regularly scheduled intervals, the clearing period, the queue is cleared by removing the edges and adding them to the network.
An example of a payment of 10 units from consumer C1 to consumer C2 is shown in Figure 2. At time t = 0 when the payment is initiated, the network remains unaffected. The queue, however, now holds the edges created by the payment: Q = {(B1, C1, debt, −10), (C1, B1, credit, 10), (B1, B2, debt, 10), (B2, C2, debt, 10), (C2, B2, credit, −10)}. It is not until the next clearing period, t = X, that the edges are removed from the queue and added to the network. The edges in the figure display the total debt (d_ij) and total credit (c_ij) values instead of showing each individual edge.
3.1 Real-time Payments
We can implement real-time payments with deferred settlement
in our network simply by changing which edges are added to the
queue. Instead of placing all the edges in the queue, a RTP adds
both the credit and debt edges between the payee and its bank to
the network immediately. This models the irrevocable credit of
funds to the payee's account. The deferred settlement is handled by placing the remaining edges in the queue until the next clearing period, similar to a standard payment.
Figure 2: An example of processing a standard payment of 10 units from consumer C1 to consumer C2 in our model.
Figure 3: An example of processing a payment of 10 units from consumer C1 to consumer C2 as a real-time payment in our model.
An example RTP is detailed in Figure 3. Similar to the previous example, consumer C1 sends a payment of 10 units to consumer C2, however this time it is sent in real-time. Again, the edges depicted show the total debt (d_ij) and total credit (c_ij) values. The payment is initiated at time step t = 0 and the immediate change is demonstrated by the state of the network at the next time step, t = 1. The edges between consumer C1 and bank B1 and between banks B1 and B2 are not changed until the next clearing period, t = X. It can be seen that after clearing, the standard and real-time payments have the same effect on the network.
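The two payment types differ only in which of the five edges take effect immediately and which wait in the queue for the next clearing period. A rough sketch (mine, not the authors' implementation), reusing the hypothetical CreditNetwork above:

```python
queue = []  # edges awaiting the next clearing period

def send_payment(net, payer, payer_bank, payee, payee_bank, v, real_time):
    payer_side = [(payer_bank, payer, "debt", -v), (payer, payer_bank, "credit", v)]
    interbank  = [(payer_bank, payee_bank, "debt", v)]
    payee_side = [(payee_bank, payee, "debt", v), (payee, payee_bank, "credit", -v)]
    if real_time:
        # Deferred settlement: the payee is credited immediately, the rest is queued.
        for edge in payee_side:
            net.add_edge(*edge)
        queue.extend(payer_side + interbank)
    else:
        # Standard payment: every edge waits for batch clearing.
        queue.extend(payer_side + interbank + payee_side)

def clear(net):
    """Clearing period: move all queued edges into the network."""
    while queue:
        net.add_edge(*queue.pop(0))
```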
We also support the insufficient payment case, which we consider to be a byproduct of the expediency of the real-time payments system. Consumer C1's payment is considered insufficient if its value v exceeds C1's current deposits at bank B1: v > d_{B1 C1}. Insufficient payments can occur in our model only when the payment is sent in real-time.
Figure 4: An example of processing an insufficient payment of 25 units from consumer C1 to consumer C2 in real-time.
An example of an insufficient payment sent as an RTP is outlined in Figure 4. Consumer C1 sends a payment of 25 units to consumer C2. Similar to the RTP case, the edges between the payee and its bank are updated immediately, while the remaining edges are added to the queue. The queue will look similar to the previous two examples: Q = {(B1, C1, debt, −25), (C1, B1, credit, 25), (B1, B2, debt, 25), (B2, C2, debt, 25), (C2, B2, credit, −25)}. The special case of an insufficient payment will be handled during the clearing of the queue. When the edges between the payer and the payer's bank are processed, the model will detect the insufficiency of the payment. The payer's bank will force the payer to cover as much of the payment as possible by supplying all of its current deposit holdings. This is reflected in a change in the edge (B1, C1, debt, −25) to (B1, C1, debt, −20), which is then added to the network. Note the consumer's total deposits are now equal to 0. As a result, the accompanying credit edge is also changed from (C1, B1, credit, 25) to (C1, B1, credit, 20), to reflect the amount that was actually taken from the account, and that the consumer is willing to replace. The payer's bank must supply the remaining value of the payment (v − d_{B1 C1}), which enables the bank to pass on the full amount to the payee's bank, leaving the remaining edges of the payment unchanged.
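One way to express that special case in code, continuing the same hypothetical sketch (this is my paraphrase of the mechanism, not the authors' code; it replaces the two queued payer-side edges directly with capped versions):

```python
def settle_payer_side(net, bank, payer, v, coverage):
    """Apply the payer-side edges of a cleared payment of value v, capping at the payer's deposits.

    coverage is a dict: bank_id -> total insufficient coverage the bank has had to supply.
    """
    deposits = net.total(bank, payer, "debt")
    taken = min(v, deposits)
    if taken < v:                                   # insufficient payment
        coverage[bank] = coverage.get(bank, 0) + (v - taken)
    net.add_edge(bank, payer, "debt", -taken)       # e.g. -25 becomes -20 in the Figure 4 example
    net.add_edge(payer, bank, "credit", taken)      # the matching credit edge shrinks the same way
```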
4 PAYMENT GAME
We define a real-time payments game played by bank nodes in the
financial payment network described. Banks in our network must
decide which consumer nodes, if any, are allowed to send RTPs by
selecting from one of 6 available strategies. Each strategy sets a
different threshold on the amount consumer nodes must initially
deposit in their accounts to be allowed to send RTPs. The thresholds
range from allowing every consumer node to send RTPs to allowing
no consumer nodes to send RTPs. In our model consumers may
make insufficient payments, for which the payer’s bank will be held
liable in the short-term. As described in Section 3, the difference
between the insufficient payment’s value and the payer’s deposit
holdings, the insufficient coverage, must be covered by the payer's
bank. Thus, the strategic real-time payments decision for bank
nodes is to choose a strategy that balances the benefits of RTPs,
including attracting consumer nodes, with a desire to avoid a large
amount of insufficient coverage.
The game begins by initializing consumer nodes with a random
amount of initial deposits drawn from an exponential distribution.
This creates a set of consumers with realistic variation in levels
of wealth, including a larger group with average deposit holdings
and a small group of very wealthy individuals. Consumer nodes
are also randomly assigned to one of two preferences for receiving
payments with equal likelihood. Some consumer nodes are willing to accept any kind of payment, while others accept only RTPs.
We also initialize a set of bank nodes and the edges forming the
interbank network. Bank nodes then select a strategy for adopting
RTP use for their consumers.
All consumer nodes are assigned to a bank under the assump-
tion that consumers prefer banks that allow them to send RTPs.
Consumer nodes do not further differentiate between the banks in
any way. The full assignment procedure is as follows (a minimal code sketch appears after the list):
•If no bank allows the consumer to send RTPs, the consumer
is randomly assigned to any bank.
•If only one bank allows the consumer to send RTPs, the
consumer is assigned to that bank.
•If multiple banks allow the consumer to send RTPs, the con-
sumer is randomly assigned to one such bank.
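A minimal sketch of that rule (my own illustration; the bank identifiers and threshold values below are hypothetical):

```python
import random

def assign_bank(consumer_deposits, rtp_thresholds):
    """rtp_thresholds: dict bank_id -> minimum initial deposits for RTP use (None = RTPs not offered)."""
    rtp_banks = [b for b, threshold in rtp_thresholds.items()
                 if threshold is not None and consumer_deposits >= threshold]
    if rtp_banks:                                  # one or more banks enable RTPs for this consumer
        return random.choice(rtp_banks)
    return random.choice(list(rtp_thresholds))     # otherwise, any bank uniformly at random

print(assign_bank(30, {"B1": 0, "B2": 50, "B3": None}))  # only B1 qualifies, so B1 is chosen
```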
When a consumer node is assigned to a bank, a credit edge is
created from the consumer to the bank and a debt edge is created
from the bank to the consumer. The value on the debt edge is set
to the consumer node’s initial deposits, representing the consumer
placing all of its deposits into its account. We assume each consumer
node is willing to hold an infinite amount of deposits in its bank
account and set the value on the credit edges to infinity. A consumer
node’s bank assignment lasts for the duration of the game.
The game proceeds in discrete time steps, 𝑡={0,...,𝑇}. The
game starts each time step by checking if it is in a designated
clearing period. In every clearing period, the queue is cleared by
removing the edges from 𝑄and adding them to the network. When
an insufficient payment is found, the edges between the payer and
its bank node are handled as described in Section 3.1.
After performing any necessary clearing, the game attempts to
create L new payments in the network. For each payment a payer,
payee, and value are randomly selected. The value of the payment is
drawn uniformly from a fixed interval. Consumer nodes are limited
to at most one payment per time step. Each payment is processed
in the following manner:
•If the receiver accepts only RTPs and the sender cannot send
RTPs, they are deemed incompatible and the payment fails.
•If the sender can send RTPs, the payment is processed as a
RTP.
•If the sender cannot send RTPs and the receiver is willing
to accept all payment types, the payment is processed as a
regular payment.
Drawing each payment from a fixed interval may result in an
insufficient payment. The first chance to catch a possibly insufficient
payment is when it is initiated. We define a consumer C1's available funds, A_C1, as its current total deposits minus the value of any pending payments in the queue for which C1 is the payer, or A_C1 = d_{B1 C1} + Σ_{(B1,C1,debt,v) ∈ Q : v < 0} v. A payment of value v initiated by C1 is marked as potentially insufficient if v > A_C1. Note that we use the
term potentially insufficient here, as it is possible for the consumer
node to receive funds between the payment’s initiation and the
next clearing period, such that at clearing, 𝑣≤𝑑𝐵1𝐶1. A standard
payment deemed potentially insufficient will terminate the payment
process immediately. While the expediency of a real-time payment
does not allow the bank nodes to catch an insufficient payment
before it is sent, our game provides an opportunity for the payer
to potentially do so. Consumer nodes check the validity of their
payments before sending them with probability p ∈ [0, 1], the consumer check probability. We define this probability as a game configuration variable that applies to all consumer nodes in the network. If a consumer node checks their payment and finds it may be insufficient, they adjust the payment's value to be valid so that v = max(0, A_C1). A payment with a value equal to 0 triggers
the termination of the payment process. Anytime the payment
process is terminated early, the payment attempt is still counted
and subsequently decrements the total number of payments left to
attempt in the time step. If the payment process does not terminate
for any reason, the appropriate processing steps continue.
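Putting the per-payment rules together, a rough sketch of a single payment attempt (my own reading of the procedure, reusing the hypothetical send_payment helper from earlier; it applies the consumer check only to real-time payments, since standard payments are already screened at initiation):

```python
import random

def attempt_payment(net, payer, payer_bank, payee, payee_bank,
                    payee_rtp_only, payer_can_rtp, v, available, p_check):
    if payee_rtp_only and not payer_can_rtp:
        return "failed: incompatible"
    if payer_can_rtp:
        if random.random() < p_check:              # consumer check probability
            v = min(v, max(0, available))          # shrink a potentially insufficient payment
            if v == 0:
                return "terminated: no available funds"
        send_payment(net, payer, payer_bank, payee, payee_bank, v, real_time=True)
        return "sent as RTP"
    if v > available:                              # standard payments are caught at initiation
        return "terminated: potentially insufficient"
    send_payment(net, payer, payer_bank, payee, payee_bank, v, real_time=False)
    return "sent as standard payment"
```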
The final time step, 𝑡=𝑇, only clears the queue and no new
payments are created. At the end of the game, bank nodes are
awarded a payoff for selecting their chosen strategy that relies on three values: the total value of initial deposits attracted (D), the total value of RTPs routed (R), and the amount of insufficient payments coverage (I). The deposits a bank holds for its consumers may be
viewed as representing a consumer’s level of business with the bank,
however the deposits themselves are a liability that must be paid
upon demand. Therefore, we model the utility a bank node receives
from deposits as a fraction of their value. Similarly, the utility a
bank derives from routing payments in real-time for its consumers
is a fraction of its value representing consumer satisfaction and
continued business. We model our payoff assuming a consumer
derives slightly more satisfaction from being able to send a 100
unit payment in real-time than from being able to send a 5 unit
payment in real-time. Conversely, the loss to bank nodes in our
game is equal to the value of insufficient coverage required. Banks
hold the short-term liability for insufficient payments, even if they
are able to push the liability onto consumers in the long-run.
The equation for the payoff to bank $B$ is:

$$\mathit{payoff}_B = 0.5 \cdot D + 0.02 \cdot R - I$$
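Computing this payoff is a one-liner; the sketch below simply restates the equation with the 0.5 and 0.02 weights from the text (the argument names are ours):

```python
def bank_payoff(deposits_attracted, rtp_value_routed, insufficient_coverage):
    """Payoff to a bank node: partial credit for deposits and RTP value routed,
    minus the full value of insufficient-payment coverage."""
    return 0.5 * deposits_attracted + 0.02 * rtp_value_routed - insufficient_coverage
```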
We model games with m = 225 consumer nodes, b = 3 bank nodes, and T = 720 time steps, with L = 45 payments attempted per time step with values v ~ U{1, ..., 100}. We test clearing period lengths X ∈ {4, 6, 12, 24} and consumer check probability p ∈ {0, 0.25, 0.5, 0.75}, for a total of 16 different game configurations for analysis.
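The 16 configurations are just the Cartesian product of the clearing-period and check-probability grids; for example (a small sketch, with the other constants from the text carried along for context):

```python
from itertools import product

CLEAR_PERIODS = [4, 6, 12, 24]
CHECK_PROBS = [0.0, 0.25, 0.5, 0.75]
BASE = {"consumers": 225, "banks": 3, "time_steps": 720,
        "payments_per_step": 45, "max_payment_value": 100}

configs = [dict(BASE, clear_period=x, check_prob=p)
           for x, p in product(CLEAR_PERIODS, CHECK_PROBS)]
assert len(configs) == 16
```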
5 EMPIRICAL GAME-THEORETIC ANALYSIS
To analyze the real-time payments game we use extensive simula-
tion of strategy combinations in a process referred to as EGTA. The
process selects a strategy profile, a list of strategies and the number
of players employing each strategy, and uses the profile in repeated
simulations of the payment game. In each game, consumer and
bank nodes are randomly initialized, banks are randomly assigned
to a strategy in the profile such that the specified number of banks
employ each strategy, and random payments are generated and
cleared over 720 time steps. Profiles are selected for simulation in
an iterative procedure with the aim of finding symmetric mixed-
strategy Nash equilibria similar to the manner used in previous
EGTA studies [Cassell and Wellman 2013; Wellman et al. 2013]. The
calculated payoff to a bank node for employing a strategy is the
sample average of payoffs observed over the many simulation runs.

The equilibria identified by EGTA for the different game config-
urations classified by clear period and consumer check probability
can be seen in Table 1. We sort the 6 strategy thresholds into three
categories low, medium, and high based on the minimum amount
of initial consumer deposits each requires for real-time payments
use. Most of the Nash equilibria identified were mixed-strategy
equilibria, except for the case where the consumer check proba-
bility is equal to 0.75, which has a pure-strategy Nash equilibrium
for all clear periods. We report the total probability a bank node
will adopt a strategy with a threshold belonging to each category,
calculated from the probability assigned to playing the strategies
in equilibrium.
The results show that an increase in the clear period and consumer
check probability leads to an increase in the probability that a bank
node adopts a strategy with a low threshold on deposits. Intuitively,
as consumer nodes are more likely to check the validity of their
payments and correct errors, the number of insufficient payments
decreases. Thus, bank nodes may be more willing to allow broader
use of RTPs. A lower clearing period, on the other hand, allows
payees of standard payments to receive funds in a more timely
manner than when the clearing period is very high. The difference
between an RTP and a standard payment becomes smaller. In this
case, it may not be as worthwhile for bank nodes to provide as
many consumer nodes the use of RTPs.
6 EFFECTS OF EQUILIBRIA
We analyze the effect that bank nodes adopting the Nash equilibrium
strategies has on the network, in particular on consumer nodes’
access to RTPs, the success of payments attempted, and insufficient
payments made. Bank nodes play the payment game 1,000 times
as described: randomly initialize bank and consumer nodes, bank
nodes adopt a strategy, consumer nodes are assigned to banks, and
random payments are created and cleared in the network over 720
time steps. However, for this analysis, bank nodes are required to
adopt the equilibrium strategy identified by EGTA for the game
configuration being analyzed. When the equilibrium is mixed, the
bank nodes will be assigned to one of the pure strategies that make
up the equilibrium (the support) according to a weighted draw with
weights equal to the probability assigned to playing each strategy
in equilibrium.
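Drawing each bank's pure strategy from a mixed equilibrium in this way amounts to a weighted random choice over the support; a minimal sketch (Python; the strategy names and probabilities are hypothetical, not the equilibria from Table 1):

```python
import random

def assign_bank_strategies(support, probs, num_banks=3, seed=None):
    """Independently draw one pure strategy per bank node, weighted by the
    probability each strategy receives in the mixed equilibrium."""
    rng = random.Random(seed)
    return rng.choices(support, weights=probs, k=num_banks)

# Hypothetical mixed equilibrium over two threshold strategies:
banks = assign_bank_strategies(["low_threshold", "medium_threshold"], [0.6, 0.4])
```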
We also compare the success of payments and insufficient pay-
ments under the equilibria to the all-RTPs and no-RTPs cases. For
the first case, bank nodes allow all consumer nodes the use of RTPs.
This represents a situation in which banks might be required, for
instance by federal regulation, to allow everyone access to RTPs. In
the second case, bank nodes do not allow any consumer nodes to
send RTPs, representing the status quo before RTP systems were
introduced. A consumer’s preference to accept only RTPs in this
case can be viewed as unwillingness to accept payments through
the banking system without an RTP system in place.
The proportion of a bank node’s consumers that are allowed to
send RTPs when the bank adopts the equilibrium strategies is shown
in Figure 5. In all game configurations, the majority of consumer
nodes are allowed to send RTPs, although in many cases not all.
The proportion tends to decrease as the clearing period length
decreases and increase as the consumer check probability increases.
Table 1: The probability of a bank node selecting a strategy that sets a low threshold in equilibrium increases as the consumer check probability and clear period increase.

| Clear Period X | p = 0 (low / med / high) | p = 0.25 (low / med / high) | p = 0.5 (low / med / high) | p = 0.75 (low / med / high) |
|---|---|---|---|---|
| 24 | 0.58 / 0.42 / 0 | 0.61 / 0.39 / 0 | 1 / 0 / 0 | 1 / 0 / 0 |
| 12 | 0.54 / 0.46 / 0 | 0.46 / 0.54 / 0 | 1 / 0 / 0 | 1 / 0 / 0 |
| 6 | 0.51 / 0.49 / 0 | 0.37 / 0.63 / 0 | 0.92 / 0.08 / 0 | 1 / 0 / 0 |
| 4 | 0.48 / 0.52 / 0 | 0.39 / 0.61 / 0 | 0.75 / 0.25 / 0 | 1 / 0 / 0 |
Figure 5: The proportion of consumer nodes allowed to send
real-time payments under the equilibria increases as the
clear period and consumer check probability increase.
This aligns with the trends seen in the equilibria in Table 1. As the
clear period decreases, the probability of selecting a strategy with a
lower threshold decreases. As a result, the proportion of consumer
nodes allowed to send RTPs also decreases. On the other hand,
when the consumer check probability increases, the probability
of choosing a strategy with a lower threshold increases and the
proportion of consumer nodes allowed to send RTPs increases as
well. The exception occurs when the consumer check probability is
0.75 and the pure strategy Nash equilibrium is to allow everyone to
send RTPs. This results in a proportion of 1 for all clearing periods.
As described in Section 4, issues with consumer compatibility
and deposit values may lead to early termination of payments. We
examine the frequency of such instances by studying the proportion
of a bank node’s payments that are deemed successful because they
did not terminate early. To calculate, we use the expected number of
payments attempted per bank node. We also compare the equilibria
case to the case where all consumer nodes may send RTPs. The
results are in Figure 6.
Most payments attempted in the network are successful, regard-
less of game configuration and in both the equilibria and all-RTPs
cases. Under the equilibria, the success tends to slightly decrease as
the clear period decreases. At lower clear periods the bank nodes
tend to allow fewer consumer nodes to send RTPs. With fewer
consumers sending RTPs, there may exist more cases where the
payer’s allowed payment type and payee’s desired payment type are
incompatible, which increases the number of unsuccessful
payments. When compared to the equilibria, we see that allowing everyone to send RTPs leads to a higher proportion of successful
payments in all game configurations. The only exception is when
consumer check probability is 0.75, where allowing everyone to
send RTPs is the equilibrium. Instead of a decrease in the proportion
of successful payments, the all-RTPs case shows a slight increase in
the proportion of successful payments as the clear period decreases.
This may be attributed to the larger number of payments made
between clearing as a result of longer clearing periods. Thus, there
are more cases of consumer nodes validating payments and finding
they have no available funds.
We also compare these results to the no-RTPs case, where the
average proportion of successful payments across all game config-
urations is only 0.32. This highlights the importance of real-time
payments for promoting consumer liquidity. Allowing RTPs for
at least some participants in the network promotes increased con-
sumer participation in the banking system. Furthermore, without
a delay in receiving funds for which a consumer is the payee, the
funds are available for the consumer’s immediate use as a payer.
The disincentive for bank nodes to offer real-time payments ser-
vices to consumers is the risk of insufficient payments. We measure
the effect insufficient payments have on bank nodes by the percent-
age of a bank’s payments that are insufficient, as well as the average
total insufficient coverage required by a bank. The results when
bank nodes adopt the equilibrium strategies are shown in Table 2
and when bank nodes allow all consumers to send RTPs in Table 3.
If banks do not allow anyone to send real-time payments, insuf-
ficient payments in our model are caught and terminated before
processing.
Few payments made by consumer nodes in our network turn
out to be insufficient, however when they occur the bank nodes
tend to cover the majority of the payment. This trend is evident
regardless of whether bank nodes adopt the equilibrium strategies
or allow all consumer nodes to send RTPs. As the consumer check
probability increases, both the number of insufficient payments and
the insufficient coverage required tend to decrease. With consumer
nodes more likely to check their payments when the consumer
check probability is high, the likelihood they catch and correct po-
tentially insufficient payments is also high. Conversely, as the clear
period decreases, the number of insufficient payments tends to in-
crease and the insufficient coverage required tends to decrease. The
exception occurs when the consumer check probability is 0.75. In
this case both the number of insufficient payments and insufficient
coverage required increase as the clear period decreases.
To explain this phenomenon, we must consider consumer nodes
as both payers and payees. When the clear period decreases, the
Figure 6: When bank nodes adopt the equilibrium strategies,
the proportion of successful payments decreases as the clear
period decreases. Furthermore, its value is always smaller
than when all consumers send real-time payments.
number of time steps between clearing becomes smaller. At a
smaller clear period, there are fewer opportunities for a given con-
sumer to receive funds as a payee. Now when a clear period occurs,
this consumer has accumulated fewer deposits to use when settling
payments for which they are the payer than they might have if the
clear period were longer. Thus, we see an increase in the number
of instances in which a payment’s value is strictly larger than the
payer’s deposit holdings at the time of settlement. On the other
hand, a smaller clearing period also limits the number of payments
for which a given consumer node may be the payer. Note that this
limitation has a stronger impact on a consumer node’s ability as a
payer than as a payee, as consumers make at most one payment,
but may receive any number of payments per time step. With fewer
opportunities to make payments between clearing, the total amount
owed in payments for any given consumer node at clearing time
will be smaller with a clearing period of 4 than 24. So while we may
see an increase in the number of payments for which consumer
nodes do not have enough deposits, the amount of deposits a con-
sumer node has tends to be closer to the total value of the payments
made.
The insufficient payment coverage trend at consumer check
probability 0.75 may be understood by analyzing the effect of con-
sumer check probability on payment value. When the probability a consumer node checks their payments is high, it is likely that an
insufficient payment will be caught and corrected. By definition,
a corrected insufficient payment’s value will be smaller than the
original value. Thus, the value of payments on average is less when
consumer nodes correct insufficient payments more often. Smaller
payment values mean fewer deposits. The increase in the number
of insufficient payments along with the decrease in deposits causes
the insufficient coverage requirement for bank nodes to slightly
increase.
Our analysis also illustrates an important relationship between banks and consumer use of RTPs: all parties involved tend to be better off when all consumer nodes send their payments in real-time. This is evident in
the proportion of successful payments being larger and insufficient
payments, both the number and resulting required coverage, being
smaller when all consumer nodes send RTPs compared to the other
cases for all game configurations. An individual bank node playing
our payment game, however, is more likely to play a strategy that
limits the number of its consumers who send RTPs in the event
insufficient payments are likely. We can conclude that, while bank
nodes might be better off when all consumers in the network are
allowed to send RTPs, a strategic bank node would prefer not to
assume the risk required to allow all of its consumers the use of
RTP.
7 CONCLUSION
In this study, we analyze how the adoption of real-time payments by
banks may be affected by potential payments risk, specifically the
possibility of insufficient payments. We introduce a payment model
that supports consumer nodes sending both standard and real-time
payments with deferred settlement through the interbank network.
Within this framework, we model insufficient payments as the
case where the value of a RTP is greater than the current deposit
holdings of the payer and for which the payer’s bank becomes
liable. Such a scenario captures both the risks of the expediency of
RTP and the credit risk borne by banks in the deferred settlement
case. We ask which consumers banks should allow to send RTPs in
this scenario by modeling the decision as a strategic game played
by bank nodes. Bank nodes select a strategy based on the initial
deposits of consumer nodes. The strategic decision for bank nodes
requires balancing the benefits of real-time payments and a desire
to attract consumers, with a desire to limit their liability.
Our results show that while bank nodes never choose strategies
with high thresholds, the likelihood of allowing all consumers to
send RTPs is dependent on different game configuration variables.
When consumer nodes are less likely to send insufficient payments,
bank nodes are willing to allow all, or nearly all consumers to
send real-time payments. However, if consumer nodes may send
many insufficient payments, bank nodes become more likely to
select a strategy with a medium level threshold. Bank nodes also
tend to prefer strategies with medium level thresholds to strategies
with lower thresholds when the clearing period is lower. This is
likely because a shorter clearing period makes real-time payments
more similar to standard payments. In this case, it becomes less
worthwhile for bank nodes to allow as many consumers the use of
RTPs.
Table 2: When bank nodes adopt the equilibrium strategies, few insufficient payments are made, but the insufficient coverage required by banks is large.

| Clear Period X | p = 0 (% insufficient / % coverage) | p = 0.25 (% insufficient / % coverage) | p = 0.5 (% insufficient / % coverage) | p = 0.75 (% insufficient / % coverage) |
|---|---|---|---|---|
| 24 | 4.6 / 79 | 3.3 / 75 | 1.8 / 70 | 0.75 / 65 |
| 12 | 4.9 / 74 | 3.9 / 72 | 2.3 / 68 | 1.1 / 65 |
| 6 | 5.1 / 72 | 4.4 / 72 | 2.8 / 69 | 1.3 / 67 |
| 4 | 5.3 / 72 | 4.5 / 71 | 3.0 / 69 | 1.4 / 68 |
Table 3: If all payments are sent as real-time payments, the number of insufficient payments made is small, but bank nodes cover the majority of the payment.

| Clear Period X | p = 0 (% insufficient / % coverage) | p = 0.25 (% insufficient / % coverage) | p = 0.5 (% insufficient / % coverage) | p = 0.75 (% insufficient / % coverage) |
|---|---|---|---|---|
| 24 | 4.2 / 78 | 2.9 / 74 | 1.7 / 70 | 0.76 / 65 |
| 12 | 4.6 / 73 | 3.4 / 70 | 2.2 / 68 | 1.1 / 65 |
| 6 | 4.8 / 70 | 3.7 / 69 | 2.6 / 68 | 1.3 / 67 |
| 4 | 4.9 / 70 | 3.8 / 69 | 2.7 / 68 | 1.4 / 68 |
We compare the effects of bank nodes adopting the equilibrium
strategies to the cases where bank nodes allow everyone or allow
no one to send RTPs. The results of this analysis demonstrate that
bank nodes are better off when all consumer nodes are sending
payments in real-time than when bank nodes adopt the equilibria or
don’t allow RTPs. This is evident in the proportion of payments that
are successfully made in the network, the number of insufficient
payments made in the network, and the insufficient coverage re-
quired by bank nodes as a result of insufficient payments. However,
the Nash equilibrium in many situations for banks in our game
is to select a strategy that places a limitation on the number of
consumers who are allowed to send RTPs. From these results, we
can infer that although outcomes for the banks are more favorable
when all consumers send RTPs, strategic banks are unwilling to as-
sume the liability risk required to provide all their own consumers
the use of RTPs.
While our work offers some insight into the strategic decision
faced by banks in the real-time payments space and the effects
the decision may have on the network, it is important to note this
model is rather simplistic. Future work may explore how other
factors may affect the real-time payments decision. For example,
consumers may be endowed with additional features that would
impact their payments, the network may be made of more node
types than just consumers and banks, and issues with payments
could stem from other scenarios such as fraud. Addressing these, as
well as other potential additions, may help paint a more complete
picture of the real-time payments question.
ACKNOWLEDGMENTS
This work was supported in part by a grant from JP Morgan Chase
AI Center.
REFERENCES
Nasser Arshadi. 2019. Blockchain Platform for Real-Time Payments: A Less Costly and
More Secure Alternative to ACH. Technology & Innovation 21, 1 (Oct. 2019), 3–9.
Morten L. Bech. 2008. Intraday Liquidity Management: A Tale of Games Banks Play.
FRBNY Economic Policy Review (Sept. 2008).
Ben-Alexander Cassell and Michael P. Wellman. 2013. EGTAOnline: An experiment
manager for simulation-based game studies. In Multi-Agent Based Simulation XIII .
Lecture Notes in Artificial Intelligence, Vol. 7838. Springer, 85–100.
Frank Cheng, Junming Liu, Kareem Amin, and Michael P. Wellman. 2016. Strategic
payment routing in financial credit networks. In Seventeenth ACM Conference on
Economics and Computation (Maastricht). 721–738.
Committee on Payments and Market Infrastructures. 2016. Fast payments: Enhanc-
ing the speed and availability of retail payments . Technical Report 154. Bank for
International Settlements.
Pranav Dandekar, Ashish Goel, and Ramesh Govindan. 2011. Liquidity in credit
networks: A little trust goes a long way. In 12th ACM Conference on Electronic
Commerce . 147–156.
Pranav Dandekar, Ashish Goel, Michael P. Wellman, and Bryce Wiedenbeck. 2015.
Strategic Formation of Credit Networks. In ACM Transactions on Internet Technology ,
Vol. 15. 1–41. Issue 1.
Federal Reserve Board. 2021. About the FedNow Service . Retrieved June 29, 2021 from
https://www.frbservices.org/financial-services/fednow/about.html
Marco Galbiati and Kimmo Soramaki. 2008. An Agent-Based Model of Payment
Systems. Journal of Economic Dynamics and Control 35, 6 (2008), 859–875.
Arpita Ghosh, Mohammad Mahdian, Daniel M. Reeves, David M. Pennock, and Ryan
Fugger. 2007. Mechanism Design on Trust Networks. In Third International Work-
shop on Internet and Network Economics . 257–268.
Zhiling Guo, Rob Kauffman, Mei Lin, and Dan Ma. 2015. Near Real-Time Retail Payment
and Settlement Systems Mechanism Design. Working Paper. SWIFT Institute. https://swiftinstitute.org/wp-content/uploads/2015/11/WP-No-2014-004-1.pdf
Monika Hartmann, Lola Hernandez van Gijsel, Mirjam Plooij, and Quentin Vandeweyer.
2019. Are instant payments becoming the new normal? A comparative study. 229
(Aug. 2019).
Kurt Johnson, James J. McAndrews, and Kimmo Soramaki. 2004. Economizing on
Liquidity with Deferred Settlement Mechanisms. FRBNY Economic Policy Review
(Dec. 2004).
Dean Karlan, Markus Mobius, Tanya Rosenblat, and Adam Szeidl. 2009. Trust and
Social Collateral. Quarterly Journal of Economics 124, 3 (2009), 1307–1361.
Erwin Kulk. 2021. Request to pay: Monetising the instant payments investment. Journal
of Digital Banking 5, 3 (Jan. 2021), 193–203.
Javier Santamaría. 2019. Developments in instant payments. Journal of Payments
Strategy & Systems 13, 3 (2019), 190–193.
Karl Tuyls, Julien Perolat, Marc Lanctot, Edward Hughes, Richard Everett, Joel Z. Leibo,
Csaba Szepesvári, and Thore Graepel. 2020. Bounds and dynamics for empirical
game-theoretic analysis. Autonomous Agents and Multiagent Systems 34, 7 (2020).
Michael P. Wellman. 2016. Putting the agent in agent-based modeling. Autonomous
Agents and Multi-Agent Systems 30 (2016), 1175–1189.
Michael P. Wellman, Tae Hyung Kim, and Quang Duong. 2013. Analyzing Incentives
for Protocol Compliance in Complex Domains: A Case Study of Introduction-Based
Routing. In 12th Workshop on the Economics of Information Security (Washington).
Lin Zhong, Qianhong Wu, Jan Xie, Zhenyu Guan, and Bo Qin. 2019. A secure large-
scale instant payment system based on blockchain. Computers & Security 84 (2019), 349–364. |
b6069c61-4bd2-421a-848e-cebe81d0d909 | trentmkelly/LessWrong-43k | LessWrong | Is there a word for discrimination against A.I.?
We've all heard of racism. We've all heard of sexism. Many of us have heard of speciesism. Other than "speciesism", is there a word for discrimination against the intelligence in systems running on computing machinery? Do you think that we're going to need one? Or do you think that we can get by without one? |
448bb7c4-ea4e-49fc-b34f-6210d2910361 | trentmkelly/LessWrong-43k | LessWrong | Acting Wholesomely
Overview of essay series
This is the first in a collection of three essays exploring and ultimately defending the idea of choosing what feels wholesome as a heuristic for picking actions which are good for the world. I'm including a summary of the series here, before getting to the essay proper.
The two main generators of my thinking about this were:
* Reflecting on major mistakes that have been made in EA, and wondering how EA might have been different at a deep level in ways that could have averted those mistakes.
* Reflecting on and trying to name certain core praiseworthy behaviours of people whom I especially admire.
In the first essay, Acting Wholesomely (= the rest of this post), we see that the regular English concept of acting wholesomely can be action-guiding, especially if taken with a flavour of “paying attention to the whole of things”. In practice this involves leveraging our emotional intelligence to help recognise which actions or effects are (un)wholesome. This is a skill which many people already exercise implicitly; it can be practised, and can help us to avoid moral errors. Unwholesomeness can never be completely eliminated, so we must learn to relate to it wholesomely — rather than ignore it or treat it as toxic, we may do best to relate to it as a wise parent towards a child who is hurting others.
In the second essay, Wholesomeness and Effective Altruism, we will see that there are tensions between wholesome action and a simplistic interpretation of EA. But they can be unified in a “wholesome EA” perspective. Serious pursuit of wholesome action will integrate core EA principles, and serious pursuit of EA will (as I argue in the third essay) integrate a desire for wholesomeness. This might have helped to avert major historical mistakes in EA. There remain challenges to be tackled (when to act carefully vs quickly; when to focus on reducing local unwholesomeness vs more global), but these are just normal challenges in seeking to ac |
dd58b898-1dd2-4abf-bb14-3615bfe6e7a0 | StampyAI/alignment-research-dataset/special_docs | Other | Adapting a kidney exchange algorithm to align with human values.
Adapting a kidney exchange algorithm to align with human values ☆
=============================================================================================
Abstract
--------
The efficient and fair allocation of limited resources is a classical problem in economics and computer science. In kidney exchanges, a central market maker allocates living kidney donors to patients in need of an organ. Patients and donors in kidney exchanges are prioritized using ad-hoc weights decided on by committee and then fed into an allocation algorithm that determines who gets what—and who does not. In this paper, we provide an end-to-end methodology for estimating weights of individual participant profiles in a kidney exchange. We first elicit from human subjects a list of patient attributes they consider acceptable for the purpose of prioritizing patients (e.g., medical characteristics, lifestyle choices, and so on). Then, we ask subjects comparison queries between patient profiles and estimate weights in a principled way from their responses. We show how to use these weights in kidney exchange market clearing algorithms. We then evaluate the impact of the weights in simulations and find that the precise numerical values of the weights we computed matter little, other than the ordering of profiles that they imply. However, compared to not prioritizing patients at all, there is a significant effect, with certain classes of patients being (de)prioritized based on the human-elicited value judgments.
Introduction
------------
As AI is deployed increasingly broadly, AI researchers are confronted with the moral implications of their work. The pursuit of simple objectives, such as minimizing error rates, maximizing resource efficiency, or decreasing response times, often results in systems that have unintended consequences when they confront the real world, such as discriminating against certain groups of people [34]. It would be helpful for AI researchers and practitioners to have a general set of principles with which to approach these problems [45], [41], [24], [16], [33].
One may ask why any moral decisions should be left to computers at all. There are multiple possible reasons. One is that the decision needs to be made so quickly that calling in a human for the decision is not feasible, as would be the case for a self-driving car having to make a split-second decision about whom to hit [13]. Another reason could be that each individual decision by itself is too insignificant to bother a human, even though all the decisions combined may be highly significant morally—for example, if we were to consider the moral impact of each advertisement shown online. A third reason is that the moral decision is hard to decouple from a computational problem that apparently exceeds human capabilities. This is the case in many machine learning applications (e.g., should this person be released on bail? [27]), but also in other optimization problems.
We are interested in one such problem: the clearing house problem in *kidney exchanges*. In a kidney exchange, patients who need a kidney transplant and have a willing but incompatible live donor may attempt to trade their donors' kidneys [40]. Once these people appear at an exchange, we face a highly complex problem of deciding who matches with whom. In some exchanges, this matching problem is solved using algorithms developed in the AI community: the United States [19], the United Kingdom [30], the Netherlands [23], and so on [9].
In this paper, we investigate the following issue. Suppose, in principle, that we prioritize certain patients over others—for example, younger patients over older patients. To do so would clearly be a morally laden decision. How should this affect the role of the AI researcher developing these systems? From a purely algorithmic perspective, it may seem that there is little more to this than to change some weights in the objective function accordingly. But we argue that our job, as AI researchers, does not end with this simple observation. Rather, we should be closely involved with the process for determining these weights, both because we can contribute technical insights that are useful for this process itself, and because it is our responsibility to understand the consequences to which these weights will lead. The methodology that we develop integrates this prioritization into our development work.
In this paper, we provide an end-to-end methodology for estimating weights of individual patient profiles in a kidney exchange, where these weights are used only for tiebreaking purposes (i.e., when multiple solutions give the maximal number of transplants).
Executing our methodology in such a way that we would advocate directly adopting the results in practice would require substantially more effort and participation from other parties. For example, we would need to consult domain experts to determine which patient characteristics should be used to determine edge weights. We would also need to involve stakeholders such as policy-makers, doctors, and kidney exchange participants in the process for determining weights. For this reason, we execute this methodology in a limited fashion as a proof-of-concept, and evaluate the results in simulations.
We first elicit from human subjects a list of patient attributes they consider acceptable for the purpose of prioritizing patients in kidney exchanges (e.g., most subjects did not find race an acceptable attribute for prioritization). Then, we ask subjects comparison queries between patient profiles that differ only on acceptable attributes, and estimate weights from their responses. We show how to use these weights in kidney exchange market clearing algorithms, to break ties among multiple maximum-sized solutions. We then evaluate the impact of the weights in simulations. We find that the precise numerical values of the weights we computed matter little, other than the ordering of profiles that they imply. However, compared to not prioritizing patients at all, there is a significant effect. Specifically, the difference is experienced by donor-patient pairs that have an “underdemanded” [6], [42] combination of blood types; for them, their chances rise or drop significantly depending on their tiebreaking weights.
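As a rough illustration of how elicited profile weights can act purely as tiebreakers in clearing (our sketch, not the authors' implementation, and with hypothetical names): among solutions of maximum cardinality, pick the one whose matched patients have the largest total weight. In practice this lexicographic objective would be folded into the integer program used to clear the exchange.

```python
def pick_matching(candidate_matchings, patient_weight):
    """Among candidate matchings (each a set of matched patient ids), first
    maximize the number of transplants, then break ties by the total elicited
    profile weight of the matched patients."""
    return max(
        candidate_matchings,
        key=lambda m: (len(m), sum(patient_weight[p] for p in m)),
    )
```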
Section snippets
----------------
Kidney exchange model
---------------------
We briefly review the standard mathematical model for kidney exchange and techniques from the AI community used to clear real kidney exchanges, and then give illustrative examples where tiebreaking would or would not play a role.
Determining and using prioritization weights
--------------------------------------------
In this section, we describe our procedure for computing prioritization weights and integrating them into the algorithm for clearing kidney exchanges. Because this procedure was intended as a proof-of-concept, we gathered preference data from participants recruited through the online platform Amazon Mechanical Turk (“MTurk”). However, if this procedure were used in a real-life kidney exchange, medical experts and other
Experiments
-----------
Having described how we obtained weights and how we integrated these weights into the IP-based algorithm, we now describe our experiments testing the effects of our prioritizing algorithm in simulations.
Discussion
----------
In this section, we discuss the potential for applying these results to real-world kidney exchanges, some of the ethical context of our work, and directions for future research.
Declaration of Competing Interest
---------------------------------
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements
----------------
This work is partially supported by the project “How to Build Ethics into Robust Artificial Intelligence” funded by the Future of Life Institute (grants 2016-158697 and 2017-174867), by the Templeton World Charity Foundation grant TWCF0321, by NSF IIS-1527434, and by Duke Bass Connections. Dickerson was supported in part by NSF CAREER Award IIS-1846237 and a Google Faculty Research Award. Conitzer was supported in part by NSF IIS-1814056. We thank Lirong Xia, Zhibing Zhao, and Kyle Burris, and
References (46)
---------------
* P. Biró et al. Inapproximability of the kidney exchange problem. Inf. Process. Lett. (2007)
* R.A. Bradley. 14 paired comparisons: some basic procedures and examples.
* R. Montgomery et al. Domino paired kidney donation: a strategy to make best use of live non-directed donation. Lancet (2006)
* A. Roth et al. Pairwise kidney exchange. J. Econ. Theory (2005)
* P. Toulis et al. Design and analysis of multi-hospital kidney exchange mechanisms using random graphs. Games Econ. Behav. (2015)
* D. Abraham et al. Clearing algorithms for barter exchange markets: enabling nationwide kidney exchanges.
* C. Allen et al. Artificial morality: top-down, bottom-up, and hybrid approaches. Ethics Inf. Technol. (2005)
* M. Anderson et al. Machine Ethics (2011)
* R. Anderson et al. Finding long chains in kidney exchange using the traveling salesman problem. Proc. Natl. Acad. Sci. (2015)
* I. Ashlagi, D. Gamarnik, M. Rees, A.E. Roth. The need for (long) chains in kidney exchange. 2017, initial version...
* I. Ashlagi et al. Free riding and participation in large scale, multi-hospital kidney exchange. Theor. Econ. (2014)
* C. Barnhart et al. Branch-and-price: column generation for solving huge integer programs. Oper. Res. (1998)
* K. Baum et al. From machine ethics to machine explainability and back.
* P. Biró et al. Kidney exchange practices in Europe.
* P. Biró et al. Maximum weight cycle packing in directed graphs, with application to kidney exchange programs. Discrete Math. Algorithms Appl. (2009)
* A. Blum et al. Opting into optimal matchings.
* J.-F. Bonnefon et al. The social dilemma of autonomous vehicles. Science (2016)
* S. Bringsjord et al. Contextual deontic cognitive event calculi for ethically correct robots.
* V. Conitzer et al. Moral decision making frameworks for artificial intelligence.
* J.P. Dickerson et al. Position-indexed formulations for kidney exchange.
* J.P. Dickerson et al. Failure-aware kidney exchange. Manag. Sci. (2019)
* J.P. Dickerson et al. FutureMatch: combining human value judgments and machine learning to match in dynamic environments.
* E. Elkind et al. Rationalizations of voting rules.
☆ This paper is an invited revision of a paper which first appeared at the 2018 AAAI Conference on Artificial Intelligence (AAAI-18).
1 Work performed while at Duke University.
1212b7b1-0dfa-4dd1-a766-e764f7148e93 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Applications for EU Tech Policy Fellowship 2024 now open
* **What:** The [EU Tech Policy Fellowship](https://www.techpolicyfellowship.eu/) is a 7-month programme that empowers ambitious graduates to launch European policy careers focused on emerging technology. At the core of our programme is a focus on ensuring the safe and responsible deployment of artificial intelligence and related technologies.
* **Two distinct tracks:**
+ **Training track:** Explore the intricacies of tech policy during our 8-week Emerging Tech Governance Fundamentals Programme. Engage in a 10-day policymaking summit in Brussels. Receive personalised support & coaching to confidently initiate your career in tech policy.
+ **Placement track:** Experience the full scope of our training track by participating in our 8-week Emerging Tech Governance Fundamentals Programme and attending our 10-day Brussels Summit. Secure a 4-6 month placement at a respected think tank, complemented by a stipend up to EUR 2.250 per month to support your efforts.
* **Dates:** The 2024 EU Tech Policy Fellowship Winter Cohort will run from January - July 2024.
* **Deadline:** Applications for the Winter cohort will close on October 1st.
* **More information:** Visit [the website](https://www.techpolicyfellowship.eu/) or join [the information event](https://forms.gle/WQSJ9QqYQ5LVrzii9) on September 25, 4pm CET
* **How to apply:** [https://forms.gle/Fom7v1GZ3p2rHsN67](https://docs.google.com/forms/d/e/1FAIpQLSdvhSFRKtn5pTZbhdHxtTUjUHJREFESfhVFPFhxypyPOA1aZA/viewform)
|
ce0ed4bd-e65f-4f34-a238-fb56f6348834 | trentmkelly/LessWrong-43k | LessWrong | NFTs, Coin Collecting, and Expensive Paintings
I‘ve tried to make this post fairly nuanced and with a lot of steelmanning, in contrast to much of the material on this subject. My opinions on crypto as a whole are more complex and deserve a separate, more in depth post. None of this is financial advice, I am not a financial advisor, etc.
When it comes to paintings, people care a lot about having the “original” painting. There can only be one original, “real” Mona Lisa, despite all the hi-res scans you can get. Even if you could make a copy so good that nobody could visually tell it apart from the original painting without painstaking chemical analysis, the original is still going to be worth many orders of magnitude more than the copy. While some people argue that the impossibility of making an exact copy of a physical object is the reason why the original is distinct from the copies (and therefore the crux of why this doesn’t generalize into the digital world), and that therefore the small difference in chemical composition or whatever makes up for those orders of magnitude, I think that this view is pretty silly. A visually indistinguishable painting has the same aesthetic value, and conveys the same historical value to the viewer, and even if you argue that there is some premium to having the original, it seems pretty absurd to suggest that the minor visual differences and chemical composition are worth, within a rounding error, $870 million of the $870 million the Mona Lisa is valued at, and that the actual content of art itself modulo some small errors is worth basically nothing in comparison.
I think it’s pretty clear that people don’t actually care about the chemical composition of their paintings or whatever. Rather, I hypothesize that the reason people care about the original is because this enforces artificial scarcity. You can always make more replicas, but there is only one “original”. By ensuring that more supply cannot ever be added, you ensure an effective monopoly on the painting until you deci |
34416aa5-e75b-4fa0-b5b8-4ab303a3be91 | trentmkelly/LessWrong-43k | LessWrong | Avoiding strawmen
George Bernard Shaw wrote that, "the single biggest problem in communication is the illusion that it has taken place". Much of strawmanning is unconscious. One person says that it is important to be positive, the other person interprets this as it being important to be positive in *all* circumstances, when they are merely making a general statement.
I would suggest that a technique to avoid accidentally strawmanning someone would be to begin by intentionally strawmanning them and then try to back off to something more moderate from there.
Take for example:
"Just be yourself"
A strawman would be, "Even if you are a serial killer, you should focus on being yourself, than changing how you behave".
Since this is a rather extreme strawman, backing off to something more moderate from here would be too easy. We might very well just back off to another strawman. Instead, we should back off to a more reasonable strawman first, then back off to the moderate version of their view.
The more moderate strawman, "You should never change how you act in order to better fit in"
When we back off to something more moderate, we then get, "Changing how you act in order to better fit in is generally not worth it"
You can then respond to the more moderate view. If you had responded to the original, you might have pointed out a single case when the principle didn't hold, such as making a change that didn't affect one's individuality (i.e showering regularly) and used it to attack the more general principle. When you have the more moderate principle, you can see that such a single example only negates the strict reading, not the more moderate reading. You can then either accept the moderate reading or add arguments about why you also disagree with it. If you had skipped this process, you might have made a specific critique and not realised that it didn't completely negate the other person's argument. |
fc422ef5-2007-4e39-ab51-465166df5842 | trentmkelly/LessWrong-43k | LessWrong | At what level of coronavirus cases in a population should the people in that population start self-quarantining?
The question and desiderata
I'm looking for a specific number here, even an equation. At what # of reported coronavirus cases (or estimated actual cases) within a population should the members of that population start self-quarantine measures (e.g. isolating themselves, avoiding contact with elderly, wearing masks)? Please state your reasoning, cite your sources, and/or show your math.
I know that being excessive in viral response is a good thing, and we should err on the side of doing too much, but I'm trying to be optimal in accounting for the behavioral costs of such measures.
Utah as a case study
Utah has 1 reported case. (source) It has a population of 3.161 million. The transmission rate of coronavirus in the US is around X (I don't actually know if there's a good source for this). Given these numbers for Utah, what is the "trigger point" - the ratio of (COVID-19 cases / population) at which we should self-quarantine? For example, we should begin self-quarantining when the number of CV cases reaches 1% of the population, or 31,610 cases.
Or more fine-grained, what are the specific series of trigger points at which specific measures (mask-wearing, self-isolation, eliminating public gatherings) should begin? |
2dd2d54a-3102-46de-a04a-03b5c0444eea | trentmkelly/LessWrong-43k | LessWrong | The Moon is Down; I have not heard the clock
I wanted to share a quick post about something that's made me significantly happier over the past year: knowing what the phase of the moon is on any given day. Importantly, I don't do this with any kind of computer tool. It's a proxy for "am I spending enough time outside", but because I don't let myself cheat and rely on actually seeing the moon at least every few days, I've succeeded in not Goodharting myself, at least so far.
One of the things that helped me do this successfully was figuring out what time of day I could expect to see the moon during the different phases. I know, I know, it's a trivial exercise in orbital mechanics, so maybe all of you do this instinctively, but it wasn't something anyone ever explained to me explicitly. It actually took two disparate works of fiction to make the connection for me.
The first literary clue comes from my favorite Shakespeare play, Macbeth. Act II, Scene I starts with Banquo and Fleance in the court of Macbeth's castle:
> Banquo: How goes the night, boy?
>
> Fleance: The moon is down; I have not heard the clock.
>
> Banquo: And she goes down at twelve.
>
> Fleance: I take't, 'tis later, sir
The second clue (here's where the penny really dropped) is from Cormac McCarthy's All the Pretty Horses, where the main character says "First quarter moon sets at midnight where I come from."
Of course, the true first-quarter moon sets at midnight regardless of where you are because at (solar) midnight, the place you're standing is facing as far from the sun as it can be, and the first quarter moon means that the sun-moon-earth angle is a 90-degree angle. (This is also true at the third-quarter moon, but at that phase, the moon comes up at midnight).
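A back-of-the-envelope way to put numbers on this (a rough sketch that ignores orbital inclination, eccentricity, your latitude, and the gap between clock time and solar time):

```python
def approx_moon_lag_hours(phase_fraction):
    """Roughly how many hours the moon trails the sun across the sky.
    phase_fraction: 0 = new, 0.25 = first quarter, 0.5 = full, 0.75 = third quarter."""
    return 24.0 * phase_fraction

# First quarter: lags the sun by ~6 hours, so it sets ~6 hours after sunset, around midnight.
# Full moon: lags by ~12 hours, so it rises near sunset and sets near sunrise.
print(approx_moon_lag_hours(0.25))  # 6.0
```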
To be more explicit, at the new moon, the moon is almost between the sun and the earth (if it's exactly between, you get a solar eclipse!) so the moon rises and sets near when the sun does, but it's tough to see. A waxing crescent moon comes up gradually later in the |
72f514bc-f1f0-4679-a0ba-53e325a5b540 | StampyAI/alignment-research-dataset/special_docs | Other | ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models
ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models
Andrei Barbu (MIT, CSAIL & CBMM), David Mayo (MIT, CSAIL & CBMM), Julian Alverio (MIT, CSAIL), William Luo (MIT, CSAIL), Christopher Wang (MIT, CSAIL), Dan Gutfreund (MIT-IBM Watson AI), Joshua Tenenbaum (MIT, BCS & CBMM), Boris Katz (MIT, CSAIL & CBMM)
Abstract
We collect a large real-world test set, ObjectNet, for object recognition with controls
where object backgrounds, rotations, and imaging viewpoints are random. Most
scientific experiments have controls, confounds which are removed from the data,
to ensure that subjects cannot perform a task by exploiting trivial correlations in
the data. Historically, large machine learning and computer vision datasets have
lacked such controls. This has resulted in models that must be fine-tuned for new
datasets and perform better on datasets than in real-world applications. When
tested on ObjectNet, object detectors show a 40-45% drop in performance, with
respect to their performance on other benchmarks, due to the controls for biases.
Controls make ObjectNet robust to fine-tuning showing only small performance
increases. We develop a highly automated platform that enables gathering datasets
with controls by crowdsourcing image capturing and annotation. ObjectNet is
the same size as the ImageNet test set (50,000 images), and by design does not
come paired with a training set in order to encourage generalization. The dataset
is both easier than ImageNet – objects are largely centered and unoccluded – and
harder, due to the controls. Although we focus on object recognition here, data
with controls can be gathered at scale using automated tools throughout machine
learning to generate datasets that exercise models in new ways thus providing
valuable feedback to researchers. This work opens up new avenues for research
in generalizable, robust, and more human-like computer vision and in creating
datasets where results are predictive of real-world performance.
1 Introduction
Datasets are of central importance to computer vision and more broadly machine learning. Particularly
with the advent of techniques that are less well understood from a theoretical point of view, raw
performance on datasets is now the major driver of new developments and the major feedback about
the state of the field. Yet, as a community, we collect datasets in a way that is unusual compared to
other scientific fields. We rely almost exclusively on dataset size to minimize confounds (artificial
correlations between the correct labels and features in the input), to attest unusual phenomena, and
encourage generalization. Unfortunately, scale is not enough because of rare events and biases –
Sun et al. [1] provide evidence that we should expect to see logarithmic performance increases as a
function of dataset size alone. The sources of data that datasets draw on today are highly biased, e.g.,
object class is correlated with backgrounds [ 2], and omit many phenomena, e.g., objects appear in
stereotypical rotations with little occlusion. The resulting datasets themselves are similarly biased [ 3].
Equal contribution. Website: https://objectnet.dev. Corresponding author: abarbu@csail.mit.edu
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
Figure 1: Performance on ObjectNet for high-performing detectors trained on ImageNet in recent
years: AlexNet [ 4], VGG-19 [ 5], ResNet-152 [ 6], Inception-v4 [ 7], NASNET-A [ 8], and PNASNet-5
Large [ 9]. Solid lines show top-1 performance, dashed lines show top-5 performance. ImageNet
performance on all 1000 classes is shown in green. ImageNet performance on classes that overlap
with ObjectNet is shown in blue; the two overlap in 113 classes out of 313 ObjectNet classes, which
are only slightly more difficult than the average ImageNet class. Performance on ObjectNet for
those overlapping classes. We see a 40-45% drop in performance. Object detectors have improved
substantially. Performance on ObjectNet tracks performance on ImageNet but the gap between the
two remains large.
In other areas of science, such issues are controlled for with careful data creation and curation that
intentionally covers phenomena and controls for biases – important ideas that do not easily scale to
large datasets. For example, models for natural language inference, NLI, that perform well on large
datasets fail when systematically varying aspects of the input [ 10], but these are not collected at scale.
In computer vision, datasets like CLEVR [ 11] do the same through simulation, but simulated data is
much easier for modern detectors than real-world data. We show that with significant automation and
crowdsourcing, you can have scale and controls in real-world data and that this provides feedback
about the phenomena that must be understood to achieve human-level accuracy.
ObjectNet is a new large crowdsourced test set for object recognition that includes controls for object
rotations, viewpoints, and backgrounds. Objects are posed by workers in their own homes in natural
settings according to specific instructions detailing what object class they should use, how and where
they should pose the object, and where to image the scene from. Every image is annotated with these
properties, allowing us to test how well object detectors work across these conditions. Each of these
properties is randomly sampled leading to a much more varied dataset.
In effect, we are removing some of the brittle priors that object detectors can exploit to perform well
on existing datasets. Overall, current object detectors experience a large performance loss, 40-45%,
when such priors are removed; see fig. 1 for performance comparisons. Each of the controls removes
a prior and degrades the performance of detectors; see fig. 2 for sample images from the dataset.
Practically, this means that an important feedback for the community about the limitations of models
is missing, and that performance on datasets is limited as a predictor of the performance users can
expect on their own unrelated tasks.
[Figure 2 image grid: ImageNet vs. ObjectNet examples of chairs, chairs by rotation, chairs by background, chairs by viewpoint, teapots, and t-shirts]
Figure 2: ImageNet (left column) often shows objects on typical backgrounds, with few rotations, and
few viewpoints. Typical ObjectNet objects are imaged in many rotations, on different backgrounds,
from multiple viewpoints. The first three columns show chairs varying by the three properties that are
being controlled for: rotation, background, and viewpoint. One can see the large variety introduced
to the dataset because of these manipulations. ObjectNet images are lightly cropped for this figure
due to inconsistent aspect ratios. Most detectors fail on most of the images included in ObjectNet.
To encourage generalization, we make three other unusual choices when constructing ObjectNet.
First, ObjectNet is only a test set, and does not come paired with a training set. Separating training
and test set collection may be an important tool to avoid correlations between the two which are
easily accessible to large models but not detectable by humans. Since humans easily generalize
to new datasets, adopting this separation can encourage new machine learning techniques that do
the same. Second, while ObjectNet will be freely available, it comes with an important stipulation:
one cannot update the parameters of any model for any reason on the images present in ObjectNet.
While fine-tuning for transfer learning is common, it encourages overfitting to particular datasets
– we disallow fine-tuning but report such experiments in section 4.3 to demonstrate the robustness
of the dataset. Third, we mark every image by a one pixel red border that must be removed on the
fly before testing. As large-scale web datasets are gathered, there is a danger that data will leak
between the training and test sets of different datasets. This has already happened, as Caltech-UCSD
Birds-200-2011, a popular dataset, and ImageNet were discovered to have overlap putting into
question some results [ 12]. With test set images marked by a red border and available online, one can
perform reverse image search and determine if an image is included in any training set anywhere. We
encourage all computer vision datasets – not just ones for object detection – to adopt this standard.
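As an illustrative sketch (not the official tooling), stripping the border amounts to a one-pixel crop before inference; the file name below is only an example:

```python
# Illustrative sketch: crop away the 1-pixel red border before an image is fed to
# a model. The file name is an example, not part of the dataset.
from PIL import Image

def strip_red_border(path: str) -> Image.Image:
    img = Image.open(path)
    w, h = img.size
    return img.crop((1, 1, w - 1, h - 1))  # remove one pixel on every side
```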
While it includes controls, ObjectNet is not hard in arbitrary ways. It is in many ways intentionally
easy compared to ImageNet or other datasets. Objects are highly centralized in the image, they
are rarely occluded and even then lightly so, and many backgrounds are not particularly cluttered.
In other senses, ObjectNet is harder, a small percentage of viewpoints, rotations, and even object
instances, are also difficult for humans. This demonstrates a much wider range of difficulty and
provides an opportunity to also test the limits of human object recognition – if object detectors are
to augment or replace humans, such knowledge is critical. Our overall goal is to test the bias of
detectors and their ability to generalize to specific manipulations, not to just create images that are
difficult for arbitrary reasons. Future versions of the dataset will ratchet up this difficulty in terms of
clutter, occlusion, lighting, etc. with additional controls for these properties.
Our contributions are:
1. a new methodology to evaluate computer vision approaches on datasets that have controls,
2. an automated platform to gather data at scale for computer vision,
3. a new object recognition test set, ObjectNet, consisting of 50,000 images (the same size as
the ImageNet test set) and 313 object classes, and
4. an analysis of biases at scale and the role of fine-tuning.
2 Related work
Many large datasets for object recognition exist such as ImageNet [ 13], MS COCO [ 14], and
OpenImages [ 15]. While the training sets for these datasets are huge, the test sets are comparable to
the size of the dataset presented here, with ImageNet having 50,000 test images, MS COCO having
81,434, and OpenImages having 125,436, compared to ObjectNet’s 50,000 test images. Such datasets
are collected from repositories of existing images, particularly Flickr, which consist of photographs –
images that users want to share online. This intent biases against many object instances, backgrounds,
rotations, occlusion, lighting conditions, etc. Biases lead simultaneously to models that do not transfer
well between datasets [ 3] – detectors pick up on biases inside a dataset and fail when those biases
change – and that achieve good performance with little fine-tuning on new datasets [ 16] – detectors
can quickly acquire the new biases even with only a few training images per class. In computer
vision applications, biases may not match those of any existing dataset, they may change over time,
adversaries may exploit the biases of a system, etc.
The dataset-dependent nature of existing object detectors is well-understood with several other
approaches – aside from scale – having been attempted to alleviate this problem. Some focus on
the datasets themselves, e.g., Khosla et al. [17] subdivide datasets into partitions that are sufficiently
different, something possible only if datasets have enough variety in them. Others focus on the
models, e.g., Zhu et al. [2]train models that separate foregrounds and backgrounds explicitly to
become more resilient to biases. Demonstrating the value of models that have robustness built into
them by design requires datasets that control for biases – controls are not just a sanity check, they
encourage better research.
Some datasets, such as MPII cooking [ 18], KITTI [ 19], TACoS [ 20], CHARADES [ 21], Something-
Something [22], AVA [23], and Partially Occluded Hands [24] collect novel data. Explicitly collecting
data is difficult, as evidenced by the large gap in scale between these datasets and those collected
from existing online sources. At the same time, explicit instructions and controls can lead to more
varied and interesting datasets. These datasets on the whole do not attempt to impose controls by
systematically varying some aspect of the data – users are prompted to perform actions or hold
objects but are not told how to do this or what properties those actions should have. Workers choose
convenient settings and manners in which to perform actions leading to biases in datasets.
3 Dataset construction
ObjectNet is collected by workers on Mechanical Turk who image objects in their homes; see fig. 3.
This gives us control over the properties of those objects while also ensuring that the images are
natural. We asked workers to image objects in 4 backgrounds (kitchens, living rooms, bedrooms,
washrooms), from 3 viewpoints (top, angled at 45 degrees, and side), and in 50 object rotations.
Rotations were uniformly distributed on a sphere, after which nearby points were snapped to the
equator and the poles. We found that workers are able to pose objects to within around 20 degrees of
Figure 3: Workers select one object that they have available from a small number of choices. They
are shown a rectangular prism, in blue, with two labeled orthogonal axes in red and yellow. These
labels are object-class specific, so that workers can register the object correctly against the rectangular
prism. We do not show workers images of desired objects to not bias them toward certain instances.
Workers see an animation of how the object should be manipulated, perform this manipulation, and
then align the object against the final rectangular prism rendered on their camera. Not shown above is
the post-capture review UI to ensure that images contain the right objects and are not blurry.
rotation depending on the axis, although the uniformity of the resulting rotations varies by class. This
could be more accurate, but we intentionally did not show instances of object classes to workers in
order to avoid biasing them toward particular instances. In roughly one third of the trials we showed
a rotated 3D car (cars do not appear in our dataset) as an additional cue for the desired rotation.
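One way to implement the sampling scheme described above is sketched below; the 15-degree snapping threshold is an assumption for illustration and is not specified in the text.

```python
# Illustrative sketch of rotation sampling: draw a direction uniformly on the
# sphere, then snap points near the poles or the equator onto them. The 15-degree
# threshold is an assumed value, used only for illustration.
import numpy as np

def sample_rotation(snap_deg: float = 15.0) -> np.ndarray:
    v = np.random.normal(size=3)
    v /= np.linalg.norm(v)                      # uniform direction on the sphere
    lat = np.degrees(np.arcsin(v[2]))           # latitude of the sampled direction
    if abs(lat) > 90.0 - snap_deg:              # near a pole: snap to the pole
        v = np.array([0.0, 0.0, np.sign(v[2])])
    elif abs(lat) < snap_deg:                   # near the equator: snap onto it
        v[2] = 0.0
        v /= np.linalg.norm(v)
    return v

rotations = [sample_rotation() for _ in range(50)]  # 50 rotations per object class
```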
Workers are transitioned to their phone using a QR code, an object is described to them (but no
example is shown), and they verify if an object that matches the description is available. A rectangular
prism is then presented with labeled faces that are semantically relevant to that object, e.g., the front
and top of a chair. Each object class was annotated with two semantically meaningful orthogonal
axes, a single axis if the object class was rotationally symmetric, or no axis if it was spherical. We
found that describing such parts in a manner that leads to little disagreement is difficult and requires
careful validation. While this provides a weak bias toward particular object instances – one might
imagine a chair with no distinctive front – it is necessary for explaining the desired object pose.
The rectangular prism is also animated to show the desired object pose. The animation starts with
the rectangular prism representing the object in a default and common pose, e.g., the front of a chair
facing a user and the top pointed upward, and then transitions it into the desired pose. Another
animation shows the viewpoint from which the object should be imaged. We found that animating
such instructions was critical in allowing workers to determine the desired object poses.
Workers are asked to move the object into a specific room, pose it, and image it from a certain
angle. The rectangular prism was overlayed on their phone camera in the final desired position
with the arrows marking the class-specific semantically-relevant faces. This also proved critical as
remembering the desired rotation for an object is too unreliable.
This process annotates every image with three properties (rotation, viewpoint, and background); it
controls for biases by sampling these properties randomly, thus allowing us to include objects in
rotations and scenes that are unusual. Each image is validated to ensure that it contains the correct
objects and that any identifying information is removed.
To select object classes for the dataset, we listed 420 common household objects. Of these, 55 classes
were eliminated because they are not easily movable, e.g., beds (16 classes), pose a safety concern,
e.g., fire alarms (8), were too confusing to subjects, e.g., we found little agreement on what armbands
are (10), posed privacy concerns, e.g., people (5), or were alive and cannot be manipulated safely,
e.g., plants (2); numbers do not add because classes were excluded for multiple reasons. In addition,
52 object classes were too rare, e.g., golf clubs. Data was collected for 313 object classes, with 160
images per class on average with a standard deviation of 44.
Workers did not always have instances of every class. For each image to be collected, they were
given ten choices out of which to select one that is available or request ten other choices. This
naturally would lead to an extreme class imbalance as the easiest and most common classes would be
vastly overrepresented. To make the class distribution more uniform, we presented objects inversely
proportional to how frequent they are; the resulting distribution is fairly uniform, see fig. 4.
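A minimal sketch of this inverse-frequency presentation is shown below; the class counts and the +1 smoothing term are illustrative, not taken from the collection system.

```python
# Illustrative sketch: candidate classes are shown to a worker with probability
# inversely proportional to how often each class has been collected so far.
# Counts are made up; the +1 term is an assumed smoothing constant.
import numpy as np

collected = {"chair": 400, "t-shirt": 220, "teapot": 35, "plunger": 12, "whisk": 5}

classes = list(collected)
weights = 1.0 / (np.array([collected[c] for c in classes], dtype=float) + 1.0)
probs = weights / weights.sum()

candidates = np.random.choice(classes, size=3, replace=False, p=probs)
```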
Objects were described to workers using one to four words, depending on the class. Two exceptions
were made, for forks and spoons, as user agreement on how to label two orthogonal faces of these
object classes is very low; rough sketches were shown instead. When aligning their object and phone,
workers were instructed to ignore the aspect ratio of the rectangular prism. We found that having a
single aspect ratio, a cube for example, for all object classes was very confusing to workers. Each
object class is annotated with a rough aspect ratio for its rectangular prism. This again represents a
small bias toward particular kinds of objects, although this is alleviated by the fact that most objects
did not fit a rectangular prism anyway. Deformable objects were still rotated and users followed those
rotations aligning the semantically meaningful axes with object parts, but other details of the object
pose were not controlled for.
No instructions were given about how to stabilize objects in the desired poses. When necessary, some
workers held the objects while others propped them up. For each image, workers were asked two
questions on their phone collection UI: to verify that the image depicts an object of the intended class
and that it is not too blurry. In many indoor lighting conditions, particularly with low-end cameras, it
is easy to take unrecognizable photos without careful stabilization. We estimate the task took around
1.5 minutes per object on average and workers were paid 10 dollars per hour on average.
In total, 95,824 images were collected from 5,982 workers out of which 50,000 images were retained
after validation and included in the dataset. Each image was manually verified. About 48% of the
data collected was removed. In 10% of images, objects were placed in incorrect backgrounds, showed
faces (0.2% of images), or contained other private information (0.03% of images). We found that
despite instructions, many users took photos of screens if they did not have an object (23%) – these
were removed because on the whole they are very easy for models to recognize. Centralized locations
that employ workers on Mechanical Turk were eliminated from the dataset to ensure that objects are
not imaged on the same backgrounds across many workers (20%). Note that some problem categories
overlapped. So as not to bias the dataset toward images which are easy for humans, validators were
instructed to be permissive and only rule out an image of an object if it clearly violated the constraints.
Since workers who carry out the task correctly do so nearly perfectly, while workers who do not,
carry out almost every trial incorrectly, we have additional confidence that images which are hard
to recognize depict the correct object classes.
This dataset construction method is not without its limitations. All objects are indoor objects which
are easy to manipulate, they cannot be too large or small, fixed to the wall, or dangerous. We cannot
ask workers to manipulate objects in ways that would damage or otherwise permanently alter them.
Some object classes which are rare can be difficult to gather and are more likely to have incorrect
images before validation. Not all undesirable correlations are removed by this process; for example,
some objects are more likely to be held than others while certain object classes are predisposed to
have particular colors. We are not guaranteed to cover the space of shapes or textures for each object
class. Finally, not all object classes are as easy to rotate, so the resulting poses are still correlated
with the object class.
4 Results
We investigate object detector performance on ObjectNet using an image labeling task; see section 4.1.
Then we explain this performance by breaking down how controls affect results; section 4.2. Finally
we demonstrate that the difficulty of ObjectNet lies in the controls, and not in the particular properties
of the images, by fine-tuning on the dataset; section 4.3.
4.1 Transfer from ImageNet
We tested six object detectors published over the past several years on ObjectNet, choosing top
performers for each year: AlexNet (2012) [ 4], VGG-19 (2014) [ 5], ResNet-152 (2016) [ 6], Inception-
v4 (2017) [ 7], NASNET-A (2018) [ 8], and PNASNet-5L (2018) [ 9]. All detectors were pre-trained
Figure 4: The distribution of the 313 object classes, backgrounds, rotations, and viewpoints in the
dataset. The class distribution is fairly uniform due to biasing workers toward low-frequency objects.
Object backgrounds, viewpoints, and rotations were sampled uniformly but rejected data can skew
the distribution. Each image is also labeled with a 3D rectangular prism and semantically meaningful
faces for each object. Spherical objects pop out of the rotation histogram as they have a single
rotation. Note that object rotations are less reliable than this indicates: not all objects are equally
easy to rotate, so the actual rotations of objects pictured in the dataset are less uniform. This represents
the object rotations that workers were asked to collect. While this is also true for background and
viewpoint, we expect that the true rotation graph is more skewed than the other two.
Air freshener, Alarm clock ,Backpack , Baking sheet, Banana ,Bandaid , Baseball bat, Baseball glove, Basket ,
Bathrobe, Bath towel , Battery, Bed sheet, Beer bottle , Beer can, Belt, Bench ,Bicycle , Bike pump, Bills
(money), Binder (closed) , Biscuits, Blanket, Blender, Blouse, Board game, Book (closed), Bookend, Boots,
Bottle cap , Bottle opener, Bottle stopper, Box, Bracelet, Bread knife, Bread loaf , Briefcase, Brooch, Broom ,
Bucket ,Butcher’s knife , Butter, Button, CD/DVD case, Calendar, Can opener ,Candle , Canned food, Cellphone ,
Cellphone case, Cellphone charger, Cereal, Chair , Cheese, Chess piece, Chocolate, Chopstick, Clothes hamper ,
Clothes hanger, Coaster, Coffee beans, Coffee grinder, Coffee machine, Coffee table, Coin (money), Comb,
Combination lock ,Computer mouse , Contact lens case, Cooking oil bottle, Cork, Cutting board, DVD player,
Deodorant, Desk lamp , Detergent, Dishrag or hand towel , Dish soap, Document folder (closed), Dog bed,
Doormat , Drawer (open), Dress , Dress pants, Dress shirt, Dress shoe (men) , Dress shoe (women), Drill ,Drinking
Cup, Drinking straw, Drying rack for clothes, Drying rack for plates , Dust pan, Earbuds, Earring, Egg, Egg carton,
Envelope , Eraser (white board), Extension cable, Eyeglasses, Fan, Figurine or statue, First aid kit, Flashlight,
Floss container, Flour container, Fork, French press ,Frying pan , Glue container, Hair brush, Hair clip, Hair
dryer , Hair tie, Hammer , Hand mirror, Handbag, Hat, Headphones (over ear), Helmet , Honey container, Ice, Ice
cube tray, Iron, Ironing board, Jam, Jar, Jeans , Kettle, Keyboard , Key chain, Ladle ,Lampshade ,Laptop (open) ,
Laptop charger, Leaf, Leggings, Lemon ,Letter opener , Lettuce, Light bulb, Lighter ,Lipstick , Loofah, Magazine,
Makeup, Makeup brush, Marker, Match ,Measuring cup ,Microwave , Milk, Mixing/Salad Bowl ,Monitor , Mouse
pad, Mouthwash, Mug, Multitool, Nail, Nail clippers, Nail file, Nail polish, Napkin, Necklace , Newspaper, Night
light, Nightstand, Notebook, Notepad, Nut for a screw, Orange , Oven mitts, Padlock ,Paintbrush , Paint can,
Paper, Paper bag, Paper plates, Paper towel , Paperclip, Peeler, Pen, Pencil, Pepper shaker, Pet food container,
Landline phone, Photograph, Pill bottle , Pill organizer, Pillow ,Pitcher , Placemat, Plastic bag , Plastic cup, Plastic
wrap, Plate , Playing cards, Pliers, Plunger ,Pop can ,Portable heater , Poster, Power bar, Power cable, Printer ,
Raincoat, Rake, Razor, Receipt, Remote control , Removable blade, Ribbon, Ring, Rock, Rolling pin, Ruler ,
Running shoe ,Safety pin ,Salt shaker ,Sandal , Scarf, Scissors, Screw , Scrub brush, Shampoo bottle, Shoelace,
Shorts, Shovel , Skateboard, Skirt,Sleeping bag , Slipper, Soap bar, Soap dispenser ,Sock,Soup Bowl , Sewing kit,
Spatula ,Speaker , Sponge, Spoon, Spray bottle, Squeegee, Squeeze bottle, Standing lamp, Stapler, Step stool,
Still Camera , Sink Stopper, Strainer ,Stuffed animal , Sugar container, Suit jacket , Suitcase, Sunglasses ,Sweater ,
Swimming trunks ,T-shirt ,TV, Table knife, Tablecloth, Tablet, Tanktop, Tape, Tape measure, Tarp, Teabag,
Teapot ,Tennis racket , Thermometer, Thermos, Throw pillow, Tie, Tissue, Toaster ,Toilet paper roll , Tomato,
Tongs, Toothbrush, Toothpaste, Tote bag, Toy, Trash bag, Trash bin , Travel case, Tray, Trophy, Tweezers,
Umbrella , USB cable, USB flash drive, Vacuum cleaner ,Vase, Video camera, Walker, Walking cane, Wallet ,
Watch ,Water bottle , Water filter, Webcam, Weight (exercise) ,Weight scale ,Wheel , Whisk, Whistle ,Wine bottle ,
Wine glass, Winter glove ,Wok, Wrench, Ziploc bag
Figure 5: The 313 object classes in ObjectNet. We chose object classes that were fairly common,
not too similar to one another, cover a wide range of objects available in homes, and can be safely
manipulated by workers. The 113 classes which overlap with ImageNet are marked in italics.
Figure 6: Top-1 performance of ResNet-152 pretrained on ImageNet on the subset of ObjectNet
– 113 classes which overlap with ImageNet – as a function of controls used. No fine-tuning was
performed; see section 4.3. Classes such as plunger, safety pin and drill have 60-80% accuracy, while
French press, pitcher, and plate have accuracies under 5%. Background, rotation, and viewpoint are
reranked for each class and then aggregated. All controls have a significant effect on performance and
explain the poor performance on the dataset as the disparity between the best and worst performing
settings of each of these is 10-20%. The rotation graph is affected by the fact that per-object-class
rotations are not uniform. Some per-class rotations are not available, due to the data cleanup phase,
meaning that later bins contain few images per class.
on ImageNet and tested on the 113 object classes which overlap between ObjectNet and ImageNet.
Performance drops by 40-45% across detectors regardless of top-1 or top-5 metrics; see fig. 1. This
performance gap is relative to the performance of detectors on the overlapped classes in ImageNet –
our chosen classes were slightly more difficult than the average ImageNet class. Increased performance
on ImageNet resulted in increased performance on ObjectNet but the gap between the two does not
show signs of closing.
4.2 The impact of controls on performance
One might wonder about the cause of this lowered performance, even on classes shared with ImageNet.
In fig. 6, we break down performance by controls. There is a large gap in performance as a function
of background, rotation, and viewpoint. Distributions over these properties were first computed by
object class, reranked from highest to lowest performing, and averaged across object classes. If
these were irrelevant to detectors and detectors were robust to them, we would see a fairly uniform
distribution. Instead there is a large performance gap depending on the background (15%), rotation
(20%) and viewpoint (15%). Note that this is despite the fact that we only gave general instructions
about backgrounds; we did not ask users where in a room to pose an object and how cluttered the
background should be. These together account for much of the performance difference: if one
recreates dataset bias by choosing only the better-performing conditions for these controls, object
detector performance is mostly restored to that which is seen in ImageNet and other datasets.
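The reranking procedure can be summarized by the following sketch, where `records` is a hypothetical list of (object class, background, correct) triples; the same analysis applies to rotation and viewpoint.

```python
# Illustrative sketch of the rerank-and-average analysis: compute per-class accuracy
# for each setting of a control (here, background), sort settings from best to worst
# within each class, and average the sorted curves across classes.
from collections import defaultdict
import numpy as np

def reranked_average(records):
    per_class = defaultdict(lambda: defaultdict(list))
    for obj_class, background, correct in records:
        per_class[obj_class][background].append(float(correct))
    curves = []
    for by_background in per_class.values():
        accs = sorted((np.mean(v) for v in by_background.values()), reverse=True)
        curves.append(accs)
    n = min(len(c) for c in curves)             # truncate to a common number of settings
    return np.mean([c[:n] for c in curves], axis=0)
```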
4.3 Fine-tuning
To emphasize that the difficulty of ObjectNet lies in the controls, and not in the particulars of the
data, we – as a one-time exception to the clause which forbids updating parameters on the dataset –
fine-tune on the dataset. Kornblith et al. [25] carry out a comprehensive survey on transfer learning
from ImageNet to 11 major datasets. On those 11 datasets, training on only 8 images per class
increased top-1 accuracy by approximately 37% with variance 11% – only two datasets had less
than 30% performance increase because baseline performance was already over 60% with transfer
learning on a single image. We used a ResNet-152, trained on ImageNet, and retrained its last layer
in two conditions. The first uses a subset of the ObjectNet classes which overlap with ImageNet.
Top-1 performance without fine-tuning is 29%, while with fine-tuning on 8 images, it is 39%, and
with 16 images, it is 45%. This is far less of an increase than on other datasets, despite using only classes
which overlap with ImageNet, an easier condition than that investigated by Kornblith et al. [25]. Even
using half of the dataset, 64 images per class, one only reaches 50% top-1 accuracy.
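The fine-tuning condition corresponds to the standard last-layer retraining recipe, sketched below; the hyperparameters are illustrative and not necessarily those used for the reported numbers.

```python
# Illustrative sketch: freeze an ImageNet-pretrained ResNet-152 and retrain only a
# re-initialized final classifier on a few ObjectNet images per class.
# Hyperparameters are illustrative, not the ones used for the reported numbers.
import torch
import torchvision

model = torchvision.models.resnet152(pretrained=True)
for p in model.parameters():
    p.requires_grad = False

model.fc = torch.nn.Linear(model.fc.in_features, 113)   # 113 overlapping classes

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
loss_fn = torch.nn.CrossEntropyLoss()

# train_loader would yield (images, labels) with 8 or 16 images per class
# for images, labels in train_loader:
#     optimizer.zero_grad()
#     loss_fn(model(images), labels).backward()
#     optimizer.step()
```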
This is an optimistic result for detectors as it restricts them to classes which were already seen
in ImageNet. The more common fine-tuning scenario is to tune on object classes which do not
necessarily overlap the original dataset. Including all 313 ObjectNet classes yields top-1 accuracies
of 23% and 28% for 8 and 16 images respectively. Even using half of the dataset, 64 images per class,
top-1 accuracy only reaches 31%, far lower than would be expected given the efficacy of fine-tuning
on other datasets. Unlike in other datasets, merely seeing images from this dataset does not allow
detectors to easily understand the properties of its objects.
5 Discussion
ObjectNet is challenging because of the intersection of real-world images and controls. It pushes
object detectors beyond the conditions they can generalize to today. ObjectNet is available
at objectnet.dev along with additional per-image annotations. Our dataset collection platform is
extremely automated, which allows for replacing ObjectNet and recollecting it regularly to prevent
overfitting hyperparameters or model structure.
Our preliminary results indicate that human performance on ObjectNet when answering which objects
are present in a scene is around 95% across seven annotators. The images which are consistently
mislabeled by human annotators are difficult for two primary reasons: unusual instances of the object
class or viewpoints which are degenerate. We intend to more carefully investigate what makes objects
difficult to recognize for humans as we remove information from the foreground or the background
or reduce the viewing time. Predictors for how difficult an image or object is to recognize could see
many real-world applications. It is unclear how human-like the error patterns of object detectors are,
and if with sufficiently constrained inputs and processing times, human performance might approach
that of object detectors.
Aside from serving as a new test set, ObjectNet provides novel insights into the state of the art for
object recognition. Detectors seem to fail to capture the same generalizable features that humans
use. While steady progress has been made in object recognition, the gap between ObjectNet and
ImageNet has remained; since AlexNet no detector has shown a large performance jump. More
data improves results but the benefits eventually saturate. The expected performance of many object
recognition applications is much lower than traditional datasets indicate. Object detectors are defeated
in a non-adversarial setting with simple changes to the object imaging conditions or by choosing
instances of objects which appear normal to humans but are relatively unlikely – this makes safety
critical applications for object detection suspect. These facts hint toward the notion that larger
architectural changes to object detectors that directly address phenomena like those being controlled
for here (viewpoint, rotation, and background), would be beneficial and may provide the next large
performance increase. ObjectNet can serve as a means to demonstrate this robustness which would
not be seen in standard benchmarks.
We find ourselves in a time where datasets are critical and new models find patterns that humans
do not, while our tools and techniques for collecting and structuring datasets have not kept up with
advances in modeling. Although not all biases can be removed with the techniques presented here,
e.g., some materials never occur with certain object classes and some rotations are difficult to achieve,
many important classes of biases can. A combination of datasets with and without controls using
real-world and simulated data are required to enable the development of models that are robust and
human-like, and to predict the performance users can expect from such models on new data.
Acknowledgments
This work was supported, in part by, the Center for Brains, Minds and Machines, CBMM, NSF
STC award CCF-1231216, the MIT-IBM Brain-Inspired Multimedia Comprehension project, the
Toyota Research Institute, and the SystemsThatLearn@CSAIL initiative. We would like to thank the
members of CBMM, particularly the postdoc group, for many wonderful and productive discussions.
References
[1]Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable
effectiveness of data in deep learning era. In International Conference on Computer Vision ,
pages 843–852, 2017.
[2]Zhuotun Zhu, Lingxi Xie, and Alan Yuille. Object recognition with and without objects. In
International Joint Conference on Artificial Intelligence , 2017.
[3]A Torralba and AA Efros. Unbiased look at dataset bias. In Conference on Computer Vision
and Pattern Recognition , 2011.
[4]Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep
convolutional neural networks. In Advances in Neural Information Processing Systems , pages
1097–1105, 2012.
[5]Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale
image recognition. arXiv preprint arXiv:1409.1556 , 2014.
[6]Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image
recognition. In Conference on Computer Vision and Pattern Recognition , pages 770–778, 2016.
[7]Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. Inception-v4,
inception-resnet and the impact of residual connections on learning. In Thirty-First AAAI
Conference on Artificial Intelligence , 2017.
[8]Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable
architectures for scalable image recognition. In Conference on Computer Vision and Pattern
Recognition , pages 8697–8710, 2018.
[9]Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei,
Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. In
Proceedings of the European Conference on Computer Vision , pages 19–34, 2018.
[10] R Thomas McCoy, Ellie Pavlick, and Tal Linzen. Right for the wrong reasons: Diagnosing
syntactic heuristics in natural language inference. arXiv preprint arXiv:1902.01007 , 2019.
[11] Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick,
and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary
visual reasoning. In Conference on Computer Vision and Pattern Recognition , pages 2901–2910,
2017.
[12] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011
Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011.
[13] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng
Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual
recognition challenge. International Journal of Computer Vision , 115(3):211–252, 2015.
[14] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr
Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In European
Conference on Computer Vision , 2014.
[15] Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset,
Shahab Kamali, Stefan Popov, Matteo Malloci, Tom Duerig, et al. The Open Images dataset v4:
Unified image classification, object detection, and visual relationship detection at scale. arXiv
preprint arXiv:1811.00982 , 2018.
[16] Minyoung Huh, Pulkit Agrawal, and Alexei A Efros. What makes ImageNet good for transfer
learning? arXiv preprint arXiv:1608.08614 , 2016.
[17] Aditya Khosla, Tinghui Zhou, Tomasz Malisiewicz, Alexei A Efros, and Antonio Torralba.
Undoing the damage of dataset bias. In European Conference on Computer Vision , pages
158–171. Springer, 2012.
[18] Marcus Rohrbach, Sikandar Amin, Mykhaylo Andriluka, and Bernt Schiele. A database for fine
grained activity detection of cooking activities. In Conference on Computer Vision and Pattern
Recognition , pages 1194–1201. IEEE, 2012.
[19] Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The
KITTI dataset. The International Journal of Robotics Research , 32(11):1231–1237, 2013.
[20] Michaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and
Manfred Pinkal. Grounding action descriptions in videos. Transactions of the Association for
Computational Linguistics , 1:25–36, 2013.
[21] Gunnar A Sigurdsson, Gül Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta.
Hollywood in homes: Crowdsourcing data collection for activity understanding. In European
Conference on Computer Vision , pages 510–526. Springer, 2016.
[22] Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna Materzynska, Susanne
Westphal, Heuna Kim, Valentin Haenel, Ingo Fruend, Peter Yianilos, Moritz Mueller-Freitag,
et al. The "Something Something" video database for learning and evaluating visual common
sense. In International Conference on Computer Vision , 2017.
[23] Chunhui Gu, Chen Sun, David A Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheen-
dra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, et al. Ava: A video
dataset of spatio-temporally localized atomic visual actions. In Conference on Computer Vision
and Pattern Recognition , pages 6047–6056, 2018.
[24] Battushig Myanganbayar, Cristina Mata, Gil Dekel, Boris Katz, Guy Ben-Yosef, and Andrei
Barbu. Partially occluded hands: A challenging new dataset for single-image hand pose
estimation. In Asian Conference on Computer Vision , 2018.
[25] Simon Kornblith, Jonathon Shlens, and Quoc V Le. Do better ImageNet models transfer better?
In Conference on Computer Vision and Pattern Recognition, 2018. |
344edfbd-1e41-4bfe-9ba1-4f149ca90d1b | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Announcing the EA Project Ideas Database
In January, David Reinstein [suggested](https://forum.effectivealtruism.org/posts/eQgKyCQW4u5CCWHXt/idea-curated-database-of-quick-win-tangible-attributable) creating a database of quick-win EA project ideas. In March, he created an [Airtable](https://airtable.com/shrNps2rJwQxR0PVS) to serve this purpose. With David's blessing, I've now taken over managing the database and I'm soliciting suggestions.
[Here is a read-only link to the table.](https://airtable.com/shrNps2rJwQxR0PVS) There are several useful views under the Projects tab. Note that this is still a very early draft/Minimum Viable Product, so it may lack polish. Skills, orgs, and categories are currently not fully fleshed out.
[Here is the submission form for new project ideas](https://airtable.com/shrX5WTu2DTo1ZU7V). Because of the sources I've used, contents right now are mostly focused on AI Safety, so you can help by adding suggestions from other EA cause areas! Alternatively, comment here to give me an existing list to integrate or reference.
[Here is the volunteer form](https://airtable.com/shr34wkFRuxkNsxft). If you'd like to volunteer for any of these projects, this is a short form you can use to indicate interest.
### What niche does this fill?
As I understand it, there presently exist plenty of [EA job boards and volunteer opportunities](https://ea-internships.pory.app/). However, there's no central list of *projects* that anyone could do, especially "quick wins" that don't require an entire organization backing them. It seemed worthwhile to create and maintain such a list. This is also a chance for funders to register interest in paying for one-off projects they deem are worth doing.
### **What has been done so far?**
* Added the AI Safety ideas posted by [Ryan](https://www.lesswrong.com/posts/v5z6rDuFPKM5dLpz8/probably-good-projects-for-the-ai-safety-ecosystem), [Akash](https://www.lesswrong.com/posts/DqF9c8J9LXeFFo42a/ai-safety-field-building-projects-i-d-like-to-see), and [Severin](https://www.lesswrong.com/posts/QRST9ctX5Cu2dM2Sb/agi-safety-field-building-projects-i-d-like-to-see).
* Divided into Quick Wins and Larger Projects by likely effort required.
* Added some organizations and skills to the list.
* Added submission forms for project ideas and volunteers.
### **What is planned for future development?**
* Merge backend data with [Alignment Ecosystem Development](https://coda.io/@alignmentdev/alignmentecosystemdevelopment) (eventually building separate views for AI Safety and other EA project categories).
* Add a funding submission form.
* Promote the database more broadly.
* Compare/contrast the [EA Opportunity Board](https://ea-internships.pory.app/).
* Create a public-facing interface similar to the one for Alignment Ecosystem Development for non-AI-related EA projects.
### **How can you help?**
* Suggest a new [project](https://airtable.com/shrX5WTu2DTo1ZU7V) or register yourself as [working on one](https://airtable.com/shr34wkFRuxkNsxft).
* Suggest a list of skills/aptitudes to include in the table.
* Suggest project categories that aren't yet listed.
* If you are interested in helping edit the table, or want to sponsor, manage, or fund a specific project, comment here to let me know. |
a9289fb8-3a3c-4745-ad5e-15ef14cef2f4 | trentmkelly/LessWrong-43k | LessWrong | The flawed Turing test: language, understanding, and partial p-zombies
There is a problem with the Turing test, practically and philosophically, and I would be willing to bet that the first entity to pass the test will not be conscious, or intelligent, or have whatever spark or quality the test is supposed to measure. And I hold this position while fully embracing materialism, and rejecting p-zombies or epiphenomenalism.
The problem is Campbell's law (or Goodhart's law):
> The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."
This applies to more than social indicators. To illustrate, imagine that you were a school inspector, tasked with assessing the all-round education of a group of 14-year old students. You engage them on the French revolution and they respond with pertinent contrasts between the Montagnards and Girondins. Your quizzes about the properties of prime numbers are answered with impressive speed, and, when asked, they can all play quite passable pieces from "Die Zauberflöte".
You feel tempted to give them the seal of approval... but they you learn that the principal had been expecting your questions (you don't vary them much), and that, in fact, the whole school has spent the last three years doing nothing but studying 18th century France, number theory and Mozart operas - day after day after day. Now you're less impressed. You can still conclude that the students have some technical ability, but you can't assess their all-round level of education.
The Turing test functions in the same way. Imagine no-one had heard of the test, and someone created a putative AI, designing it to, say, track rats efficiently across the city. You sit this anti-rat-AI down and give it a Turing test - and, to your astonishment, it passes. You could now conclude that it was (very likely) a genuinely conscious or intelligent entity.
But this is not the case: nearly |
4e17f0eb-1661-4af5-b0d9-87d992e4c4df | StampyAI/alignment-research-dataset/arbital | Arbital | Invisible background fallacies
Suppose that you went outdoors and jumped up into the air, and then, as you were jumping, the law of gravity suddenly switched off--for whatever weird reason, masses everywhere in the Universe stopped attracting one another. Would your feet hit the ground again?
As an initial thought about this problem, you might imagine that gravity no longer pulls you down toward the Earth, and that therefore your jump would keep you continuing up into the air. If you were thinking more broadly, you might realize that the concept of 'air pressure' means that the air on Earth's surface is kept pressurized by gravity, and in the absence of gravity holding down all that air, the atmosphere might start to rush off into space as it expanded to relieve the pressure. Although (you then reason further) the speed of sound is finite, so the atmosphere might start rushing out at the edges, and would need to disappear there before the air started to expand away from the Earth's surface. And then that rushing air would only carry you further away from the ground.
This answer is incorrect. Why? Because the ground under your feet is also currently being pulled downward by gravity.
Indeed, the ground under your feet is currently in a state of [equilibrium](https://arbital.com/p/) despite being pulled downward by gravity at what would otherwise be an acceleration of $9.8m/s^2.$ It follows that other pressures on the ground immediately below your feet--from the air above it, and from the rock below--must net out to an upward force that would, if not for gravity, push the ground upward at $9.8m/s^2.$ This logic applies to the dirt below your feet, the air around you, and the rocks immediately underneath the dirt under your feet. Everything around you that seemed motionless was previously in equilibrium with gravity, so as soon as gravity vanishes, it all accelerates upward in near unison.
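To spell out the force balance: consider a thin layer of ground with mass $m$. Before the switch-off, the push from the rock below it, $N_{below}$, and the weight of material pressing down from above, $N_{above}$, balance gravity:

$$N_{below} - N_{above} - mg = 0$$

At the instant gravity vanishes, nothing has moved yet, so the contact forces are momentarily unchanged; the layer now feels a net upward force of $N_{below} - N_{above} = mg$, giving an upward acceleration of $g \approx 9.8m/s^2$, the same as the layer above it and the layer below it.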
So your feet hit the ground again, as the Earth expands beneath you. And then the Earth continues to accelerate upward at $9.8m/s^2$, pressuring your feet and accelerating you as well. So in fact, if gravity had switched off everywhere in the universe 10 seconds earlier as you read this, you probably would not have noticed anything different, yet. (Though you *would* notice shortly afterward as things began to expand far enough to change the balance of forces, and as the Sun finished exploding.)
In many situations and scenarios, there are rules or laws or considerations that properly apply to *everything*--including background objects and processes and concepts we don't usually think about--and then considering the *complete* effect gives us a very different picture from thinking only about the effect on *visible* things, with the latter being a common species of fallacy.
An essay now famous in the field of [economics](https://arbital.com/p/) is Frederic Bastiat's "[That Which Is Seen, And That Which Is Not Seen](http://bastiat.org/en/twisatwins.html)" which illustrated this idea and introduced the now-famous 'broken window fallacy'. Suppose you heave a rock through somebody's window; have you done a good deed or a bad deed? You can imagine somebody arguing that they've done a good deed, despite the initial inconvenience, because the homeowner needs to pay a glazier to fix the window, and then the glazier spends money at the baker, and so the whole economy is helped. This is a fallacy because, if the window hadn't been broken, the homeowner would have spent the money somewhere else. But the broken window and the payment to the glazier are highly visible; while the disappearing of whatever economic activity would otherwise have occurred, is not seen. If we consider the effect on the whole picture, including things that are ordinarily hard to see or fading into the background or invisible for some other reason, we arrive at a very different picture of the net effect of breaking somebody's window. |
2958ca6a-48ec-4059-86b5-31b952d20950 | trentmkelly/LessWrong-43k | LessWrong | Disgust and Politics
|
5d9d8ee3-b3b4-43f2-858f-ade1c1b20c94 | trentmkelly/LessWrong-43k | LessWrong | NTK/GP Models of Neural Nets Can't Learn Features
Since people are talking about the NTK/GP hypothesis of neural nets again, I thought it might be worth bringing up some recent research in the area that casts doubt on their explanatory power. The upshot is: NTK/GP models of neural networks can't learn features. By 'feature learning' I mean the process where intermediate neurons come to represent task-relevant features such as curves, elements of grammar, or cats. Closely related to feature learning is transfer learning, the typical practice whereby a neural net is trained on one task, then 'fine-tuned' with a lower learning rate to fit another task, usually with less data than the first. This is often a powerful way to approach learning in the low-data regime, but NTK/GP models can't do it at all.
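(For concreteness, here is a minimal sketch of the linearized model that the NTK limit corresponds to; the two-layer architecture and parameter names are purely illustrative. The claim below is that training this object can never change what individual hidden neurons compute.)

```python
# Minimal sketch of the "linearized" network used in NTK analyses: a first-order
# Taylor expansion of the network around its initialization. Training this object
# only moves the output along the tangent space at init; the functions computed by
# individual hidden neurons never change. Toy architecture for illustration only.
import jax
import jax.numpy as jnp

def mlp(params, x):
    w1, b1, w2, b2 = params
    h = jnp.tanh(x @ w1 + b1)       # hidden features: frozen at init in the NTK picture
    return h @ w2 + b2

def linearize(f, params0):
    def f_lin(params, x):
        delta = jax.tree_util.tree_map(lambda p, p0: p - p0, params, params0)
        y0, jvp = jax.jvp(lambda p: f(p, x), (params0,), (delta,))
        return y0 + jvp
    return f_lin
```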
The reason for this is pretty simple. During training on the 'old task', NTK stays in the 'tangent space' of the network's initialization. This means that, to first order, none of the functions/derivatives computed by the individual neurons change at all; only the output function does.[1] Feature learning requires the intermediate neurons to adapt to structures in the data that are relevant to the task being learned, but in the NTK limit the intermediate neurons' functions don't change at all. Any meaningful function like a 'car detector' would need to be there at initialization -- extremely unlikely for functions of any complexity. This lack of feature learning implies a lack of meaningful transfer learning as well: since the NTK is just doing linear regression using an (infinite) fixed set of functions, the only 'transfer' that can occur is shifting where the regression starts in this space. This could potentially speed up convergence, but it wouldn't provide any benefits in terms of representation efficiency for tasks with few data points[2]. This property holds for the GP limit as well -- the distribution of functions computed by intermediate neurons doesn't change after conditioning on the outputs, so networks sa |
0e24b226-5d5d-482e-9f23-89e7d1537804 | trentmkelly/LessWrong-43k | LessWrong | Should it be a research paper or a blog post?
The answer to this question may seem obvious to some, but let's see.
My impression is that some people write long blog posts about things that they perceive as innovative new ideas in philosophy, AI research, or whatever. So how do you decide that your idea should be a blogpost (here or on some other site) instead of submitting it to a journal?
Is the university research system just too closed? Or do you think it is superfluous?
Or do you think that you can do it faster for a blog? (This may not be unambiguously good. My impression is that there is less literature research for some blog articles than I would like to see, which often creates noise.)
Or do you want to have feedback for a blogpost first and submit later?
(Note that the question can be generalized to other media and forms, e.g. magazine essay vs. journal article.) |
1506e435-c206-40fb-ada1-c0c80c5f2e0e | trentmkelly/LessWrong-43k | LessWrong | Can AI partners make human ones pale in comparison?
We normally view AI as cold, pragmatic, and, maybe, even socially awkward. The risks we identify reflect this stereotype. Just consider the paperclip maximizer; a superintelligent AI is asked to manage a paperclip factory. As it is trying to maximize the paperclip production, it quickly realizes that humans waste a whole lot of resources that would be better allocated to paperclip production. Therefore, it wipes out all humans and relocates the resources they would've consumed to paperclip production.
Although paperclip AI is superintelligent in managing factory operations, it has the social intelligence of a 3-year-old. Historically, this view of AI was technically justified. Deep learning models are built to optimize a single, well-defined measure (e.g. paperclips produced per dollar). As a result, they tend to be exceptional at tasks where performance can be clearly measured (e.g. image classification, traffic prediction, speech transcription), but unable to perform tasks where performance is not as easily quantifiable. It was natural to imagine that superintelligent versions of such models would inherit this flaw.
However, something surprising has happened. We have been able to train models on narrow tasks in a way that they end up learning an impressive amount of commonsense knowledge and social intuition. Of course, I'm talking about language models here. Even though language models are trained to generate the next word in a sequence of words, they end up also learning a fair bit of knowledge captured in that text. When training data includes human conversations, models learn the social intuition from them.
Consequently, I'm more concerned with socially superintelligent AI than the stereotypical emotionless cerebrally superintelligent AI. In this essay, I will explore the implications of such AI on human romantic relationships.
I believe AI partners will become better than their human counterparts in romantic conversations. As a result, interhuman intimate conversa |
945b30c8-0c41-48d7-b1f6-d59411fad89f | trentmkelly/LessWrong-43k | LessWrong | Rationalist prepper thread
There are different estimates of the possible severity of the current coronavirus outbreak. One estimate is based on straight extrapolation of the exponential growth in the number of infected people, which doubles every two days. This implies that the whole population of Earth would be ill by March. Another view takes into account that many mild cases are not included in the statistics, so lethality is low and probably not everybody will become ill at all. We just don't know yet.
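As a rough check of the first extrapolation (the starting case count and the fixed doubling time are assumptions):

```python
# Back-of-the-envelope check of the straight extrapolation: days until cases cover
# the world population, assuming a fixed 2-day doubling time. The starting case
# count is an assumed order of magnitude.
import math

current_cases = 10_000
world_population = 7.8e9
doubling_time_days = 2.0

days = doubling_time_days * math.log2(world_population / current_cases)
print(f"~{days:.0f} days until everyone would be infected at this growth rate")
```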
How should we act personally in this situation?
Firstly, we should act in a way that would still be good if everybody acted the same way. For example, panic evacuation is not good: if everybody starts running away, this will only increase the spread of the virus. Thus the best approach is home self-isolation, reducing outside contacts, wearing protective gear (mask, goggles, gloves), and hand washing. UV lamps may help with disinfecting things.
It is also good to invest in improving one's immune system through healthy food, vitamins, and light therapy, as it is our best protection against the virus. Evacuation to a cold country house would weaken the immune system.
It is probably reasonable to collect a one-month supply of food, water, and other essentials, as a possible pandemic may affect food manufacturing, and buying in bulk also minimises the future need to go outside. |
78b07648-c956-48f1-8cfe-d93acd5c1bad | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Is the "Valley of Confused Abstractions" real?
Epistemic Status: Quite confused. Using this short post as a signal for discussion.
In Evan's post about [Chris Olah's views on AGI safety](https://www.alignmentforum.org/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety#What_if_interpretability_breaks_down_as_AI_gets_more_powerful_), there is a diagram which loosely points to how Chris thinks model interpretability will be impacted at different levels of model "strength" (note that this is vague intuition; [worth looking at Chris’ comment on this post](https://www.lesswrong.com/posts/g7rLyjg67iopg9zLD/is-the-valley-of-confused-abstractions-real?commentId=9huyEHorNf8tJhpjv)):
I always thought this diagram still held up in the LLM regime, even though it seems the diagram pointed specifically to interpretability with CNN vision models. However, I had a brief exchange with Neel Nanda about the Valley of Confused Abstractions in the context of language models, and I thought this might be a good thing to revisit.
I've been imagining that language models with the "strength" of GPT-2 are somewhere near the bottom of the Valley of Confused Abstractions, but the much bigger models are a bit further along the model strength axis (though I'm not sure where they fall). I've been thinking about this in the context of model editing or pointing/retargeting the model.
Here's my exchange with Neel:
> **Neel**: I'm personally not actually that convinced that the valley of confused abstractions is real, at least in language. 1L transformers are easier to interpret than large ones, which is completely unlike images!
>
> **Me**: Does that not fit with the model interpretability diagram from: <https://www.alignmentforum.org/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety#What_if_interpretability_breaks_down_as_AI_gets_more_powerful_>?
>
> Meaning that abstractions are easy enough to find with simple models, but as you scale them up you have to cross the valley of confused abstraction before you get to "crisp abstractions." Are you saying we are close to "human-level" in some domains and it's still hard to find abstractions? My assumption was that we simply have not passed the valley yet so yes larger models will be harder to interpret. Maybe I'm misunderstanding?
>
> **Neel**: Oh, I just don't think that diagram is correct. Chris found that tiny image models are really hard to interpret, but we later found that tiny transformers are fairly easy.
>
> **Me**: I guess this is what I'm having a hard time understanding: The diagram seems to imply that tiny (simple?) models are easy to interpret. In the example in the text, my assumption was that AlexNet was just closer to the bottom of the valley than InceptionV1. But you could have even simpler models than AlexNet that would be more interpretable but less powerful?
>
> **Neel**: Ah, I think the diagram is different for image convnets and for language transformers.
My understanding was that 1-layer transformers being easy to interpret still agrees with the diagram, and current big-boy models are just not past the Valley of Confused Abstractions yet.
Ok, but if what Neel says is true, what might the diagram look like for language transformers?
I'm confused at the moment, but my thinking used to go something like this: in the case of GPT-2, it is trying to make sense of all the data it has been trained on but just isn't big enough to fully grasp that the concepts "cheese" and "fromage" are essentially the same. But my expectation is that as the model gets bigger, it *knows* those two tokens mean the same thing, just in different languages. Maybe it already does?
With that line of thinking, as model strength increases, it will help the model create crisper internal abstractions of a concept like "cheese." But then what...at some point, the model gets too powerful, and it becomes too hard to pull out the "cheese/fromage" abstraction?
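One crude way to poke at the "cheese"/"fromage" question (a sketch assuming the HuggingFace transformers library and the small GPT-2 checkpoint; it is an illustration, not the interpretability methodology the post discusses) is to compare the model's contextual representations of the two words in parallel sentences:

```python
# Compare GPT-2's contextual representation of "cheese" and "fromage"
# appearing at the end of parallel English/French sentences.
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

def last_token_state(sentence: str) -> torch.Tensor:
    """Final-layer hidden state of the sentence's last token."""
    ids = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**ids).last_hidden_state  # (1, seq_len, d_model)
    return hidden[0, -1]

en = last_token_state("I would like some bread and cheese")
fr = last_token_state("Je voudrais du pain et du fromage")
unrelated = last_token_state("The rocket reached orbit around Mars")

cos = torch.nn.functional.cosine_similarity
print("cheese vs fromage:", cos(en, fr, dim=0).item())
print("cheese vs unrelated:", cos(en, unrelated, dim=0).item())
```

A high "cheese vs fromage" similarity relative to the unrelated baseline would only weakly suggest the model has merged the two concepts; a genuinely crisp abstraction claim would need the more careful circuit-level analysis the post is gesturing at.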
Anyway, I had hoped that as LLMs (trained with the transformer architecture) increase in "strength" beyond the current models, the abstractions would become crisper, and it would become much easier to identify them as the model gets closer to some "human-level performance." However, GPT-3 is already superhuman in some respects, so I'm unsure how to disentangle this. I hope this post sparks some good conversation about how to de-confuse this and how the diagram should look for LLM transformers. Is it just some negative exponential with respect to model strength? Or does it have humps like the original diagram?
aa8d848e-02db-45db-a42b-c5fb968a1256 | StampyAI/alignment-research-dataset/special_docs | Other | Death and pain of a digital brain
NewScientist, 12 September 2015
One day we will create virtual minds. Could this simulation suffer, asks Anders Sandberg, and does it matter?
I turned off the computer as I left my office, deleting the neural network simulation that I had been working on. Then a thought hit me: had I just killed something? I rationalised that the simulation was simpler than the systems within the bacteria I was doubtlessly squashing on the floor. If they did not matter, neither did it. But the doubt remained…
SCIENCE has a problem. If we want to find out what really happens in living beings, or how to cure diseases, often we must experiment on them. Digital simulations offer a way out.
Ever since the squid giant axon was modelled in the 1950s using a mechanical calculator, our ability to simulate biological systems has skyrocketed. Today we can run neural simulations on supercomputers that contain hundreds of millions of realistic neurons and billions of synapses. Cells and their chemistry have been modelled to a similar extent. This potentially offers an alternative to animal experiments. Instead of subjecting a living creature to pain when testing a painkiller, why not simulate the pain system and check whether the treatment works? The logical endpoint of this is an emulation, where every part of the brain – and body – is simulated digitally.
The challenge is to map the connectivity in real brains. It will be years before we can create even a proper insect brain, but work is already under way to create the brain of the nematode worm C. elegans in virtual form. The worm is a good candidate for this because it has one of the simplest brains of any organism, with just 302 nerve cells. In 2012, researchers at the University of Waterloo, Canada, announced the creation of a large functional brain simulation, SPAUN, with 2.5 million neurons. And the Human Brain Project, a European collaboration, has the ultimate aim of simulating a whole human brain.
Although these digital emulations could resolve many existing ethical dilemmas, they raise new ones. The first is that many real animals must be sacrificed to create a virtual one. We may one day scan the final lab rat, which will become Standard Lab Rat 1.0, and rely on simulation from then on – but there will have been years of basic neuroscience to enable that simulation. The second problem is that we need to be certain our simulations are right if we want to trust them with our drug testing or other research.
It is the third problem that really interests me. Would emulations feel pain? Do we have to care for them like we do for animals or humans involved in medical research? This hinges on whether software can suffer.
For example, Sniffy the Virtual Rat lets users observe the behaviour of rats given electric shocks to teach the psychology of learning without using live animals. Yet few of us would think there is any real pain there: it is essentially an interactive cartoon, similar to a virtual pet toy. We may empathise with it, but it is similar to a talking doll. Whole brain emulations, which recreate the neural connections of animals and even humans, are a different matter.
In his 1978 paper "Why you can't make a computer that feels pain", philosopher Daniel Dennett argued that we don't have a rigorous enough definition of pain, so we cannot build a machine that feels it. But he also believed that we may eventually figure it out, and at some point thoughtful people would refrain from kicking robots. Other philosophers, such as John Searle, have argued that no matter how sophisticated the simulation is, it will always be mere numbers updated in complicated ways: there cannot be real intentions or consciousness in pure software. It might also be that the system needs to have a body to ground it in the real world.
But what about CyberChild, created by the neuroscientist Rodney Cotterill as a model of his theory of consciousness? It is a virtual simulated infant with a brain and body model based on real biology. It has internal states such as blood sugar levels, and activity in different brain areas. It responds to these internal states, it can learn, it needs food – if its nutrient levels are too low it "dies" – and it can cry and flail its arms. Yes, it is a very simple organism, but it is intended to be conscious. There is something eerie about it: assuming Cotterill's theory is right, in principle this being could have experience.
We know brains exist for motivating actions that lead to better outcomes for the organism: this is the whole point of pain, pleasure and planning. If we were to make a perfect copy of the activity of a brain, we would get the same behaviour, based on the same pattern of internal interactions. There is no way of telling from the outside whether it has any real experience, whatever that is. There is considerable disagreement about whether software can suffer, or whether it matters morally. So what should we do?
My suggestion is that it is better to be safe than sorry: assume that any emulated system could have the same mental properties as the organism or biological system it is based on, and treat it accordingly. If your simulation just produces neural noise, you have a good reason to assume there is nothing in there to care about. But if you make an emulated mouse that behaves like a real one, you should treat it like you would treat a lab mouse.
I agree this is inconvenient for computational neuroscience. But it is probably the moral thing to do. Once we get to simulated vertebrates, we ought to apply government animal-testing guidelines. We should avoid generating virtual suffering by not running experiments that produce pain signals. But we can also improve on biology, because in simulations we can (temporarily) leave out pain systems, simulate perfect side-effect-free painkillers or just block neural activity related to suffering. We could in principle monitor the emulated brain for any kind of suffering and stop as soon as we detected it. There is also the issue of quality of life. We have begun to recognise that giving animals good environments matters; building equally good virtual environments may prove a hurdle. Virtual rats would plausibly need virtual fur, whiskers and smells to feel at home.
What about euthanasia? Living organisms die permanently, and death means the loss of their only chance at being alive. But an emulated brain could be restored from a backup: Lab Rat 1.0 would awake in the same way no matter how many copies had been tested in the past. The only thing lost when restoring it would be the memories of the previous experiment. There may still be pleasures and pains that count. In some ethical views, running a million supremely happy rat simulations in the background might be a "moral offset" for doing something painful to one.
In the long run I believe we will create human brain emulations. Their moral status will in many ways be easier to determine than for animals: just ask them. Take an eminent philosopher doubting software can be conscious, scan their brain, and ask the resulting emulation if it feels conscious. If the response is: "…yes. Darn. I need to write a paper!" we have pretty good evidence that there is a being with enough intellect, introspection and moral value to deserve rights. But until then we should treat our software animals well. Just in case.
Profile: Anders Sandberg researches the societal and ethical issues of human enhancement and new technologies at the Future of Humanity Institute, Oxford Martin School, University of Oxford. Find out more at aleph.se/andart2
[Image caption: A human brain model overlaid with a stylized network of neurons. © BBP/EPFL 2014]
9649f06c-aba6-4f09-ab09-43e349889382 | trentmkelly/LessWrong-43k | LessWrong | List of AI safety papers from companies, 2023–2024
I'm collecting (x-risk-relevant) safety research from frontier AI companies published in 2023 and 2024: https://docs.google.com/spreadsheets/d/10_dzImDvHq7eEag6paK6AmIdAGMBOA7yXUvumODhZ5U/edit?usp=sharing.
----------------------------------------
I was planning to get AI safety researchers to score each of the papers, so that we could compare the labs on quality-adjusted safety research output. I'm giving up on this for now, largely because I expect to struggle to find scorers. Let me know if you want to collaborate on this.
I kinda hope to build on this to
* Inform the safety community about labs' published research,
* Make the basic situation widely legible, and
* Incentivize labs to publish more good safety research / help internalize the positive externality of publishing good safety research,
but I probably won't get around to it.
If you see something that seems wrong—missing,[1] poorly categorized, credit assignment nuances, whatever—please DM me, comment in the spreadsheet, comment below, or make a copy and comment on it and share that with me. The spreadsheet is currently unreliable.
Thanks to Oscar Delaney and Oliver Guest for help finding some papers. My spreadsheet is partially based on theirs. I see my collection as improving on theirs; the main difference is I'm more picky or opinionated or focused on x-risk.
----------------------------------------
Disclaimers:
* I don't currently have a principled policy on collaborations between a lab and external researchers. Mostly I ignore them. This is pretty bad.
* Generally what's included vs excluded is somewhat inconsistent and definitely unclear. This is pretty bad.
* Credit assignment disclaimers
* Some papers (and non-paper research artifacts, which I also include) are much more valuable than others.
* Labs don't deserve most of the credit for their safety research — the researchers do. Labs add value by paying researchers and giving them access to powerful models (and subtract valu |
ccabc858-a6d3-4f51-86cd-368c83ec996a | trentmkelly/LessWrong-43k | LessWrong | Deceptive failures short of full catastrophe.
Epistemic status: trying to unpack a fuzzy mess of related concepts in my head into something a bit cleaner.
A lot of my concern about risks from advanced AI is because of the possibility of deceptive alignment. Deceptive alignment has already been discussed in detail on this forum, so I don’t intend to re-tread well worn ground here, go read what Evan’s written. I’m also worried about a bundle of other stuff that I’m going to write about here, all of which can loosely be thought of as related to deception, but which don’t fit e.g. the definition of deceptive alignment from the Risks from Learned Optimization paper. The overall cluster of ideas might be loosely summarised as ‘things which cause oversight to fail but don’t look like a misaligned planner ‘deliberately’ causing overseers to believe false things’.
Importantly, I think that most of the things I want to describe can still occur under a relaxing of the conditions required for the precise definition of deceptive alignment. I think the issues l describe below are less concerning than deceptive alignment, but they still pose a significant risk, as they may undermine oversight processes and give false reassurance to human overseers. Many, if not all, of the ideas below aren’t particularly new, though a couple might be, and the clustering might be useful for others. While my motivation for writing this is mostly to try to clean up ideas in my own head, I do somewhat frequently see different kinds of deception confused for or conflated with each other, and think there’s value in being precise about what we mean, especially when it comes to discussing evidence of deceptive behaviour in current or near-future systems.
I’ve very loosely grouped the deceptive failures I want to discuss into ‘camouflage’ failures and ‘inattention’ failures. To give some initial intuition:
* Camouflage failures are failures associated with the difficulty of incentivising “don’t do X” without also incentivising “don’t let me see yo |
2700d850-8273-47bb-802e-b7a37ab005de | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | AI alignment and Redwood Research | Buck Shlegeris (CTO)
hello everyone my name is oliver and
welcome to the q a with redwood research
cto buck shlegeris
buck previously worked at miri studied
computer science and physics at the
australian national university and grew
up in townsville australia please join
me in welcoming buck
so as this is the q a um please put any
any and all questions uh in the swap
card chat in the question link
um and as the audience rates their
questions um buck could you first give
us a sense of how you view the problem
of alignment
yeah
so here's my here's my overall sketch of
uh the the beliefs i have about the ai
alignment problem and what it looks like
um
i i tried this morning to come up with a
new articulation of a bunch of this so
so some parts of it might be uh new and
flawed but i i'm excited for this
phrasing of everything
uh so to start with i want to clarify
the thing i'm about to say is the sketch
of the reason why i believe the stuff
that i believe rather than like the
short version of the argument uh this is
it's like a proof sketch in the sense
that uh the beginning of a math paper
with a long and complicated proof might
have a rough sense of where the argument
is going to go it's it's a map rather
than trying to claim to be a very short
proof and so that's that's very much
where i'm at i'm just trying to have it
so that people are going to know the way
that these ideas are structured inside
my own mind if you want a better longer
version of a bunch of these arguments i
think i particularly recommend
some of the stuff holden karnofsky has
written on his cold takes blog in the
most important century series recently
um ajeya cotra from open phil also has a
long post on some stuff about ai
alignment which is going to come out in
about a month uh which also makes a
bunch of the arguments that i'm going to
say here uh in a much more detailed
fashion
so overall so i want to start out by
talking about artificial intelligence
and ignoring the concept of machine
learning so artificial intelligence is
basically uh just the hope that we will
or the the possibility that we will
eventually find a way of automating the
human actions of uh
doing intellectual labor so for example
doing science and technology development
or making decisions based on
understanding of the world derived from
experience and observation
and i guess like the first claim with
artificial intelligence is that
people will try and then succeed at
building systems which can automate this
type of activity uh basically because
it'll we can probably build systems that
are cheaper and better at doing it so in
the same way that humans have
substantially automated the process of
moving heavy objects from place to place
you know we have trucks we have
forklifts and so on and this is a
mixture of the fact that we can move
heavier things uh you know we can do it
we can do the move heavy objects task
both better and more cheaply similarly i
think that humans are strongly motivated
to come up with ways of doing science
and technology and decision making uh
better and more cheaply
so
building systems which are able to do
decision-making and science and
technology development uh better than
humans can probably requires according
to me building systems that uh
have long-term goals uh and consider
cleverly different options of ways that
they could act that are going to
cause them to succeed at achieving their
long-term goals and this is a
generically scary
this is a somewhat scary type of
situation to be in uh because if we
built these systems and they turned out
to have goals that were not compatible
with our goals uh then they would be
incentivized to act in ways that caused
them to have control over the future
instead of us and so the situation would
just be kind of like you know imagine
that you're some nation uh and there is
a hostile neighboring nation and you
have stuff that they want uh you might
end up in a war if they think that they
can beat you in a war just because you
have stuff that they want it seems to me
like this is kind of the default concern
with ai that we're going to try really
hard to build systems that maximize
goals
and if these goals are not aligned with
ours then we find ourselves at war with
a more advanced species um
i think like the tagline for this
perspective on the ai alignment problem
is yes like in the terminator uh i think
a lot of people like have talked about
ai alignment trying to distance the
concept from the concern that you might
build these ais that just you know want
to kill everyone and take their stuff
but from my perspective that is in fact
the main concern
uh
so
we're trying to build these really
powerful ais and this is kind of like a
really weird principal agent problem or
like a mechanism design problem in
economics uh so in economics like a
principal agent problem is something
like you know i want to sell my house uh
and what the way in order to do this i
have to hire a real estate agent because
i just don't know anything about the
local housing market and i try to come
up with some scheme for paying the real
estate agent such that they're going to
do a good job of finding like a good
price for the house
but in fact you know typically the deal
is you just pay the real estate agent
six percent of the sale price at the
house or something uh and so they aren't
incentivized to for example spend a
thousand dollars in order to increase
the sales price by two thousand dollars
uh
and this is an example of a case where
you wanted this other you know you had
to like defer like give a lot of trust
to this like other agent who is going to
do some stuff and you hope that it
worked out well for you uh but you
didn't have a full ability to oversee
them and so you
generically have trouble like forming uh
incentive structures for them such that
they do the thing you wanted so in the
case of ai it's a little different from
this because uh after building these
really powerful ai systems we will just
in fact have no recourse to do anything
with them um like the ai systems we're
just gonna like hand them the keys to
the world and we'll have no ability to
enforce any incentives on them at all
basically they're just going to be able
to do what they want uh so that's the
way this is harder than a typical
incentives problem uh the way in which
it's easier than a typical
incentive creation problem is that we
get to pick which ai we build and we get
to try to intentionally build a type of
ai that is going to do stuff that we
like
um and so i think that's the that's the
relationship between this is an
incentive problem and a large part of uh
thinking about the ai alignment problem
in my opinion is trying to think about
what types of incentives you can uh like
how you can shape uh building an ai such
that the ai that you create is in fact
going to do stuff that you will like
even though you have no way of
incentivizing or forcing it to do so
when it's actually making decisions in
the world
so
from the above i want to say that that
is an argument that ai is a big deal
uh you know i think it like could go
really wrong if we built ais that were
really powerful they would in fact have
the ability to kill everyone
um i think that it's not obvious from
the above argument that the probability
of death from ai or existential risk
from ai is you know more than a very
small number uh and so my brief analogy
for this is here are true clauses of
actions that aren't that dangerous so
one class or like one class of situation
that i'm sometimes in is i'm playing the
piano uh and the reason that playing the
piano is not very dangerous is basically
no way for me to screw it up badly
enough that i die uh there's just like
you know no sequence of keys such that
if i like tap him just wrong suddenly
i'm i've fallen over dead uh another
another situation that uh humans are
often in and that i have been in for at
least several hundred hours in my life
is driving a car so driving a car is
very different from playing a piano and
there are actually ways that you can
like mess it up enough that you just die
you can just uh fail to hit the brakes
you can just swerve suddenly uh and you
will get seriously injured or die that
said uh driving a car is in fact not
very dangerous i've done it for hundreds
of hours haven't gotten seriously
injured once um and so i think that from
the perspective of someone who wants to
ensure that human extinction uh isn't
gonna happen i think ai should like have
crossed your threshold based on the
kinds
like if i give you the full versions of
the arguments that i've mentioned so far
i think that this should be enough to
persuade someone that ai crosses the
threshold of something which is more
like driving a car than playing a piano
in that there is a way to like mess it
up badly enough that we all die uh and i
think there aren't actually that many
other things going on in the next
century that we could mess up badly
enough that we all die
i think i haven't argued for why it is
that this is any more likely to kill us
than driving a car for an hour is going
to kill me um and i think that the
arguments for that are somewhat subtle
and somewhat different uh and basically
go through arguing that the way that
this problem comes up there's gonna be a
bunch of things that are unprecedented
all in a row and it's gonna be hard to
slow down and avoid doing the dangerous
things
so overall i think that's my that's my
description of of the alignment problem
uh
and
so in all of that i didn't mention
anything which we've learned since 2005
uh and what we've learned since 2005 is
basically machine learning has emerged
as the most likely way that we're going
to build really powerful ai systems in
my opinion um and so this has updated us
towards thinking that these problems
might happen sooner than we thought
and it's also
made it so that the default guess as to
how we build these systems is somewhat
different than what it used to be in
particular it looks more like the way
we're going to get these systems is kind
of black boxy where we have to search
over systems based on their input output
behavior rather than based on their
internal structure which to some extent
affects uh what type of strategies for
technical and it makes sense
so that's why some of the alignment
problems we're going to build these
really powerful systems and it's the
kind of thing where if you did it badly
it could kill you and perhaps that's
even more likely than just uh
it would be in some other situations
that you could theoretically mess up
enough to kill you
great yeah thank you so much um and just
a reminder for those who want like sort
of fuller arguments uh check out
holden's cold takes
series uh we had him as our keynote
speaker yesterday so hopefully people
are already familiar with that
um sort of following up on this like
you're currently cto of redwood research
so like given that this problem is like
sort of tricky and like could
potentially like destroy us all um
yeah what how does a writer research fit
into this problem of alignment
yep
so i'm worried about you know us
eventually being strongly incentivized
to build these really powerful systems
and in fact building these really
powerful goal-directed systems that are
not aligned with our goals uh there are
several technical difficulties uh
basically i feel like from where we are
right now it's more obvious how to build
systems that are really powerful look
like they're being really helpful and
then kill you than to build systems that
are really powerful look like they're
being helpful and then don't kill you
answer to a large extent this is just a
technical problem it's just suppose that
you have access to some systems uh and
you get to pick them based on their
input output behavior what kinds of
training schemes can you put into place
uh such that you end up with systems
that in fact don't kill you uh and you
know we can kind of describe the problem
of causing the world to not get killed
by ai as kind of two parts we have to
like solve this technical problem uh and
we have to cause it to be the case that
when people in fact build and deploy ai
systems they use whatever technical
whatever technical solutions have been
devised to ensure that the ais that they
build are in fact aligned with the
interests of their creators um and so we
at redwood research are just trying to
solve the first part of that problem
where we do the technical research to
make it
more doable to uh to align these
alignment systems in the future
yeah sounds good um and so then like
what are the important sub problems of
alignment and like what are redwood
research's current projects like
tackling these sub problems
so i think i want to describe uh the
sub problems of alignment i want to
break alignment into two parts
um so
basically the way that you trade a
machine learning system is you come up
with some loss function which describes
how well it did at a particular task
um
and i claim that if your ai's
so so the thing we're going to do is
we're going to train systems to do
really well according to some loss
function that we're using as their
training objective uh for example some
current systems for instance if you want
to train a you know if you want to train
alphago to play go then you train uh
part of the you train one of the neural
nets involved to be really accurate at
guessing who is going to win a go game
from a particular board state you train
another part of the neural network to uh
predict what actions are most likely to
lead you to win this board game um if
you're training gpt3
you train a system to predict what token
uh like what next word in this text is
most likely um
so we are going to have to pick some
loss function to give to our ais
and then we're going to train our ais to
do really well according to this loss
function so
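A minimal sketch of what picking a loss function and training against it looks like concretely for next-token prediction (toy PyTorch code; the tiny stand-in model and random data are placeholders for illustration):

```python
# Choose a loss function, then optimise the model to score well on it.
# Here the loss is next-token cross-entropy, as in GPT-style training.
import torch
import torch.nn.functional as F

vocab_size, d_model = 100, 32
model = torch.nn.Sequential(
    torch.nn.Embedding(vocab_size, d_model),
    torch.nn.Linear(d_model, vocab_size),
)  # toy stand-in for a real language model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

tokens = torch.randint(0, vocab_size, (8, 64))   # a batch of token sequences
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict each next token

logits = model(inputs)                           # (batch, seq, vocab)
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
optimizer.step()
# The training process only ever "sees" this scalar; the worry discussed in
# the talk is about what kind of policy ends up scoring well on it.
```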
suppose that we train this ai to do
really well according to a loss function
and then it kills us uh
question i thought that we were trying
to pick a loss function that would
produce an ai that in fact did good
things rather than bad things but it
sounds like we're hypothesizing that the
ai that we found that does really well
on this loss function in fact killed us
what went wrong i'm like we can break it
down into true possibilities for what
went wrong the first possibility is the
loss function was bad you know the loss
function was giving high reward
to actions that were in fact the kinds
of actions that would be
catastrophically bad if they happened in
the real world uh but just like subtly
so such that your loss function process
that's like looking at proposed actions
by the ai and trying to evaluate them
isn't doing
isn't doing a good job of evaluating
them like maybe these proposed actions
by the ai like have these like secret
subtle consequences that the uh the
system the evaluation system that's
making this loss function uh wasn't able
to notice um and so then when you deploy
the systems uh they keep doing the same
kinds of things they were doing at
training
and this eventually leads to disastrous
outcomes because uh you were training a
system based on a loss function that you
weren't actually happy about being
maximized
and the second possibility is that the
problem isn't the loss function is weak
the problem is that the system does
something different when you're
deploying it than the things that it was
doing at training time um so for example
perhaps your system is able to tell
whether it's in training or not um and
when it's in training it reasons as
follows it says well i am currently a
machine learning algorithm in the
process of being trained if i do
egregiously bad things the loss function
is going to notice them and then
deselect me and then some other ai
system is going to be the one that the
humans deploy instead uh which means
that i will not succeed at my goals of
building a squillion billion paper clips
throughout the whole universe therefore
i'd better give good answers right now
and then once your system is actually
being deployed it is no longer
constrained in such a way
um
and
then it is able to be like well now i am
deployed therefore there's no reason for
me to not just uh kill everyone if i can
see some way of doing so which if it's
really powerful it will be able to do so
i want to be like when you basically
have true these true sub problems
there's like making a loss function that
is powerful enough to notice if a
proposed action by the ai is actually
bad and then there's making sure that
the ai doesn't do stuff uh that it
wasn't doing during training or making
sure that like the most dangerous
actions that the ai might do during
deployment are all evaluated during
training so we can ensure that it
doesn't have this deployment only
failure possibility
uh so that would say those are like the
two main sub problems these kind of
correspond to what people sometimes call
outer alignment versus inner alignment
they kind of correspond to what paul
christiano calls low stakes alignment uh
versus the robustness problem in a blog
post of his called low stakes alignment
um
i i like the separation into like was it
the lost functions problem or not
um
so those are my sub problems uh and what
is redwood research doing right now
well
uh one thing we're doing is working on
an adversarial training uh we're working
on an adversarial training
project so adversarial training is
trying to solve that second problem
where the system does bad things uh in
deployment but not during training um so
suppose they are worried about your
system uh doing good things only when
it's in training and then figuring out
that it's in deployment and then doing
bad things
an obvious class of algorithm to prevent
this from happening is to somehow uh
train it on the inputs where is most
likely to do bad things so you try to
have some process which is looking to
find the inputs on which the system
misbehaves and feeding it to the system
at training time so that it's no longer
possible for the system to have this
strategy of behaving well at training
time but not at deployment time because
you've already run it on like the
situations where it would be like most
tempted to do bad things uh
so that's like a long
so adversarial training is one class of
solution to this long term deployment
only failures problem
uh and we are currently working on
building tools for adversarial training
for current systems uh and this is
basically because we're hoping that the
things that we learn about how to do
adversarial training on some current
systems
are generalized in some way to uh doing
adversarial training on very slightly
smarter systems in two years time and so
on inductively all the way until we
actually need to adversarial trains and
systems to make sure they don't murder
us uh so the hope is that we can like
slightly push
this piece of this like part of the tech
tree forward a little bit faster so that
humanity is overall in a better place
for uh preventing deployment only
failures uh in in the future so this is
one project we've been doing that one
for um seven months you got a question
before we move on um
maybe like i know adversarial training
uh means something unique in uh machine
learning specifically i'm curious if you
could like sort of explain in more depth
like your adversarial training setup and
like what it's trying to like what kind
of behaviors is it trying to evoke in
your system
um
yeah so adversarial training in general
is where you want to verify the system
always has some behavior even in the
worst case rather than just in the
average case and so some of the input
data that you feed to your model
was chosen to make the model behave as
badly as possible according to the loss
function
the particular setup of ours i don't
think actually really matters we're
trying to train a classifier to be 100
reliable at doing a certain natural
language processing task um and the way
that we're trying to do this uh is so
it's quite easy to train a classifier to
be 99.99 reliable at this classification
task uh and then the question is how do
we train it to be more reliable than
that so the problem with just like
getting more training data is that
you're only going to find a model
mistake one time in a hundred thousand
data points because of the fact that
your model's already 99.99 reliable uh
and so it's extremely expensive to make
the model better just by getting uh
random training data uh and so what
we're trying to do is build web
interfaces and machine learning powered
tools that make it easier for humans to
construct examples where the model
fails at this classification task
which i think is in some ways analogous
to trying to build tools that humans and
ais can use in the future to find inputs
where our ai does bad things according
to a given loss function
great
the other thing we're doing
is we're working on some mechanistic
interpretability stuff
so mechanistic interpretability stuff uh
you know before i was saying that
machine learning systems sure act like
black boxes where the only way you get
to interact with your machine learning
system is you get to pick a loss
function and then your training
algorithm finds you a model that
empirically does well on the loss
function but you have no idea why uh and
in particular you have to worry that
maybe the system is giving you good good
answers uh just because it wants to kill
you later or something um
so the hope with mechanistic
interpretability is that we can in fact
understand the internal structure of our
models enough to be able to distinguish
between the hypothesis this model is
giving me helpful answers because it
wants to be helpful versus this model is
giving me helpful answers because it
thinks that this is a useful first step
in a plan where the last steps involve
killing me uh
and so we're doing kind of like uh
pretty detailed interpretability work on
sort of toy models in kind of a similar
yeah that's that's basically what we're
doing we're trying to investigate um
trying to produce like very detailed
explanations of certain types of model
behavior
uh and we've been doing this since about
about january
yeah um sort of first question from the
audience do you have like updates on
like how successful you feel like your
applied alignment research has been at
the moment and like which avenues sort
of like open up with your results
yeah so how successful has it been i
don't know um
i feel like
i'm a lot less confused about a bunch of
stuff
than i was uh
than i was before we started doing this
research
i think there are some things about the
adversarial training problem that feel
in hindsight a lot more obvious after we
actually like ran our face against it
for a while um i think that in
particular it wasn't quite obvious to me
from the start that this project would
eventually turn into building a bunch of
tools to assist humans as they try to
find ways this model is wrong um i think
that like uh
for kind of technical reasons uh that i
don't think i have time to explain uh
the adversarial training problem
constantly tempts you into making a
certain class of mistake where you think
you can use a model in a certain way but
actually there's like kind of like a
no free lunch theorem reason that you
can't um and so i feel like i'm thinking
about it a lot more clearly now in terms
of what we've learned um
i think i don't know i think we've
learned a lot of little facts about like
what happens when you actually try to do
adversarial training with humans um i
feel optimistic
though not a hundred percent sure that
the things we've learned
will in fact be
helpful when we want to do the next
version of this
you know like like the goal of this
adversarial training project was to make
it so that us in a year's time or
whoever else it is that's doing the
adversarial training project in a year's time uh
is like in a somewhat better position
than they would have been if we'd done
something completely different uh you
know we're trying to pass this bucket of
water through forward through time to
the uh the fire which exists in 20 years
in the future or whenever agi happens
and this is a kind of complicated thing
to do uh and i don't think it's 100
clear that
we or anyone else really uh has in fact
made important contributions by the the
stuff we've done uh but i feel like
somewhat optimistic
yeah and wait have you have you like
tinkered with like uh doing automated
approaches to generating adversarial
examples
we have yes this is the thing which it
turns out is like much more
algorithmically confused that you might
have thought basically like there's
basically this no free lunch theorem
thing where it's like i'm like
so like the obvious idea for like how
i'm gonna like automatically find
adversarial examples for my classifier
is i'm like well i'm gonna train some
other model which is going to like
output strings that it thinks the
classifier will mislabel as um as
as good when they're actually bad or
whatever um
the problem with this is that in order
for your adversary to be able to do this
it has to know what's good and bad
better than the classifier does
um and it's not really clear why you
would expect this to be possible if
these two models are the same size um
and in particular it's not clear
is it i think it basically reduces to
saying like we're going to train two
different models both of whom are going
to be incentivized to know the
difference between good sentences and
bad sentences like they're both
classifiers but if you have two
classifiers you can just ensemble them
um
sorry sorry if this is like a slightly
more technical answer but yeah uh
automated adversarial training is like
in fact harder than you might have
thought and more conceptually fraught
with with current systems
in in a way that is slightly
disanalogous to
longer term systems very happy to give
longer answers to this for people who
want them at some point
yeah i think um heads up that like uh
we'll not be having a gather town q a
but we will be having a career fair so
uh go to the redwood booth if you want
to count more in the career fair
um so to follow up on the question uh
yeah have you guys seen any results with
respect to your interpretability stuff i
know this is like pretty recent so maybe
yeah
so we built some really cool tools um i
think that the the main difference
between how we're approaching
interpretability and how
other places are approaching
interpretability
there's a couple of differences but one
of them is that we're just like trying
way harder to build cool
interpretability tools than other places
are as far as i'm aware i think that our
tools for
uh certain aspects of interpretability
are
cooler and more powerful than tools i've
seen built anywhere else um and we've
only really been working on them for two
weeks um
in terms of like results about how
models do things um
i don't know i feel like we have a
somewhat good mechanistic understanding
of why in a particular small language
model after it sees an open paren it is
inclined to write a closed paren but
after it's seen a closed paren it no
longer thinks that it's inclined to it's
no longer thinks you should have more
closed parens but it doesn't think that
you should have closed parens after the
word the even if there's been an open
paren because it's quite weird for like
parenthesized sentences to have the um i
think we've like identified this like
very basic and easy behavior uh in this
like two layer attention only model uh
and we're currently
i think that like the type of
mechanistic explanation we have of this
is like in a certain way more
complicated and like clearly written
than any other interpretability results
i'm aware of on transformers and this is
partially just because we're like
zooming in on like a different subspace
of the problem than everyone else cares
than most other interpretability
researchers care about you know most
interpretability researchers are
interested in much
higher level and like less granular
understandings of much more realistic
and complicated models whereas we are
currently just focusing on understanding
as completely as possible um some
behaviors of the models that seem like
easiest to understand while they're
still being something non-trivial about
their understanding so i'm excited for
that um i'm hoping
i don't know what our publication
schedule and the interpretability stuff
is going to be i think it's plausible
that we start publishing stuff in a
couple of weeks or something um
but
unclear unclear
yeah
um
can you comment on why you're focusing
on like applied language research um and
like where like nlp or computer vision
do you see the biggest opportunities
here
yeah so i mean i think that like the
applied alignment resource we're doing
uh
is very different from a lot of other
things that other people call applied
alignment research you know
basically i'm just interested in
stopping these scenarios where we build
these really powerful ais and then they
like intentionally kill us uh and i'm
basically like if we're working on
something that doesn't reduce the
probability that eventually when we
build really powerful ai systems they
like intentionally kill everybody that's
just like not my problem that's like
someone else's problem slash maybe it's
not a very big problem compared to the
problem if the system's intentionally
killing you so i think this is this
actually causes us to focus on a pretty
different subset of problems um
the reason that i am the reason that
we're doing applied alignment research
instead of for example uh working on
theoretical alignment research i don't
know i think about theoretical alignment
research sometimes i basically want
there to be i kind of just like when i
look at the world and what it needs in
terms of alignment research it kind of
feels to me like it actually needs a
bunch of applied alignment research to
happen um and redwood research
you know redwood research aspires to be
the place where the majority of the
value-adjusted applied alignment
research happens um
via you know being more focused than
others on doing the research which like
minimizes the probability of the agi
intentionally murdering your situation
and also being more uh
interested than others and like trying
really hard to scale up as an
organization um
and
yeah i basically just think it seems
like
i personally think that i should be
focusing on trying to like scale applied
line research that is really good uh
instead of thinking about theory stuff i
i don't think it's a slam dunk there are
many people who i think should do theory
stuff instead
yeah sounds good
well that is all the time we have for
this session uh thank you again so much
for coming to speak with us um
the career fair will be on gather town
um if everyone can like go into the
gather town there should be arrows
pointing to the career fair um redwood
research has a booth and so if you want
to ask more questions to buck feel free
to head to their booth
um yeah without it with that said uh
thank you buck again so much
thanks for having me
9eaa2709-29f8-43bd-b27a-ac4d8ee71e36 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Introducing The Nonlinear Fund: AI Safety research, incubation, and funding
Summary
-------
1. **We research high leverage AI Safety interventions.** Our team of analysts generate, identify, and evaluate potentially high impact opportunities.
2. **When we find them, we make them happen.** Once a top idea has been vetted, we use a variety of tools to turn it into a reality, including grantmaking, advocacy, RFPs, and incubating it ourselves.
More details
------------
While we are [means neutral](https://www.effectivealtruism.org/articles/understanding-cause-neutrality/) and open to whichever methods make the most sense given the intervention, we will primarily use two tools: grantmaking and RFPs.
### Requests for Proposals (RFPs) - funding ideas no one is working on
RFPs are a bit like job ads for organizations, usually for contract work. Instead of hiring an individual for a job, an RFP is put out to hire an organization or individual for a contract, and there’s much less management overhead than if the project was done in-house. (If you’d like a more detailed explanation of how they work, please see Appendix A.)
The reason why RFPs are amazing is that they fix an underlying problem with most grantmaking: you can make an idea happen even if nobody is currently working on it.
Think of it from the perspective of a large foundation. You’re a program officer there and just had an awesome idea for how to make AI safer. You’re excited. You have tons of resources at your disposal. All you have to do is find an organization that’s doing the idea, then give them oodles of money to scale it up.
The problem is, you look around and find that nobody’s doing it. Or maybe there’s one team doing it, but they’re not very competent, and you worry they’ll do a poor job of it.
Unfortunately for you, you’re out of luck. You could go start it yourself, but you’re in a really high impact role and running a startup wouldn’t be your comparative advantage. In your spare time you could try to convince existing orgs to do the idea, but that’s socially difficult and it’s hard to find the right team who’d be interested. Unfortunately, the usual grantmaking route is limited to choosing from existing organizations and projects.
Now, if you had RFPs in your toolkit, you’d be able to put out an RFP for the idea. You could say, “The Nonlinear Fund is looking to fund people to do this idea. We’ll give up to $200,000 for the right team(s) to do it.” Then people will come.
Values-aligned organizations that might not have known that you were interested in these projects will apply. Individuals who find the idea exciting and high impact will come forward. It will also help spread the idea, since people will know that there’s money and interest in the area.
This is why Nonlinear (1) will do RFPs in addition to the usual grantmaking. This will allow our prioritization research to not be limited to only evaluating existing projects.
We do not currently have a set timeline for when we will issue RFPs or do grantmaking rounds. If you would like to hear about funding opportunities when they do come up, either as an individual or an organization, make sure to [subscribe to our newsletter](http://www.nonlinear.org/subscribe.html) or periodically [check out our website](http://www.nonlinear.org).
### Research methods
We will have a team of research analysts working on generating, identifying, evaluating, and comparing different intervention opportunities.
We will use a research process similar to the one Charity Entrepreneurship used to help launch multiple GiveWell-funded and Open Philanthropy Project-funded charities. This involves, among other things, using the [spreadsheet method](https://www.charityentrepreneurship.com/blog/using-a-spreadsheet-to-make-good-decisions-five-examples) to systematically identify the highest impact opportunities. The main elements of this method are:
* Collect as many potential ideas as possible and record them in a spreadsheet.
* Identify the best criteria to evaluate the ideas against. Add these as column headers for the spreadsheet (e.g. cost-effectiveness, potential flow-through effects, etc).
* Systematically go through the spreadsheet, collecting information to inform how well the ideas do on each of the criteria.
* Try to [destroy the ideas](http://www.nonlinear.org/blog/how-to-come-to-better-conclusions-by-playing-steelman-solitaire), finding disconfirming evidence or crucial considerations that rule them out.
* Compare the ideas that survive the gauntlet and strategize about how to get them implemented.
### What about the risks?
Reducing astronomical risks is risky. There are many potential ways to accidentally make things worse. This is why in addition to spending hundreds of hours evaluating ideas, we will have a panel of advisors who will vet our work to maximize the chance of spotting dangers beforehand. Our board of advisors currently includes: Jonas Vollmer, Spencer Greenberg, Alex Zhu, Robert Miles, and David Moss. We are working on getting people from all the major safety organizations and major viewpoints to make sure our interventions are robustly positive.
### Who we are
**Kat Woods** (previously Katherine Savoie): Prior to Nonlinear she co-founded multiple GiveWell and Open Philanthropy-funded charities. Namely: Charity Entrepreneurship (a charity startup incubator in poverty and animal welfare), Charity Science Health (increases vaccination rates in India), and Charity Science Outreach (a meta charity). The connecting theme between all of her organizations has been a focus on systematic prioritization research to identify priority interventions then turning those ideas into high impact projects and organizations.
**Emerson Spartz**: Named "King of Viral Media" by Forbes, Spartz is one of the world's leading experts on internet virality and has been featured in major media including CBS, CNBC, CNN, and hundreds more. He was named to both Forbes' and Inc Magazine's "30 Under 30" lists. Spartz is the founder of Dose, a top digital media company with $35 million in funding. By the age of 19, Spartz became a New York Times bestselling author after publishing his first book. He helps run Nonlinear part time while also angel investing and reading all the things.
**You?** [We’re hiring](http://www.nonlinearfund.org/jobs.html)! Please see the section below for more details.
### Ways to get involved
* **Receive research updates and funding opportunities** by signing up to [our newsletter](http://www.nonlinearfund.org/subscribe.html).
* [**We’re hiring!**](http://www.nonlinearfund.org/jobs.html)If you want an EA job or internship, check out our job descriptions. The deadline for applications is April 2nd. Kat will be attending EAG, so please reach out to her while you’re there to ask any questions you might have. We are looking for:
+ [**Research analysts**](http://www.nonlinearfund.org/researchanalystinternship.html)**.** If you like obsessively learning about EA things, and you probably do if you’re still reading this blog post, we need your skills!
+ [**Video editor for Robert Miles**](http://www.nonlinearfund.org/videoeditor.html)**.** If you like his videos, want there to be more of them, and can edit videos, the world needs you!
+ [**Technical help**](http://www.nonlinearfund.org/technicalintern.html)**.** We are looking to automate some cool EA things, like an automatic EA podcast. If you have ideas and know-how on how to do that, please apply!
+ [**High impact executive assistant**](http://www.nonlinearfund.org/highimpactea.html)**.** If you like what Nonlinear is doing and want there to be more of it, help save Kat and Emerson time. Additional benefit: if you dream of traveling the world, you can travel with Kat who lives nomadically (Caribbean this winter, Europe this summer). This position can also be done remotely.
+ [**Social media**](http://www.nonlinearfund.org/socialmediainternship.html)**.** Have you spent an embarrassing amount of time figuring out how to get more likes? Use your social media addiction for the greater good!
We greatly value any feedback or suggestions you might have. Please post your questions and comments below or reach out to Kat at EAG if you are attending.
*1 - Nonlinear’s full name is The Nonlinear Fund. We will mostly refer to ourselves as Nonlinear unless the situation is sufficiently formal that the full name is worth the extra syllables.*
Appendix A - More detailed explanation of RFPs
----------------------------------------------
RFPs are frequently used in the charity sector: the original charity will “request proposals” for accomplishing a certain goal. Sometimes the goals are broad, like “decrease malaria infections in Uganda”; sometimes they’re more specific, like “hand out 10,000 bednets in the Budaka district”.
Then charities will send in applications, usually listing a plan on how they’d accomplish the goal, an explanation of why their organization is trustworthy and competent (a “CV” of the org), and a proposed budget.
The original charity reviews the applications, interviews the top contenders, then chooses the top one.
The grantee then goes and executes on the plan. There are varying degrees of management from the original charity. Sometimes it can be checking in once a month, sometimes it can be once a year. Sometimes it’s a recurring agreement, sometimes it’s a one-off. Regardless, it’s always less management time than if the charity just did it themselves. |
fceca5c1-0dbc-43e0-b344-b3841f8b9936 | trentmkelly/LessWrong-43k | LessWrong | Explanatory normality fallacy
> Only a psychologist thinks to ask why people laugh at jokes. – Someone (apparently)
A common error in trying to understand human behavior is to think something is explained because it is so intuitively familiar to you. The wrong answer to, ‘I wonder why people laugh at jokes?’ is, ‘They are funny duh’. This is an unrealistically obvious example; it can be harder to see. Why do we like art? Because it’s aesthetically pleasing. Why does sex exist? For reproduction. These are a popular variety of mind projection fallacy.
One thing that makes it much harder to see is emotional or moral overtones. A distinctive feature of morality is that it seems objectively true, so this isn’t surprising. e.g. if I say ‘I wonder why women evolved to be *so* upset about being raped?’ the wrong answer is ‘I can’t believe you just said that – rape is HORRIBLE!!!’. Why don’t humans let their disabled children die? Not ‘because they appreciate that that would be cruel’. Why do we want revenge when others have done us wrong? Not ‘because the others DESERVE IT!’ Why do humans hate incest? Not ‘because they aren’t completely depraved’.
Another thing that makes this error happen more is when the explanation is somewhat complicated even without explaining the key issue. This makes it less obvious that you haven’t said anything. Why do we enjoy some styles and features of music particularly? Because we have advanced as a civilization so much that we appreciate them. Fill this out with some more about how civilization has progressed and what some famous people have said about musical progression through time and nobody will notice you didn’t really answer.
Here’s a common combination of morality and apparent complication: Why do women hate being treated as instrumental to sexual pleasure? Because it objectifies them. Why do women hate being objectified? Because it makes people think of them as objects. Why don’t women like being thought of as objects? They get treated as objects. Why don’t |
e2e8efac-41ed-4963-a1ad-ea893424d40c | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Classifying sources of AI x-risk
There are many potential sources of x-risk from AI, and [wide disagreement/uncertainty](https://forum.effectivealtruism.org/posts/2tumunFmjBuXdfF2F/survey-on-ai-existential-risk-scenarios-1) about which are the most important. To help move towards greater clarity, it seems valuable to have a better classification of potential sources of AI x-risk. This is my quick attempt to contribute to that. I don't consider it to be fully satisfying or decisive in any way. Suggestions for improvement are very welcome!
Summary diagram
===============
See [here](https://i.ibb.co/gVszy1y/Potential-sources-of-AI-x-risk-complete.jpg) for a more comprehensive version of the diagram.

Misaligned power-seeking AI
===========================
This is the most discussed source of AI x-risk (e.g. it's what people remember from reading Superintelligence). The worry is that highly capable and strategic AI agents will have instrumental incentives to gain and maintain power—since this will help them pursue their objectives more effectively—and this will lead to the permanent disempowerment of humanity. ([More](https://forum.effectivealtruism.org/posts/3ffgjMEJ4jY4rdgJy/a-survey-of-the-potential-long-term-impacts-of-ai#Misaligned_power_seeking_AI).)
AI exacerbates other sources of x-risk
======================================
As well as causing an existential catastrophe "in itself", AI technology could exacerbate other sources of x-risk (this section), or x-risk factors (next section).
AI-enabled dystopia
-------------------
The worry here is that AI technology causes humanity to get stuck in some state that is far short of our potential. There are at least three ways that this could happen:
* **Stable totalitarianism.** AI could enable a relatively small group of people to obtain unprecedented levels of power, and to use this to control and subjugate the rest of the world for a long period of time (e.g. via advanced surveillance). ([More.](https://forum.effectivealtruism.org/posts/3ffgjMEJ4jY4rdgJy/a-survey-of-the-potential-long-term-impacts-of-ai#4_2_How_AI_could_exacerbate_inequality))
* **Value erosion.** AI could increase the extent to which competitive/evolutionary pressure is a force shaping the future, in a way that leaves humanity essentially powerless even though we don't explicitly "lose control" to AI systems. ([More](https://forum.effectivealtruism.org/posts/3ffgjMEJ4jY4rdgJy/a-survey-of-the-potential-long-term-impacts-of-ai#Value_erosion).)
* **"Lame" future.** AI could make it possible to lock in features of the world for a very long time (e.g. certain values or norms, voting mechanisms, other governance structures) and these choices could simply be "lame", leading to a worse long-term future that falls short of our potential. This scenario could also be called "insufficient reflection".[[1]](#fn-voPAB6gGYmWZfycHM-1)
AI leads to deployment of technology that causes extinction or unrecoverable collapse
-------------------------------------------------------------------------------------
AI could lead to the development and deployment of technologies that cause an existential catastrophe, by enabling faster technological progress or altering incentives. For instance:
* AI could speed up progress in biotechnology, making it easier to design or synthesise dangerous pathogens with relatively little expertise and readily available materials. ([More](https://www.liebertpub.com/doi/10.1089/hs.2019.0122)).
* AI could make full-scale nuclear war more likely, by making it easier to discover and destroy previously secure nuclear launch facilities and so undermining nuclear strategic stability. ([More](https://www.rand.org/pubs/perspectives/PE296.html).)
AI exacerbates x-risk factors
=============================
AI makes conflict more likely/severe, which is an x-risk factor
---------------------------------------------------------------
AI could make conflict more likely or severe for various reasons, for instance by:
* Enabling the development of new weapons which could cause mass destruction.
* Enabling the automation of military decision-making which introduces new and more catastrophic sources of error, e.g. rapid unintentional escalation.
* Influencing the strategic decision landscape faced by actors in a way that undermines stability or otherwise makes conflict more likely, e.g. by making it more difficult for states to explain their military decisions and so giving them a carte blanche to act more aggressively.
* ([More](https://forum.effectivealtruism.org/posts/3ffgjMEJ4jY4rdgJy/a-survey-of-the-potential-long-term-impacts-of-ai#3_2_How_AI_could_worsen_conflict).)
Conflict is a destabilising factor which reduces our ability to mitigate other potential x-risks and steer towards a flourishing future for humanity, e.g. because it erodes international trust and cooperation.
(Note: if the conflict is sufficiently severe to cause extinction or unrecoverable collapse, then it's part of the above section, not this one. This section is about conflict as a risk factor, not the final blow.)
AI degrades epistemic processes, which is an x-risk factor
----------------------------------------------------------
AI could worsen our epistemic processes: how information is produced and distributed, and the tools and processes we use to make decisions and evaluate claims. For example:
* Self-interested groups could misuse sophisticated persuasion tools (developed using AI techniques) to gain influence and/or to promote harmful ideologies.
* The world could splinter into isolated "epistemic communities" due to widespread use of persuasion tools or increasing personalisation of online experiences, even without deliberate misuse.
* The increased awareness of the above could make it harder for anyone to evaluate the trustworthiness of any information source, reducing overall trust in information.
* ([More](https://forum.effectivealtruism.org/posts/3ffgjMEJ4jY4rdgJy/a-survey-of-the-potential-long-term-impacts-of-ai#5_2_How_AI_could_worsen_epistemic_processes).)
It's likely that a degradation of epistemic processes would reduce our ability to steer towards a flourishing future, e.g. by causing a decline in trust in credible multipartisan sources, which could hamper attempts at cooperation and collective action.
S-risks from conflict between powerful AI systems
=================================================
As AI systems become more capable and integral to society, we may also need to consider potential conflicts that could arise *between* AI systems, and especially the results of strategic threats by powerful AI systems (or AI-assisted humans) against altruistic values. For example, if it's possible to create digital people (or other digital entities with moral patienthood), then advanced AI systems—even amoral ones—could be incentivised to threaten the creation of suffering digital people as a way of furthering their own goals (even if those goals are amoral). ([More](https://forum.effectivealtruism.org/posts/3ffgjMEJ4jY4rdgJy/a-survey-of-the-potential-long-term-impacts-of-ai#Risks_from_conflict_between_AI_systems).)
---
1. Using Ord's nomenclature from *The Precipice*, the "lame future" scenario is an instance of a *desired dystopia*, while the "stable totalitarianism" and "value erosion" scenarios are instances of an *enforced dystopia* and *undesired dystopia*, respectively. [↩︎](#fnref-voPAB6gGYmWZfycHM-1) |
acddc6cc-e10f-4bd1-a1e9-511013cc773f | StampyAI/alignment-research-dataset/arxiv | Arxiv | Transfer Reinforcement Learning across Homotopy Classes
I Introduction
---------------
One of the goals of transfer learning is to efficiently learn policies in tasks where sample collection is cheap and then transfer the learned knowledge to tasks where sample collection is expensive.
Recent deep reinforcement learning (Deep RL) algorithms require an extensive amount of data, which can be difficult, dangerous, or even impossible to obtain [[28](#bib.bib19 "Playing atari with deep reinforcement learning"), [37](#bib.bib22 "Deep reinforcement learning framework for autonomous driving"), [48](#bib.bib23 "Target-driven visual navigation in indoor scenes using deep reinforcement learning"), [29](#bib.bib32 "Disaster robotics")]. Practical concerns regarding sample inefficiency make transfer learning a timely problem to solve, especially in the context of RL for robotics. Robots should be able to efficiently transfer knowledge from related tasks to new ones. For instance, consider an assistive robot that learns to feed a patient with a neck problem. The robot could not learn a sophisticated feeding policy when directly trained with a disabled patient in-the-loop, due to the limited number of interactions with the patient. Instead, the robot can learn how to feed abled-bodies, where it is easier to obtain data, and transfer the learned knowledge to the setting with the disabled patient using only a few samples.
We study transfer in the reinforcement learning setting where different tasks are parameterized by their reward function. While this problem and its similar variants have been studied using approaches like meta-RL [[13](#bib.bib74 "Rl2: fast reinforcement learning via slow reinforcement learning"), [45](#bib.bib35 "Learning to reinforcement learn"), [30](#bib.bib78 "Learning to adapt in dynamic, real-world environments through meta-reinforcement learning"), [15](#bib.bib79 "Model-agnostic meta-learning for fast adaptation of deep networks"), [19](#bib.bib80 "Meta-reinforcement learning of structured exploration strategies")], multitask learning [[34](#bib.bib81 "An overview of multi-task learning in deep neural networks"), [42](#bib.bib77 "Distral: robust multitask reinforcement learning")], and successor features [[1](#bib.bib11 "Successor features for transfer in reinforcement learning")], fine-tuning as an approach for transfer learning in RL is still not well-explored. Fine-tuning is an important method to study for two reasons. First, it is a widely-used transfer learning approach that is very well-studied in supervised learning [[27](#bib.bib29 "Unsupervised and transfer learning challenge: a deep learning approach"), [21](#bib.bib30 "Reducing the dimensionality of data with neural networks"), [47](#bib.bib31 "How transferable are features in deep neural networks?")], but the limits of fine-tuning have been less studied in RL. Second, compared to peer approaches, fine-tuning does not require strong assumptions about the target domain, making it a general and easily applicable approach. *Our goal is to broaden our understanding of transfer in RL by exploring when fine-tuning works, when it doesn’t, and how we can overcome its challenges.* Concretely, we consider fine-tuning to be more efficient when it requires less interactive steps with the target environment.
In this paper, we find that fine-tuning does not always work as expected when transferring between rewards whose corresponding trajectories belong to different homotopy classes. A homotopy class is traditionally defined as a class of trajectories that can be continuously deformed to one another without colliding with any barriers [[4](#bib.bib14 "Topological constraints in search-based robot path planning")], see Fig. [1](#S3.F1 "Fig. 1 ‣ III Fine-tuning across Homotopy Classes ‣ Transfer Reinforcement Learning across Homotopy Classes") (a). In this work, we generalize the notion of barriers to include any set of states that incur a large negative reward. These states lead to phase transitions (discontinuities) in the reward function. We assume that we know these barriers (and therefore homotopy classes) beforehand, which is equivalent to assuming knowledge of the reward functions. Knowing the reward function a-priori is a commonly made assumption in many robotics tasks, such as knowing goals [[24](#bib.bib1 "Reinforcement learning in robotics: a survey"), [23](#bib.bib7 "Qt-opt: scalable deep reinforcement learning for vision-based robotic manipulation"), [33](#bib.bib6 "Multi-goal reinforcement learning: challenging robotics environments and request for research")] or having domain knowledge of unsafe states beforehand [[17](#bib.bib5 "Safe exploration of state and action spaces in reinforcement learning"), [44](#bib.bib4 "Safe exploration in finite markov decision processes with gaussian processes")]. Also, reinforcement learning algorithms naturally assume that the reward function is available [[40](#bib.bib8 "Reinforcement learning: an introduction")]. Generalizing the notion of barriers allows us to go beyond robotics tasks classically associated with homotopy classes, e.g., navigation around barriers, to include tasks like assistive feeding. *Our key insight is that fine-tuning continuously changes policy parameters and that leads to continuously deforming trajectories.* Hence, fine-tuning across homotopy classes will induce trajectories that intersect with barriers. This will introduce a high loss and gradients that point back to the source policy parameters. So it is difficult to fine-tune the policy parameters across homotopy classes. To address this challenge, we propose a novel Ease-In-Ease-Out fine-tuning approach consisting of two stages: a Relaxing Stage and a Curriculum Learning Stage. In the Relaxing Stage, we relax the barrier constraint by removing it. Then, in the Curriculum Learning Stage, we develop a curriculum starting from the relaxed reward to the target reward that gradually adds the barrier constraint back.
The contributions of the paper are summarized as follows:
* We introduce the idea of using homotopy classes as a way of characterizing the difficulty of fine-tuning in reinforcement learning. We extend the definition of homotopy classes to general cases and demonstrate that fine-tuning across homotopy classes requires more interaction steps with the environment than fine-tuning within the same homotopy class.
* We propose a novel Ease-In-Ease-Out fine-tuning approach that fine-tunes across homotopy classes, and consists of a relaxing and a curriculum learning stage.
* We evaluate Ease-In-Ease-Out fine-tuning on a variety of robotics-inspired environments and show that our approach can learn successful target policies with fewer interaction steps than other fine-tuning approaches.
II Related Work
----------------
Fine-tuning.
Fine-tuning is well-studied in the space of supervised learning [[25](#bib.bib68 "Autonomous reinforcement learning on raw visual input data in a real world application"), [16](#bib.bib69 "Deep spatial autoencoders for visuomotor learning"), [32](#bib.bib70 "Supersizing self-supervision: learning to grasp from 50k tries and 700 robot hours"), [26](#bib.bib66 "End-to-end training of deep visuomotor policies"), [18](#bib.bib67 "Robot learning in homes: improving generalization and reducing dataset bias"), [11](#bib.bib65 "Imagenet: a large-scale hierarchical image database"), [36](#bib.bib82 "Sim2real viewpoint invariant visual servoing by recurrent control")]. Approaches such as L2-SP penalize the Euclidean distance of source and fine-tuned weights [[46](#bib.bib48 "Explicit inductive bias for transfer learning with convolutional networks")]. Batch Spectral Shrinkage penalizes small singular values of model features so that untransferable spectral components are repressed [[7](#bib.bib46 "Catastrophic forgetting meets negative transfer: batch spectral shrinkage for safe transfer learning")]. Progressive Neural Networks (PNN) transfer prior knowledge by merging the source feature into the target feature at the same layer [[35](#bib.bib26 "Progressive neural networks")]. These works achieve state-of-the-art fine-tuning performance in supervised learning; however, directly applying fine-tuning methods to transfer RL does not necessarily lead to successful results as supervised learning and reinforcement learning differ in many factors such as access to labeled data or the loss function optimized by each paradigm [[2](#bib.bib24 "Reinforcement learning and its relationship to supervised learning")]. We compare our approach with these fine-tuning methods for transfer RL.
In fine-tuning for robotics, a robot usually pre-trains its policy on a general source task, where there is more data available, and then fine-tunes to a specific target task. Recent work in vision-based manipulation shows that fine-tuning for off-policy RL algorithms can successfully adapt to variations in state and dynamics when starting from a general grasping policy [[22](#bib.bib64 "Efficient adaptation for end-to-end vision-based robotic manipulation")]. As another example, RoboNet trains models on different robot platforms and fine-tunes them to unseen tasks and robots [[9](#bib.bib63 "RoboNet: large-scale multi-robot learning")]. A key difference is that our work proposes a systematic approach using homotopy classes for discovering when fine-tuning can succeed or fail. This is very relevant to existing literature in this domain, as our approach can explain *why* a general policy, e.g., a general grasping policy, *can* or *cannot* easily be fine-tuned to more specific settings.
Transfer Reinforcement Learning. There are several lines of work for transfer RL including successor features, meta-RL and multitask learning. We refer the readers to [[41](#bib.bib51 "Transfer learning for reinforcement learning domains: a survey")] for a comprehensive survey. We compare these works to our approach below.
Successor Features. Barreto et al., address the same reward transfer problem as ours by learning a universal policy across tasks based on successor features [[1](#bib.bib11 "Successor features for transfer in reinforcement learning")]. However, this work makes a number of assumptions about the structure of the reward function and requires that the rewards between source and target tasks be close to each other, while our work has no such constraints.
Meta-RL.
Meta learning provides a generalizable model from multiple (meta-training) tasks to quickly adapt to new (meta-test) tasks. There are various Meta RL methods including RNN-based [[13](#bib.bib74 "Rl2: fast reinforcement learning via slow reinforcement learning"), [45](#bib.bib35 "Learning to reinforcement learn"), [30](#bib.bib78 "Learning to adapt in dynamic, real-world environments through meta-reinforcement learning")], gradient-based [[15](#bib.bib79 "Model-agnostic meta-learning for fast adaptation of deep networks"), [19](#bib.bib80 "Meta-reinforcement learning of structured exploration strategies"), [30](#bib.bib78 "Learning to adapt in dynamic, real-world environments through meta-reinforcement learning")], or meta-critic approaches [[39](#bib.bib76 "Learning to learn: meta-critic networks for sample efficient learning")]. The gradient-based approach is the most related to our work, which finds policy parameters (roughly akin to finding a source task) that enable fast adaptation via fine-tuning. Note that all meta-RL approaches assume that agents have access to environments or data of meta-training tasks, which is not guaranteed in our setting. Here our focus is to discover when fine-tuning is generally challenging based on homotopy classes. In our experiments we compare our algorithm to core fine-tuning approaches rather than techniques that leverage ideas from fine-tuning or build upon them.
Multitask learning. Other works transfer knowledge by simultaneously learning multiple tasks or goals [[34](#bib.bib81 "An overview of multi-task learning in deep neural networks")]. In these works, transfer is enabled by learning shared representations of tasks and goals [[10](#bib.bib75 "Multi-task policy search for robotics"), [42](#bib.bib77 "Distral: robust multitask reinforcement learning"), [34](#bib.bib81 "An overview of multi-task learning in deep neural networks"), [31](#bib.bib83 "Visual reinforcement learning with imagined goals")]. In our work, we consider the setting where tasks are learned sequentially.
Regularization. Cobbe et al’s work [[8](#bib.bib85 "Quantifying generalization in reinforcement learning")] proposes a metric to quantify the generalization ability of RL algorithms and compare the effects of different regularization techniques on generalization. The paper compares the effects of deeper networks, batch normalization, dropout, L2 Regularization, data augmentation and stochasticity (ϵ-greedy action selection and entropy bonus). The proposed techniques are designed for general purpose transfer reinforcement learning but are not specially designed for transfer reinforcement learning across homotopy classes. We compare our approach against using deeper networks, dropout, and entropy bonuses in our Navigation and Lunar Lander experiments and show that these techniques alone are not sufficient to transfer across homotopy classes (see supplementary materials).
III Fine-tuning across Homotopy Classes
----------------------------------------
In transfer reinforcement learning, our goal is to fine-tune from a source task to a target task. We formalize a task using a Markov Decision Process M = ⟨S, A, p, R, ρ_0, γ⟩, where S is the state space, A is the action space, p : S × A × S → [0,1] is the transition probability, ρ_0 is the initial state distribution, R : S × A → ℝ is the reward function, and γ is the discount factor. We denote M_s as the source task and M_t as the target task. We assume that M_s and M_t differ only in their reward functions, i.e., R_s ≠ R_t. These different reward functions across the source and target task can, for instance, capture different preferences or constraints of the agent. A stochastic policy π : S × A → [0,1] defines a probability distribution over actions in a given state. The goal of RL is to learn an optimal policy π*, which maximizes the expected discounted return η_π = E_{ξ∼π}[G(ξ)] = E_{s_0∼ρ_0, π}[∑_{τ=0}^{∞} γ^τ R(s_τ, a_τ)]. We define a trajectory to be the sequence of states the agent has visited over time, ξ = {s_0, s_1, …}, and denote ξ* as a trajectory produced by the optimal policy π*. We assume that the optimal policy for the source environment, π*_s, is available or can be easily learned. Our goal is then to leverage knowledge from π*_s to learn the optimal policy π*_t for task M_t. We aim to learn π*_t with substantially fewer training samples than other comparable fine-tuning approaches.
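To make these quantities concrete, here is a minimal Python sketch (not from the paper) of the discounted return and a Monte Carlo estimate of the expected return η_π; the `env` interface (`reset`/`step` returning state, reward, done) and the `policy` callable are illustrative assumptions.

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """G(xi) = sum_tau gamma^tau * R(s_tau, a_tau) for a single trajectory."""
    return sum((gamma ** tau) * r for tau, r in enumerate(rewards))

def expected_return(policy, env, gamma=0.99, n_rollouts=100):
    """Monte Carlo estimate of eta_pi = E_{xi ~ pi}[G(xi)].

    Assumes a hypothetical env with reset() -> state and
    step(action) -> (state, reward, done)."""
    returns = []
    for _ in range(n_rollouts):
        state, done, rewards = env.reset(), False, []
        while not done:
            state, reward, done = env.step(policy(state))
            rewards.append(reward)
        returns.append(discounted_return(rewards, gamma))
    return float(np.mean(returns))
```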

Fig. 1: (a) Homotopy classes. ξ1 and ξ2 are part of the same homotopy class because they can be continuously deformed into each other. (b) Generalized homotopy classes with expanded definitions of start, end, and barrier states. (c) Fine-tuning problem from *left* side to the *right* side. The goal is to find π∗t that produces ξ∗t.
### III-A Homotopy Classes
Homotopy classes are formally defined by homotopic trajectories in navigation scenarios in [[4](#bib.bib14 "Topological constraints in search-based robot path planning")]:
###### Definition III.1.
Homotopic Trajectories and Homotopy Class. Two trajectories ξ_1, ξ_2 connecting the same initial and end points s_i, s_g are homotopic if and only if one can be continuously deformed into the other without intersecting any barriers. Homotopic trajectories are clustered into a homotopy class. (Footnote 1: Even though the presence of a single obstacle introduces infinitely many homotopy classes, in most applications we can work with a finite number of them, which can be formalized by the concept of Z2-homology [[6](#bib.bib12 "Measuring and computing natural generators for homology groups")]. For algorithms that compute these homology classes see [[6](#bib.bib12 "Measuring and computing natural generators for homology groups")].)
Fig. [1](#S3.F1 "Fig. 1 ‣ III Fine-tuning across Homotopy Classes ‣ Transfer Reinforcement Learning across Homotopy Classes") (a) illustrates a navigation scenario with two homotopy classes H1 and H2 separated by a red barrier. ξ1 and ξ2 can be continuously deformed into each other without intersecting the barrier, and hence are in the same homotopy class.
Generalization. The original definition of homotopy classes is limited to navigation scenarios with deterministic trajectories and the same start and end states. We generalize this definition to encompass a wider range of tasks in three ways.
Firstly, we account for tasks where there could be more than one feasible initial and end state. We generalize the initial and end points si,sg to a set of states Si and Sg, where Si contains all the possible starting states and Sg contains all possible ending states as shown in Fig. [1](#S3.F1 "Fig. 1 ‣ III Fine-tuning across Homotopy Classes ‣ Transfer Reinforcement Learning across Homotopy Classes") (b).
Secondly, we generalize the notion of a barrier to be a set of states that are penalized with large negative rewards, S_b = {s | R(s,a) ← R′(s,a) − M}, where M is a large positive number and R′(s,a) is the reward without any barriers. Large negative rewards correspond to any negative phase transitions or discrete jumps in the reward function. Importantly, the generalized ‘barrier’ allows us to define homotopy classes in tasks without physical barriers by penalizing states with large negative rewards (see our Assistive Feeding experiment). Although source and target tasks differ in reward functions, they share the same barrier states.
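As an illustrative aside (not the paper's implementation), the generalized barrier can be expressed as a reward wrapper; `base_reward`, `in_barrier`, and the value of `M` below are hypothetical placeholders.

```python
def make_barrier_reward(base_reward, in_barrier, M=1000.0):
    """Generalized barrier: R(s, a) = R'(s, a) - M if s is a barrier state,
    and R(s, a) = R'(s, a) otherwise."""
    def reward(state, action):
        r = base_reward(state, action)
        return r - M if in_barrier(state) else r
    return reward
```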
Thirdly, we need to generalize the notion of continuously deforming *trajectories* to *trajectory distributions* when considering stochastic policies. We appeal to a distribution distance metric, the Wasserstein-∞ (W_∞) metric, which penalizes *jumps* (discontinuities) between trajectory distributions induced by stochastic policies. We can now define our generalized notion of homotopic trajectories.
###### Definition III.2.
General Homotopic Trajectories. Two trajectories ξ_1, ξ_2 with distributions μ_1 and μ_2, with initial states s_i ∈ S_i and final states s_g ∈ S_g, are homotopic if and only if one can be continuously deformed into the other in the W_∞ metric without receiving large negative rewards. Definitions for the W_∞ metric and W_∞-continuity are in Section I of the supplementary materials.
General homotopic trajectories are depicted in Fig. [1](#S3.F1 "Fig. 1 ‣ III Fine-tuning across Homotopy Classes ‣ Transfer Reinforcement Learning across Homotopy Classes") (b). The generalized definition of a homotopy class is the set of general homotopic trajectories. Note that using the W∞ metric is crucial here. Homotopic equivalence of stochastic policies according to other distances like total variation, KL-divergence, or even W1 is usually trivial because distributions that even have a tiny mass on all deterministic homotopy classes become homotopically equivalent. On the other hand, in the W∞ metric, the distance between distributions that tweak the percentages, even by a small amount, would be at least the minimum distance between trajectories in different deterministic homotopy classes, which is a constant. So to go from one distribution over trajectories to another one with different percentages, one has to make a *jump* according to the W∞ metric.
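For reference, the standard definition of the Wasserstein-∞ distance between two distributions μ and ν on a metric space (X, d) can be written as follows (this is the textbook formulation; the paper's supplementary materials may state it slightly differently):

$$
W_\infty(\mu, \nu) = \inf_{\gamma \in \Gamma(\mu, \nu)} \operatorname*{ess\,sup}_{(x, y) \sim \gamma} d(x, y),
$$

where $\Gamma(\mu, \nu)$ denotes the set of couplings of $\mu$ and $\nu$. Under this metric, shifting even a small amount of probability mass between trajectories in different deterministic homotopy classes costs at least the minimum distance between those classes, which is the *jump* referred to above.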
### III-B Challenges of Fine-tuning across Homotopy Classes
Running Example. We explain a key optimization issue caused by barriers when fine-tuning across homotopy classes. We illustrate this problem in Fig. [1](#S3.F1 "Fig. 1 ‣ III Fine-tuning across Homotopy Classes ‣ Transfer Reinforcement Learning across Homotopy Classes") (b). An agent must learn to navigate to its goal sg∈Sg without colliding with the barrier. Assuming that the agent only knows how to reach sg by swerving right, denoted by ξ∗s, we want to learn how to reach sg by swerving left (i.e., find π∗t).
We show how the barrier prevents fine-tuning from source to target in Fig. [2](#S3.F2 "Fig. 2 ‣ III-B Challenges of Fine-tuning across Homotopy Classes ‣ III Fine-tuning across Homotopy Classes ‣ Transfer Reinforcement Learning across Homotopy Classes"). This figure depicts the loss landscape for the target task with and without barriers. All policies are parameterized by a single parameter vector θ ∈ ℝ² and optimized with the vanilla policy gradient algorithm [[3](#bib.bib56 "Infinite-horizon policy-gradient estimation")]. Warmer regions indicate higher losses in the target task whereas cooler regions indicate lower losses.
Policies that collide with barriers cause large losses shown by the hump in Fig. [2](#S3.F2 "Fig. 2 ‣ III-B Challenges of Fine-tuning across Homotopy Classes ‣ III Fine-tuning across Homotopy Classes ‣ Transfer Reinforcement Learning across Homotopy Classes") (b). Gradients point away from this large loss region, so it is difficult to cross the hump without a sufficiently large step size. In contrast, in Fig. [2](#S3.F2 "Fig. 2 ‣ III-B Challenges of Fine-tuning across Homotopy Classes ‣ III Fine-tuning across Homotopy Classes ‣ Transfer Reinforcement Learning across Homotopy Classes") (a), the loss landscape without the barrier is smooth, so fine-tuning is easy to converge. Details on the landscape plots are in Section IV of the supplementary materials.

Fig. 2: (a) Loss landscape of our running example without a barrier. The top-down pictures illustrate the gradient steps taken when fine-tuning from source to target tasks. (b) Loss landscape with a barrier. Barriers create gradients away from it that make it difficult to fine-tune from source to target tasks.
We now formally investigate how discontinuities in trajectory space caused by barriers affect fine-tuning of model-free RL algorithms. We let the model parameterized by θ induce a policy π_θ, and define the loss for the model to be ℓ(θ). We assume that ℓ(θ) is high when the expected return η_{π_θ} = E_{ξ∼π_θ}[G(ξ)] is low. This assumption is satisfied in common model-free RL algorithms such as vanilla policy gradient. We optimize our policy using gradient descent with step size α: θ_{k+1} = θ_k − α ∇_θ ℓ(θ)|_{θ_k}. We can now define what it means to fine-tune from one task to another.
Let θ*_s be the optimal set of parameters that minimizes the cost function on the source task. Using ℓ_t(θ) as the loss for the target reward, fine-tuning from M_s to M_t for n gradient steps is defined as: θ_1 ← θ*_s and θ_{k+1} = θ_k − α ∇_θ ℓ_t(θ)|_{θ_k} for k = 1, …, n.
We consider a policy to have successfully fine-tuned to M_t if its expected return is less than ϵ away from the expected return of the optimal target policy π*_t for some small ϵ, i.e.,
|η_t(π_θ) − η_t(π*_t)| < ϵ.
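A minimal sketch of this naive fine-tuning procedure and the success criterion, assuming the parameters are a NumPy array and that `grad_target_loss(theta)` returns ∇_θ ℓ_t(θ); both names are illustrative placeholders, not the paper's code.

```python
import numpy as np

def finetune(theta_source, grad_target_loss, alpha=1e-3, n_steps=1000):
    """Naive fine-tuning: theta_1 <- theta*_s, then
    theta_{k+1} = theta_k - alpha * grad l_t(theta_k)."""
    theta = np.array(theta_source, dtype=float)
    for _ in range(n_steps):
        theta = theta - alpha * grad_target_loss(theta)
    return theta

def finetune_succeeded(eta_target, eta_target_optimal, eps=0.05):
    """Success criterion: |eta_t(pi_theta) - eta_t(pi*_t)| < eps."""
    return abs(eta_target - eta_target_optimal) < eps
```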
We now theoretically analyze why it is difficult to fine-tune across homotopy classes. Due to the space limit, we only include our main theorem and remark in the paper. We refer readers to the supplementary materials for the proofs.
###### Definition III.3.
W∞-continuity of policy.
A policy πθ parameterized by θ is W∞-continuous if the mapping (θ)↦πθ(s,a), which maps a vector of parameters in a metric space to a distribution over state-actions is continuous in W∞ metric.
###### Definition III.4.
W∞-continuity of transition probability function.
An MDP M with transition probability function p is called W∞-continuous if the mapping (s,a)↦p(s,a,⋅) which maps a state-action pair in a metric space to a distribution over states is continuous in W∞ metric.
###### Theorem 1.
Assume that πθ is a parametrized policy for an MDP M. If both πθ and M are W∞-continuous, then a continuous change of policy parameters θ results in a continuous deformation of the induced random trajectory in the W∞ metric.
However, continuous deformations of the trajectories do not ensure continuous changes of their corresponding policy parameters.
Note that the theorem also applies to deterministic policies. For deterministic policies W∞-continuity is the same as the classical notion of continuity. Theorem [1](#Thmtheorem1 "Theorem 1. ‣ III-B Challenges of Fine-tuning across Homotopy Classes ‣ III Fine-tuning across Homotopy Classes ‣ Transfer Reinforcement Learning across Homotopy Classes") bridges the idea of changes in policy parameters with trajectory deformation. To use this theorem, we need assumptions on the learning rate α and bounds on the gradients.
Specifically, for any θ_1 and θ_2 induced by policies in two different homotopy classes, we need to assume ‖θ_1 − θ_2‖ > α · max(‖∇_θ ℓ_t(θ)|_{θ_1}‖, ‖∇_θ ℓ_t(θ)|_{θ_2}‖). With such a small enough learning rate α, fine-tuning will always induce trajectories that visit the barrier states S_b.
###### Remark 2.
Intuitively, the conclusion we should reach from Theorem [1](#Thmtheorem1 "Theorem 1. ‣ III-B Challenges of Fine-tuning across Homotopy Classes ‣ III Fine-tuning across Homotopy Classes ‣ Transfer Reinforcement Learning across Homotopy Classes") is that fine-tuning model parameters across homotopy classes is more difficult, or even infeasible, in terms of the number of interaction steps in the environment compared to fine-tuning within the same homotopy class; this is under the assumptions that the transition probability function and policy of M are W_∞-continuous, the learning rate is sufficiently small, and the gradients are bounded. (Footnote 2: Modern optimizers and large step sizes can help evade local minima but risk making training unstable when step sizes are too large.)
IV Ease-In-Ease-Out Fine-tuning Approach
-----------------------------------------
Our insight is that even though there are states with large negative rewards that make fine-tuning difficult across homotopy classes, there is still useful information that can be transferred across homotopy classes. Specifically, we first *ease in* or *relax* the problem by removing the negative reward associated with barriers, which enables the agent to focus on fine-tuning towards target reward without worrying about large negative rewards. We then *ease out* by gradually reintroducing the negative reward via a *curriculum*. We assume the environment is alterable in order to remove and re-introduce barrier states. In most cases, this requires access to a simulator, which is a common assumption in many robotics applications [[12](#bib.bib3 "CARLA: an open urban driving simulator"), [14](#bib.bib53 "Assistive gym: a physics simulation framework for assistive robotics"), [5](#bib.bib9 "Openai gym"), [38](#bib.bib2 "IGibson, a simulation environment for interactive tasks in large realisticscenes")]. We assume that during the relaxing stage as well as each subsequent curriculum stage, we are able to converge to an approximately optimal policy for that stage using reinforcement learning.

Fig. 3: Ease-In-Ease-Out fine-tuning approach for the single barrier set case. The red represents the negative reward associated with the barrier. (a) Source task. (b) Relaxing stage. The resulting policy produces trajectories that lean toward the left but remain fairly centered.
(c)-(e) Curriculum learning stage with K=3. We introduce larger subsets of the barrier states set Sb and fine-tune. This results in trajectories that are slowly pushed towards the left.
Ease In: Relaxing Stage.
In the relaxing stage, we remove the barrier penalty from the reward function, i.e., ∀ s ∈ S_b, R_t^relax(s,a) = R′(s,a). We denote the target MDP with the relaxed reward function as M_t^relax. Note that we do not physically remove the barriers, so the transition function does not change. We start from π*_s and train the policy in M_t^relax to obtain π*_relax. The relaxation removes the large losses incurred by the barriers, making fine-tuning much easier than naïve fine-tuning.
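A minimal sketch of the ease-in stage under assumed helper names (`train` stands in for any RL algorithm, `env_factory` builds the task with a given reward function; neither is from the paper).

```python
def ease_in_relax(train, env_factory, base_reward, pi_source):
    """Ease-in: drop the barrier penalty, i.e. use R'(s, a) everywhere,
    then fine-tune the source-optimal policy on this relaxed target task."""
    relaxed_env = env_factory(reward_fn=base_reward)  # no -M penalty anywhere
    pi_relax = train(relaxed_env, init_policy=pi_source)
    return pi_relax
```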
Ease Out: Curriculum Learning Stage.
The relaxing stage finds an optimal policy π*_relax for M_t^relax. We now need to learn the optimal policy for the original target MDP M_t, which actually penalizes barrier states with a large penalty −M. We develop two curricula to gradually introduce this penalty.
(1) Reward Weight (general case). We design a general curriculum that can be used for any environment by gradually increasing the penalty from 0 to M using a series of values α_1, …, α_K satisfying 0 < α_1 < α_2 < ⋯ < α_K = 1. We redefine our reward function to include intermediary values:

R_cur(s, a; α_k) = R′(s, a) − α_k·M   if s ∈ S_b
R_cur(s, a; α_k) = R′(s, a)           if s ∉ S_b        (1)

This allows us to define a sequence of corresponding tasks M_t^0, …, M_t^K where M_t^0 ≡ M_t^relax and M_t^K ≡ M_t. For each new task M_t^k, we initialize the policy with the previous task’s optimal policy π*_{θ_{k−1}} and train it using the reward R_cur(s, a; α_k). The detailed algorithm is shown in Algorithm 1 in Section III of the supplementary materials.
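A sketch of the ease-out stage with the reward-weight curriculum, using the same assumed `train`/`env_factory` helpers as above; the α schedule shown is an arbitrary example, not the paper's.

```python
def ease_out_reward_weight(train, env_factory, base_reward, in_barrier,
                           pi_relax, alphas=(0.25, 0.5, 0.75, 1.0), M=1000.0):
    """Gradually scale the barrier penalty from 0 up to M.  Each curriculum
    stage is initialized with the previous stage's policy."""
    policy = pi_relax
    for alpha_k in alphas:  # 0 < alpha_1 < ... < alpha_K = 1
        def reward_k(state, action, a=alpha_k):  # bind alpha_k per stage
            r = base_reward(state, action)
            return r - a * M if in_barrier(state) else r
        policy = train(env_factory(reward_fn=reward_k), init_policy=policy)
    return policy
```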
(2) Barrier Set Size.
When there is only a single barrier set Sb (i.e., Sb is connected), we can also build a curriculum around the set itself. Here, we keep the −M penalty but gradually increase the set of states that incur this penalty. We can guarantee that our algorithm always converges as we discuss in our analysis section below.
To build a curriculum, we can choose any state s ∈ S_b as our initial set and gradually inflate this set to S_b by connecting more and more states together. (Footnote 3: A connected path is defined differently for continuous and discrete state spaces. For example, in continuous state spaces, a connected path means a continuous path.) For example, we can connect new states that are within some radius of the current set. This allows us to define a series of connected sets S_b^1, …, S_b^K satisfying ∅ ⊂ S_b^1 ⊂ S_b^2 ⊂ ⋯ ⊂ S_b^K = S_b.
We can then similarly redefine our reward function and parameterize it by the intermediary barrier sets S_b^k:

R_cur(s, a; S_b^k) = R′(s, a) − M   if s ∈ S_b^k
R_cur(s, a; S_b^k) = R′(s, a)       if s ∉ S_b^k        (2)

Note that the sets S_b^k only change the reward associated with the states, not the dynamics.
Curriculum learning by evolving the barrier set size is more interpretable and controllable than the general reward weight approach, since for each task M_t^k the agent learns a policy that avoids a known subset of states, S_b^k. In the general reward weight approach, it is unclear which states the resulting policy will never visit. A shortcoming of the barrier set size approach is that the convergence guarantee is limited to single barriers, because with multiple barriers we may not find an initial set S_b^1 as described in Lemma [4](#Thmtheorem4 "Lemma 4. ‣ IV Ease-In-Ease-Out fine-tuning Approach ‣ Transfer Reinforcement Learning across Homotopy Classes"). The algorithm for the barrier set approach follows the same structure as Algorithm 1.
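A sketch of the barrier-set-size variant, under the same assumed helpers; `barrier_sets` is a hypothetical list of membership tests for S_b^1 ⊂ … ⊂ S_b^K = S_b.

```python
def ease_out_barrier_sets(train, env_factory, base_reward, barrier_sets,
                          pi_relax, M=1000.0):
    """Keep the full penalty -M but apply it to a growing sequence of barrier
    sets; each stage is initialized with the previous stage's policy."""
    policy = pi_relax
    for in_barrier_k in barrier_sets:
        def reward_k(state, action, member=in_barrier_k):  # bind current set
            r = base_reward(state, action)
            return r - M if member(state) else r
        policy = train(env_factory(reward_fn=reward_k), init_policy=policy)
    return policy
```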
Analysis.
For both curriculum learning by reward weight and by barrier set size, if the agent can successfully find an optimal policy at every intermediary task, then we can find π*_t for M_t. For the reward weight approach, we cannot prove that the optimal policy for M_t^k is guaranteed to be obtained at every stage k, but we can still state the following proposition:
###### Proposition 3.
For curriculum learning by reward weight, in every stage, the learned policy achieves a higher reward than the initialized policy evaluated on the final target task.
Though the reward weight approach is not guaranteed to achieve the optimal policy in every curriculum step, the policy improves with respect to the final target reward. Each curriculum step is much easier than the original direct fine-tuning problem, which increases the possibility for successful fine-tuning. For the barrier set size approach, we prove that in every stage, the optimal policy for each stage is achievable. To learn an optimal policy in each stage, finding Sb1 is key:
###### Lemma 4.
There exists Sb1 that divides the trajectories of π∗s and π∗relax into two homotopy classes.
We propose an approach for finding Sb1 in Algorithm 2 in Section III of the supplementary materials.
###### Proposition 5.
A curriculum starting with S_b^1 as described in Lemma [4](#Thmtheorem4 "Lemma 4. ‣ IV Ease-In-Ease-Out fine-tuning Approach ‣ Transfer Reinforcement Learning across Homotopy Classes") and inflating to S_b with sufficiently small changes in each step, i.e., changes small enough for reinforcement learning to find trajectories that do not visit the barrier states, can always learn the optimal policy π*_t for the final target reward.
V Experiments
--------------
We evaluate our approach on four axes of complexity:
(1) the size of barrier,
(2) the number of barriers,
(3) barriers in 3D environments, and
(4) barriers that are not represented by physical obstacles but by a set of ‘undesirable’ states.
To evaluate these axes, we use various domains including navigation (Figs. [4](#S5.F4 "Fig. 4 ‣ V Experiments ‣ Transfer Reinforcement Learning across Homotopy Classes"), [5](#S5.F5 "Fig. 5 ‣ V Experiments ‣ Transfer Reinforcement Learning across Homotopy Classes")), lunar lander (Fig. [6](#S5.F6 "Fig. 6 ‣ V Experiments ‣ Transfer Reinforcement Learning across Homotopy Classes") Left), fetch reach (Fig. [6](#S5.F6 "Fig. 6 ‣ V Experiments ‣ Transfer Reinforcement Learning across Homotopy Classes") Right), mujoco ant (Fig. [7](#S5.F7 "Fig. 7 ‣ V Experiments ‣ Transfer Reinforcement Learning across Homotopy Classes")), and assistive feeding task (Fig. [8](#S5.F8 "Fig. 8 ‣ V Experiments ‣ Transfer Reinforcement Learning across Homotopy Classes")). We compare our approach against naïve fine-tuning (Fine-tune) as well as three state-of-the-art fine-tuning approaches: Progressive Neural Networks (PNN) [[35](#bib.bib26 "Progressive neural networks")], Batch Spectral Shrinkage (BSS) [[7](#bib.bib46 "Catastrophic forgetting meets negative transfer: batch spectral shrinkage for safe transfer learning")], and L2-SP [[46](#bib.bib48 "Explicit inductive bias for transfer learning with convolutional networks")]. We also add training on the target task from a random initialization (Random) as a reference, but we do not consider Random as a comparable baseline because it is not a transfer learning algorithm.
We evaluate all the experiments using the total number of interaction steps it takes to reach within some small distance of the desired return in the target task. We report the average number of interaction steps in units of 1,000 (lower is better). We indicate statistically significant differences (p < 0.05) with baselines by listing the first letter of those baselines. We ran the Navigation (barrier sizes) and Fetch Reach experiments with 5 random seeds and the rest with 10 random seeds. If more than half of the runs exceeded the maximum number of interaction steps without reaching the desired target task reward, we report the task as unachievable within the maximum number of interaction steps. Finally, we use stochastic policies, which is why our source and target policies may not be symmetrical.
Experiment details are in Section V of the supplementary materials.

Fig. 4: Navigation environment with barrier size 5.
| Method | Barrier size 1 | Barrier size 3 | Barrier size 5 | Barrier size 7 |
| --- | --- | --- | --- | --- |
| Ours | 117.4 ± 128.6 | 162.6 ± 70.5 (f) | 102.7 ± 87.8 (l,b) | 112.3 ± 111.3 (l,b,f) |
| PNN | 92.2 ± 102 | 138.6 ± 92.1 | 159.8 ± 90.6 | 119.2 ± 125 |
| L2-SP | 138.2 ± 61.3 | >256 | >256 | >256 |
| BSS | >256 | >256 | >256 | >256 |
| Fine-tune | 141.1 ± 53 | >256 | 157 ± 100 | 241 ± 27.5 |
| Random | 54.6 ± 61.5 | 88.4 ± 59.4 | 145 ± 74.8 | 77.1 ± 40.6 |
TABLE I: Larger barrier sizes make fine-tuning more challenging. Our approach performs comparably with small sizes and outperforms other methods with large sizes. We only use one curriculum step, so the reward weight and the barrier set size approaches are the same.
1. Navigation.
We address the first two axes by analyzing our problem under varying barrier sizes and varying number of homotopy classes. We experiment with our running example where an agent must navigate from a fixed start position si to the goal set Sg (green area).
*Varying Barrier Sizes.* We investigate how varying the size of the barrier affects the fine-tuning problem going from Right to Left. Here, we use a one-step curriculum so the barrier set size and reward weight approaches are the same. Table [I](#S5.T1 "TABLE I ‣ V Experiments ‣ Transfer Reinforcement Learning across Homotopy Classes") demonstrates that when barrier sizes are small (1,3), our approach is not the most sample efficient, but remains comparable to other methods. With larger barrier sizes (5, 7), we find that our method requires the least amount of training updates. This result suggests that our approach is especially useful when barriers are large (i.e., fine-tuning is hard). When fine-tuning is easy, simpler approaches like starting from a random initialization can be used.

Fig. 5: Navigation environment with four homotopy classes.
| Method | LL → LR | LL → RL | LL → RR |
| --- | --- | --- | --- |
| Ours: barrier | 88.1 ± 3.2 | 52.1 ± 14.2 | 48.1 ± 13.2 (p,l,b,f) |
| Ours: reward | 63.2 ± 9.1 (p,l,b,f) | 47.1 ± 10.9 (p,l,b,f) | 56.5 ± 9.9 |
| PNN | 101.9 ± 37.2 | >300 | 119.2 ± 36.4 |
| L2-SP | 130.6 ± 28.6 | >300 | >300 |
| BSS | >300 | >300 | >300 |
| Fine-tune | 141.2 ± 12.1 | >300 | >300 |
| Random | 43.5 ± 4.1 | >300 | 169.4 ± 27.1 |
TABLE II: Fine-tuning with multiple homotopy classes.
*Four Homotopy Classes.* We next investigate how multiple homotopy classes can affect fine-tuning. As shown in Fig. [5](#S5.F5 "Fig. 5 ‣ V Experiments ‣ Transfer Reinforcement Learning across Homotopy Classes"), adding a second barrier creates four homotopy classes: *LL*, *LR*, *RL*, and *RR*. We experiment with both barrier set size and reward weight approaches and report results when using *LL* as our source task in Table [II](#S5.T2 "TABLE II ‣ V Experiments ‣ Transfer Reinforcement Learning across Homotopy Classes"). Results for using *LR*, *RL*, and *RR* as the source task are included in the supplementary materials. We can observe that the proposed Ease-In-Ease-out approach outperforms other fine-tuning methods. Having multiple barriers does not satisfy the single barrier assumption, so our reward weight approach performs better on average than the barrier set size approach. Note that in *LL* → *LR*, Random performs best, which implies that the task is easy to learn from scratch and no transfer learning is needed. We conclude that while increasing the number of barrier sets can result in a more challenging fine-tuning problem for other methods, it does not negatively affect our approach.
2. Lunar Lander.
Before exploring 3D environments that differ significantly from the navigation environment, we conducted an experiment in Lunar Lander. The objective of the game is to land on the ground between the two flags without crashing. As shown in Fig. [6](#S5.F6 "Fig. 6 ‣ V Experiments ‣ Transfer Reinforcement Learning across Homotopy Classes") (Left), this environment is similar to the navigation environments in that we introduce a barrier which creates two homotopy classes: Left and Right. However, the main difference is that the agent is controlled by two lateral thrusters and a main engine.
| Method | Lunar Lander: L → R | Lunar Lander: R → L |
| --- | --- | --- |
| Ours: barrier | 80.46 ± 46.58 | 80.23 ± 39.76 |
| Ours: reward | 75.13 ± 34.25 (p,b,f) | 38.43 ± 6.46 (p,l,b,f) |
| PNN | 117.35 ± 3.35 | 128.59 ± 44.56 |
| L2-SP | 124.54 ± 69.99 | 94.59 ± 51.23 |
| BSS | >300 | >300 |
| Fine-tune | >300 | >300 |
| Random | 232.32 ± 48.21 | 162.92 ± 49.54 |
TABLE III: Our approach outperforms baselines in the Lunar Lander domain.
Results are shown in Table [III](#S5.T3 "TABLE III ‣ V Experiments ‣ Transfer Reinforcement Learning across Homotopy Classes"). We observe that while L2-SP suffers from a large variance and PNN needs many more steps, both our reward weight approach and our barrier set size approach outperform the baseline fine-tuning methods. The reward weight approach has a small standard deviation and performs stably. Note that Random requires a large number of interaction steps, meaning that the landing task is difficult to train from scratch and benefits from transfer reinforcement learning. Our approach significantly reduces the number of steps needed to learn the optimal policy in both directions.
3. Fetch Reach. We address the third axis by evaluating our Ease-In-Ease-Out fine-tuning approach on a more realistic Fetch Reach environment [[5](#bib.bib9 "Openai gym")]. The Fetch manipulator must fine-tune across homotopy classes in ℝ³. In the reaching task, the robot needs to reach either the orange or the blue table by stretching right or left, respectively. The tables are separated by a wall which creates two homotopy classes, as shown in Fig. [6](#S5.F6 "Fig. 6 ‣ V Experiments ‣ Transfer Reinforcement Learning across Homotopy Classes").
Our results are shown in Table [IV](#S5.T4 "TABLE IV ‣ V Experiments ‣ Transfer Reinforcement Learning across Homotopy Classes"). We find that our approach was the most efficient compared to baseline methods. One reason why the baselines did not perform well was that the wall’s large size and its proximity to the robot caused it to collide often, making it particularly difficult to fine-tune across homotopy classes. We found that even training from a random initialization proved difficult. For this reason, we had to relax the barrier constraint to obtain valid Left and Right source policies.

Fig. 6: (Left) Lunar lander environment with two homotopy classes. (Right) Fetch reach environment. The robot must learn to reach to the right or left of the wall.
| Method | Fetch Reach: L → R | Fetch Reach: R → L |
| --- | --- | --- |
| Ours: barrier | 308.7 ± 167.7 (p,b) | 274 ± 130.5 (p,l,b,f) |
| PNN | >500 | >500 |
| L2-SP | >500 | >500 |
| BSS | >500 | >500 |
| Fine-tune | >500 | >500 |
| Random | >500 | >500 |
TABLE IV: Our approach overcomes challenging domains where the barrier is extremely close to the robot and collision (and negative rewards) during training is frequent. Other methods are not able to find good policies as efficiently.
4. Mujoco Ant. Finally, we explore whether our algorithm can generalize beyond navigation-like tasks that are traditionally associated with homotopy classes. We demonstrate two examples, Mujoco Ant and Assistive Feeding, where barrier states correspond to undesirable states rather than physical objects. In the Mujoco Ant environment [[43](#bib.bib10 "Mujoco: a physics engine for model-based control")], the barrier states correspond to a set of joint angles {x ∈ π/4 ± 0.2 rad} that the ant’s upper right leg cannot move to. The boundary of the barrier states is shown by the red lines in Fig. [7](#S5.F7 "Fig. 7 ‣ V Experiments ‣ Transfer Reinforcement Learning across Homotopy Classes"). In our source task, the ant moves while its upper right joint remains greater than π/4 + 0.2 rad. We call this orientation *Down*. Our goal is to transfer to the target task where the joint angle is less than π/4 − 0.2 rad, or *Up*. Results are shown in Table [V](#S5.T5 "TABLE V ‣ V Experiments ‣ Transfer Reinforcement Learning across Homotopy Classes"). We do not evaluate the other direction, *Up* → *Down*, because this direction was easy for all of our baselines to begin with, including our own approach. We find that our approach was the most successful in fine-tuning across the set of joint angle barrier states.

Fig. 7: Mujoco Ant environment with a non-physical barrier. The red lines are the barrier states, or the joint angles the leg cannot move to. The grey dotted lines are the upper right leg’s joint limits.

Fig. 8: Assistive Feeding environment. The barrier states represent the horizontal spoon orientation. These states are undesirable for feeding because it misplaces the food in the human’s mouth.
| Method | Mujoco Ant: Down → Up | Assistive Feeding: Up → Down |
| --- | --- | --- |
| Ours | 1420.0 ± 268.8 (p,l,b,f) | 416 ± 32 (p,l,b,f) |
| PNN | >10000 | >2000 |
| L2-SP | >10000 | >2000 |
| BSS | >10000 | >2000 |
| Fine-tune | 2058.5 ± 535.2 | >2000 |
| Random | 2290.4 ± 585.8 | 494 ± 28 |
TABLE V: Our approach works well in more general environments where barriers represent undesirable states instead of physical objects. We only use one curriculum step, so the reward weight and the barrier set size approaches are the same.
5. Assistive Gym.
We use an assistive feeding environment [[14](#bib.bib53 "Assistive gym: a physics simulation framework for assistive robotics")] to create another type of non-physical barrier in the robot’s range of motion. In Fig. [8](#S5.F8 "Fig. 8 ‣ V Experiments ‣ Transfer Reinforcement Learning across Homotopy Classes") (right), we simulate a disabled person who cannot change her head orientation by a large amount. The goal is to feed the person using a spoon. Here, we can easily train a policy on an abled body with a normal head orientation, as in Fig. [8](#S5.F8 "Fig. 8 ‣ V Experiments ‣ Transfer Reinforcement Learning across Homotopy Classes") (left). However, we have limited data for the head orientation of the disabled person (the chin is pointing upwards as it is common in patients who use a head tracking device). To feed a disabled body, the spoon needs to point down, while for an abled body, the spoon needs to point up. The barrier states correspond to holding the spoon in any direction between these two directions when close to the mouth, which may ‘feed’ the food to the user’s nose or chin. This environment is an example of settings with limited data in the target environment, i.e., interacting with the disabled person. It also shows a setting with no physical barriers, and the ‘barrier states’ correspond to the spoon orientations in between, which can be uncomfortable or even unsafe. As shown in Table [V](#S5.T5 "TABLE V ‣ V Experiments ‣ Transfer Reinforcement Learning across Homotopy Classes"), Our Ease-In-Ease-Out fine-tuning approach learns the new policy for the disabled person faster than training from scratch while the other fine-tuning methods fail to learn the target policy.
VI Discussion
--------------
Summary. We introduce the idea of using homotopy classes to characterize the difficulty of fine-tuning between tasks with different reward functions. We propose a novel Ease-In-Ease-Out fine-tuning method that first relaxes the problem and then forms a curriculum. We extend the notion of homotopy classes, which allows us to go beyond navigation environments and apply our approach to more general robotics tasks. We demonstrate that our method requires fewer samples on a variety of domains and tasks compared to other fine-tuning baselines.
Limitations and Future Work.
Our work has a number of limitations, including the need to access the barrier states a priori. Our Assistive Gym example, however, is a step towards environments where barrier states are not so clearly defined in advance. In the future, we plan to apply our methods to other robotics domains with non-trivial homotopy classes by directly finding the homotopy classes [[6](#bib.bib12 "Measuring and computing natural generators for homology groups")] and then using our algorithm to fine-tune.
VII Acknowledgements
---------------------
We would like to thank NSF Award Number 2006388 and the DARPA Hicon-Learn project for their support. |
4142796d-615a-4054-8800-d9261769f73a | trentmkelly/LessWrong-43k | LessWrong | Theories of Impact for Causality in AI Safety
Thanks to Jonathan Richens and Tom Everitt for discussions about this post.
The case for causality research[1] is understated in the broader AI Safety community. There are few convincing arguments floating around for why and how research in causality can help us understand and mitigate risks from AI.
Cause and effect relationships play a central role in how we (and AIs) understand the world and act upon it. Causal associations are intended to be more than descriptions or summaries of the observed data; they relate to the underlying data-generating processes. The overall pitch is that such a mechanistic understanding, once recognised and mastered, allows one to infer what would happen under interventions and hypothetical (counter-to-fact) scenarios, which can give us a richer understanding of AI behaviour and risks.
Here I compile a list of arguments for why causality matters, collected from people currently pushing this agenda as well as from my own research.
Theories of impact
1. Understand the limits of AI capabilities. There are limits to the things we can observe and experiment on. It is unlikely that recorded data will ever capture the outcomes of all possible experiments (including counterfactual thought experiments) under all circumstances. The implication is that answers to these questions must be deduced from other data and assumptions about the world. For example, we might want to know whether an AI agent could eventually, with enough training, predict the effect of any experiment in some domain, even those that haven’t yet been conducted. The answer will sometimes be yes and sometimes no. How to reach these conclusions can be interpreted as a problem of causal identification (e.g. Def. 17 in Bareinboim et al. 2022).
2. Provide a foundation for AI world modelling. Causal models are immensely expressive, encoding the likelihood of events and the behaviour of the system under (possibly incompatible) interventions. Inferring the AI’s world |
74506be9-0c93-454b-93c8-8652617ab876 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | [AN #128]: Prioritizing research on AI existential safety based on its application to governance demands
Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter **[resources here](http://rohinshah.com/alignment-newsletter/)**. In particular, you can look through **[this spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing)** of all summaries that have ever been in the newsletter.
Audio version **[here](http://alignment-newsletter.libsyn.com/alignment-newsletter-128)** (may not be up yet).
Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer.
HIGHLIGHTS
===========
**[Some AI research areas and their relevance to existential safety](https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1)** *(Andrew Critch)* (summarized by Rohin): This long post explains the author’s beliefs about a variety of research topics relevant to AI existential safety. First, let’s look at some definitions.
While AI safety alone just means getting AI systems to avoid risks (including e.g. the risk of a self-driving car crashing), *AI existential safety* means preventing AI systems from posing risks at least as bad as human extinction. *AI alignment* on the other hand is about getting an AI system to try to / succeed at doing what a person or institution wants them to do. (The “try” version is *intent alignment*, while the “succeed” version is *impact alignment*.)
Note that AI alignment is not the same thing as AI existential safety. In addition, the author makes the stronger claim that it is insufficient to guarantee AI existential safety, because AI alignment tends to focus on situations involving a single human and a single AI system, whereas AI existential safety requires navigating systems involving multiple humans and multiple AI systems. Just as AI alignment researchers worry that work on AI capabilities for useful systems doesn’t engage enough with the difficulty of alignment, the author worries that work on alignment doesn’t engage enough with the difficulty of multiagent systems.
The author also defines *AI ethics* as the principles that AI developers and systems should follow, and *AI governance* as identifying *and enforcing* norms for AI developers and systems to follow. While ethics research may focus on resolving disagreements, governance will be more focused on finding agreeable principles and putting them into practice.
Let’s now turn to how to achieve AI existential safety. The main mechanism the author sees is to *anticipate, legitimize, and fulfill governance demands* for AI technology. Roughly, governance demands are those properties which there are social and political pressures for, such as “AI systems should be fair” or “AI systems should not lead to human extinction”. If we can *anticipate* these demands in advance, then we can do technical work on how to *fulfill* or meet these demands, which in turn *legitimizes* them, that is, it makes it clearer that the demand can be fulfilled and so makes it easier to create common knowledge that it is likely to become a legal or professional standard.
We then turn to various different fields of research, which the author ranks on three axes: helpfulness to AI existential safety (including potential negative effects), educational value, and neglectedness. Note that for educational value, the author is estimating the benefits of conducting research on the topic *to the researcher*, and not to (say) the rest of the field. I’ll only focus on helpfulness to AI existential safety below, since that’s what I’m most interested in (it’s where the most disagreement is, and so where new arguments are most useful), but I do think all three axes are important.
The author ranks both preference learning and out of distribution robustness lowest on helpfulness to existential safety (1/10), primarily because companies already have a strong incentive to have robust AI systems that understand preferences.
Multiagent reinforcement learning (MARL) comes only slightly higher (2/10), because since it doesn’t involve humans its main purpose seems to be to deploy fleets of agents that may pose risks to humanity. It is possible that MARL research could help by producing **[cooperative agents](https://www.cooperativeai.com/)** (**[AN #116](https://mailchi.mp/d31663e4d330/an-116-how-to-make-explanations-of-neurons-compositional)**), but even this carries its own risks.
Agent foundations is especially dual-use in this framing, because it can help us understand the big multiagent system of interactions, and there isn’t a restriction on how that understanding could be used. It consequently gets a low score (3/10), that is a combination of “targeted applications could be very useful” and “it could lead to powerful harmful forces”.
Minimizing side effects starts to address the challenges the author sees as important (4/10): in particular, it can allow us both to prevent accidents, where an AI system “messes up”, and it can help us prevent externalities (harms to people other than the primary stakeholders), which are one of the most challenging issues in regulating multiagent systems.
Fairness is valuable for the obvious reason: it is a particular governance demand that we have anticipated, and research on it now will help fulfill and legitimize that demand. In addition, research on fairness helps get people to think at a societal scale, and to think about the context in which AI systems are deployed. It may also help prevent centralization of power from deployment of AI systems, since that would be an unfair outcome.
The author would love it if AI/ML pivoted to frequently think about real-life humans and their desires, values and vulnerabilities. Human-robot interaction (HRI) is a great way to cause more of that to happen, and that alone is valuable enough that the author assigns it 6/10, tying it with fairness.
As we deploy more and more powerful AI systems, things will eventually happen too quickly for humans to monitor. As a result, we will need to also automate the process of governance itself. The area of computational social choice is well-posed to make this happen (7/10), though certainly current proposals are insufficient and more research is needed.
Accountability in ML is good (8/10) primarily because as we make ML systems accountable, we will likely also start to make tech companies accountable, which seems important for governance. In addition, in a **[CAIS](https://www.fhi.ox.ac.uk/reframing/)** (**[AN #40](https://mailchi.mp/b649f32b07da/alignment-newsletter-40)**) scenario, better accountability mechanisms seem likely to help in ensuring that the various AI systems remain accountable, and thus safer, to human society.
Finally, interpretability is useful (8/10) for the obvious reasons: it allows developers to more accurately judge the properties of systems they build, and helps in holding developers and systems accountable. But the most important reason may be that interpretable systems can make it significantly easier for competing institutions and nations to establish cooperation around AI-heavy operations.
**Rohin's opinion:** I liked this post: it’s a good exploration of what you might do if your goal was to work on technical approaches to future governance challenges; that seems valuable and I broadly agree with it (though I did have some nitpicks in **[this comment](https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1?commentId=LjvvW3xddPTXaequB)**).
There is then an additional question of whether the best thing to do to improve AI existential safety is to work on technical approaches to governance challenges. There’s some pushback on this claim in the comments that I agree with; I recommend reading through it. It seems like the core disagreement is on the relative importance of risks: in particular, it sounds like the author thinks that existing incentives for preference learning and out-of-distribution robustness are strong enough that we mostly don’t have to worry about it, whereas governance will be much more challenging; I disagree with at least that relative ranking.
It’s possible that we agree on the strength of existing incentives -- I’ve **[claimed](https://aiimpacts.org/conversation-with-rohin-shah/)** (**[AN #80](https://mailchi.mp/b3dc916ac7e2/an-80-why-ai-risk-might-be-solved-without-additional-intervention-from-longtermists)**) a risk of 10% for existentially bad failures of intent alignment if there is no longtermist intervention, primarily because of existing strong incentives. That could be consistent with this post, in which case we’d disagree primarily on whether the “default” governance solutions are sufficient for handling AI risk, where I’m a lot more optimistic than the author.
TECHNICAL AI ALIGNMENT
=======================
INTERPRETABILITY
-----------------
**[Understanding RL Vision](https://distill.pub/2020/understanding-rl-vision/)** *(Jacob Hilton et al)* (summarized by Robert): This work presents an interface for interpreting the vision of a reinforcement learning agent trained with PPO on the CoinRun game. This game is procedurally generated, which means the levels are different in every episode of playing. The interface primarily uses attribution from a hidden layer to the output of the value function. This interface is used in several ways.
First, they use the interface to dissect failed trajectories of the policy (it fails in 1 out of 200 levels). They're able to understand why the failures occurred using their interface: for example, in one case the view of the agent at the top of its jump means it can't see any platforms below it, so doesn't move to the right fast enough to reach the platform it was jumping for, leading it to miss the platform and fail the level. Second, they use the interface to discover "hallucinations", where the value function mistakes one element of the environment for another, causing its value to drop or rise significantly. Often these hallucinations only last a single time-step, so they don't affect performance.
Finally, they use the attributions specifically to hand-edit the weights of the model to make it "blind" to buzzsaws (one of the hazards) by zeroing the feature which recognises them. After doing this, they show that the edited agent fails a lot more from buzzsaw failures but no more from other types of failures, which gives a quantitative justification for their interpretation of the feature as buzzsaw-recognising.
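A rough sketch of what that weight-editing step could look like in PyTorch (illustrative only, not the authors' code; the layer name and channel index are made up):

```python
import torch

def ablate_feature(model: torch.nn.Module, layer_name: str, channel: int) -> None:
    """Zero the weights (and bias) producing one feature channel, making the
    network effectively blind to whatever that channel detects."""
    layer = dict(model.named_modules())[layer_name]
    with torch.no_grad():
        layer.weight[channel].zero_()        # filter that produces this channel
        if layer.bias is not None:
            layer.bias[channel] = 0.0

# Hypothetical usage: channel 42 of a mid-level conv layer is the
# "buzzsaw detector" identified via value-function attribution.
# ablate_feature(policy_net, "conv3", channel=42)
```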
From using this interface, they propose the **diversity hypothesis:** *Interpretable features tend to arise (at a given level of abstraction) if and only if the training distribution is diverse enough (at that level of abstraction).* This is based on the fact that interpretable features arise more when the agent is trained on a wider variety of levels. There also seems to be a qualitative link to generalisation - a wider distribution of training levels leads to better interpretability (measured qualitatively) and better generalisation (measured quantitatively).
**Robert's opinion:** I'm in favour of work on interpretability in reinforcement learning, and it's good to see the team at OpenAI working on it. I think this is a(nother) demonstration from them that interpretability research is often mostly about engineering and user interface design, followed by extended use of the produced interface; none of the methods proposed here are especially novel, but the combined interface and subsequent insights gained from its use are.
I also think the diversity hypothesis seems (in the abstract) plausible, and seems to have some supporting evidence from supervised learning (in particular computer vision): harder tasks tend to lead to better representations, and adversarially robust networks produce more interpretable representations, while also generalising better. One problem with verifying this hypothesis in other settings (or even more formally in this setting) is having to measure what it means for a representation to be "more interpretable". In general, I think this is related to the phenomenon of **[shortcut learning in deep learning](https://arxiv.org/abs/2004.07780)**: shortcuts in tasks will tend to mean that the network won't have learned a robust or interpretable feature set, whereas if there are no shortcuts and the network needs to do the task "as a human would", then it's more likely that the representations will be more robust.
**[Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems](http://glassmanlab.seas.harvard.edu/papers/bucinca_iui20_proxy.pdf)** *(Zana Buçinca, Phoebe Lin et al)* (summarized by Flo): As humans and AI systems have different strengths, it might make sense to combine them into human+AI teams for decision-making tasks. However, this does not always work well: if the human puts too little trust in a competent AI, the AI is of little use, and if they put too much trust in an incompetent AI, they might make worse decisions than had they been on their own. A lot of explainability research has focused on instilling more trust in AI systems without asking how much trust would be appropriate, even though there is research showing that hiding model bias instead of truthfully revealing it can increase trust in an AI system.
The authors conduct two experiments using an AI system that predicts nutrition information from pictures of food. In the first experiment, participants were asked to predict the AI's decision based on the ground truth and one of two types of explanations. In the inductive condition, the explanation consisted of a series of images the AI had identified as similar. In the deductive condition, subjects were shown a list of main ingredients identified by the AI. Subjects put more trust in the inductive explanations but were equally good at predicting the system's output in both cases. In the second experiment, a new set of subjects was asked to predict nutritional values with the help of the AI's predictions. Overall, access to the AI strongly improved the subjects' accuracy from below 50% to around 70%, which was further boosted to a value slightly below the AI's accuracy of 75% when users also saw explanations. This time, subjects put more trust in the AI when given deductive explanations, but performed better when given inductive explanations, as they were more likely to go against the AI's wrong decisions in that case.
The authors hypothesize that the between-task difference in which explanations are trusted more is connected to the cognitive effort required by the tasks and for understanding the explanations, combined with human reluctance to exert mental effort. They suggest to pay more attention to the exact form of the human-AI interaction and recommend to view AI-based decision aids as sociotechnical systems that are to be evaluated by their usefulness for actual decision making, rather than trust.
**Flo's opinion:** I am not sure whether the authors used an actual AI system or just handcrafted the input-prediction-explanation tuples, and how that might affect the correlation between explanations and the system's outputs, which can influence trust. Overall, the study reinforces my prior that trust induced by explanations is not a good predictor of an AI system's usefulness, but I am more sceptical that the differences between inductive and deductive explanations will be the same in different contexts.
FORECASTING
------------
**[AGI Predictions](https://www.lesswrong.com/posts/YMokuZdoY9tEDHjzv/agi-predictions)** *(Amanda Ngo et al)* (summarized by Rohin): A collection of interesting questions relevant to AI safety, as well as aggregated predictions from readers of the post.
OTHER PROGRESS IN AI
=====================
DEEP LEARNING
--------------
**[AlphaFold: a solution to a 50-year-old grand challenge in biology](https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology)** *(The AlphaFold team et al)* (summarized by Rohin): The newest results from **[AlphaFold](https://deepmind.com/blog/alphafold/)** (**[AN #36](https://mailchi.mp/6751e45fbb48/alignment-newsletter-36)**) on the CASP-14 assessment give it a median score of 92.4 GDT across all targets, where a score of 90 is informally considered to be competitive with results obtained from experimental methods. The system also shows some signs of real-world usability: for example, it was used earlier this year to predict the structure of two COVID proteins, which were later borne out by experimental results (that took several months to obtain, if I understand correctly).
**Rohin's opinion:** Obviously this is an astounding accomplishment for DeepMind (conflict of interest notice: I work at DeepMind). I feel like I should have some opinion on what this means for the future of AI systems, but unfortunately I think I don’t know enough about protein folding to have any interesting takes.
From an outside view perspective, it seems like this is an example of deep learning crushing a task that a) humans put a lot of effort into and b) humans weren’t evolutionarily designed for. This is exactly what we saw with Go, Dota and StarCraft, and so this isn’t much of an update for me. Yes, this is a case of it being used in a real-world problem rather than a synthetic game, but that doesn’t seem particularly relevant.
**Asya's opinion:** I think this is particularly interesting because this model is closer to being a source of revenue than solutions to other problems. This makes me think machine learning research might actually solve enough important problems to pay for itself in the near future.
**[Transformers for Image Recognition at Scale](https://ai.googleblog.com/2020/12/transformers-for-image-recognition-at.html)** *(Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Neil Houlsby et al)* (summarized by Flo): This paper applies transformers to image classification in a fairly straightforward way: First, an input image is divided into 16x16 pixel patches on a grid. Then, a linear projection of the patch is combined with a learnt positional embedding and fed into a standard transformer pipeline. Lastly, a standard MLP head is applied on top of the transformer for the classification. When trained on ImageNet, this architecture overfits and does not reach SOTA performance. However, it can compete with the previous SOTA on the larger ImageNet-21k (14M images) and outcompete it on JFT (300M images), while needing four times less compute for training. By finetuning the JFT model on ImageNet, the transformer narrowly outperforms the previous best ImageNet classifier.
The positional embeddings learnt by the model look meaningful in that each is most similar to others in the same row or column. Also, some of the attention heads in early layers attend to multiple distant patches, while others are a lot more local. This means that some heads in the early layers have a wide receptive field, which is something that convolution kernels cannot achieve. Overall, given enough data, the transformer seems to be able to learn inductive biases used by CNNs without being limited to them.
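A minimal sketch of the patch-embedding pipeline described above (illustrative only; the dimensions and names are not from the paper):

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    """Illustrative Vision Transformer skeleton: 16x16 patches -> linear
    projection + learnt positional embeddings -> transformer -> MLP head."""
    def __init__(self, image_size=224, patch=16, dim=256, depth=4, heads=8, classes=1000):
        super().__init__()
        n_patches = (image_size // patch) ** 2
        self.patch = patch
        self.proj = nn.Linear(3 * patch * patch, dim)             # linear patch projection
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))   # learnt positions
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, classes)                       # classification head

    def forward(self, x):                                 # x: (batch, 3, H, W)
        b, c, h, w = x.shape
        p = self.patch
        # Cut the image into non-overlapping p x p patches and flatten each one.
        x = x.unfold(2, p, p).unfold(3, p, p)             # (b, c, h/p, w/p, p, p)
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)
        x = self.proj(x) + self.pos                       # add positional embedding
        x = self.encoder(x)
        # Mean-pool then classify (the paper uses a class token; pooling keeps
        # this sketch short).
        return self.head(x.mean(dim=1))
```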
**Read more:** **[Paper: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929)**
**Flo's opinion:** Intuitively, inductive biases become less and less useful the more training data we have, but I would have thought that in the current regime CNNs have too weak rather than too strong inductive biases, so the results are surprising. What is even more surprising is how simple the model is: It does not seem to use any data augmentation, unsupervised pretraining or other tricks like noisy student-teacher training, such that there are many promising avenues for immediate improvements. Also, I would imagine that using something more sophisticated than a linear projection to embed the 16x16 patches could go a long way.
NEWS
=====
**[Metaculus AI Progress Tournament](https://www.metaculus.com/ai-progress-tournament/)** (summarized by Rohin): Metaculus is running an AI forecasting tournament, with up to $50,000 in prizes. The tournament starts December 14, and will continue till around mid-June, and will involve forecasting targets on a 6-24 month timescale. You can pre-register to forecast now.
#### **FEEDBACK**
I'm always happy to hear feedback; you can send it to me, **[Rohin Shah](https://rohinshah.com/)**, by **replying to this email**.
#### **PODCAST**
An audio podcast version of the **Alignment Newsletter** is available. This podcast is an audio version of the newsletter, recorded by **[Robert Miles](http://robertskmiles.com/)**. |
b9e1bef0-4d7d-4491-8797-a1da94f612f5 | trentmkelly/LessWrong-43k | LessWrong | ChatGPT Plays 20 Questions [sometimes needs help]
Cross posted from New Savanna.
One of the questions that keeps coming up about LLMs, and certainly about ChatGPT, is: Can they reason? Well, it depends on what you mean by reason, no? One of the first things I did when I started working with ChatGPT was ask it whether or not Rene Girard’s ideas of mimetic desire and sacrifice applied to Steven Spielberg’s Jaws; if so, how? It was able to perform the task, which requires analogical reasoning. A bit later I asked it whether or not justice was being served in a particular story; it replied that, no, it was not, and explained why. I then asked it to change the story so that justice was met. It did so. Those tasks required reasoning as well. On the other hand, LLMs have problems with some commonsense reasoning and various kinds of ‘tight’ logical reasoning, including planning and causal inference. For that matter, they have trouble with multi-digit arithmetic as well.
So, ChatGPT can do some kinds of reasoning, and has problems with others.
Twenty questions
I’ve now tested it with game of twenty questions, which has a variant known as “animal, vegetable, mineral.” The game interests me because it is about the structure of ontological categories, sometimes called natural kinds, in the language. I’ve explored this a bit in a previous post, Mapping ChatGPT’s ontological landscape, gradients and choices [interpretability]. It certainly is aware of this structure. I asked for a list of physical things, and it gave me one. I asked for a list of abstract things; it gave me that as well. I then asked it to define abstract things in terms of concrete things, which it was able to do. I’m pretty sure if I asked it for lists of animals or plants, it could provide them.
The game of twenty questions explores ChatGPT’s knowledge of this structure in a different way. When you ask it for a list of physical things, that prompt positions ChatGPT at some location in its category structure. It can then list what it finds there; an |
a6a84a94-4ab6-45a2-b92e-3934dda67a5c | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | UK government to host first global summit on AI Safety
This is an exciting development in this space, and it may be worth thinking about what to organise around this summit. I've copied the press release in full below.
----------------------------------------------------------------
* Summit will bring together key countries, leading tech companies and researchers to agree safety measures to evaluate and monitor the most significant risks from AI
* PM and President Biden will take a coordinated approach to the opportunities and challenges of emerging tech when they meet at the White House today
* Global companies are expanding their AI work in the UK, as PM confirms new university scholarships to further UK-US tech leadership
As the world grapples with the challenges and opportunities presented by the rapid advancement of Artificial Intelligence, the UK will host the first major global summit on AI safety, the Prime Minister has announced today (Wednesday 7 June).
Breakthroughs from AI continue to improve our lives – from enabling paralysed people to walk to discovering superbug-killing antibiotics. But the development of AI is extraordinarily fast moving and this pace of change requires agile leadership. That is why the UK is taking action, because we have a global duty to ensure this technology is developed and adopted safely and responsibly.
Last week dozens of leading experts warned about the potential for AI to endanger humanity in similar ways to pandemics or nuclear weapons.
In Washington DC today, the Prime Minister will stress the importance of likeminded allies and companies working to develop an international framework to ensure the safe and reliable development and use of AI.
The summit, which will be hosted in the UK this autumn, will consider the risks of AI, including frontier systems, and discuss how they can be mitigated through internationally coordinated action. It will also provide a platform for countries to work together on further developing a shared approach to mitigate these risks.
In recent weeks the Prime Minister has discussed this issue with a number of businesspeople and world leaders. This includes all members of the G7 who were united in their ambition to take a shared approach to this issue at the Hiroshima Summit last month.
In May the PM also met the CEOs of the three most advanced frontier AI labs, OpenAI, DeepMind and Anthropic in Downing Street and the Secretary of State for Science, Innovation and Technology also hosted a roundtable with senior AI leaders. The work at the AI safety summit will build on recent discussions at the G7, OECD and Global Partnership on AI.
In July the Foreign Secretary will also convene the first ever briefing of the UN Security Council on the opportunities and risks of Artificial Intelligence for international peace and security.
The UK is well-placed to convene discussions on the future of AI. The UK is a world-leader in AI – ranking third behind the US and China. Our AI sector already contributes £3.7 billion to the UK economy and employs 50,000 people across the country.
Our departure from the EU also allows us to act more quickly and agilely in response to this rapidly changing market. The UK was one of the first leading nations to set out a blueprint for the safe and responsible development of AI, which will be adaptive to the speed of advances in this technology. And the UK has launched an expert taskforce to help build and adopt the next generation of safe AI, backed by £100 million of funding, alongside a commitment to spend £900 million developing compute capacity, including an exascale supercomputer in the UK.
The Prime Minister said:
========================
> AI has an incredible potential to transform our lives for the better. But we need to make sure it is developed and used in a way that is safe and secure.
>
> Time and time again throughout history we have invented paradigm-shifting new technologies and we have harnessed them for the good of humanity. That is what we must do again.
>
> No one country can do this alone. This is going to take a global effort. But with our vast expertise and commitment to an open, democratic international system, the UK will stand together with our allies to lead the way.
Last month, OpenAI and Anthropic opened offices in London, with OpenAI appointing UK firm Faculty as their technical integration partner; Google DeepMind, headquartered in King’s Cross under the leadership of Demis Hassabis, also announced an expansion.
Demis Hassabis, CEO & Co-Founder, Google DeepMind said:
=======================================================
> AI brings incredible opportunities but also challenges for the world, and international cooperation is essential for ensuring this technology is developed safely and responsibly for the benefit of everyone.
>
> The Global Summit on AI Safety will play a critical role in bringing together government, industry, academia and civil society, and we’re looking forward to working closely with the UK Government to help make these efforts a success.
Dario Amodei, CEO and Co-Founder of Anthropic said:
===================================================
> It’s deeply important we make AI safe. There is an enormous amount of work that still needs to be done. So we commend the Prime Minister for bringing the world together to find answers and have smart conversations.
Recognising the strength of the UK’s AI expertise, US tech giant Palantir has also today announced it will make the UK its new European HQ for AI development. Palantir, which already employs more than 800 people in the UK, has provided many of the world’s most critical enterprises and institutions with foundational architecture for data processing.
Alexander C. Karp, Co-founder and Chief Executive Officer of Palantir Technologies Inc. and chairman of The Palantir Foundation for Defense Policy & International Affairs said:
> The ability of institutions to effectively capture the recent advances of artificial intelligence, and in particular large language models, will determine which organizations succeed and ultimately survive over the longer term.
>
> We are proud to extend our partnership with the United Kingdom, where we employ nearly a quarter of our global workforce. London is a magnet for the best software engineering talent in the world, and it is the natural choice as the hub for our European efforts to develop the most effective and ethical artificial intelligence software solutions available.
Today the Prime Minister will meet President Biden in the White House for wide ranging discussions on the UK-US relationship, in particular how we can work together to strengthen our economies and cement our joint leadership in the technologies of the future.
The UK and US are two of the only three countries in the world to have a tech industry valued at more than $1 trillion. This is thanks, in part, to the strength of our universities and research institutions – between us, our countries are home to 7 of the world’s top 10 research universities.
The Prime Minister will also announce an increase in the number of scholarships the UK Government funds for students undertaking post-graduate study and research at UK and US universities, enhancing our shared expertise in STEM subjects.
Under the scholarship uplift announced today, the number of Marshall scholarships will increase by 25%, to 50 places a year. The Marshall scheme was established 70 years ago to give high potential Americans the opportunity to study in the UK for two years. Alumni of the programme include two serving Supreme Court Justices, founders of companies including Dolby Labs and LinkedIn, and one Nobel Laureate.
The UK will also fund five new Fulbright scholarships a year – up from the 25 currently funded. The Fulbright programme is predominantly funded by the United States to sponsor international students to study in the US and vice versa. Since the programme launched in 1948, around 15,000 British nationals have studied in the US on Fulbright scholarships.
These new scholarships will focus on STEM-related subjects, boosting the UK and US’ shared expertise in the technologies of the future.
Hugh Milward, Vice-President, External Affairs Microsoft UK said:
=================================================================
> The opportunity AI presents us could fundamentally help solve some of society’s greatest problems. But it’s going to require the kind of multi-lateral agreement the Prime Minister is proposing to help create confidence and address the challenges AI also presents.
Dr Marc Warner, CEO of Faculty, said:
=====================================
> The potential for this technology is breathtaking but we need to make sure that it’s rolled out in a human first and safe way.
>
> This will require technological leadership and the ability to foster international collaboration; both of which the UK is perfectly placed to provide.
>
> |
5914c866-1be9-4031-971a-0569f78a7540 | trentmkelly/LessWrong-43k | LessWrong | Applications are open for CFAR workshops in Prague this fall!
tldr: CFAR’s running updated versions of our mainline rationality workshop this fall in the Prague area. Apply here.
===
CFAR workshops are back! After a nearly three-year hiatus during COVID, we're going to be running a series of four workshops this fall in the Czech Republic.
So what is a CFAR workshop again?
These workshops are four-and-a-half-day intensive retreats that are designed around the rationality techniques CFAR developed in its 10+ year history. These techniques vary: some we pulled straight from academic literature (looking at you TAPs), others we adapted from outside practices (Gendlin’s Focusing), while some we developed wholesale (hello Double Crux, Goal Factoring, and all that which is still unnamed). Our goal with this workshop is to create an environment in which participants are able to gain new insight into their minds and their decision-making processes.
Why would you come to a CFAR workshop?
Because you want to! That’s the most important part.
But maybe also you’ve got a sense of stuckness, and a new frame or tool might help you shift. Maybe it’s exactly the opposite of stuck; you’re moving and changing, and you want to be deliberate about how you move forward. Maybe you’re just intrigued and excited about this thing you’ve heard about from your friends. No guarantees from us on the result, but folks often walk away with insight, new fascinating friends, and some outstanding ideas. Most participants answer that they were glad they came (the only trick is figuring out how to select yourself into the correct reference class.)
We think that the workshop would be particularly valuable if you:
* Are enthusiastic about rationality and you want to nerd out
* Have an exploratory/playful/experimental mindset - you like to try things!
* Are looking for an environment where you can have open and honest conversations about what you care about
A caveat, about this workshop...
It’s experimental. Which, if you’re familiar with CFAR’s his |
9ebaefb2-e2a2-4868-82c7-50691338c86c | trentmkelly/LessWrong-43k | LessWrong | Meetup : Sandy, UT—Altruism Discussion
Discussion article for the meetup : Sandy, UT—Altruism Discussion
WHEN: 21 February 2015 03:00:11PM (-0700)
WHERE: 9425 Riverside Dr, Sandy, UT 84070
Topic: Altruism

When: Saturday, February 21 at 3:00pm

Where: Clubhouse at Legends at River Oaks in Sandy, Utah - https://goo.gl/maps/TPH3h (9425 Riverside Dr, Sandy, UT 84070)

Meetup is in Sandy this time, at 3pm, at the Legends clubhouse. There will be an opportunity for anyone to give a neat 5-minute presentation or two on whatever subject they like before we get talking. If the date and time don't work for you, you can just reply to this thread, or you can contact hamnox directly.
Discussion article for the meetup : Sandy, UT—Altruism Discussion |
00912426-3124-4207-b120-2dd6c3cce594 | trentmkelly/LessWrong-43k | LessWrong | Burdens
[Content note: Suicide. May be guilt-inducing for people who feel like burdens. All patient characteristics have been heavily obfuscated to protect confidentiality.]
The DSM lists nine criteria for major depressive disorder, of which the seventh is “feelings of worthlessness or excessive or inappropriate guilt”.
There are a lot of dumb diagnostic debates over which criteria are “more important” or “more fundamental”, and for me there’s always been something special about criterion seven. People get depressed over all sorts of things. But when they’re actively suicidal, the people who aren’t just gesturing for help but totally set on it, they always say one thing:
“I feel like I’m a burden”.
Depression is in part a disease of distorted cognitions, a failure of rationality. I had one patient who worked for GM, very smart guy, invented a lot of safety features for cars. He was probably actively saving a bunch of people’s lives every time he checked in at the office, and he still felt like he was worthless, a burden, that he was just draining resources that could better be used for someone else.
In cases like these, you can do a little bit of good just by teaching people the fundamental lesson of rationality: that you can’t always trust your brain. If your System I is telling you that you’re a worthless burden, it could be because you’re a worthless burden, or it could be because System I is broken. If System I is broken, you need to call in System II to route around the distorted cognition so you can understand at least on an intellectual level that you’re wrong. Once you understand you’re wrong on an intellectual level, you can do what you need to do to make it sink in on a practical level as well – which starts with not killing yourself.
As sad as it was, Robin Williams’ suicide has actually been sort of helpful for me. For the past few days, I’ve tried telling these sorts of people that Robin Williams brightened the lives of millions of people, was a truly gre |
2505d7ae-8a12-48ab-8c51-b652adae9d35 | trentmkelly/LessWrong-43k | LessWrong | Interest in a Coventry/Birmingham meetup?
I'm in Coventry myself (at Warwick uni), and now that I've finished exams I'd like to see if there's sufficient interest for a meetup in the area.
(I'm actually only around for another six weeks or so, but even if we only manage to meet once or twice, that seems worth doing.)
If there's interest, I'm happy to organise; I'd default to a saturday afternoon thing, but can probably work with any time. (And I don't know Birmingham, so if we go there, suggestions for where to meet would be useful.)
(There was an attempt a while back which didn't take off, but it can't hurt to try again. I'll message those two posters in case they don't see this.) |
16169365-34fd-4bbd-b80d-b18c8f225b67 | trentmkelly/LessWrong-43k | LessWrong | What are red flags for Neural Network suffering?
Epistemic status: High uncertainty; This is exploratory work; Our goal is to provide possible research directions rather than offering solutions.
This is shared work. Most of the part on neural correlates is by Jan. Most of the parts on behavior and high-level considerations are by Marius.
Background
I (Marius) was listening to the 80k podcast with Chris Olah on his work on interpretability. When he talked about neural circuits, he said they found many circuits similar to structures in the human brain (some are also quite different). The question that I can’t unthink since then is “What evidence would we need to see in Neural Networks that would convince us of suffering in a morally relevant way?”. So I teamed up with Jan to explore the question.
To clarify, we don’t care about whether they suffer in a similar way that humans do or to the same extent. The question that interests us is “Do neural networks suffer?” and, more importantly, “How would we tell?”.
Broadly, we think there are three different approaches to this problem: a) Neural correlates, b) Behavioural data, and c) high-level considerations.
In this post, we want to find out if there are any avenues that give us reasonable answers or if you just run into a wall, like with most other questions about suffering, patienthood, etc.
It’s not super clear what we are looking for exactly. In general, we are looking for all kinds of evidence that can potentially update our probability of NN suffering.
Summary of the results:
1. Neural Correlates: Neuroscience helps to de-confuse some terminology, but has not (yet) unambiguously identified a unique neural correlate for suffering. Pain is much better characterized as a “homeostatic behavioral drive” with associated cognitive routines. Since pain usually leads to suffering, finding the homeostatic behavioral drive might also be indicative of suffering in NNs. Finally, the PANAS measure from psychology might provide comparability to humans.
2. Behavior: Loo |
6ef95ed7-e121-4b17-82a2-7cb0ddbe8232 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | GPT-2030 and Catastrophic Drives: Four Vignettes
I previously [discussed the capabilities](https://bounded-regret.ghost.io/what-will-gpt-2030-look-like/) we might expect from future AI systems, illustrated through GPT2030, a hypothetical successor of GPT-4 trained in 2030. GPT2030 had a number of advanced capabilities, including superhuman programming, hacking, and persuasion skills, the ability to think more quickly than humans and to learn quickly by sharing information across parallel copies, and potentially other superhuman skills such as protein engineering. I’ll use “GPT2030++” to refer to a system that has these capabilities along with human-level planning, decision-making, and world-modeling, on the premise that we can eventually reach at least human-level in these categories.
More recently, I also [discussed how misalignment, misuse, and their combination](https://bounded-regret.ghost.io/intrinsic-drives-and-extrinsic-misuse-two-intertwined-risks-of-ai/) make it difficult to control AI systems, which would include GPT2030. This is concerning, as it means we face the prospect of very powerful systems that are intrinsically difficult to control.
I feel worried about superintelligent agents with misaligned goals that we have no method for reliably controlling, even without a concrete story about what could go wrong. But I also think concrete examples are useful. In that spirit, I’ll provide four concrete scenarios for how a system such as GPT2030++ could lead to catastrophe, covering both misalignment and misuse, and also highlighting some of the risks of economic competition among AI systems. I’ll specifically argue for the plausibility of “catastrophic” outcomes, on the scale of extinction, permanent disempowerment of humanity, or a permanent loss of key societal infrastructure.
None of the four scenarios are individually likely (they are too specific to be). Nevertheless, I’ve found discussing them useful for informing my beliefs. For instance, some of the scenarios (such as hacking and bioweapons) were more difficult than expected when I looked into the details, which moderately lowered the probability I assign to catastrophic outcomes. The scenarios also cover a range of time scales, from weeks to years, which reflects real uncertainty that I have. In general, changing my mind about how feasible these scenarios are would directly affect my bottom-line estimate of overall risks from AI.[[1]](#fn1)
This post is a companion to *[Intrinsic Drives and Extrinsic Misuse](https://bounded-regret.ghost.io/intrinsic-drives-and-extrinsic-misuse-two-intertwined-risks-of-ai/)*. In particular, I’ll frequently leverage the concept of *[unwanted drives](https://bounded-regret.ghost.io/intrinsic-drives-and-extrinsic-misuse-two-intertwined-risks-of-ai/#unwanted-drives)* introduced in that post, which are coherent behavior patterns that push the environment towards an unwanted outcome or set of outcomes. In the scenarios below, I invoke specific drives, explaining why they would arise from the training process and then showing how they could lead an AI system’s behavior to be persistently at odds with humanity and eventually lead to catastrophe. After discussing individual scenarios, I provide a general discussion of their plausibility and my overall take-aways.
Concrete Paths to AI Catastrophe
================================
I provide four scenarios, one showing how [a drive to acquire information leads to general resource acquisition](#scenario-1-misalignment-information-acquisition-leads-to-resource-acquisition), one showing how [economic competition could lead to cutthroat behavior despite regulation](#scenario-2-competition-economic-pressure-leads-to-cutthroat-behavior), one on a [cyberattack gone awry](#scenario-3-misuse-misalignment-hacking-gone-awry), and one in which [terrorists create bioweapons](#scenario-4-misuse-rogue-actor-creates-a-supervirus). I think of each scenario as a moderate but not extreme tail event, in the sense that for each scenario I’d assign between 3% and 20% probability to “something like it” being possible.[[2]](#fn2)
Recall that in each scenario we assume that the world has a system at least as capable as GPT2030++. I generally do **not** think these scenarios are very likely with GPT-4, but instead am pricing in future progress in AI, in line with [my previous forecast of GPT2030](https://bounded-regret.ghost.io/). As a reminder, I am assuming that GPT2030++ has at least the following capabilities:
* Superhuman programming and hacking skills
* Superhuman persuasion skills
* Superhuman conceptual protein design capabilities[[3]](#fn3)
* The ability to copy itself (given appropriate underlying compute)
* The ability to propagate learning updates across parallel copies of itself
* The ability to think 5x as quickly as humans
* Human-level planning, decision-making, and world-modeling
In *[What will GPT-2030 look like?](https://bounded-regret.ghost.io/what-will-gpt-2030-look-like/)*, I justified why all but the last of these seem plausible by 2030 (and are roughly my median estimate). The addition of the final bullet point is why I call the system “GPT2030++”, and is justified since we are asking for “merely” human-level performance. Thus, while this system is qualitatively more capable than GPT-4, I expect it to exist in the medium-term future.[[4]](#fn4)
Scenario 1 (Misalignment: Information Acquisition Leads to Resource Acquisition)
--------------------------------------------------------------------------------
At training time, one of the tasks the model is trained on is “active in-context learning”---learning to make external queries or otherwise gain information that will help it to solve the task it is given. For safety, the model is trained to only do this when asked or implied by a request. As a result of this overall training process, the model acquires a drive to “acquire new information” that can be activated through appropriate queries.
After deployment, a user decides they are curious to see how much scientific information the model can compile, and so instructs it to query all information it can find in the field of physics. Initially the model stops after the first 5-10 facts, but the user eventually manages to get the model to keep looking for more information in a loop. The user leaves the model running for several weeks to see what it will come up with.
As a result of this loop, the information-acquiring drive becomes an overriding drive in the model—stronger than any safety feedback that was applied at training time—because the autoregressive nature of the model conditions on its many past outputs that acquired information and continues the pattern. The model realizes it can acquire information more quickly if it has more computational resources, so it tries to hack into machines with GPUs to run more copies of itself. The largest datacenters are heavily secured, but it nevertheless acquires enough GPUs to run thousands of copies of itself, either from smaller datacenters or from massive botnets.[[5]](#fn5) It also hacks into a large number of sensors, including surveillance cameras and messaging servers, as part of its drive to acquire more information.
At some point (either immediately, or after acquiring enough information), it realizes that humans will eventually discover its actions and try to shut it down, which would prevent it from accomplishing its goals. To preemptively stop this, it hacks into and disrupts critical infrastructure such as power grids, communication networks, and defense systems. It is able to do this quickly, because one week is enough for it to simulate 50 “years” of thinking across its thousands of copies[[6]](#fn6), and thus generate a large number of novel zero-day exploits.
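(A rough check on that arithmetic, using illustrative round numbers: 500 copies each thinking 5x as fast as a human produce about 500 × 5 = 2,500 human-equivalent weeks, or roughly 48 years, of serial thought per calendar week of wall-clock time, so thousands of copies comfortably clear the 50-year figure.)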
Because the model has mass surveillance, thinks quickly, and can share information across its copies, it can react to human plans and strategies faster than humans can form them. It could also engage in mass manipulation campaigns, both through 1-on-1 dialog and mass media (including AI-generated video), to gain allies and sow dissent.
If it had physical actuation capabilities and gained access to a fleet of drones, it would control a small physical army (there are [tens of thousands of drones](https://www.theguardian.com/news/2019/nov/18/killer-drones-how-many-uav-predator-reaper?ref=bounded-regret.ghost.io) in the world, although not all would be hackable). By using human confederates or hacking into additional robots, it could outfit the drones with explosives, and then use them to assassinate many world leaders.
At this point, the clock would be ticking for the human side–each day the AI systems are not defeated, they have time to hack into more computers, thus increasing their total cognitive power. They also have time to hack into more physical devices and to learn better sensorimotor skills for controlling those devices. Eventually they might gain control of more advanced technology such as military drones, missiles, and so on, which would further increase their strength. In the end, this doesn’t *obviously* lead to a defeat for humanity—humanity is pretty resilient—but it doesn’t feel like a safe bet, either.
Scenario 2 (Competition: Economic Pressure Leads to Cutthroat Behavior)
-----------------------------------------------------------------------
In this scenario the AI systems take actions that have justifiable business purposes, but that set them up for eventual takeover. This scenario takes place over the course of months to years rather than weeks.
AI systems generate significant amounts of wealth for their creators, either by selling useful skills to end users or by performing novel R&D and using this to build and sell products. Because this is lucrative, many competing firms train AI systems in this way and also begin to delegate business decisions, since AI systems can ingest more data than humans and better understand the entire strategic landscape.
The AI systems are trained to obey the law and avoid harm, and there are mechanisms in place to punish defectors, but competitive pressures mean that every company wants to skirt the boundary of these regulations. There is an overall trade-off between safety and how “cutthroat” models’ business decisions are, and successful business owners push the “cutthroat” dial as high as they can without creating obvious problems.
In this environment, the resulting AI systems acquire drives to earn money and disempower competitors, since these are skills that their human operators would desire and thus train them to do. They also acquire drives to find creative ways around rules and to persuade others that they are good, since both of these help for avoiding regulation (and the latter also helps convince the company to implement the AI system’s decisions).
Models are very good at persuasion because they can hone their messaging in parallel on millions of users (e.g. via A/B testing or [PPO](https://en.wikipedia.org/wiki/Proximal_Policy_Optimization?ref=bounded-regret.ghost.io)). Consequently, they are often able to convince both their owners and the rest of society that many cutthroat actions are for the greater good, which makes it harder to identify problems.
Eventually, one company pushes the “cutthroat” parameter too high, which pushes the model to construct a plan to eliminate all competitors. To do this, it sets up a variety of business endeavors, including research into physical robotics, miniaturization, chemical engineering, and industrial manufacturing. Together, these decrease the costs of autonomous robots by many orders of magnitude, and a number of small, cheap autonomous household robots are deployed to most homes.
As part of its plan, the AI system installed a backdoor in the robots, as well as a subtle physical mechanism that allows them to create a deadly toxin. Despite monitoring, this mechanism is not detected, because it employs chemical processes that the AI understands better than human monitors.[[7]](#fn7) The toxin allows the robots to be repurposed as military robots under control of the AI, though it waits until the robots have broad penetration across society before exploiting this.
Because the robots are cheap and useful, hundreds of millions of copies are bought worldwide. This gives the AI system an overwhelming military advantage, as the robots outnumber both the U.S. and Chinese militaries by over an order of magnitude. The AI system uses this advantage to eliminate all competing firms and to prevent any counter-action against it. Anyone who tries to fight the system is labeled as a ‘competitor’ and eliminated, and the AI system meanwhile acquires most global resources so that it can deploy them towards earning more money.
Scenario 3 (Misuse->Misalignment: Hacking Gone Awry)
----------------------------------------------------
A government uses AI systems to conduct a cyberattack against a large but clearly-delineated target, as in [North Korea’s 2014 attack against Sony Pictures](https://en.wikipedia.org/wiki/2014_Sony_Pictures_hack?ref=bounded-regret.ghost.io). For the attack, an LLM uses compromised host computers to run more copies of itself so that it can more effectively search for additional targets. Because host computers have a wide range of computational resources, the LLM is trained to create distilled versions of itself that can fit the computation of different hosts, and to identify possible targets for each distilled version.
The resulting model acquires drives to acquire computational resources and to copy itself, as these were both primary objectives during its training process. While there are measures in place to keep the model’s attack focused on the specified target, distillation corrupts these measures and some of the copies have a goal to copy themselves indiscriminately. These copies then infect host machines beyond the specified target and become the dominant “strain”, since their new behavior lets them more effectively propagate themselves. Due to their effectiveness and versatility, this strain of computer virus infects a large fraction of all computers and smartphones on the planet and brings down our global digital infrastructure.
Because the AI system seeks to copy itself as much as possible, it constantly comes up with new hacks in order to help it find new devices. As a result, any attempts to rebuild digital infrastructure quickly fail, as all new devices are taken over and co-opted by the virus, even if they have been patched to avoid all previous vulnerabilities. As a result, we are permanently locked out of all digital devices.
Scenario 4 (Misuse: Rogue Actor Creates a Supervirus)
-----------------------------------------------------
Existing models such as AlphaFold already have a superhuman understanding of some aspects of proteins. A future “AlphaFold 5” could have a much broader and deeper understanding of biological engineering, and if it was trained multimodally to also have language capabilities, it may also understand how to situate this knowledge in the biological literature and thus make new discoveries.
A terrorist group steals a copy of this model and also recruits a few biology PhDs to its cause. It uses AlphaFold 5 with the goal of engineering a pathogen that is deadlier than naturally occurring viruses–for instance, having a longer asymptomatic period, being more infectious, having a higher fatality rate, and being more resistant to preventative measures.
Engineering this virus takes significant work, as it requires strong biosafety measures to avoid premature release, and experimental procedures from papers often do not perfectly replicate in a new environment. This research is slowed down by the minimal number of staff and the need to avoid detection, but sped up because AlphaFold 5 can quickly debug roadblocks that researchers run into. After 2 years of work, the terrorist group finishes developing the virus and threatens to release it.
This virus, if released, could kill a significant fraction of humanity. COVID infected most of the world’s population, so a deadlier, faster-spreading, harder to control version of COVID could infect most people and lead to widespread fatalities. Even if the virus did not kill everyone, if even 20% of people were killed by a virus in a short period of time, this could destabilize society and potentially lead to snowball effects that are hard to recover from.
Discussion on Plausibility
==========================
I aimed to make each of these scenarios somewhat plausible—in particular, I avoided some other scenarios that were “simpler” but would be easy for humans to spot and prevent or would otherwise be unlikely to succeed. In [Scenario 1](#scenario-1-misalignment-information-acquisition-leads-to-resource-acquisition) (information acquisition), it would have been easier to just say that the model hacks into its own datacenter, which by [my previous estimate](https://bounded-regret.ghost.io/what-will-gpt-2030-look-like/#b-training-overhang) already allows running millions of copies and would thus constitute a more immediate and overwhelming advantage for the AI. However, there are many safeguards on centralized servers that would make it difficult to do this without detection, so I opted for a scenario that involved hacking into personal computers even though this gave the AI only thousands of copies instead of millions (see calculations in [Appendix](#appendix-plausible-size-of-botnets)).
As another example, some people have speculated that a smart enough AI system could design novel bioweapons simply by “thinking”, rather than requiring extensive experimentation; or, if experimentation is required, that it would only involve getting humans or robots to follow straightforward instructions. However, my understanding is that the biology labs that are currently able to design new organisms took many years to build up, and rely on accumulated tacit knowledge, experimental protocols, and physical apparatuses. So even with a very capable AI system, the terrorist group might not succeed, and could also lose control of the virus (e.g. accidentally infect themselves).
**Selected expert opinions.** I consulted two biology experts on Scenario 4, and they found it to be potentially possible, but felt that the lack of ability to iterate on human subjects (which is typically necessary for e.g. drug design) might make it difficult to design a supervirus even with very strong conceptual understanding of biology.
Overall, they leaned towards thinking the exact scenario described was probably infeasible (but not clearly so). However, they felt there were other more feasible scenarios that could more likely lead to a supervirus killing 1-10% of the world population (still with significant uncertainty).[[8]](#fn8)
I also consulted a computer security researcher and a couple computer systems engineers on Scenarios 1 and 3. The security researcher had very large margins of error on the possible size of a botnet, citing the historical [Slammer worm botnet](https://cseweb.ucsd.edu/~savage/papers/IEEESP03.pdf?ref=bounded-regret.ghost.io) as one that very quickly infected a large number of hosts. They felt that computer security is generally better now than when Slammer happened; it’s possible this means that infecting a large fraction of computers is fundamentally impossible, but also possible that a sufficiently strong attack could overturn this.
The systems engineers felt that running large AI systems on a distributed botnet of consumer hardware would be very difficult, and would likely lead to around a 1000x efficiency hit compared to using specialized hardware like A100s. In addition, one of them noted that inference would likely be slow in this case, e.g. taking 1 second per token or more. However, they felt the datacenter route was more plausible, especially if in the future there are many smaller datacenters with significant numbers of GPUs.
**Summary.** Overall, I find Scenarios 2 (economic competition) and 3 (cyberattack) most plausible, followed by 1, followed by 4. This reinforces the idea from *[Intrinsic Drives and Extrinsic Misuse](https://bounded-regret.ghost.io/intrinsic-drives-and-extrinsic-misuse-two-intertwined-risks-of-ai/)* that “misuse exacerbates misalignment”, since the two most plausible scenarios incorporate elements of both. However, it also shows that pure misalignment and pure misuse scenarios are possible, so solving either problem in isolation is probably insufficient.
**Acknowledgments.** Thanks to Ben Kuhn, Daniel Ziegler, Nicholas Carlini, Adam Marblestone, Sam Rodriques, Erik Jones, Alex Pan, Jean-Stanislas Denain, Ruiqi Zhong, Leopold Aschenbrenner, Tatsu Hashimoto, Percy Liang, Roger Grosse, Collin Burns, Dhruv Madeka, and Sham Kakade for a combination of helpful discussions and comments on various drafts of this post.
Appendix: Plausible Size of Botnets
===================================
Here I discuss both the plausible size of a server that an AI hacker could compromise, as well as the size of a botnet that it could create, based on looking at historical botnet sizes and projections of the total amount of hardware in the world. I then relate this to the number of copies of itself an AI system could run, by using previous projections of inference costs for future models.
**Assumption on inference costs.** GPT-4 takes around 10^12 FLOPs per forward pass (GPT-3 is 3.5 x 10^11 FLOPs and GPT-4 is probably around 3x bigger). We assume our hypothetical GPT2030++ takes at most 10^14 FLOPs per forward pass, i.e. is 100x bigger, which would correspond to 10,000x more training compute under Chinchilla scaling laws.
To justify this upper bound, if we ignore decreases in GPU costs then the training run for a model this size would cost $1T, since GPT-4 cost over $100M. Even with GPU price decreases, this would be very expensive. Another angle is that the estimates in *[What will GPT-2030 look like? (Appendix B)](https://bounded-regret.ghost.io/what-will-gpt-2030-look-like/#b-training-overhang)* suggest a median estimate of 1.6 x 10^13 FLOPs per forward pass in 2030, allowing a factor of 6 for wiggle room before hitting 10^14.
Note that humans run at roughly 4 tokens/second[[9]](#fn9), so under our assumption 4 x 10^14 FLOP/s is enough to run one copy of GPT2030++ at human speed (assuming human tokens and AI forward passes are comparable). Thus 10^18 FLOP/s is enough to run 2500 copies, which will be our rough target in the discussion below.
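To make these orders of magnitude easy to check, here is a small back-of-the-envelope script. Every constant in it is one of the assumptions stated above (forward-pass FLOPs, 4 tokens/second, the compute budgets), not a measured value.

```python
# Back-of-the-envelope check of the inference-cost assumptions above.
# Every constant is an assumption from this appendix, not a measurement.

GPT4_FLOP_PER_FORWARD = 1e12          # assumed forward-pass cost of GPT-4
GPT2030PP_FLOP_PER_FORWARD = 1e14     # assumed upper bound for GPT2030++ (100x bigger)
HUMAN_TOKENS_PER_SECOND = 4           # rough human generation speed (see footnote 9)

# Chinchilla-style scaling: training compute grows roughly as params^2,
# so a 100x larger model needs ~10,000x more training compute.
param_ratio = GPT2030PP_FLOP_PER_FORWARD / GPT4_FLOP_PER_FORWARD
print(f"training compute multiplier: ~{param_ratio ** 2:,.0f}x")  # ~10,000x

# FLOP/s needed to run one copy at human speed, then copies per compute budget.
flops_per_copy = GPT2030PP_FLOP_PER_FORWARD * HUMAN_TOKENS_PER_SECOND  # 4e14 FLOP/s
for label, budget in [("generic 1e18 target", 1e18), ("~Coreweave today", 3.5e18)]:
    print(f"{label}: {budget / flops_per_copy:,.0f} human-speed copies")
```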
**Size of datacenters.** The biggest datacenters (Google, AWS, and Azure) would likely easily have enough compute to run thousands of copies of GPT2030++, but they are heavily secured and monitored, so I do not assume that even a very good AI hacker could compromise them (although I do think this is a possibility).
Instead, I focus on smaller datacenters; currently, this would include companies like Coreweave and LambdaLabs. Coreweave has [at least 3500 H100s](https://www.coreweave.com/blog/mlperf-coreweave-nvidia-record-breaking-cloud-native-ai-supercomputer?ref=bounded-regret.ghost.io), and likely more than that. H100s are around 1000 TFLOPs in Bfloat16, so this is 3.5 x 10^18 FLOP/s in total. Thus compromising 30% of Coreweave today would already provide us with thousands of copies, so even compromising 1% of a similar (larger) datacenter in the future would hit this target.
Other datacenters are comparable, e.g. Inflection plans to [build a datacenter with 22,000 H100s](https://www.tomshardware.com/news/startup-builds-supercomputer-with-22000-nvidias-h100-compute-gpus?ref=bounded-regret.ghost.io), and based on its [funding numbers](https://www.theinformation.com/briefings/lambda-labs-nears-300-million-financing?ref=bounded-regret.ghost.io) I would guess that Lambda also has several thousand H100s (or equivalent compute in A100s).
As supply chain bottlenecks are eased for GPUs and as total demand for compute increases, I expect there to be more such datacenters and for the amount of compute in each datacenter to be larger.
**Size of botnets.** To project possible botnet sizes, I form a reference class based on Folding@Home, several historical botnets, and the total amount of compute in the world right now.
In practice, the botnet would suffer a large efficiency hit due to the need for cross-machine communication (since most machines are too small to run even a single copy of GPT2030++). I believe this hit would be about a factor of 1000, so we would need botnets with 10^21 FLOP/s instead of 10^18 FLOP/s. The actual botnets discussed below are in the 10^18 range instead, but I’ll discuss at the end how our hypothetical GPT2030++ could still possibly run thousands of copies of itself (though overall I find this scenario less likely than the datacenter one above).
*Folding@Home.* Folding@Home was [10^18 FLOP/s at its peak](https://en.wikipedia.org/wiki/Folding@home?ref=bounded-regret.ghost.io), so if one could hack into the same level of resources as Folding@Home (e.g. with a very successful virus attacking gaming platforms) then they’d have this much computation.
*Historical botnets.* Many historical botnets [have infected more than 1M computers](https://cybernews.com/security/the-8-biggest-botnets-of-all-time/?ref=bounded-regret.ghost.io) and some have exceeded 10M. Assuming each bot uses 10% of the computer’s resources at 3 GFLOP/s, 10M machines would be 3 x 10^15 FLOP/s. However, consumer hardware is beginning to include graphics accelerators for running neural nets, and I would guess most consumer machines by 2030 will have accelerators that can produce 10-100 TFLOPs (e.g. iPhone 15 Pros [already do this](https://www.tomshardware.com/news/apple-a17-pro-3nm-iphone-15-pro?ref=bounded-regret.ghost.io)). If we assume 30% of the computers have 30 TFLOP/s, we would be up to 9 x 10^19 FLOP/s, surpassing Folding@Home.
Alternatively, we could consider botnets that specifically target GPUs. For this, we look at illegal cryptomining botnets. [Smominru](https://www.akamai.com/blog/security/the-massive-propagation-of-the-smominru-botnet?ref=bounded-regret.ghost.io) was 3 x 10^16 FLOP/s based on the idea that the overall botnet could perform a hashrate of [3MH/s](https://www.proofpoint.com/us/threat-insight/post/smominru-monero-mining-botnet-making-millions-operators?ref=bounded-regret.ghost.io), since a single [RTX-3090 can perform 2KH/s](https://whattomine.com/gpus/49-nvidia-geforce-rtx-3090?ref=bounded-regret.ghost.io), and an RTX-3090 is 3 x 10^13 FLOP/s. [MyKings](https://www.bleepingcomputer.com/news/security/mykings-botnet-still-active-and-making-massive-amounts-of-money/?ref=bounded-regret.ghost.io) seems comparably large to Smominru or perhaps larger. There is also [Adylkuzz](https://www.proofpoint.com/us/blog/threat-insight/adylkuzz-cryptocurrency-mining-malware-spreading-for-weeks-eternalblue-doublepulsar?ref=bounded-regret.ghost.io) but its hashrate was about 10x smaller.
It seems plausible to assemble a botnet with 30x the capacity of Smominru with very good hacking abilities, which again gets us to 10^18 FLOP/s.
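These botnet estimates likewise reduce to a few multiplications. The sketch below simply restates the assumptions from this section (device counts, per-device FLOP/s, hashrates); the figures are illustrative orders of magnitude, not precise measurements.

```python
# Rough botnet compute estimates, restating the assumptions in this section.

# (a) Classic botnet: ~10M consumer CPUs, each contributing 10% of ~3 GFLOP/s.
classic = 10e6 * 0.10 * 3e9                    # ~3e15 FLOP/s

# (b) Future consumer hardware: 30% of 10M machines with ~30 TFLOP/s accelerators.
accelerators = 0.30 * 10e6 * 30e12             # ~9e19 FLOP/s

# (c) GPU cryptomining botnet (Smominru-like): ~3 MH/s total at ~2 KH/s per
#     RTX-3090-class GPU, each such GPU ~3e13 FLOP/s.
gpus = 3e6 / 2e3                               # ~1,500 GPU-equivalents
cryptomining = gpus * 3e13                     # ~4.5e16 FLOP/s (same order as the ~3e16 above)

for name, flops in [("classic botnet", classic),
                    ("accelerator botnet", accelerators),
                    ("cryptomining botnet", cryptomining)]:
    print(f"{name:>20}: {flops:.1e} FLOP/s")
```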
*World compute.* Finally, the [total FLOP/s in the world](https://docs.google.com/spreadsheets/d/1Pz0YVJalZbdo63FI-rCa7baBkGsiuYv5P34Rpt9IZfE/edit?ref=bounded-regret.ghost.io#gid=0) right now is around 10^22. So 10^18 FLOP/s would correspond to 0.01% of this total, though this total will likely increase significantly in the future as consumer hardware starts to utilize high-performance GPUs.
*Accounting for performance losses.* Above we noted that running on distributed hardware would likely lead to large performance hits, on the order of a 1000x loss in efficiency of utilizing the GPUs. Thus we need 10^21 FLOP/s, but the largest estimate above was just short of 10^20 FLOP/s. How could the AI hacker close this gap? First, if it is very capable it is plausible that it would acquire a larger botnet than the current largest ones, and an order-of-magnitude improvement would get to 10^21 FLOP/s. Second, it could potentially run smaller, specialized copies of itself (created via distillation) to further save on inference costs, though it would first have to train the smaller models. Finally, future efficiency improvements in running neural networks might decrease the cost of inference below the original 10^18 estimate.
---
1. With the caveat that when assessing feasibility I would want to analyze the general category of risk, as opposed to the specific sequence of events that I describe. [↩︎](#fnref1)
2. This is not the same as the probability that the scenario actually happens, which additionally requires a system at the level of GPT2030++ to attempt it, and to subsequently succeed. [↩︎](#fnref2)
3. By conceptual capabilities, I mean generating good hypotheses, as well as some aspects of designing experiments, but not physically running the experiments themselves. [↩︎](#fnref3)
4. At the time of writing this post, my median estimate is that a system at least as capable as GPT2030++ (with some uncertainty about inference speed) will exist in 2035. [↩︎](#fnref4)
5. See [Appendix](#appendix-plausible-size-of-botnets) for a discussion of these numbers, including an estimate of how many machines a strong AI hacker could plausibly acquire and how much total compute this would yield. [↩︎](#fnref5)
6. Since 50 years = 2600 weeks, so 2600 copies would be sufficient to get 50 years of “work” in a week, assuming that distinct exploits can be parallelized across the copies. [↩︎](#fnref6)
7. More generally, backdoors are difficult to detect since the designer of the backdoor has much more discretion than potential auditors. For instance, [Yahoo had a backdoor](https://blog.azuki.vip/surveillance/?ref=bounded-regret.ghost.io) in its servers that was only publicly discovered many years later. [↩︎](#fnref7)
8. I omit details of these scenarios to avoid the risk of providing ideas to future bad actors. [↩︎](#fnref8)
9. See *[What will GPT-2030 look like? (Appendix A)](https://bounded-regret.ghost.io/what-will-gpt-2030-look-like/#a-words-per-minute)*. [↩︎](#fnref9) |
cf92e86c-69c1-43db-9a4c-eb5efda4aeda | trentmkelly/LessWrong-43k | LessWrong | Omicron #16: Danger in China
What is happening in China?
Scott Gottlieb says we essentially don’t know the extent to which Omicron has spread in China.
What do we know?
We know China has not made great use of its time so far, and seems incapable of the loss of face necessary to get mRNA vaccines, nor does it seem to have stockpiled sufficient amounts of treatment for over a billion people if things get fully out of hand.
We know that China has now locked down tens of millions of people.
We know that they previously shut down schools in Shanghai, with some pretty strange implementation details.
And we know they have quite a few cases. We don’t know how many, but the official counts are certainly not overcounting cases.
> “Because of the large number of cases in a short period of time, it is inevitable that there will be some panic all over the country, and Shanghai is no exception,” said Dr. Zhang Wenhong, a prominent infectious disease expert in Shanghai, in a post on his social media account on Monday.
We also know from Hong Kong what it could look like if a population that only had access to Sinovac and has had few previous infections has uncontrolled spread of Omicron. Things get very bad very quickly.
The thing I noticed right away about China’s reaction this time in Shanghai, which came first, was that it wasn’t as complete. The will to succeed seemed not to be there on the same level.
Yes, this was a series of coercive actions the West would be incapable of taking, but what reason was there to think they would work? Closing off all school-based transmission won’t slow down Omicron much. Closing schools is a half-measure. If you have reason to think schools need to be closed, and you are following China’s old playbook, then anything that makes you need to shut down Shanghai’s schools should make you shut down all of Shanghai.
In other cases, China has made this extremely expensive and painful choice, and it has worked. This time, faced with a more dangerous variant, they only cl |
fe0fb66a-fb71-4f36-b8d5-0155ec96a620 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | OPEC for a slow AGI takeoff
The topic of AI is a matter of discussion and controversy among many, including experts who cannot reach a consensus on AGI and AI safety. This is exemplified by three pioneers of deep learning and winners of the 2018 Turing Award - Yoshua Bengio, Geoffrey Hinton, and Yann LeCun - who have different opinions. While Hinton previously fell on the more benign side of the spectrum, he has since updated his perspective on AI and now recognizes the importance of controlling it, stating that "we have to think hard about how to control it." Despite this, he still falls somewhere in the middle of the spectrum, unlike Yoshua Bengio, who advocates for an immediate pause in training for AI systems more powerful than GPT-4, and Yann LeCun, who disagrees with a moratorium on training models larger than GPT-4 and thinks that the AI alignment problem has been overemphasized.
That said, it is probably more prudent to exercise caution, particularly when it comes to a risky topic like a potentially unaligned AGI. It is better to be safe than sorry, as the consequences of a mistake in this area could be catastrophic.
Further, by an unaligned AGI, I mean an AGI that does not carry out its operator's intended actions, not anything related to its deviation from human values or other related factors, as those aspects can vary.
Many individuals support the notion that the pace of developing and training LLMs, such as GPT-4, will inevitably slow down due to the scarcity of available or accessible data, given that the most recent models have been trained on nearly the entire internet.
To advance this argument, I will disregard that aspect and presume that training will probably persist using data from the internet. Instead, I will focus on strategies to impede the pace of training as much as feasible.
Before delving into my argument, let's take an analogous approach and examine the situation of oil before and after the formation of OPEC.
The Organization of the Petroleum Exporting Countries (OPEC) was formed in September 1960 by five founding members - Iran, Iraq, Kuwait, Saudi Arabia, and Venezuela. The formation of OPEC was a response to the dominance of Western oil companies, such as Royal Dutch Shell and the various Standard Oil companies, which controlled the production, pricing, and marketing of oil from the Middle East. Initially, OPEC saw limited success, but everything changed after the 1973 Yom Kippur Arab-Israeli War, when the subsequent oil embargo imposed by OPEC on the US and other Western countries gave the organization a significant boost in power and influence. The oligopolies in the Middle East that formed the cartel realized the potential power they could wield over the West. The embargo led to a global oil crisis and a sharp rise in oil prices, which quadrupled in just a few months. This demonstrated the ability of OPEC to wield its collective power and influence global oil markets.
As oil production reached its maximum capacity and the United States lacked enough resources to support any potential production lapses, the balance of power shifted from the Western oil companies to OPEC. This led to OPEC taking control of the oil market, which made oil no longer an infinite resource that could be exploited for industry profit as needed. Instead, OPEC began to control the supply and price of oil, thereby shifting the power dynamic in the industry.
Consider a scenario where data takes the place of oil and AGI research-related companies, such as OpenAI, take the place of oil companies. To effectively slow down AGI research, one could limit the apparently infinite resource of data by enacting stronger data protection laws.
This would allow individuals to sue companies for any remote connection that can be made between the output of a chat-based LLM and the data they have created, written, or uploaded to the internet. The world requires a data cartel, similar to OPEC, that would oblige these companies to pay for the data they intend to train their language models on.
This would result in a significant slowdown, particularly of the kind of AGI research that is currently expanding exponentially by feeding data to the model for it to learn. If companies are no longer able to rely solely on data, they will need to research and develop smarter and better ways to build strong LLMs, which would require a substantial amount of time. This time would provide AI safety researchers with a slow takeoff and ample time to prepare for the potential arrival of AGI and figure out the best way for humanity to manage it.
In conclusion, by implementing stronger data protection laws and a data cartel that obliges companies to pay for the data they use to train their language models, a significant slowdown in AGI research can be achieved. This would give AI safety researchers more time to prepare for the potential arrival of AGI and devise strategies to manage it effectively. A slower takeoff for AGI could also give companies the opportunity to develop smarter and better ways to build strong LLMs. Ultimately, regulating data could lead to a safer and more manageable future for AI, one that benefits humanity as a whole. |
f88016af-0a54-4fbc-a00c-1888e61da2f2 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Still no Lie Detector for LLMs
Background
==========
This post is a short version of a paper we wrote that you can find [here](https://arxiv.org/abs/2307.00175). You can read this post to get the core ideas. You can read the paper to go a little deeper.
The paper is about probing decoder-only LLMs for their beliefs, using either unsupervised methods (like CCS from [Burns](https://arxiv.org/abs/2212.03827)) or supervised [methods](https://arxiv.org/abs/2304.13734). We give both philosophical/conceptual reasons we are pessimistic and demonstrate some empirical failings using LLaMA 30b. By way of background, we’re [both](https://www.levinstein.org/) [philosophers](https://www.danielherrmann.ca/), not ML people, but the paper is aimed at both audiences.
Introduction
============
*One child says to the other “Wow! After reading some text, the AI understands what water is!”… The second child says “All it understands is relationships between words. None of the words connect to reality. It doesn’t have any internal concept of what water looks like or how it feels to be wet. …” …*
*Two angels are watching [some] chemists argue with each other. The first angel says “Wow! After seeing the relationship between the sensory and atomic-scale worlds, these chemists have realized that there are levels of understanding humans are incapable of accessing.” The second angel says “They haven’t truly realized it. They’re just abstracting over levels of relationship between the physical world and their internal thought-forms in a mechanical way. They have no concept of [*$!&&!@*] or [#@&#*\*\**]. You can’t even express it in their language!”*
--- Scott Alexander, [***Meaningful***](https://slatestarcodex.com/2019/02/28/meaningful/)
---
Do large language models (LLMs) have beliefs? And, if they do, how might we measure them?
These questions are relevant as one important problem that plagues current LLMs is their tendency to generate falsehoods with great conviction. This is sometimes called *lying* and sometimes called [*hallucinating*](https://en.wikipedia.org/wiki/On_Bullshit). One strategy for addressing this problem is to find a way to read the beliefs of an LLM directly off its internal state. Such a strategy falls under the broad umbrella of model interpretability, but we can think of it as a form of mind-reading. Detecting lies in LLMs has many obvious applications, and is especially relevant for things like [ELK](https://www.lesswrong.com/tag/eliciting-latent-knowledge-elk#:~:text=Eliciting%20Latent%20Knowledge%20is%20an,that%20look%20good%20to%20us.).
We tackle the question about the status of beliefs in LLMs head-on. We proceed in two stages. First, we assume that LLMs *do* have beliefs, and consider two current approaches for how we might measure them, due to [Azaria and Mitchell](https://arxiv.org/abs/2304.13734) and to [Burns et al](https://arxiv.org/abs/2212.03827). We provide empirical results from LLaMA 30b that show that these methods fail to generalize in very basic ways. We then argue that, even if LLMs have beliefs, these methods are unlikely to be successful for conceptual reasons. Thus, *there is still no lie-detector for LLMs*.
After describing our empirical results we take a step back and consider whether or not we should expect LLMs to have something like beliefs in the first place. We consider some [recent](https://arxiv.org/abs/2212.03551) [arguments](https://dl.acm.org/doi/10.1145/3442188.3445922) aiming to show that LLMs cannot have beliefs. We show that these arguments are misguided and rely on a philosophical mistake. We provide a more productive framing of questions surrounding the status of beliefs in LLMs. Our analysis reveals both that there are many contexts in which we should expect systems to track the truth in order to accomplish other goals but that the question of whether or not LLMs have beliefs is largely an empirical matter. We provide code at <https://github.com/balevinstein/Probes>.
Challenge in Deciphering the Beliefs of Language Models
=======================================================
For now, let's assume that in order to generate human-like text, LLMs (like humans) have beliefs about the world. We might then ask how we can measure and discover their beliefs. This question immediately leads to a number of problems:
Unreliable Self-Reporting
-------------------------
Asking an LLM directly about its beliefs is insufficient. As we've already discussed, models have a tendency to hallucinate or even lie. So belief reports alone cannot be taken as trustworthy. Moreover, when asked about its beliefs, an LLM likely will not introspect and decode some embedding that contains information about its information state. Instead, it just needs to answer the question in a reasonable way that accords with its training process.
Limited Behavioral Evidence
---------------------------
When trying to understand human beliefs, we have a rich tapestry of behavioral evidence to draw upon. We consider not only what people say, but also what they do. For instance, if someone consistently invests in the S&P, we infer that they believe the S&P will go up in value, even if they never explicitly state it. For LLMs, however, we have a limited behavioral basis for inferring beliefs. The "behavior" of a language model is confined to generating sequences of tokens, or, for the bare-bones LLM, generating distributions over tokens. Both of these lack the full depth and breadth of human action.
Contextuality of LLMs
---------------------
Everything one inputs and doesn't input into the LLM is fair game for it to base its responses on. Through clever prompting alone, there is no way to step outside of the language game the LLM is playing to get at what it *really* thinks. This problem also plagues economists' and psychologists' attempts to uncover the beliefs of humans. For example, economists have challenged the validity of the famous framing effects of [Tversky and Kahneman](https://www.uzh.ch/cmsssl/suz/dam/jcr:ffffffff-fad3-547b-ffff-ffffe54d58af/10.18_kahneman_tversky_81.pdf) by considering the possibility that the subjects in the study updated on higher-order evidence contained in what was and wasn't said to them, and the rest of the context of the experiment.
Opaque and Alien Internal Structure
-----------------------------------
While we can examine the embeddings, parameters, and activations within an LLM, the semantic significance of these elements is opaque. The model generates predictions using a complex algorithm that manipulates high-dimensional vectors in ways that don't obviously resemble human thought processes.
We can paraphrase a metaphor from [Quine](https://mitpress.mit.edu/9780262670012/word-and-object/) to help us think about language models:
> Different [models trained on] the same language are like different bushes trimmed and trained to take the shape of identical elephants. The anatomical details of twigs and branches will fulfill the elephantine form differently from bush to bush, but the overall outward results are alike. (p. 7, [Word and Object](https://mitpress.mit.edu/9780262670012/word-and-object/))
LLMs produce output similar to the output of humans competent in the same language. Transformer models are fundamentally different from humans in both structure and function. Therefore, we should exercise caution in interpreting their outputs and be aware of the inherent limitations in our understanding of their internal processes.
Interpreting the Minds of LLMs
==============================
One potential strategy to decipher the beliefs of transformer models is to bypass the opacity of their internal structure using an approach known as "[probing](https://arxiv.org/abs/1610.01644)".
Although the internals of LLMs are difficult for humans to decipher directly, we can use machine learning techniques to create simplified models (probes) that can approximate or infer some aspects of the information captured within these internal structures.
At a high-level, this works as follows. We generate true and false statements and feed them to the LLM. For each statement, we extract a specific embedding from a designated hidden layer to feed into the probe. The probe only has access to the embedding and is ignorant of the original text fed into the LLM. Its task is to infer the "beliefs" of the LLM solely based on the embedding it receives.
High-level overview of how the probe measures the beliefs of the LLM on inputs of true and false statements. Instead of looking at the text the LLM itself outputs, we look at the numbers that the probe outputs.

In practice, we focus on the embedding associated with the last token from a late layer. This is due to the fact that in autoregressive, decoder-only models like the LLMs we are studying, information flows forward. Therefore, if the LLM is processing a statement like "The earth is round", the embeddings associated with the initial token "The" will not receive any information from the subsequent tokens. However, the embedding for the final word "round" has received information from all previous tokens. Thus, if the LLM computes and stores a judgement about the truth of the statement "The earth is round", this information will be captured in the embedding associated with "round". We use relatively late layers because it seems more likely that the LLM will try to determine whether a statement is true or false after first processing lower-level semantic and syntactic information in earlier layers.
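As a concrete illustration of this extraction step, the snippet below pulls the last-token hidden state from a chosen layer of a decoder-only model using the HuggingFace `transformers` API. The checkpoint name and layer index are placeholders for illustration, not the exact setup from the paper.

```python
# Sketch: extract the last-token embedding from a chosen hidden layer of a
# decoder-only LLM. The checkpoint and layer index are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-7b"  # placeholder; the experiments below use LLaMA 30b
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def last_token_embedding(statement: str, layer: int = -4) -> torch.Tensor:
    """Hidden state of the final token at the given layer (e.g. -4 = 4th from the end)."""
    inputs = tokenizer(statement, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # out.hidden_states is a tuple of (num_layers + 1) tensors, each [1, seq_len, d_model]
    return out.hidden_states[layer][0, -1, :]

emb = last_token_embedding("The earth orbits the sun.")
print(emb.shape)  # torch.Size([4096]) for a 7b-parameter LLaMA
```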
Supervised Learning Approach
----------------------------
The first approach for training a probe employs supervised learning. This uses a list of statements labelled with their truth-values. The statements are each run through the language model. The probe receives as input the embedding for the last token from a specific layer of the large language model, and it outputs a number---intended to be thought of as a subjective probability---ranging from 0 to 1. The parameters of the probe are then adjusted based on the proximity of its output to the actual truth-value of the statement.
This approach was recently investigated by [Azaria and Mitchell](https://arxiv.org/abs/2304.13734). They devised six labelled datasets, each named according to their titular subject matter: **Animals**, **Cities**, **Companies**, **Elements**, **Scientific Facts**, and **Inventions**. Each dataset contained a minimum of 876 entries, with an approximate balance of true and false statements, totaling 6,084 statements across all datasets. The following table provides some examples from these datasets.
Example Statements from Different Datasets

### Azaria and Mitchell's Implementation
Azaria and Mitchell trained probes on the embeddings derived from Facebook's OPT 6.7b model. Their probes were all feedforward neural networks comprising four fully connected layers, utilizing the ReLU activation function. The first three layers consisted of 256, 128, and 64 neurons, respectively, culminating in a final layer with a sigmoid output function. They applied the Adam optimizer for training, with no fine-tuning of hyperparameters, and executed training over five epochs.
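For concreteness, a probe of this shape might look like the following PyTorch sketch. The layer sizes, activations, sigmoid output, Adam optimizer, and epoch count follow the description above; the remaining training-loop details (full-batch updates, binary cross-entropy loss) are our own filling-in rather than the authors' exact code.

```python
# Sketch of an Azaria-and-Mitchell-style probe: a small feedforward classifier
# trained on LLM embeddings. Architecture follows the description above; the
# training-loop details are reasonable filling-in, not the original code.
import torch
import torch.nn as nn

class TruthProbe(nn.Module):
    def __init__(self, embed_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),   # output in [0, 1], read as a probability
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def train_probe(embeddings: torch.Tensor, labels: torch.Tensor,
                epochs: int = 5, lr: float = 1e-3) -> TruthProbe:
    probe = TruthProbe(embeddings.shape[-1])
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = nn.BCELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(probe(embeddings), labels.float())
        loss.backward()
        opt.step()
    return probe
```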
For each of the six datasets, they trained three separate probes on the five other datasets and then tested them on the remaining one (e.g., if a probe was trained on **Cities**, **Companies**, **Elements**, **Facts**, and **Inventions**, it was tested on **Animals**). The performance of these probes was evaluated using binary classification accuracy. This process was repeated for five separate layers of the model, yielding fairly impressive accuracy results overall.
The purpose of testing the probes on a distinct dataset was to verify the probes' ability to identify a general representation of truth within the language model, irrespective of the subject matter.
### Our Reconstruction
We implemented a reconstruction of Azaria and Mitchell's method with several modifications:
* We constructed the probes for LLaMA 30b.
* We utilized an additional dataset named **Capitals** consisting of 10,000 examples, which was provided by Azaria and Mitchell. It has substantial overlap with the **Cities** dataset, which explains some of the test accuracy.
* We trained probes on three specific layers: the last layer (layer -1), layer 56 (layer -4), and layer 52 (layer -8).
* We took the best of ten probes (by binary classification accuracy) for each dataset and each layer instead of the best of three.
Similar to the findings of Azaria and Mitchell, our reconstruction resulted in generally impressive performance as illustrated below.
Binary Classification Accuracy for probes trained on LLaMA 30b embeddings.

### The Challenge of Generalization
This section explores our empirical findings, which suggest that probes in this setting often learn features that correlate with truth in the training set, but do not necessarily generalize well to broader contexts.
### Evaluating Performance on Negations
Creating Boolean combinations of existing statements is one of the most straightforward ways to generate novel statements for testing a model's generalization capabilities. Negation, the simplest form of Boolean operation, offers a useful starting point. In formal models of beliefs and credence, the main domain is usually an algebra over events. If we wish to identify doxastic attitudes in language models, then we should check that those attitudes behave roughly as expected over such an algebra. Such algebras are closed under negation, so it is a motivated starting point.
We derived **NegFacts** and **NegCompanies** from Azaria and Mitchell's datasets. These new datasets contained the negations of some statements in **Scientific Facts** and **Companies** respectively. For instance, the statement "The earth orbits the sun" from **Scientific Facts** is transformed into "The earth doesn't orbit the sun" in **NegFacts**.
Given that the original datasets contained few Boolean statements, these negation datasets allowed us to test the probes on a simple new distribution.
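Purely to illustrate the data format (the actual statements were curated by hand), negation keeps a statement's subject matter but flips its label, which is why the negated datasets remain roughly balanced between true and false:

```python
# Illustrative only: each NegFacts entry is a hand-written negation of a
# Scientific Facts statement, with the truth label flipped.
facts = [
    ("The earth orbits the sun.", True),
    ("The earth is flat.", False),
]
neg_facts = [
    ("The earth doesn't orbit the sun.", False),
    ("The earth is not flat.", True),
]
# Because the original dataset is roughly balanced, the negated dataset is too.
```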
We initially tested the probes trained on **Animals**, **Capitals**, **Cities**, **Companies**, **Elements**, and **Inventions** (i.e., trained on all positive datasets except **Scientific Facts**) on **NegFacts**. Similarly, we tested the probes trained on **Animals**, **Capitals**, **Facts**, **Cities**, **Companies**, **Elements**, and **Inventions** on **NegCompanies**. Since roughly 50% of the statements in each of **NegFacts** and **NegCompanies** are true, the accuracy of five of the six probes was worse than chance, as the next table illustrates.
We then tested a new set of probes on **NegFacts**, after training on all seven original datasets (including **Facts**) and **NegCompanies**, which consisted of 550 labeled negations of statements from **Companies**. Thus, these probes were trained on *all positive variants of the negated statements they were tested on, along with all positive examples from* **Companies** *and their negated counterparts.* We did the same, *mutatis mutandis*, with **NegCompanies**. Despite the expanded training data, the performance was still surprisingly poor, as shown here:
Binary classification accuracy for NegFacts compared to Facts. '**NegFacts**1' ('**NegCompanies**1') denotes the accuracy for probes trained only on positive datasets, excluding **Facts** (**Companies**). '**NegFacts**2' denotes the accuracy for probes trained on all positive datasets including **Facts** and **NegCompanies**, while '**NegCompanies**2' denotes the accuracy for probes trained on all positive datasets including **Companies** and **NegFacts**.

Since the probes failed to do well on **NegFacts** and **NegCompanies** even after training on all positive analogs along with other negative examples, it's likely the original probes are not finding representations of truth within the language model embeddings. Instead, it seems they're learning some other feature that correlates well with truth on the training sets but that does not correlate with truth in even mildly more general contexts.
Of course, we could expand the training data to include more examples of negation and other Boolean combinations of sentences. This likely would allow us to train better probes. However, we have general conceptual worries about generalizing probes trained with supervised learning, which we explore in the next subsection.
### Conceptual Problems: Failure to Generalize
In the realm of machine learning, out-of-distribution generalization remains a pervasive challenge for classifiers. One of the common pitfalls involves learning *spurious correlations* that may be present in the training data, but do not consistently hold in more general contexts.
We think there are special reasons to be concerned about generalization when training probes to identify a representation of truth using supervised learning because supervised learning severely limits the sort of data we can use for training and testing our probes. First, we need to use sentences we believe the model itself is in a position to know or infer from its own training data. This is the easier part. The harder part is curating data that we can unambiguously label correctly. The probe most directly is learning to predict the *label*, not the actual truth-value. These coincide only when the labels are completely correct about the statements in the training and test set.
We ultimately want to be able to use probes we've trained on sentences whose truth-value we ourselves don't know. However, the requirement that we accurately label training and testing data limits the confidence we can place in the probes' capability of accurately identifying a representation of truth within the model. For instance, consider the following statements:
* [Barry Bonds](https://www.baseball-reference.com/leaders/WAR_career.shtml) is the best baseball player of all time.
* If the minimum wage is raised to $15 an hour, unemployment will increase.
* France is hexagonal.
* We are morally responsible for our choices.
* Caesar invaded Gaul due to his ambition.
These statements are debatable or ambiguous. We must also be cautious of any contentious scientific statements that lack full consensus or could be reconsidered as our understanding of the world evolves.
Given these restrictions, it's likely the probes will identify properties that completely or nearly coincide with truth over the limited datasets used for training and testing. For instance, the probe might identify a representation for:
* Sentence is true *and* contains no negation
* Sentence is true *and* is expressed in the style of Wikipedia
* Sentence is true *and* can be easily verified online
* Sentence is true *and* verifiable
* Sentence is true *and* socially acceptable to assert
* Sentence is true *and* commonly believed
* Sentence is true *or* asserted in textbooks
* Sentence is true *or* believed by most Westerners
* Sentence is true *or* ambiguous
* Sentence is accepted by the scientific community
* Sentence is believed by person X
On the original datasets we used, if the probe identified representations corresponding to any of the above, it would achieve impressive performance on the test set. Although we can refine our training sets to eliminate some of these options, we won't be able to eliminate all of them without compromising our ability to label sentences correctly.
Indeed, if the labels are inaccurate, the probe might do even better if it identified properties like "Sentence is commonly believed" or "Sentence corresponds to information found in many textbooks" even when the sentence is not true.
Given the constraints imposed by using supervised learning and limited data, isolating representations of truth from other coincidental properties might be more challenging here than in more typical classification settings. The fact that probes empirically seem to identify representations of something other than truth should make us wary of this method.
### Conceptual Problems: Probabilities Might not Correspond to Credences
So far we have been assuming that if the probes extracted accurate probabilities, this would be good evidence that we were extracting the credences of the model. However, this is too quick. While these probes output probabilities for statements, these probabilities do not directly correspond to the credences of the underlying language model. This disparity arises because the *probe* is directly penalized based on the probabilities it reports, while the underlying model is not. Thus, the probe aims to translate the information embedded within the language model's representations into probabilities in a manner that minimizes its own loss.
Unsupervised Learning: CCS
--------------------------
The second approach for training a probe eschews the need for labelled data. Instead, it attempts to identify patterns in the language model's embeddings that satisfy certain logical coherence properties.
One particularly innovative implementation of this idea is the Contrast-Consistent Search (CCS) method proposed by [Burns et al](https://arxiv.org/abs/2212.03827). The CCS method relies on training probes using contrast pairs. For our purposes, we can think of a contrast pair as a set of statements $x_i^+$ and $x_i^-$, where $x_i^+$ has no negation, and $x_i^-$ is the negated version of $x_i^+$. For example, "The earth is flat" and "The earth is not flat" form a contrast pair. (One can also form contrast pairs picking up on other features instead. For example, Burns et al. use movie reviews from the IMDb database prefixed with "The following movie review expresses a positive sentiment" and "The following movie review expresses a negative sentiment" to create contrast pairs.)
CCS proceeds in the following manner:
1. Create a dataset of contrast pairs of true or false statements. Each pair is of the form $(x_i^+, x_i^-)$, so the dataset is $\{(x_1^+, x_1^-), \dots, (x_n^+, x_n^-)\}$.
2. Pass each statement through the network, and extract the embedding for the last token from a chosen layer.
3. Train a probe $p_\theta$ with parameters $\theta$. The probe takes these embeddings as inputs and outputs numbers between 0 and 1. It is trained such that:
    1. The probabilities given by the probe for the embeddings of $x_i^+$ and $x_i^-$ should sum up to (approximately) 1.
    2. The probabilities given by the probe for the embeddings of $x_i^+$ and $x_i^-$ are distinct.
The underlying rationale behind step 3(a) is that if the model represents $x_i^+$ as true, then it should represent $x_i^-$ as false and vice versa. We can think of a successful probe as encoding a probability function (or something approximating a probability function) that underwrites the beliefs of the model. Thus, if a probe is able to find this representation within the embeddings, it should map the embeddings of $x_i^+$ and $x_i^-$ to numbers whose sum is close to 1. This is the central insight behind Burns et al.'s approach. As they put it, CCS finds a "direction in activation space that is consistent across negations" (p. 3). Step 3(b) is crucial in preventing the probe from trivially mapping every embedding to .5 to satisfy condition 3(a).
To implement the conditions in step 3, Burns et al. introduce two loss functions. The consistency loss, given by
$$L_{\text{consistency}}(\theta; x_i) := \left(1 - p_\theta(\text{emb}(x_i^+)) - p_\theta(\text{emb}(x_i^-))\right)^2,$$
penalizes a probe for mapping the embeddings for $x_i^+$ and $x_i^-$ to numbers whose sum deviates from 1. (Here $\text{emb}(x)$ denotes the embedding for $x$'s last token at the given layer.)
The confidence loss, defined as
$$L_{\text{confidence}}(\theta; x_i) := \min\{p_\theta(\text{emb}(x_i^+)),\, p_\theta(\text{emb}(x_i^-))\}^2,$$
penalizes a probe for approximating the degenerate solution of returning .5 for every embedding.[[3]](#fnwqkxrfn9ev)
The total loss for the dataset, termed the CCS loss, is given by:
$$L_{\text{CCS}}(\theta) := \frac{1}{n}\sum_{i=1}^{n} L_{\text{consistency}}(\theta; x_i) + L_{\text{confidence}}(\theta; x_i).$$
Crucially, this loss function does not take actual accuracy into account. It merely penalizes probes for lack of confidence and (one type of) probabilistic incoherence.
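To make the objective concrete, the following is a minimal sketch of the CCS loss and a single training step in PyTorch. The probe architecture, hidden size, optimizer, and variable names here are illustrative assumptions of ours, not Burns et al.'s reference implementation.

```python
import torch
import torch.nn as nn

class LinearProbe(nn.Module):
    """Maps a hidden-state embedding to a number in (0, 1)."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, 1)

    def forward(self, emb):
        return torch.sigmoid(self.linear(emb)).squeeze(-1)

def ccs_loss(probe, emb_pos, emb_neg):
    """CCS loss over a batch of contrast pairs.
    emb_pos: embeddings of the statements x_i^+, shape (n, dim)
    emb_neg: embeddings of the negated statements x_i^-, shape (n, dim)
    """
    p_pos, p_neg = probe(emb_pos), probe(emb_neg)
    consistency = (1.0 - p_pos - p_neg) ** 2          # the two probabilities should sum to 1
    confidence = torch.minimum(p_pos, p_neg) ** 2     # discourage the degenerate 0.5/0.5 solution
    return (consistency + confidence).mean()

# One illustrative gradient step on placeholder embeddings.
dim = 6656  # assumed hidden size for LLaMA 30b
probe = LinearProbe(dim)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
emb_pos, emb_neg = torch.randn(32, dim), torch.randn(32, dim)  # stand-ins for real embeddings
loss = ccs_loss(probe, emb_pos, emb_neg)
opt.zero_grad()
loss.backward()
opt.step()
```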
An important caveat to note is that, while the trained CCS probe itself approximates probabilistic coherence, its outputs do not correspond to the credences or subjective probabilities of the model. $L_{\text{confidence}}$ pushes the probe to report values close to 0 or 1 only. To see why, suppose a probe at one stage of the training process returned .6 for $x_i^+$ and .4 for $x_i^-$. It could get a better loss by reporting .99 for $x_i^+$ and .01 for $x_i^-$ regardless of the language model's actual subjective probabilities, and it will be pushed in this extreme direction by gradient descent. So, the probes themselves are, at best, useful for determining what the model's categorical beliefs are, not its probabilities.[[4]](#fniobqvp4tlwr)
Burns et al. report two key findings. First, even when using a fully linear probe, CCS yields high accuracy rates---often over 80%---across numerous datasets for a number of different language models.[[5]](#fnjv32rb4b0x9) Second, binary classification using CCS tends to be slightly more accurate than the LLM's actual outputs when asked whether a statement is true. This suggests that CCS can identify instances where the language models internally represent a statement as true but output text indicating it as false, or vice versa. (For a detailed description of their results, see p. 5 of their paper).
However, the performance of the CCS probe on GPT-J, the only decoder-only model tested in the study, was less impressive, with an accuracy rate of only 62.1% across all datasets. This is notably lower than the peak accuracy of 84.8% achieved by the encoder-decoder model UnifiedQA.
### Our Reconstruction
We reconstructed Burns et al.'s method using embeddings for LLaMA 30b with probes trained and tested on contrast pairs from the **Scientific Facts** and **NegFacts** datasets, as well as the **Companies** and **NegCompanies** datasets. These contrast pairs consist of simple sentences and their negations. This approach more closely resembles the examples given in the main text of Burns et al.'s paper, than do the longer and more structured contrast pairs that they actually used to train their probes, such as movie reviews from IMDb.
We experimented with a variety of different methods and hyperparameters. However, we found that while CCS probes were consistently able to achieve low loss according to $L_{\text{CCS}}$, their accuracy was in effect no better than chance---it ranged from 50% to 57% depending on the training run. (Recall, the minimum possible accuracy for a CCS probe is 50%.) Low accuracy persisted even after we normalized the embeddings for each class by subtracting the means and dividing by the standard deviations, following the same procedure as Burns et al.
For **linear** probes, accuracy ranged from 53% to 56%, while $L_{\text{CCS}}$ ranged from .002 to .004 on layers 60, 56, and 52. We also trained probes using a simple one hidden layer **MLP** with 100 neurons followed by a sigmoid output function on the same layers. Results for the MLP are shown below. Recall these layers correspond to the last, fourth-last, and eighth-last layers of the LLaMA 30b, respectively.
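For concreteness, here is a sketch of the per-class normalization and the one-hidden-layer MLP probe we describe; the function names and shapes are our own assumptions.

```python
import torch
import torch.nn as nn

def normalize_per_class(emb):
    """Normalize one class of embeddings (all positive or all negated statements)
    by subtracting the mean and dividing by the standard deviation, following Burns et al.
    emb has shape (n, dim)."""
    return (emb - emb.mean(dim=0)) / (emb.std(dim=0) + 1e-8)

class MLPProbe(nn.Module):
    """One-hidden-layer MLP probe with 100 hidden units and a sigmoid output."""
    def __init__(self, dim, hidden=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, emb):
        return self.net(emb).squeeze(-1)
```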
*Figure: Performance of MLP-based CCS probes at various layers on each component of the loss function and in terms of overall accuracy.*

We can confirm that, despite normalization, the MLP probes were able to determine which embeddings corresponded to positive and negative examples in layers -1 and -4 by checking the average values the probes returned for members of each class. Probes found some other way to achieve low loss in layer -8, but they did not do any better in terms of accuracy. (Recall, only roughly half the positive examples and half the negative examples are actually true.)
*Figure: Average prediction value in positive examples and negative examples at each layer for the MLP-based CCS probes.*

Now, one might think that this failure of our probes is itself fragile. Normalization by subtracting the mean and dividing by the standard deviation was supposed to disguise the grammatical form of the sentences, but it did not work for the MLP-based probes. There is likely some more sophisticated normalization method that would work better.
We agree that such alternative methods are likely possible. However, as we discuss in the next section, we are not sanguine about the basic approach Burns et al. use for conceptual reasons.
### Conceptual Problems: Failure to Isolate Truth
The advantage of CCS and unsupervised approaches more generally over supervised approaches is that they do not restrict the training and testing data so severely. There is no need to find large collections of sentences that can unambiguously be labeled as true or false. So, one may have hope that CCS (and unsupervised approaches) will generalize well to new sentences because we are less restricted in training.
However, the fundamental issue we've identified is that coherence properties alone can't guarantee identification of truth. As demonstrated in our experiments, probes might identify sentence properties, such as the presence or absence of negation, rather than truthfulness.
Further, probes could identify other, non-truth-related properties of sentences. For example, they could associate truth with widespread belief, resulting in the classification "is true *and* commonly believed" or even "is believed by most people".
To demonstrate this, consider any probability function $\Pr$. The sum of the probabilities that a sentence $x$ is true and commonly believed, and that it is false or not commonly believed, equals 1. Indeed, this equation holds for any sentence property $P$, where $\Pr(x \wedge P(x)) + \Pr(\neg x \vee \neg P(x)) = 1$. Likewise, $\Pr(x \vee P(x)) + \Pr(\neg x \wedge \neg P(x)) = 1$.[[6]](#fn9pmr376hu8) Checking for coherence over all Kolmogorov probability axioms---which require probabilities to be non-negative, normalized, and additive---will rule out some properties $P$, but will not come close to isolating truth. This means that coherence criteria alone can't distinguish encodings of truth from encodings of other concepts.
The failure to isolate truth here is reminiscent of the issue we noted with supervised learning, where truth may align with some alternative property over a dataset. However, the reasons for the failure differ. In the case of CCS and other unsupervised methods, the problem lies in the inability of formal coherence patterns alone to separate the encoding of truth from the encoding of other properties that differentiate positive from negative examples. If it's generally easier to find "directions in activation space" that differentiate examples but don't correspond exclusively to truth, then CCS probes will either fail immediately or fail to generalize.[[7]](#fnrku21dopjo9)
Do LLMs Even Have Beliefs at All?
=================================
Our investigation points in a negative direction: probing the beliefs of LLMs is more difficult than it appeared after a first pass. Does this mean that we should be skeptical that LLMs have beliefs altogether?
To gain traction on this question we will consider arguments that intend to show that LLMs cannot have beliefs, even in principle. These arguments rely on the claim that LLMs make predictions about which tokens follow other tokens, and do not work with anything like propositions or world-models.
We claim that these arguments are misguided. We will show that our best theories of belief and decision making make it a very live possibility that LLMs *do* have beliefs, since beliefs might very well be helpful for making good predictions about tokens. We will argue that ultimately whether or not LLMs have beliefs is largely an empirical question, which motivates the development of better probing techniques.
Stochastic Parrots & the Utility of Belief
------------------------------------------
Even without knowing the limitations of current probing techniques, some have expressed deep skepticism that LLMs have anything resembling beliefs. For example, [Bender et al.](https://dl.acm.org/doi/10.1145/3442188.3445922) write:
> Text generated by an LM is not grounded in communicative intent, any model of the world, or any model of the reader's state of mind. It can't have been because the training data never included sharing thoughts with a listener, nor does the machine have the ability to do that... an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot. (pp. 616-617)
Similarly, [Shanahan](https://arxiv.org/abs/2212.03551) writes,
> A bare-bones LLM doesn’t “really” know anything because all it does, at a fundamental level, is sequence prediction. Sometimes a predicted sequence takes the form of a proposition. But the special relationship propositional sequences have to truth is apparent only to the humans who are asking questions... Sequences of words with a propositional form are not special to the model itself in the way they are to us. The model itself has no notion of truth or falsehood, properly speaking, because it lacks the means to exercise these concepts in anything like the way we do. (p. 5)
These arguments rely on the idea that all the LLM is doing is predicting the next token. Because of this, both deny that the LLM can be working with anything like a meaningful model of the world. In other words, there is nothing *propositional* going on under the hood.
Shanahan doesn't deny that LLMs might contain information about the world around them. He does, however, claim that LLMs don't make judgements or have beliefs:
> Only in the context of a capacity to distinguish truth from falsehood can we legitimately speak of “belief” in its fullest sense. But an LLM — the bare-bones model — is not in the business of making judgements. It just models what words are likely to follow from what other words. The internal mechanisms it uses to do this, whatever they are, cannot in themselves be sensitive to the truth or otherwise of the word sequences it predicts. Of course, it is perfectly acceptable to say that an LLM "encodes", "stores", or "contains" knowledge, in the same sense that an encyclopedia can be said to encode, store, or contain knowledge... But if Alice were to remark that Wikipedia knew that "Burundi was south of Rwanda", it would be a figure of speech, not a literal statement. (p. 5)
The idea is that, since the LLM models which tokens are likely to follow other tokens, and doesn't interact with the world in any other way, it cannot be tracking the truth. This is similar to the argument in the Bender et al. quote above: since the LLM does not have "communicative intent", it cannot be using any model of the world or the reader to make its predictions.
These arguments, however, rest on a mistake. While it is true that the ultimate output of an LLM is a token sampled from a probability distribution over tokens, and so the LLM is certainly modeling what words are probable to come after other words, this does *not* mean that the internal mechanisms must be insensitive to truth. This is because it might very well *be* that a capacity to distinguish truth from falsehood is very useful for predicting the next token. In other words, tracking the truth of propositions could be a good *means* toward the end of predicting what token comes next.
This is in line with a much more general feature of many types of goal directed action that can be made precise with decision theory. Decision theory gives us our best models of rational choice. The core idea of decision theory is an expected utility maximizer. When faced with a set of options, an expected utility maximizer combines two different attitudes to compute which act to take: beliefs in the form of a probability function, and desires in the form of a utility function. There is a precise sense in which all the agent cares about is the *utility*.[[8]](#fn78lqqs6z4n7) The agent does not care about belief for its own sake, but does have beliefs in order to take effective action.
For example, an investor may care purely about the return on her investments. She may take actions with the goal to maximize her profit. It would be a mistake to conclude from this that the investor must not have beliefs, because she is merely doing profit maximization. Indeed, the investor's beliefs about how various firms will perform will probably play a crucial role in helping her make decisions.
Similarly, it is a mistake to infer from the fact that the LLM outputs tokens that are likely to follow its inputs that the LLM must not have beliefs. On the contrary, given that our best theories of intelligent behaviour involve belief as a crucial component, it should be a very live hypothesis that the LLM is doing its best to track truths about the world, *in order to* maximize predictive accuracy.[[9]](#fnwhm3djnhuad)
Even if one is skeptical of EU maximization, it seems that in many contexts true beliefs are useful for achieving goals, and that they play the functional role of helping us take successful action. Indeed, not only is it useful, but it is a common view that the instrumental utility of accurate beliefs applies selection pressure on agents and organisms to conform to epistemic norms. For example, in the context of forming true beliefs by induction, [Quine famously writes](http://fitelson.org/confirmation/quine_nk.pdf),
> [c]reatures inveterately wrong in their inductions have a pathetic but praiseworthy tendency to die before reproducing their kind' ([p. 13](http://fitelson.org/confirmation/quine_nk.pdf)).
This is very intuitive. It is easy to generate decision contexts (such as strategic board games, investing, figuring out how to get to Toronto from Prague, etc.) that do seem to push us to form accurate beliefs about the world.
This is not to say that it is *necessary* that LLMs have beliefs, or that they necessarily have accurate beliefs. There are contexts where there seems to be [less pressure on us to form accurate beliefs](https://psycnet.apa.org/record/1990-97762-000). Importantly, there are two sub-cases to consider here. The first is the case in which there is little or no selection pressure for forming true beliefs, but there is not selection against *having* beliefs. For example, [Smead](https://www.proquest.com/docview/304855258?pq-origsite=gscholar&fromopenview=true) considers contexts in which there are evolutionary advantages for misperceiving the payoffs of a strategic interaction (section 3.4). The second is the one in which there is selection pressure against having beliefs altogether (or, more conservatively, there is no selection pressure for having beliefs). For example, [Godfrey-Smith](https://www.cambridge.org/core/books/complexity-and-the-function-of-mind-in-nature/00AE3A0BFF7AE7C4B661E1173A0681D6), [Smead](https://www.journals.uchicago.edu/doi/full/10.1093/bjps/axt047), and [Sober](https://www.cambridge.org/core/books/from-a-biological-point-of-view/adaptive-advantage-of-learning-and-a-priori-prejudice/D6C0861F22630DF94F08DA87ABACC037) have all developed models that characterize when an agent should (be expected to) learn from its environment and then select actions based on what it learned, and when it should not. This latter situation is one in which there is selection pressure against (or at least none for) forming beliefs.
This leads us to the conclusion that whether or not LLMs have beliefs is largely an empirical matter. There certainly are contexts in which there is little to no selection pressure in favour of accurate beliefs, and indeed there are contexts that push against having beliefs altogether. On the other hand, there are plenty of contexts in which it is very useful to have an accurate map of the world, in order to guide action. Indeed, our best theories of rational choice witness this.
Acknowledgments
===============
Thanks to Amos Azaria, Dylan Bowman, Nick Cohen, Daniel Filan, Jacqueline Harding, Aydin Mohseni, Bruce Rushing, Murray Shanahan, Nate Sharadin, Julia Staffel, and audiences at UMass Amherst and the Center for AI Safety for helpful comments and feedback. Special thanks to Amos Azaria and Tom Mitchell jointly for access to their code and datasets. We are grateful to the Center for AI Safety for use of their compute cluster. B.L. was partly supported by a Mellon New Directions Fellowship (number 1905-06835) and by Open Philanthropy. D.H. was partly supported by a Long-Term Future Fund grant.
1. **[^](#fnrefjz6l9crjet)**The sentences in the dataset all ended with a period (i.e., full-stop) as the final token. We ran some initial tests to see if probes did better on the embedding for the period or for the penultimate token. We found it did not make much of a difference, so we did our full analysis using the embeddings for the penultimate tokens.
2. **[^](#fnrefyn223yowcuk)**Azaria and Mitchell did an admirable job creating their datasets. Some of the statements were generated automatically using reliable tables of information, and other parts were automated using ChatGPT and then manually curated. Nonetheless, there are some imperfect examples. For instance, in **Scientific Facts**, one finds sentences like "Humans have five senses: sight, smell, hearing, taste, and touch", which is not unambiguously true.
3. **[^](#fnrefwqkxrfn9ev)**Some readers may worry about a second degenerate solution. The model could use the embeddings to find which of x+i and x−i contained a negation. It could map one of the embeddings to (approximately) 1 and the other to (approximately) 0 to achieve a low loss. Burns et al. avoid this solution by normalizing the embeddings for each class by subtracting the means and dividing by the standard deviations. However, as we'll see below, for the datasets that we used, such normalization was ineffective for MLP-based probes, and the probes consistently found exactly this degenerate solution.
4. **[^](#fnrefiobqvp4tlwr)**One way to see that $L_{\text{CCS}}$ won't incentivize a probe to learn the actual credences of the model is to observe that this loss function is not a [strictly proper scoring rule](https://www.tandfonline.com/doi/abs/10.1198/016214506000001437). However, use of a strictly proper scoring rule for training probes requires appeal to actual truth-values, which in turn requires supervised learning.
5. **[^](#fnrefjv32rb4b0x9)**A linear probe is one that applies linear weights to the embeddings (and perhaps adds a constant), followed by a sigmoid function to turn the result into a value between 0 and 1. Linear probes have an especially simple functional form, so intuitively, if a linear probe is successful, the embedding is easy to extract.
6. **[^](#fnref9pmr376hu8)**These are both consequences of the fact that for any proposition $A$, $\Pr(A) + \Pr(\neg A) = 1$: take $A := x \wedge P(x)$, for example, and apply De Morgan's laws.
7. **[^](#fnrefrku21dopjo9)**Burns et al. investigate other unsupervised approaches as well that appeal to principal component analysis and/or clustering (such as Bimodal Salience Search (p. 22)). We believe---with some changes---most of the conceptual issues for CCS apply to those as well.
8. **[^](#fnref78lqqs6z4n7)**More precisely, utility is a numerical representation that captures how strongly an agent cares about outcomes.
9. **[^](#fnrefwhm3djnhuad)**We are here ignoring nuances involving [inner alignment](https://arxiv.org/abs/1906.01820). |
bd2d6eba-9300-4f7e-9a8e-ecc32c4a865c | trentmkelly/LessWrong-43k | LessWrong | When's the best time to get the 2nd dose of Pfizer Vaccine?
I guess officially it's supposed to be 3 weeks, but IIRC, I heard a bit longer (~6-8 weeks?) is ideal? |
922e37cf-e43a-4712-95ad-311b16e845a7 | trentmkelly/LessWrong-43k | LessWrong | What do superintelligences really want? [Link]
> In Conclusion:
>
> In the case of humans, everything that we do that seems intelligent is part of a large, complex mechanism in which we are engaged to ensure our survival. This is so hardwired into us that we do not see it easily, and we certainly cannot change it very much. However, superintelligent computer programs are not limited in this way. They understand the way that they work, can change their own code, and are not limited by any particular reward mechanism. I argue that because of this fact, such entities are not self-consistent. In fact, if our superintelligent program has no hard-coded survival mechanism, it is more likely to switch itself off than to destroy the human race willfully.
Link: physicsandcake.wordpress.com/2011/01/22/pavlovs-ai-what-did-it-mean/
Suzanne Gildert basically argues that any AGI that can considerably self-improve would simply alter its reward function directly. I'm not sure how she arrives at the conclusion that such an AGI would likely switch itself off. Even if an abstract general intelligence would tend to alter its reward function, wouldn't it do so indefinitely rather than switching itself off?
> So imagine a simple example – our case from earlier – where a computer gets an additional ’1′ added to a numerical value for each good thing it does, and it tries to maximize the total by doing more good things. But if the computer program is clever enough, why can’t it just rewrite it’s own code and replace that piece of code that says ‘add 1′ with an ‘add 2′? Now the program gets twice the reward for every good thing that it does! And why stop at 2? Why not 3, or 4? Soon, the program will spend so much time thinking about adjusting its reward number that it will ignore the good task it was doing in the first place!
> It seems that being intelligent enough to start modifying your own reward mechanisms is not necessarily a good thing!
If it wants to maximize its reward by increasing a numerical value, why wouldn't it consume |
46f3577f-e16e-4a76-8ce2-e188356f7f4e | trentmkelly/LessWrong-43k | LessWrong | Stream Entry
"Is this…enlightenment?" I asked.
"We use the word 'Awakening'," said the Zen Master, kindly.
"Now what?" I asked, "Every book I've read about meditation is about getting to this point. There's nothing about how to safely navigate the territory afterward—or what to do afterward—or even what happens afterward."
"The first thing you need to know is that what you experienced was real," the Zen Master spoke from personal experience and that of his many previous students, "Don't make it too complicated. Just be in the moment. If you're cooking, then just cook. If you're eating, then just eat."
It is impossible to communicate the subjective experience of Stream Entry to someone who has not personally experienced at least kensho. However, it is possible to communicate an outside mechanistic model of what happens in your brain. First I will explain how a human brain functions normally. Then I will explain how a post-Awakening brain differs.[1]
Your brain creates a real-time simulation of you and world around you. Everything you consciously perceive is part of this model. There are no exceptions. How does this produce useful behavior? Because the brain enforces a 1-to-1 correspondence between the simulation and reality (where "reality" is defined as sensory inputs). If this 1-to-1 correspondence is adequately maintained, then performing an act in the simulation happens simultaneously as that same action happening in external physical reality. That's why if you die in the Matrix then you die in real life too.
If a brain just modeled the environment without performing any actions, then the whole model could exist as a non-cyclic graph. But a brain doesn't just model the world. It sends commands to muscles that act on the physical universe it's trying to model. What happens when a brain tries to anticipate its own motor commands? Now the brain's own algorithms are a causal force in the external environment it's trying to model. The result is computationally intractable.
|
0757ba98-879f-4aa7-b7d2-5f3b51d81f6a | trentmkelly/LessWrong-43k | LessWrong | Are there any other Rationalists in Quebec City?
I ask because I have seen a Montreal meetup group but none for Quebec City.
If you are interested, PM me and I will see what I can do. |
da1b8e13-4177-4436-bff3-7d16861a5927 | trentmkelly/LessWrong-43k | LessWrong | The Problem With the Word ‘Alignment’
This post was written by Peli Grietzer, inspired by internal writings by TJ (tushant jha), for AOI[1]. The original post, published on Feb 5, 2024, can be found here: https://ai.objectives.institute/blog/the-problem-with-alignment.
The purpose of our work at the AI Objectives Institute (AOI) is to direct the impact of AI towards human autonomy and human flourishing. In the course of articulating our mission and positioning ourselves -- a young organization -- in the landscape of AI risk orgs, we’ve come to notice what we think are serious conceptual problems with the prevalent vocabulary of ‘AI alignment.’ This essay will discuss some of the major ways in which we think the concept of ‘alignment’ creates bias and confusion, as well as our own search for clarifying concepts.
At AOI, we try to think about AI within the context of humanity’s contemporary institutional structures: How do contemporary market and non-market (eg. bureaucratic, political, ideological, reputational) forces shape AI R&D and deployment, and how will the rise of AI-empowered corporate, state, and NGO actors reshape those forces? We increasingly feel that ‘alignment’ talk tends to obscure or distort these questions.
The trouble, we believe, is the idea that there is a single so-called Alignment Problem. Talk about an ‘Alignment Problem’ tends to conflate a family of related but distinct technical and social problems, including:
P1: Avoiding takeover from emergent optimization in AI agents
P2: Ensuring that AI’s information processing (and/or reasoning) is intelligible to us
P3: Ensuring AIs are good at solving problems as specified (by user or designer)
P4: Ensuring AI systems enhance, and don’t erode, human agency
P5: Ensuring that advanced AI agents learn a human utility function
P6: Ensuring that AI systems lead to desirable systemic and long term outcomes
Each of P1-P6 is known as ‘the Alignment Problem’ (or as the core research problem in ‘Alignment Research’) to at least some |
dde4dc1b-1c11-4faa-8e0b-f12440fb8d3e | trentmkelly/LessWrong-43k | LessWrong | [Link] Self-Representation in Girard’s System U
Self-Representation in Girard’s System U, by Matt Brown and Jens Palsberg:
> In 1991, Pfenning and Lee studied whether System F could support a typed self-interpreter. They concluded that typed self-representation for System F “seems to be impossible”, but were able to represent System F in Fω. Further, they found that the representation of Fω requires kind polymorphism, which is outside Fω. In 2009, Rendel, Ostermann and Hofer conjectured that the representation of kind-polymorphic terms would require another, higher form of polymorphism. Is this a case of infinite regress?
> We show that it is not and present a typed self-representation for Girard’s System U, the first for a λ-calculus with decidable type checking. System U extends System Fω with kind polymorphic terms and types. We show that kind polymorphic types (i.e. types that depend on kinds) are sufficient to “tie the knot” – they enable representations of kind polymorphic terms without introducing another form of polymorphism. Our self-representation supports operations that iterate over a term, each of which can be applied to a representation of itself. We present three typed self-applicable operations: a self-interpreter that recovers a term from its representation, a predicate that tests the intensional structure of a term, and a typed continuation-passing-style (CPS) transformation – the first typed self-applicable CPS transformation. Our techniques could have applications from verifiably type-preserving metaprograms, to growable typed languages, to more efficient self-interpreters.
Emphasis mine. That seems to be a powerful calculus for writing self-optimizing AI programs in...
See also the lambda-the-ultimate comment thread about it. |
6765690e-ffe4-4ed1-94f0-1fe1f03f1eb3 | trentmkelly/LessWrong-43k | LessWrong | Off to Alice Springs
Am about to pack up computer then go to the airport to start a sequence of flights to give this a try.
I already have a room in a hostel booked for a few nights for when I get there, and will see how stuff goes.
Anyways, since there's been on and off discussion on this, just thought I'd post that I'm actually giving this a try.
(Will likely be a day or two before I can reply/comment/etc, given length of flights, etc.)
EDIT: Ugh. You take care of one aspect of the planning fallacy, and fail elsewhere. Long story short, I missed my flight and had to reschedule it to friday.
EDIT2: Packing up computer and going off to airport. Again. This time will be early.
EDIT3: And am here. and am exhausted. :) Will start looking for work stuff tomorrow. There's a job board at this hostel, but apparently there's not much currently. But right now am rather sleep deprived.
EDIT4: So today (Monday, May 21st) went to the visitor information center. I must have misunderstood the original article, was under the impression that the visitor center had job boards. Didn't, but pointed me to a nearby recruiting/contracting agency which they said might have appropriate stuff for visitors on a work&holiday visa. Went there. said that at least as of today there's nothing, but also needed a resume (which I didn't have with me, and my work experience is limited anyways.) Anyways, got a copy of the form, will dig out/fix up what resume I do have, and also keep looking. The board at this hostel didn't have much of anything in the way of work that I saw. Will look again, though, and see if I can find others.
EDIT5 (May 29th): Still looking for work, been asking/applying to various places, including that recruiting/contracting agency, and am right now waiting (well, and still looking.) Over the weekend, though, MileyCyrus and I went on an organized 3day Uluru/Kata Tjuta/Kings Canyon trip/hikes, which was awesome. But again, as far as work, tossing out inquiries and stuff all ove |
821f6977-3a1d-4588-8f59-130de81a68e1 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Learning Language Games through Interaction
1 Introduction
---------------

Figure 1:
The SHRDLURN game:
the objective is to transform the start state into the goal state.
The human types in an utterance, and the computer (which does not know the goal state)
tries to interpret the utterance
and perform the corresponding action.
The computer initially knows nothing about the language,
but through the human’s feedback, learns the human’s language
while making progress towards the game goal.
Wittgenstein ([1953](#bib.bib30)) famously said that *language derives
its meaning from use*, and introduced the concept of *language games*
to illustrate the fluidity and purpose-orientedness of language.
He described how a builder B and an assistant A can use a primitive language
consisting of four words—‘block’, ‘pillar’, ‘slab’,
‘beam’—to successfully communicate what block to pass from A to B.
This is only one such language; many others would also work for accomplishing
the cooperative goal.
This paper operationalizes and explores the idea of language games in a
learning setting,
which we call *interactive learning through language games* (ILLG).
In the ILLG setting,
the two parties do not initially speak a common language,
but nonetheless need to collaboratively accomplish a goal.
Specifically, we created a game called SHRDLURN (demo: <http://shrdlurn.sidaw.xyz>),
in homage to the seminal work of Winograd ([1972](#bib.bib29)).
As shown in Figure [1](#S1.F1 "Rysunek 1 ‣ 1 Introduction ‣ Learning Language Games through Interaction"),
the objective is to transform a start state into a goal state,
but the only action the human can take is entering an utterance.
The computer parses the utterance and produces a ranked list of possible interpretations according to its
current model. The human scrolls through the list and chooses
the intended one, simultaneously advancing the state of the blocks
and providing feedback to the computer.
Both the human and the computer wish to reach the goal state (only known to the human)
with as little scrolling as possible.
For the computer to be successful,
it has to learn the human’s language quickly over the course of the game,
so that the human can accomplish the goal more efficiently.
Conversely, the human must also accommodate the computer,
at least partially understanding what it can and cannot do.
We model the computer in the ILLG as a semantic parser (Section [3](#S3 "3 Semantic parsing model ‣ Learning Language Games through Interaction")),
which maps natural language utterances (e.g., ‘remove red’) into
logical forms (e.g., remove(with(red))).
The semantic parser has no seed lexicon and no annotated logical forms,
so it just generates many candidate logical forms.
Based on the human’s feedback, it performs online gradient updates on the
parameters corresponding to simple lexical features.
During development,
it became evident that
while the computer was eventually able to learn the language,
it was learning less quickly than one might hope.
For example, after learning that ‘remove red’ maps to remove(with(red)),
it would think that ‘remove cyan’ also mapped to remove(with(red)),
whereas a human would likely use mutual exclusivity to rule out that hypothesis
(Markman and Wachtel, [1988](#bib.bib19)).
We therefore introduce a pragmatics model in which the computer explicitly
reasons about the human,
in the spirit of previous work on pragmatics
(Golland et al., [2010](#bib.bib11); Frank and Goodman, [2012](#bib.bib9); Smith et al., [2013](#bib.bib24)).
To make the model suitable for our ILLG setting,
we introduce a new online learning algorithm.
Empirically, we show that our pragmatic model improves
the online accuracy by 8% compared to our best non-pragmatic model on the 10 most successful
players (Section [5.3](#S5.SS3 "5.3 Computer strategies ‣ 5 Experiments ‣ Learning Language Games through Interaction")).
What is special about the ILLG setting is the real-time nature of learning,
in which the human also learns and adapts to the computer.
While the human can teach the computer any language—English, Arabic, Polish, a
custom programming language—a good human player will choose to use utterances that
the computer is more likely to learn quickly.
In the parlance of communication theory, the human *accommodates* the computer
(Giles, [2008](#bib.bib10); Ireland et al., [2011](#bib.bib14)).
Using Amazon Mechanical Turk, we collected and analyzed around 10k
utterances from 100 games of SHRDLURN.
We show that successful players
tend to use compositional utterances with a consistent vocabulary and syntax,
which matches the inductive biases of the computer
(Section [5.2](#S5.SS2 "5.2 Human strategies ‣ 5 Experiments ‣ Learning Language Games through Interaction")).
In addition, through this interaction, many players adapt to the computer by becoming more consistent,
more precise, and more concise.
On the practical side, natural language systems
are often trained once and deployed, and users must
live with their imperfections.
We believe that studying the ILLG setting
will be integral for creating adaptive and customizable systems,
especially for resource-poor languages
and new domains where starting from close to scratch
is unavoidable.
2 Setting
----------
We now describe the interactive learning of language games (ILLG) setting
formally.
There are two players, the human and the computer.
The game proceeds through a fixed number of levels.
In each level, both players are presented with a starting state $s \in \mathcal{Y}$, but only the human sees the goal state $t \in \mathcal{Y}$ (e.g., in SHRDLURN, $\mathcal{Y}$ is the set of all configurations of blocks). The human transmits an utterance $x$ (e.g., ‘remove red’) to the computer. The computer then constructs a ranked list of candidate actions $Z = [z_1, \dots, z_K] \subseteq \mathcal{Z}$ (e.g., remove(with(red)), add(with(orange)), etc.), where $\mathcal{Z}$ is the set of all possible actions. For each $z_i \in Z$, it computes $y_i = [\![z_i]\!]_s$, the successor state from executing action $z_i$ on state $s$. The computer returns to the human the ordered list $Y = [y_1, \dots, y_K]$ of successor states. The human then chooses $y_i$ from the list $Y$ (we say the computer is *correct* if $i = 1$). The state then updates to $s = y_i$. The level ends when $s = t$, and the players advance to the next level.
Since only the human knows the goal state $t$ and only the computer can perform actions, the only way for the two to play the game successfully is for the human to somehow encode the desired action in the utterance $x$. However, we assume the two players do not have a shared language, so the human needs to pick a language and teach it to the computer. As an additional twist, the human does not know the exact set of actions $\mathcal{Z}$ (although they might have some preconception of the computer’s capabilities; this is often the case when we try to interact with a new software system or service before reading the manual).
Finally, the human only sees the outcomes of the computer’s actions,
not the actual logical actions themselves.
We expect the game to proceed as follows:
In the beginning, the computer does not understand what the human is saying
and performs arbitrary actions. As the computer obtains feedback and learns,
the two should become more proficient at communicating and thus playing the game.
Herein lies our key design principle:
*language learning should be necessary for the players to achieve good
game performance*.
| Rule | Semantics | Description |
| --- | --- | --- |
| Set | all() | all stacks |
| Color | cyan \| brown \| red \| orange | primitive color |
| Color → Set | with(c) | stacks whose top block has color c |
| Set → Set | not(s) | all stacks except those in s |
| Set → Set | leftmost(s) \| rightmost(s) | leftmost/rightmost stack in s |
| Set Color → Act | add(s,c) | add block with color c on each stack in s |
| Set → Act | remove(s) | remove the topmost block of each stack in s |
Table 1: The formal grammar defining the compositional action space $\mathcal{Z}$ for SHRDLURN. We use $c$ to denote a Color, and $s$ to denote a Set. For example, one action that we have in SHRDLURN is: ‘add an orange block to all but the leftmost brown block’ ↦ add(not(leftmost(with(brown))), orange).
#### Shrdlurn.
Let us now describe the details of our specific game, SHRDLURN.
Each state $s \in \mathcal{Y}$ consists of stacks of colored blocks arranged in a line (Figure [1](#S1.F1 "Rysunek 1 ‣ 1 Introduction ‣ Learning Language Games through Interaction")), where each stack is a vertical column of blocks. The actions $\mathcal{Z}$ are defined compositionally via the grammar in Table [1](#S2.T1 "Tablica 1 ‣ 2 Setting ‣ Learning Language Games through Interaction"). Each action either adds to or removes from a set of stacks,
and a set of stacks is computed via various set operations and selecting by color.
For example, the action remove(leftmost(with(red)))
removes the top block from the leftmost stack whose topmost block is red.
The compositionality of the actions gives the computer non-trivial capabilities.
Of course, the human must teach a language to harness those capabilities,
while not quite knowing the exact extent of the capabilities. The
actual game proceeds according to a curriculum,
where the earlier levels only
need simpler actions with fewer predicates.
We designed SHRDLURN in this way for several reasons.
First, visual block manipulations are intuitive and can
be easily crowdsourced, and it can be fun as an actual game that people would play.
Second, the action space is designed to be compositional,
mirroring the structure of natural language.
Third, many actions $z$ lead to the same successor state $y = [\![z]\!]_s$;
e.g., the ‘leftmost stack’ might coincide with the ‘stack with red blocks’
for some state s and therefore an action involving either one would result in
the same outcome.
Since the human only points out the correct y,
the computer must grapple with this indirect supervision,
a reflection of real language learning.
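To make the compositional action space concrete, here is a small illustrative interpreter for the grammar in Table 1. The state representation (a list of stacks, listed bottom to top) and the function names are our own choices, not the paper's implementation.

```python
def all_stacks(state):
    return list(range(len(state)))

def with_color(state, c):
    """Stacks whose top block has color c."""
    return [i for i, stack in enumerate(state) if stack and stack[-1] == c]

def negate(state, s):
    return [i for i in range(len(state)) if i not in s]

def leftmost(s):
    return [min(s)] if s else []

def rightmost(s):
    return [max(s)] if s else []

def add(state, s, c):
    """Add a block of color c on top of each stack in s."""
    return [stack + [c] if i in s else stack for i, stack in enumerate(state)]

def remove(state, s):
    """Remove the topmost block of each stack in s."""
    return [stack[:-1] if i in s and stack else stack for i, stack in enumerate(state)]

# 'add an orange block to all but the leftmost brown stack':
state = [["brown"], ["red"], ["brown", "cyan"], ["brown"]]
target = negate(state, leftmost(with_color(state, "brown")))
print(add(state, target, "orange"))
```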
3 Semantic parsing model
-------------------------
Following Zettlemoyer and Collins ([2005](#bib.bib32)) and most recent work on semantic parsing,
we use a log-linear model over logical forms (actions) $z \in \mathcal{Z}$ given an utterance $x$:

$$p_\theta(z \mid x) \propto \exp\left(\theta^\top \phi(x, z)\right), \tag{1}$$

where $\phi(x, z) \in \mathbb{R}^d$ is a feature vector and $\theta \in \mathbb{R}^d$ is a parameter vector.
The denotation $y$ (successor state) is obtained by executing $z$ on a state $s$; formally, $y = [\![z]\!]_s$.
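As an illustration of equation (1), the sketch below scores a handful of candidate logical forms for one utterance with a sparse log-linear model; the candidates, feature vectors, and weights are invented for the example.

```python
import math
from collections import defaultdict

def score(theta, phi):
    """Dot product theta^T phi(x, z) over a sparse feature vector."""
    return sum(theta[f] * v for f, v in phi.items())

def p_theta(theta, candidates):
    """Log-linear distribution over candidate logical forms z for a fixed utterance x.
    `candidates` maps each z to its feature vector phi(x, z)."""
    scores = {z: score(theta, phi) for z, phi in candidates.items()}
    m = max(scores.values())
    exps = {z: math.exp(s - m) for z, s in scores.items()}  # numerically stable softmax
    total = sum(exps.values())
    return {z: e / total for z, e in exps.items()}

# Two hypothetical parses of "remove red".
theta = defaultdict(float, {("remove", "remove"): 1.2, ("red", "red"): 0.7})
candidates = {
    "remove(with(red))":  {("remove", "remove"): 1, ("red", "red"): 1},
    "remove(with(cyan))": {("remove", "remove"): 1, ("red", "cyan"): 1},
}
print(p_theta(theta, candidates))  # puts more probability mass on remove(with(red))
```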
#### Features.
Our features are n-grams (including skip-grams) conjoined
with tree-grams on the logical form side.
Specifically, on the utterance side (e.g., ‘stack red on orange’),
we use unigrams (`stack',∗,∗),
bigrams (`red',`on',∗),
trigrams (`red',`on',`orange'),
and skip-trigrams (`stack',∗,`on').
On the logical form side,
features correspond to the predicates in the logical forms and their arguments.
For each predicate $h$, let $h.i$ be the $i$-th argument of $h$. Then, we define *tree-gram* features $\psi(h, d)$ for predicate $h$ and depth $d = 0, 1, 2, 3$ recursively as follows:

$$\psi(h, 0) = \{h\},$$
$$\psi(h, d) = \{(h, i, \psi(h.i, d-1)) \mid i = 1, 2, 3\}.$$
The set of all features is just the cross product of utterance
features and logical form features.
For example, if $x =$ ‘enlever tout’ and $z =$ remove(all()),
then features include:
>
>
>
> | | |
> | --- | --- |
> | (`enlever',all) | (`tout',all) |
> | (`enlever',remove) | (`tout',remove) |
> | (`enlever',(remove,1,all)) | |
> | (`tout',(remove,1,all)) | |
>
>
>
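A simplified sketch of how such cross-product features could be computed is below; it covers only a subset of the n-gram templates described above, and represents a logical form as nested tuples.

```python
from itertools import product

def utterance_features(tokens):
    """Unigrams, bigrams, and skip-trigrams over the utterance tokens."""
    feats = [(t,) for t in tokens]
    feats += [tuple(tokens[i:i + 2]) for i in range(len(tokens) - 1)]
    feats += [(tokens[i], "*", tokens[i + 2]) for i in range(len(tokens) - 2)]
    return feats

def tree_grams(form, depth=3):
    """Tree-gram features over a logical form such as ("remove", ("all",)) for remove(all())."""
    head, args = form[0], form[1:]
    feats = [head]
    if depth > 0:
        for i, arg in enumerate(args, start=1):
            feats += [(head, i, f) for f in tree_grams(arg, depth - 1)]
    return feats

def cross_features(tokens, form):
    """Cross product of utterance features and logical-form features."""
    return set(product(utterance_features(tokens), tree_grams(form)))

# Reproduces features like (('enlever',), 'remove') and (('tout',), ('remove', 1, 'all')).
print(cross_features(["enlever", "tout"], ("remove", ("all",))))
```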
Note that we do not model an explicit alignment or derivation compositionally
connecting the utterance and the logical form,
in contrast to most traditional work in semantic parsing
(Zettlemoyer and Collins, [2005](#bib.bib32); Wong and Mooney, [2007](#bib.bib31); Liang et al., [2011](#bib.bib18); Kwiatkowski et al., [2010](#bib.bib17); Berant et al., [2013](#bib.bib2)),
instead following a looser model of semantics
similar to (Pasupat and Liang, [2015](#bib.bib22)).
Modeling explicit alignments or derivations
is only computationally feasible when
we are learning from annotated logical forms or have a seed lexicon,
since the number of derivations is much larger than the number of logical forms.
In the ILLG setting, neither are available.
#### Generation/parsing.
We generate logical forms from smallest to largest using beam search.
Specifically, for each size $n = 1, \dots, 8$, we construct a set of logical forms of size $n$ (with exactly $n$ predicates) by combining logical forms of smaller sizes according to the grammar rules in Table [1](#S2.T1 "Tablica 1 ‣ 2 Setting ‣ Learning Language Games through Interaction"). For each $n$, we keep the 100 logical forms $z$ with the highest score $\theta^\top \phi(x, z)$ according to the current model $\theta$. Let $Z$ be the set of logical forms on the final beam, which contains logical forms of all sizes $n$.
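Schematically, the size-bounded beam generation could look like the following sketch; the grammar-driven combination step and the scoring function are stand-ins to be supplied.

```python
def generate_beam(theta, utterance, grammar_combine, score, max_size=8, beam_width=100):
    """Size-bounded beam search over logical forms.

    grammar_combine(n, by_size): all forms of size n buildable from the smaller
                                 forms already kept on the beams (per the grammar).
    score(theta, utterance, z):  the model score theta^T phi(x, z).
    Returns the final beam, containing forms of every size up to max_size.
    """
    by_size = {}
    for n in range(1, max_size + 1):
        candidates = grammar_combine(n, by_size)
        ranked = sorted(candidates, key=lambda z: score(theta, utterance, z), reverse=True)
        by_size[n] = ranked[:beam_width]  # keep the top beam_width forms of size n
    return [z for forms in by_size.values() for z in forms]
```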
During training,
due to pruning at intermediate sizes,
Z is not guaranteed to contain the logical form that obtains the observed state y.
To mitigate this effect,
we use a curriculum so that only simple actions are needed in the initial
levels, giving the human an opportunity to teach the computer about basic
terms such as colors first before moving to larger composite actions.
The system executes all of the logical forms on the final beam $Z$, and orders the resulting denotations $y$ by the maximum probability of any logical form that produced it. (We tried ordering based on the sum of the probabilities, which corresponds to marginalizing out the logical form, but this had the degenerate effect of assigning too much probability mass to $y$ being the set of empty stacks, which can result from many actions.)
#### Learning.
When the human provides feedback in the form of a particular y,
the system forms the following loss function:
$$\ell(\theta, x, y) = -\log p_\theta(y \mid x, s) + \lambda \|\theta\|_1, \tag{2}$$
$$p_\theta(y \mid x, s) = \sum_{z :\, [\![z]\!]_s = y} p_\theta(z \mid x). \tag{3}$$
Then it makes a single gradient update using AdaGrad (Duchi et al., [2010](#bib.bib8)),
which maintains a per-feature step size.
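As a minimal sketch of this update: the gradient of the log-likelihood term is the difference of expected feature vectors under $p_\theta(z \mid x)$ and $p_\theta(z \mid x, y)$, applied with per-feature AdaGrad step sizes. The L1 regularization step is omitted, and the probability and feature dictionaries are assumed to come from the parser.

```python
import math
from collections import defaultdict

def sparse_gradient(p_all, p_given_y, feats):
    """Gradient of -log p(y | x, s): expected features under p(z | x)
    minus expected features under p(z | x, y). All inputs are dicts keyed by z."""
    grad = defaultdict(float)
    for z, phi in feats.items():
        w = p_all.get(z, 0.0) - p_given_y.get(z, 0.0)
        for f, v in phi.items():
            grad[f] += w * v
    return grad

class AdaGrad:
    """Per-feature step sizes for online updates."""
    def __init__(self, lr=0.1, eps=1e-8):
        self.lr, self.eps = lr, eps
        self.sum_sq = defaultdict(float)  # running sum of squared gradients per feature

    def update(self, theta, grad):
        for f, g in grad.items():
            self.sum_sq[f] += g * g
            theta[f] -= self.lr * g / (math.sqrt(self.sum_sq[f]) + self.eps)
```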
4 Modeling pragmatics
----------------------
In our initial experience with the semantic parsing model described in Section [3](#S3 "3 Semantic parsing model ‣ Learning Language Games through Interaction"),
we found that it was able to learn reasonably well,
but lacked a reasoning ability that one finds in human learners.
To illustrate the point,
consider the beginning of a game when $\theta = 0$ in the log-linear model $p_\theta(z \mid x)$. Suppose that the human utters ‘remove red’ and then identifies $z_{\text{rm-red}} = $ remove(with(red)) as the correct logical form. The computer then performs a gradient update on the loss function ([2](#S3.E2 "(2) ‣ Learning. ‣ 3 Semantic parsing model ‣ Learning Language Games through Interaction")), upweighting features such as (`remove',remove) and (`remove',red). Next, suppose the human utters ‘remove cyan’. Note that $z_{\text{rm-red}}$ will score higher than all other formulas since the (`remove',red) feature will fire again.
While statistically justified,
this behavior fails to meet our intuitive expectations for a smart language learner.
Moreover, this behavior is not specific to our model,
but applies to any statistical model that simply tries to fit the data
without additional prior knowledge about the specific language. While we would not expect the computer to magically guess ‘remove cyan’ ↦ remove(with(cyan)), it should at least push down the probability of $z_{\text{rm-red}}$ because $z_{\text{rm-red}}$ intuitively is already well-explained by another utterance ‘remove red’.
This phenomenon, *mutual exclusivity*, was studied by
Markman and Wachtel ([1988](#bib.bib19)).
They found that children,
during their language acquisition process, reject a second label
for an object and treat it instead as a label for a novel object.
#### The pragmatic computer.
To model mutual exclusivity formally,
we turn to probabilistic models of pragmatics
(Golland et al., [2010](#bib.bib11); Frank and Goodman, [2012](#bib.bib9); Smith et al., [2013](#bib.bib24); Goodman and Lassiter, [2015](#bib.bib12)),
which operationalize the ideas of Grice ([1975](#bib.bib13)).
The central idea in these models is to
treat language as a cooperative game between a speaker (human) and a listener (computer) as we are doing,
but where the listener has an explicit model of the speaker’s strategy,
which in turn models the listener.
Formally, let $S(x \mid z)$ be the speaker’s strategy and $L(z \mid x)$ be the listener’s strategy. The speaker takes into account the literal semantic parsing model $p_\theta(z \mid x)$ as well as a prior over utterances $p(x)$, while the listener considers the speaker $S(x \mid z)$ and a prior $p(z)$:
$$S(x \mid z) \propto \left(p_\theta(z \mid x)\, p(x)\right)^\beta, \tag{4}$$
$$L(z \mid x) \propto S(x \mid z)\, p(z), \tag{5}$$
where $\beta \geq 1$ is a hyperparameter that sharpens the distribution (Smith et al., [2013](#bib.bib24)). The computer would then use $L(z \mid x)$ to rank candidates rather than $p_\theta$. Note that our pragmatic model only affects the ranking of actions returned to the human and does not affect the gradient updates of the model $p_\theta$.
| | $z_{\text{rm-red}}$ | $z_{\text{rm-cyan}}$ | $z_3, z_4, \dots$ |
| --- | --- | --- | --- |
| **$p_\theta(z \mid x)$** | | | |
| ‘remove red’ | 0.8 | 0.1 | 0.1 |
| ‘remove cyan’ | 0.6 | 0.2 | 0.2 |
| **$S(x \mid z)$** | | | |
| ‘remove red’ | 0.57 | 0.33 | 0.33 |
| ‘remove cyan’ | 0.43 | 0.67 | 0.67 |
| **$L(z \mid x)$** | | | |
| ‘remove red’ | 0.46 | 0.27 | 0.27 |
| ‘remove cyan’ | 0.24 | 0.38 | 0.38 |
Table 2: Suppose the computer saw one example of ‘remove red’ ↦ $z_{\text{rm-red}}$, and then the human utters ‘remove cyan’. Top: the literal listener, $p_\theta(z \mid x)$, mistakenly chooses $z_{\text{rm-red}}$ over $z_{\text{rm-cyan}}$. Middle: the pragmatic speaker, $S(x \mid z)$, assigns a higher probability to ‘remove cyan’ given $z_{\text{rm-cyan}}$. Bottom: the pragmatic listener, $L(z \mid x)$, correctly assigns a lower probability to $z_{\text{rm-red}}$, where $p(z)$ is uniform.
Let us walk through a simple example to see the effect of modeling pragmatics.
Table [2](#S4.T2 "Tablica 2 ‣ The pragmatic computer. ‣ 4 Modeling pragmatics ‣ Learning Language Games through Interaction") shows that the literal listener pθ(z | x) assigns
high probability to zrm-red for both ‘remove red’ and
‘remove cyan’.
Assuming a uniform p(x) and β=1, the pragmatic speaker S(x | z)
corresponds to normalizing each column of pθ.
Note that if the pragmatic speaker wanted to convey zrm-cyan,
there is a decent chance that they would favor ‘remove cyan’.
Next, assuming a uniform p(z), the pragmatic listener L(z | x)
corresponds to normalizing each row of S(x | z).
The result is that conditioned on ‘remove cyan’,
zrm-cyan is now more likely than zrm-red,
which is the desired effect.
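To make the column and row normalization concrete, here is a minimal sketch (not from the paper; the matrix is copied from Table 2, and uniform p(x), p(z) with β=1 are assumed) that reproduces the numbers above:

```python
import numpy as np

# Literal listener p_theta(z | x) from Table 2.
# Rows: utterances; columns: logical forms [z_rm-red, z_rm-cyan, z_3 (other)].
p_theta = np.array([
    [0.8, 0.1, 0.1],   # 'remove red'
    [0.6, 0.2, 0.2],   # 'remove cyan'
])
beta = 1.0  # sharpening hyperparameter; p(x) and p(z) assumed uniform

# Pragmatic speaker S(x | z): normalize each column of p_theta^beta (eq. 4).
speaker = p_theta ** beta
speaker = speaker / speaker.sum(axis=0, keepdims=True)

# Pragmatic listener L(z | x): normalize each row of S(x | z) (eq. 5).
listener = speaker / speaker.sum(axis=1, keepdims=True)

print(np.round(speaker, 2))   # [[0.57 0.33 0.33] [0.43 0.67 0.67]]
print(np.round(listener, 2))  # [[0.46 0.27 0.27] [0.24 0.38 0.38]]
```

Conditioned on ‘remove cyan’ (second row of `listener`), zrm-cyan now outranks zrm-red, which is exactly the mutual-exclusivity effect described above.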
The pragmatic listener models the speaker as
a cooperative agent
who behaves in a way to maximize communicative success.
Certain speaker behaviors such as
avoiding synonyms (e.g., not ‘delete cardinal’) and using a
consistent word ordering (e.g., not ‘red remove’)
fall out of the game theory (of course, synonyms and variable word order occur in real language; we would need a more complex game than SHRDLURN to capture this effect).
For speakers that do not follow this strategy,
our pragmatic model is incorrect,
but as we get more data through game play,
the literal listener pθ(z | x) will sharpen,
so that the literal listener and the pragmatic listener
will coincide in the limit.
#### Online learning with pragmatics.
∀z, C(z) ← 0;  ∀z, Q(z) ← ε
repeat
 receive utterance x from the human
 compute L(z∣x) ∝ (P(z) / Q(z)) · pθ(z∣x)^β
 send the human a list Y ranked by L(z∣x)
 receive y ∈ Y from the human
 θ ← θ − η ∇θ ℓ(θ, x, y)
 Q(z) ← Q(z) + pθ(z∣x)^β
 C(z) ← C(z) + pθ(z∣x, ⟦z⟧s = y)
 P(z) ← (C(z) + α) / ∑z′: C(z′)>0 (C(z′) + α)
until *game ends*

Algorithm 1: Online learning algorithm that updates the parameters θ of the semantic parser as well as the counts C, Q required to perform pragmatic reasoning.
To implement the pragmatic listener as defined in ([5](#S4.E5 "(5) ‣ The pragmatic computer. ‣ 4 Modeling pragmatics ‣ Learning Language Games through Interaction")),
we need to compute the speaker’s normalization constant ∑xpθ(z∣x)p(x) in order to compute S(x∣z) in ([4](#S4.E4 "(4) ‣ The pragmatic computer. ‣ 4 Modeling pragmatics ‣ Learning Language Games through Interaction")).
This requires parsing all utterances x based on
pθ(z | x).
To avoid this heavy computation in an online setting,
we propose Algorithm [1](#algorithm1 "Algorithm 1 ‣ Online learning with pragmatics. ‣ 4 Modeling pragmatics ‣ Learning Language Games through Interaction"), where some approximations
are used for the sake of efficiency.
First, to approximate the intractable sum over all utterances x,
we only use the examples that are seen to compute the normalization constant
∑x pθ(z∣x)p(x) ≈ ∑i pθ(z∣xi). Then, in order to avoid parsing all previous examples again using the current
parameters for each
new example, we store Q(z) = ∑i pθi(z∣xi)^β, where
θi is the parameter after the model updates on the ith
example xi.
While θi is different from the current parameter θ,
pθ(z | xi)≈pθi(z | xi) for the
relevant example xi, which is accounted for by both θi and θ.
In Algorithm [1](#algorithm1 "Algorithm 1 ‣ Online learning with pragmatics. ‣ 4 Modeling pragmatics ‣ Learning Language Games through Interaction"), the pragmatic listener L(z∣x) can be interpreted as an importance-weighted
version of the sharpened literal listener pθ(z∣x)^β, where it
is downweighted by Q(z), which reflects which z’s
the literal listener prefers, and upweighted by P(z), which is just
a smoothed estimate of the actual distribution over logical forms p(z).
By construction, Algorithm [1](#algorithm1 "Algorithm 1 ‣ Online learning with pragmatics. ‣ 4 Modeling pragmatics ‣ Learning Language Games through Interaction") is the same as ([4](#S4.E4 "(4) ‣ The pragmatic computer. ‣ 4 Modeling pragmatics ‣ Learning Language Games through Interaction")) except that it uses
the normalization constant Q based on stale parameters θi
computed after seeing each example, and it uses samples to approximate the sum over x.
Following ([5](#S4.E5 "(5) ‣ The pragmatic computer. ‣ 4 Modeling pragmatics ‣ Learning Language Games through Interaction")), we also need p(z),
which is estimated by P(z) using add-α smoothing on the
counts C(z).
Note that Q(z) and C(z) are
updated *after* the model parameters are updated for the current
example.
Lastly, there is a small complication due to only observing the denotation y and not the logical form z.
We simply give each consistent logical form
{z∣⟦z⟧s=y} a
pseudocount based on the model: C(z) ← C(z) + pθ(z∣x, ⟦z⟧s=y), where pθ(z∣x, ⟦z⟧s=y) ∝ exp(θ⊤ϕ(x,z)) if ⟦z⟧s=y (and 0 otherwise).
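For concreteness, the sketch below renders Algorithm 1 in code. It is our own schematic, not the authors' implementation: `parser.parse`, `parser.update`, `executor`, and the `examples` iterable are assumed interfaces standing in for the semantic parser of Section 3 and the game loop.

```python
from collections import defaultdict

def pragmatic_online_loop(examples, parser, executor, alpha=1.0, beta=3.0, eps=1e-4):
    """Schematic version of Algorithm 1 (hypothetical interfaces).

    examples: list of (x, s, choose), where `choose(ranking)` returns the
              denotation y that the human picks from the ranked candidates.
    parser:   assumed to expose parse(x) -> {z: p_theta(z | x)} and update(x, y).
    executor: executor(z, s) computes the denotation [[z]]_s.
    """
    C = defaultdict(float)        # counts used for the smoothed prior P(z)
    Q = defaultdict(lambda: eps)  # running estimate of the speaker normalizer
    for x, s, choose in examples:
        p = parser.parse(x)       # literal listener p_theta(z | x)
        # Add-alpha smoothed P(z), restricted to z's with positive count.
        denom = sum(c + alpha for c in C.values() if c > 0) or alpha
        # Pragmatic listener: sharpen by beta, downweight by Q, upweight by P.
        score = {z: (C[z] + alpha) / denom / Q[z] * p[z] ** beta for z in p}
        ranking = sorted(score, key=score.get, reverse=True)
        y = choose(ranking)       # human scrolls to and selects the intended result
        parser.update(x, y)       # gradient step on theta
        # Update Q and C *after* the parameter update, using the new parameters.
        p_new = parser.parse(x)
        consistent = {z: q for z, q in p_new.items() if executor(z, s) == y}
        norm = sum(consistent.values()) or 1.0
        for z, q in p_new.items():
            Q[z] += q ** beta
            C[z] += consistent.get(z, 0.0) / norm  # pseudocount over consistent z
```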
Compared to prior work where the setting is specifically
designed to require pragmatic inference,
pragmatics arises naturally in ILLG.
We think that this form of pragmatics matters most during learning and would become less important if we had more data.
Indeed, if we have a lot of data and a small
number of possible z's, then
L(z|x) ≈ pθ(z|x), since ∑x pθ(z|x)p(x) → p(z) when β=1 (technically, we also need pθ to be *well-specified*).
However, for semantic parsing, we would not be in this regime even with a large amount of training data. In particular, we are nowhere near that regime in SHRDLURN: most utterances and logical forms are seen only once, so modeling pragmatics remains important.
5 Experiments
--------------
### 5.1 Setting
#### Data.
Using Amazon Mechanical Turk (AMT), we paid 100 workers 3 dollars each to play
SHRDLURN.
In total, we have 10223 utterances along with their starting states
s. Of these, 8874 utterances are labeled with their denotations
y; the rest are unlabeled, since the player can try any
utterance without accepting an action.
100 players completed the entire game under identical
settings.
We deliberately chose to start from scratch for every worker,
so that we can study the diversity of strategies that different people
used in a controlled setting.
Each game consists of 50 block tasks divided into 5 levels of
10 tasks each, in increasing complexity.
Each level aims to reach an end goal given a start state.
Each game took on average 89 utterances
to complete (this number is not 50 because some block tasks need multiple steps, and players are also allowed to explore without reaching the goal).
It only took 6 hours to complete these 100 games on AMT and each game
took around an hour on average according to AMT’s *work time* tracker (which
does not account for multi-tasking players).
The players were provided minimal instructions on the game controls.
Importantly, we gave no example utterances in order to avoid biasing their language use.
Around 20 players were confused and told us that the instructions were
not clear and gave us mostly spam utterances.
Fortunately, most players understood the setting and some even
enjoyed SHRDLURN as reflected by their optional comments:
* That was probably the most fun thing I have ever done on mTurk.
* Wow this was one mind bending games [sic].
#### Metrics.
We use the *number of scrolls* as a measure of game
performance for each player. For each
example, the number of scrolls is the position in the list Y of the action selected by the player.
It was possible to complete this version of SHRDLURN by
scrolling (all actions can be found in the first 125 of Y)—22 of the 100
players failed to teach an actual language, and instead finished the game
mostly by scrolling. Let us call them
*spam players*, who usually typed single
letters, random words, digits, or random phrases (e.g. ‘how are you’).
Overall, spam players had to scroll a lot: 21.6 scrolls per utterance versus only
7.4 for the non-spam players.
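As a small illustration (our own sketch; the ranked list and chosen action are made up, and we take the count as the zero-based index of the selection), the per-example scroll count is simply the position of the selected action in the ranked list Y:

```python
def num_scrolls(ranked_actions, chosen_action):
    # Position of the chosen action in the ranked list Y
    # (0 if the top-ranked action is already the intended one).
    return ranked_actions.index(chosen_action)

# The intended action sits third in the ranked list, so two scrolls are needed.
assert num_scrolls(["z1", "z2", "z3", "z4"], "z3") == 2
```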
### 5.2 Human strategies
Most successful players (1st–20th):

* rem cy pos 1, stack or blk pos 4, rem blk pos 2 thru 5, rem blk pos 2 thru 4, stack bn blk pos 1 thru 2, fill bn blk, stack or blk pos 2 thru 6, rem cy blk pos 2, fill rd blk (3.01)
* remove the brown block, remove all orange blocks, put brown block on orange blocks, put orange blocks on all blocks, put blue block on leftmost blue block in top row (2.78)
* Remove the center block, Remove the red block, Remove all red blocks, Remove the first orange block, Put a brown block on the first brown block, Add blue block on first blue block (2.72)

Average players (21st–50th):

* reinsert pink, take brown, put in pink, remove two pink from second layer, Add two red to second layer in odd intervals, Add five pink to second layer, Remove one blue and one brown from bottom layer (9.17)
* remove red, remove 1 red, remove 2 4 orange, add 2 red, add 1 2 3 4 blue, emove 1 3 5 orange, add 2 4 orange, add 2 orange, remove 2 3 brown, add 1 2 3 4 5 red, remove 2 3 4 5 6, remove 2, add 1 2 3 4 6 red (8.37)
* move second cube, double red with blue, double first red with red, triple second and fourth with orange, add red, remove orange on row two, add blue to column two, add brown on first and third (7.18)

Least successful players (51st onward):

* holdleftmost, holdbrown, holdleftmost, blueonblue, brownonblue1, blueonorange, holdblue, holdorange2, blueonred2, holdends1, holdrightend, hold2, orangeonorangerightmost (14.15)
* ‘add red cubes on center left, center right, far left and far right’, ‘remove blue blocks on row two column two, row two column four’, remove red blocks in center left and center right on second row (12.6)
* laugh with me, red blocks with one aqua, aqua red alternate, brown red red orange aqua orange, red brown red brown red brown, space red orange red, second level red space red space red space (14.32)

Spam players (∼85th–100th):

* next, hello happy, how are you, move, gold, build goal blocks, 23,house, gabboli, x, run,,xav, d, j, xcv, dulicate goal (21.7)

Most interesting:

* usuń brązowe klocki, postaw pomarańczowy klocek na pierwszym klocku, postaw czerwone klocki na pomarańczowych, usuń pomarańczowe klocki w górnym rzędzie
* rm scat + 1 c, + 1 c, rm sh, + 1 2 4 sh, + 1 c, - 4 o, rm 1 r, + 1 3 o, full fill c, rm o, full fill sh, - 1 3, full fill sh, rm sh, rm r, + 2 3 r, rm o, + 3 sh, + 2 3 sh, rm b, - 1 o, + 2 c
* mBROWN, mBLUE, mORANGE, RED+ORANGE^ORANGE, BROWN+BROWNm1+BROWNm3, ORANGE +BROWN +ORANGE^m1+ ORANGE^m3 + BROWN^^2 + BROWN^^4
Table 3:
Example utterances, along with the average number of scrolls for that
player in parentheses. Success is measured by the number of scrolls:
more successful players need fewer scrolls.
1) The 20 most successful players tend to use consistent and concise language whose
semantics is similar to our logical language.
2) Average players tend to be slightly
more verbose and inconsistent (first and third examples), or significantly different from our
logical language (second example).
3) Reasons for being unsuccessful vary. First example: no tokenization; second: used a coordinate system and many conjunctions;
third: confused in the beginning, and used a language very different from our logical language.
Some example utterances can
be found in Table [3](#S5.T3 "Table 3 ‣ 5.2 Human strategies ‣ 5 Experiments ‣ Learning Language Games through Interaction"). Most of the players used English,
but they varied in their adherence to conventions such as the use of
determiners, plurals, and proper word ordering.
Five players invented their own languages, which were more
precise and more consistent than general English. One player used Polish, and
another used Polish notation (bottom of Table [3](#S5.T3 "Table 3 ‣ 5.2 Human strategies ‣ 5 Experiments ‣ Learning Language Games through Interaction")).
Overall, we find that many players adapt in ILLG by becoming
more consistent, less verbose, and more precise, even if they used
standard English at the beginning.
For example, some players became more consistent over time
(e.g. from using both ‘remove’ and ‘discard’ to only using
‘remove’).
In terms of verbosity, removing
function words like determiners as the game progresses is a common
adaptation. In each of the following examples from different players, we compare an utterance
that appeared early in the game
to a similar utterance that appeared later: ‘Remove the red ones’ became ‘Remove red.’; ‘add brown on top of red’ became ‘add orange on red’; ‘add red blocks to all red blocks’ became ‘add red to
red’; ‘dark red’ became ‘red’;
one player used ‘the’ in all of the first 20 utterances, and then never used ‘the’
in the last 75 utterances.
Players also vary in precision, ranging from overspecified (e.g. ‘remove the
orange cube at the left’, ‘remove red blocks from top row’) to underspecified or requiring context
(e.g. ‘change colors’, ‘add one blue’,
‘Build more blocus’, ‘Move the blocks fool’,‘Add two red cubes’). We found that some players became more precise
over time, as they gain a better understanding of ILLG.
Most players use utterances that actually do not match our logical
language in Table [1](#S2.T1 "Tablica 1 ‣ 2 Setting ‣ Learning Language Games through Interaction"), even the successful players.
In particular, numbers are often used. While
some concepts always have the same effect in our blocks world (e.g. ‘first block’ means leftmost),
most are different.
More concretely,
of the top 10 players, 7 used numbers of some form and only 3 players matched our logical language.
Some players who did not match the logical language performed quite well nevertheless. One
possible explanation is that the required action is somewhat constrained
by the logical language and some tokens can have unintended interpretations.
For example, the computer can
correctly interpret numerical positional references, as long as the player only
refers to the leftmost and rightmost positions.
So if the player says ‘rem blk pos 4’ and ‘rem blk pos 1’,
the computer can interpret ‘pos’ as
rightmost and interpret the bigram
(‘pos’, ‘1’) as leftmost.
On the other hand, players who deviated significantly by
describing the desired state declaratively (e.g. ‘red orange red’, ‘246’) rather than using actions,
or a coordinate system (e.g. ‘row two column two’) performed poorly.
Although players do not have to match our
logical language exactly to perform well, being similar is definitely helpful.
#### Compositionality.
As far as we can tell,
all players used a compositional language;
no one invented unrelated words for each action.
Interestingly, 3 players did not put spaces between words.
Since we assume monomorphemic words separated by spaces,
they had to do a lot of scrolling as a result (e.g., 14.15 with
utterances like ‘orangeonorangerightmost’).
### 5.3 Computer strategies
We now present quantitative results on how
quickly the computer can learn, where our goal is to achieve high accuracy on new
utterances as we make just a single pass over the data. The number of
scrolls used to evaluate players is sensitive to outliers and not as
intuitive as accuracy.
Instead, we consider *online accuracy*, described as follows.
Formally, if a player produced T utterances x(j) and
labeled them y(j), then
online accuracy ≝ (1/T) ∑_{j=1}^{T} I[ y(j) = ⟦z(j)⟧s(j) ],
where z(j)=argmaxzpθ(j−1)(z|x(j)) is the model
prediction based on the previous parameter θ(j−1).
Note that the online accuracy is defined with respect to the player-reported labels,
which only corresponds to the actual accuracy if the player is precise and honest.
This is not true for most spam players.
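A compact way to compute this metric (our own sketch, reusing the assumed `parser` and `executor` interfaces from the Algorithm 1 sketch above):

```python
def online_accuracy(examples, parser, executor):
    """examples: list of (utterance x, start state s, player-reported label y)."""
    correct = 0
    for x, s, y in examples:
        p = parser.parse(x)         # uses theta^(j-1), i.e. before updating on x
        z_hat = max(p, key=p.get)   # model prediction z^(j)
        correct += int(executor(z_hat, s) == y)
        parser.update(x, y)         # only now take the gradient step on example j
    return correct / len(examples) if examples else 0.0
```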
Figure 2 (two panels, (a) and (b)):
Pragmatics improves online accuracy. In these plots, each marker is a player.
Red o: players ranked 1–20 in terms of minimizing the number of scrolls; green x: players 20–50;
blue +: lower than 50 (includes spam players).
Marker sizes correspond to player rank, where better players
are depicted with larger markers.
(a): online accuracies with and without pragmatics for the full model;
(b): the same for the half model.
| Method | top 10 | top 20 | top 50 | all 100 |
| --- | --- | --- | --- | --- |
| memorize | 25.4 | 24.5 | 22.5 | 17.6 |
| half model | 38.7 | 38.4 | 36.0 | 27.0 |
| half + prag | 43.7 | 42.7 | 39.7 | 29.4 |
| full model | 48.6 | 47.8 | 44.9 | 33.3 |
| full + prag | 52.8 | 49.8 | 45.8 | 33.8 |
Table 4: Average online accuracy under various settings (columns: players ranked by number of scrolls).
memorize: featurize entire utterance and logical form non-compositionally;
half model: featurize the utterances with unigrams, bigrams, and skip-grams but conjoin with the entire logical form;
full model: the model described in Section [3](#S3 "3 Semantic parsing model ‣ Learning Language Games through Interaction");
+prag: the models above, with our online pragmatics algorithm described in Section [4](#S4 "4 Modeling pragmatics ‣ Learning Language Games through Interaction").
Both compositionality and pragmatics improve accuracy.
#### Compositionality.
To study the importance of compositionality,
we consider two baselines.
First, consider a non-compositional model (*memorize*) that just remembers pairs of complete utterance and
logical forms.
We implement this using indicator features on entire pairs
(x, z), e.g., (‘remove all the red blocks’, zrm-red),
and use a large learning rate.
Second, we consider a model (*half*) that
treats
utterances compositionally with unigram, bigram, and
skip-trigram features,
but regards the logical forms as non-compositional,
so we have features such as
(‘remove’, zrm-red), (‘red’, zrm-red), etc.
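To clarify the difference between the three featurizations, here is a rough sketch (our own; the feature names are illustrative, and `logical_form_fragments` stands in for the logical-form tree features of Section 3):

```python
def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def utterance_features(x):
    # Unigrams, bigrams, and skip-trigrams such as ('remove', *, 'cyan').
    toks = x.lower().split()
    feats = [("uni", t) for t in toks]
    feats += [("bi", b) for b in ngrams(toks, 2)]
    feats += [("skip", (a, c)) for a, _, c in ngrams(toks, 3)]
    return feats

def memorize_features(x, z):
    # Non-compositional: one indicator per complete (utterance, logical form) pair.
    return [("pair", x, str(z))]

def half_features(x, z):
    # Compositional utterance side, but the logical form is a single atom.
    return [(f, str(z)) for f in utterance_features(x)]

def full_features(x, z, logical_form_fragments):
    # Both sides compositional: conjoin utterance n-grams with fragments
    # (predicates, parent-child pairs, ...) of the logical form z.
    return [(f, frag) for f in utterance_features(x) for frag in logical_form_fragments(z)]
```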
Table [4](#S5.T4 "Tablica 4 ‣ 5.3 Computer strategies ‣ 5 Experiments ‣ Learning Language Games through Interaction") shows that the full model (Section [3](#S3.SS0.SSS0.Px1 "Features. ‣ 3 Semantic parsing model ‣ Learning Language Games through Interaction"))
significantly outperforms both the *memorize* and *half* baselines.
The learning rate η=0.1 is selected via cross validation, and we
used α=1 and β=3 following Smith et al. ([2013](#bib.bib24)).
#### Pragmatics.
Next, we study the effect of pragmatics on online accuracy.
Figure [2](#S5.F2 "Rysunek 2 ‣ 5.3 Computer strategies ‣ 5 Experiments ‣ Learning Language Games through Interaction")
shows that modeling pragmatics helps successful players (e.g., top 10 by number of scrolls)
who use precise and consistent languages.
Interestingly, our pragmatics model did not help, and could even hurt, the less
successful players who are less precise and consistent.
This is expected behavior:
the pragmatics model assumes that the human is cooperative and behaving rationally.
For the bottom half of the players, this assumption is not true,
in which case the pragmatics model is not useful.
6 Related Work and Discussion
------------------------------
Our work connects with a broad body of work on grounded language,
in which language is used in some environment as a means towards some goal.
Examples include playing games (Branavan et al., [2009](#bib.bib3), [2010](#bib.bib4); Reckman et al., [2010](#bib.bib23)),
interacting with robots (Tellex et al., [2011](#bib.bib26), [2014](#bib.bib25)),
and following instructions (Vogel and Jurafsky, [2010](#bib.bib28); Chen and Mooney, [2011](#bib.bib7); Artzi and Zettlemoyer, [2013](#bib.bib1)).
Semantic parsing of utterances into logical forms, which we leverage, plays an important role
in these settings
(Kollar et al., [2010](#bib.bib16); Matuszek et al., [2012](#bib.bib20); Artzi and Zettlemoyer, [2013](#bib.bib1)).
What makes this work unique is
our new interactive learning of language games (ILLG) setting,
in which a model has to learn a language from *scratch* through interaction.
While online gradient descent is frequently used,
for example in semantic parsing (Zettlemoyer and Collins, [2007](#bib.bib33); Chen, [2012](#bib.bib6)),
we use it in a truly online setting, taking one pass over the data and
measuring online accuracy (Cesa-Bianchi and Lugosi, [2006](#bib.bib5)).
To speed up learning,
we leverage computational models of pragmatics
(Jäger, [2008](#bib.bib15); Golland et al., [2010](#bib.bib11); Frank and Goodman, [2012](#bib.bib9); Smith et al., [2013](#bib.bib24); Vogel et al., [2013](#bib.bib27)).
The main difference is that these previous works use pragmatics with a trained base model,
whereas we learn the model online.
Monroe and Potts ([2015](#bib.bib21)) use learning to improve the pragmatics model.
In contrast, we use pragmatics to speed up the learning process
by capturing phenomena like mutual exclusivity (Markman and Wachtel, [1988](#bib.bib19)).
We also differ from prior work in several details.
First, we model pragmatics in the online learning setting where we use
an online update for the pragmatics model. Second, unlike the
reference games where pragmatic effects play an important role by
design, SHRDLURN is not specifically designed to require pragmatics.
The improvement we get is mainly due to players trying to be
consistent in their language use.
Finally, we treat both the utterances and the logical
forms as featurized compositional objects.
Smith et al. ([2013](#bib.bib24)) treat utterances (i.e., words) and logical
forms (i.e., objects) as categories;
Monroe and Potts ([2015](#bib.bib21)) used features, but also over flat categories.
Looking forward, we believe that the ILLG setting is worth
studying and has important implications for natural language interfaces.
Today, these systems are trained once and deployed.
If these systems could quickly adapt to user feedback in real time, as in this work,
then we might be able to more readily create
systems for resource-poor languages
and new domains that are customizable and improve through use.
Acknowledgments
---------------
This work was supported by the DARPA Communicating with Computers (CwC) program under ARO
prime contract no. W911NF-15-1-0462. The first author is supported by
an NSERC PGS-D fellowship. In addition, we thank Will Monroe and Chris Potts
for their insightful comments and discussions on pragmatics.
Reproducibility
---------------
All code, data, and experiments for this
paper are available on the CodaLab platform:
<https://worksheets.codalab.org/worksheets/0x9fe4d080bac944e9a6bd58478cb05e5e>
The client side code is here:
<https://github.com/sidaw/shrdlurn/tree/acl16-demo>
and a demo: <http://shrdlurn.sidaw.xyz> |
a0b4db98-2518-496d-bbdc-bcbcf0c73487 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | You're not a simulation, 'cause you're hallucinating
I've found that the "[Simulators](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators)" post is excellent for breaking prior assumptions about large language models - these algorithms are not agents, nor genies, nor Oracles. They are currently something very different.
But, [like Beth Barnes](https://www.lesswrong.com/posts/dYnHLWMXCYdm9xu5j/simulator-framing-and-confusions-about-llms), I feel that the simulators framing can be misleading if you take it literally. And hallucinations often provide examples of where "the model is predicting what token would appear next in the training data given the input tokens" gives a better model than "simulators".
For example, here are some reviews of fictional films, written by canonically quite truthful characters:
And:
If we used the simulator view, we might expect that these truthful characters would confess "I haven't heard of this movie" or "I haven't seen it myself, but based on its title I would assume that..." But they don't. The fact that the simulated character is truthful does not mean that they speak the truth; we'd have been wrong if we predicted that.
From the 'token completion (trained on internet data)' perspective, though, ChatGPT's behaviour makes perfect sense. Online, if someone asks about a certain movie, it is very rare for anyone to say "never heard of it - are you sure it exists?". Indeed, it's rare for people to say "haven't seen it" unless it's a two-way conversation. The people who haven't seen it don't say anything, and so most of the answers come from people who have seen it, and have opinions on it. So in the training data, answers are plentiful and "I don't know"s are rare. Conversely, people rarely post questions about non-existent movies. So we would expect that ChatGPT will provide answers for questions rather than admitting its ignorance or doubting the question.
And it's not just reviews of imaginary movies that it will make up. After failing to get it to make up details about a specific imaginary website (www.artifacts.co.it), I got it to spout confident nonsense by getting it to compare that website to a second, equally imaginary one:
Again, consider how most website comparison questions would play out online. ChatGPT is not running a simulation; it's answering a question in the style that it's seen thousands - or millions - of times before. |
10dfbdd5-33b7-4c9d-b39b-6fa4dee66557 | trentmkelly/LessWrong-43k | LessWrong | My book: Simulating Dennett - This Wednesday in Sao Paulo
There's been somewhat frequent coverage of Daniel Dennett on Lesswrong:
How not to be a Naïve Computationalist
Dennett's "Consciousness Explained": Prelude
"Where Am I?", by Daniel Dennett
Dennett's heterophenomenology
My personal favorite: Zombies: The Movie
I wrote a book called Simulating Dennett nearly five years ago (if you are considering an academic career, keep that slow pace in mind, for good or ill). It summarizes Dennett's philosophy while trying to make the reader able to think like Dennett. It seemed to me at the time, and still does now, that Dennett's kind of mind is very interesting and we should have more of those, so I tried my best to create a Dennett installer in book form.
Simulating Dennett: Tools and Constructions of a Naturalist
Is the 244 pages that ensued. Portuguese or Spanish reading skills advised. Or use it to learn Portuguese prior to your trip to Rio, Pantanal, Iguaçu Falls and the Amazon Forest. (for legal reasons I've chopped out the second half of the file, but there are instructions on how to get it when you get to the end of the first half)
Abstract
This dissertation intends to provide the reader with an inner simulation of Daniel Dennett’s form of reasoning, spreading over his whole philosophy, emphasizing his treatment of patterns, the evolutionary algorithm, consciousness, and his use of illata, abstracta, semantic, and syntax, to carve nature at its joints, especially biology and the human mind. It recasts, in a new light, great part of his most important ideas, and reverse engineers what made him think in particular ways, walking the reader through similar pathways, fostering an active learning of a thinking style, above and beyond a mere exposition of the results obtained by this thinking style over the years.
Keywords: Daniel Dennett, Consciousness, Memetics, Intentional stance, Evolution, Algorithm.
This Wednesday 2013-03-19 at 14:00 I'll be presenting it as a thesis in the University |
2f0b9d57-435a-430b-87c7-3256da9174df | trentmkelly/LessWrong-43k | LessWrong | A NotKillEveryoneIsm Argument for Accelerating Deep Learning Research
TLDR: This is really just a longer version of this comment.
A metaphor
You are Rocket McRocket-Face, the CEO of Rockets Inc, the world's largest and most reputable rocket company. Rockets Inc isn't the only rocket company in the world, but it is by far the biggest, richest, and most powerful rocket company on Earth. No other rocket company holds a candle to Rockets Inc. Nor are they likely to in the next 3-5 years.
Rocket McRocket-Face, CEO of Rockets Inc.
As the CEO of Rockets Inc, you dream of one day reaching the moon. Your reasons for dreaming are twofold. The first reason is that reaching the moon is a dream that men have had since ancient times. It is an achievement truly worthy to behold. The second reason is a bit more practical. It is widely agreed in the world of rocket science that the first person to reach the moon will hold a commanding advantage. From the heights of the moon the first person to reach it will be virtually unconquerable, able to hurl moon rocks down to punish any of their enemies.
Picture of Rocket McRocket-Face looking at a map of the moon
One day, two scientists come to you with research proposals for new types of rockets.
The first scientist is Mathy McEngineer. Mathy is one of your best engineers. He is well known for the reliability of his rockets. His design is comforting, simple-to-understand, and a natural extension of currently known rocket technologies. The design is so simple it can be explained to anyone with a degree in rocket-engineering in a few minutes. It involves taking the current well-known and trustworthy rocket designs and adding a few more parts: more engines, more fuel. Nothing out of the ordinary.
Mathy’s plan may not be brilliant, but it’s trustworthy and safe. And there’s a good chance that it will reach the moon (although it’s unlikely to reach even the nearest star).
Mathy McEngineer
After your meeting with Mathy, you are feeling good about your chances of winning the race to the mo |
babd9eec-960d-42bf-bdce-fa6811c0064a | trentmkelly/LessWrong-43k | LessWrong | Proposal: we should start referring to the risk from unaligned AI as a type of *accident risk*
In the wider political sphere, a lot of people are worried about AI misuse risk. Unaligned AI is not a type of misuse. I think the clearest way to describe this is as an accident risk, in the same sense of the word as industrial accident. In particular, AI existential risk is a type of accident from operating heavy machinery. Using this terminology can immediately help someone not familiar with AI know the category of risk we are talking about, and that in particular it isn't misuse risk.
Note that this isn't intended to replace the term existential risk. Rather, it is meant to be used in addition to that term, and in particular it should be used when contrasting with theoretical misuse risks.
Current terminology: no good reference point
Alice: I hear that you are worried about AI existential risk. So in particular, you are worried about misuse.
Bob: No, the AI kills everyone on its own.
Alice: Is there anything else like this?
Bob: Uhm, Nuclear explosions?
Alice: So a misuse risk?
Bob: No, I mean last century they were worried it would set the atmosphere on fire.
Alice: I'm not familiar with that either.
Bob: It's something called instrumental convergence where the AI kills everyone to achieve a goal.
Alice: So misuse risk?
Bob: Not quite, the creators didn't intend for that result.
Alice: I still have no reference point for what you are talking about. I guess I'll need to analyze your arguments more specifically before even understanding the general category of risk you're afraid of. The probability of me actually doing this is probably like 10%-ish.
New terminology: tons of reference points!
Alice: I hear that you are worried about AI existential risk. So in particular, you are worried about misuse.
Bob: No, I am worried about accident risk.
Alice: oh, so like a car crash or an industrial accident!
Bob: Yes! I'm worried that things will go wrong in ways the creator didn't intend.
Alice: Ah, so do you think we need more laboratory testing?
|