id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
75905e44-c30d-4771-990c-14db6f039114 | trentmkelly/LessWrong-43k | LessWrong | Krona Compare
Cross-posted from my NAO Notebook.
When trying to understand how metagenomic samples differ I often want to drill down through the taxonomic hierarchy, comparing relative abundances. I've tried several tools for this, existing and custom, and haven't been all that happy with any. For most purposes, the tool I like most is Krona, which shows an interactive chart. For example, here's Krona showing the results of running the NAO's v1 metagenomic sequencing pipeline on the unenriched Hyperion Treatment Plant samples from Rothman et al. 2021:
(interactive version)
What I often wish I had, however, are linked Krona charts, where I could see multiple samples at once, and drilling down in one sample showed the corresponding portion of the other samples. After failing to find something like this, I hacked something together by monkey-patching the output of Krona. Here's it comparing the samples from several wastewater treatment plants in the same study:
When I click on "viruses" in any of the plots, all four zoom in on the viral fraction:
That's a lot of Tobamovirus!
I only just made this, so it's likely buggy, but if that doesn't put you off you're welcome to give it a try. The interactive version of the charts above is here (warning: 60MB) and the generation code is open source, on github.
----------------------------------------
If you're interested in the technical details of how I made this:
* It's a wrapper around the ktImportTaxonomy command from KronaTools.
* To get multiple charts on the same page, they're each in their own iframe, via srcdoc.
* There is unfortunately no CSS way of saying "please lay out these rectangles to take up as much of the viewport as possible while maintaining an aspect ratio close to 1:1", so I use some awkward JS. It checks each possible number of columns and takes the one that maximizes the minimum dimension (width or height) of the charts. Luckily there are only a few options to consider.
* So that the co |
15f2c6f5-8d16-4afb-9373-d99637eed241 | trentmkelly/LessWrong-43k | LessWrong | CRMArena-Pro: Holistic Assessment of LLM Agents
Across Diverse Business Scenarios and Interactions
A paper by Salesforce AI Research
Abstract
While AI agents have transformative potential in business, the absence of publicly available business data on widely used platforms hinders effective performance benchmarking. Existing benchmarks fall short in realism, data fidelity, agent-user interaction, and coverage across business scenarios and industries. To address these gaps, we introduce CRMArena-Pro, a novel benchmark for holistic and realistic assessment of LLM agents in diverse professional settings. CRMArena-Pro expands on CRMArena with nineteen expert-validated tasks across customer sales, service, as well as configure, price, and quote for Business-to-Business and Businessto-Customer scenarios. It also incorporates multi-turn interactions guided by diverse personas and confidentiality awareness assessments. Experiments show leading LLM agents achieve approximately solely 58% single-turn success rate on CRMArena-Pro, with significant performance drops in multi-turn settings to 35%. Among the business skills evaluated, Workflow Execution is notably more tractable, with top-performing agents surpassing 83% success rate in single-turn tasks, while other skills present greater challenges. Additionally, agents exhibit near-zero inherent confidentiality awareness (improvable with prompting but often at a cost to task performance). These results underscore a significant gap between current LLM capabilities and real-world enterprise demands, highlighting needs for improved multi-turn reasoning, confidentiality adherence, and versatile skill acquisition. |
9022d592-147c-44c6-b87e-485c029566a4 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Interpreting Yudkowsky on Deep vs Shallow Knowledge
*Here is an exploration of what Eliezer Yudkowsky means when he writes about deep vs shallow patterns (although I’ll be using "knowledge" instead of "pattern" for reasons explained in the next section). Not about any specific pattern Yudkowsky is discussing, mind you, about what deep and shallow patterns are at all. In doing so, I don’t make any criticism of his ideas and instead focus on quoting him (seriously, this post is like 70% quotes) and interpreting him by finding the best explanation I can of his words (that still fit them, obviously). Still, there’s a risk that my interpretation misses some of his points and ideas— I’m building a lower-bound on his argument’s power that is as high as I can get, not an upper-bound. Also, I might just be completely wrong, in which case defer to Yudkowsky if he points out that I’m completely missing the point.*
*Thanks to Eliezer Yudkowsky, Steve Byrnes, John Wentworth, Connor Leahy, Richard Ngo, Kyle, Laria, Alex Turner, Daniel Kokotajlo and Logan Smith for helpful comments on a draft.*
Back to the FOOM: Yudkowsky’s explanation
=========================================
In [recent discussions](https://www.lesswrong.com/s/n945eovrA3oDueqtq), Yudkowsky often talks about deep patterns and deep thinking. What he made clear in a comment on this draft is that he has been using the term “deep patterns” in two different ways:
* What I’ll call **deep knowledge**, which is a form of human knowledge/theory as well as the related epistemic strategies. This is what I explore below.
* What I’ll call **deep cognition**, which is the sort of deep patterns that Yudkowsky points out AGI would have. There’s a link and an analogy with the deep knowledge, but I don’t get it enough to write something convincing to me and Yudkowsky, **so I’ll mostly avoid that topic in this post.**
**Focusing on deep knowledge** then, Yudkowsky recently seems to ascribe his interlocutors’ failure to grasp his point to their inability to grasp different instances of deep knowledge.
(All quotes from Yudkowsky if not mentioned otherwise)
(From the [first discussion](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/7im8at9PmhbT4JHsW#1_1__Deep_vs__shallow_problem_solving_patterns) with Richard Ngo)
> In particular, just as I have a model of the Other Person's Beliefs in which they think alignment is easy because they don't know about difficulties I see as very deep and fundamental and hard to avoid, I also have a model in which people think "why not just build an AI which does X but not Y?" because they don't realize what X and Y have in common, which is something that draws deeply on having deep models of intelligence. And it is hard to convey this deep theoretical grasp.
>
>
>
That being said, he doesn’t really explain what this sort of deep knowledge is.
(From the [same discussion](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/7im8at9PmhbT4JHsW#1_1__Deep_vs__shallow_problem_solving_patterns) with Ngo)
> (Though it's something of a restatement, a reason I'm not going into "my intuitions about how cognition works" is that past experience has led me to believe that conveying this info in a form that the Other Mind will actually absorb and operate, is really quite hard and takes a long discussion, relative to my current abilities to Actually Explain things; it is the sort of thing that might take doing homework exercises to grasp how one structure is appearing in many places, as opposed to just being flatly told that to no avail, and I have not figured out the homework exercises.)
>
>
The thing is, he did exactly that in [the FOOM debate](https://intelligence.org/files/AIFoomDebate.pdf) with Robin Hanson 13 years ago. (For those unaware of this debate, Yudkoswky is responding to Hanson’s use of trends — like Moore’s law — extrapolations to think about intelligence explosion).
(From [The Weak Inside View](https://www.lesswrong.com/posts/w9KWNWFTXivjJ7rjF/the-weak-inside-view) (2008))
> Robin keeps asking me what I’m getting at by talking about some reasoning as “deep” while other reasoning is supposed to be “surface.” One thing which makes me worry that something is “surface” is when it involves generalizing a level N feature across a shift in level N−1 causes.
>
> For example, suppose you say, “Moore’s Law has held for the last sixty years, so it will hold for the next sixty years, even after the advent of superintelligence” (as Kurzweil seems to believe, since he draws his graphs well past the point where you’re buying a billion times human brainpower for $1,000).
>
> Now, if the Law of Accelerating Change were an exogenous, ontologically fundamental, precise physical law, then you wouldn’t expect it to change with the advent of superintelligence.
>
> But to the extent that you believe Moore’s Law depends on human engineers, and that the timescale of Moore’s Law has something to do with the timescale on which human engineers think, then extrapolating Moore’s Law across the advent of superintelligence is extrapolating it across a shift in the previous causal generator of Moore’s Law.
>
> So I’m worried when I see generalizations extrapolated across a change in causal generators not themselves described—i.e., the generalization itself is on the level of the outputs of those generators and doesn’t describe the generators directly.
>
> If, on the other hand, you extrapolate Moore’s Law out to 2015 because it’s been reasonably steady up until 2008—well, Reality is still allowed to say, “So what?” to a greater extent than we can expect to wake up one morning and find Mercury in Mars’s orbit. But I wouldn’t bet against you, if you just went ahead and drew the graph.
>
> So what’s “surface” or “deep” depends on what kind of context shifts you try to extrapolate past
>
>
An important subtlety here comes from the possible conflation of two uses of “surface”: the implicit use of “surface knowledge” as the consequences of some underlying causal processes/generator, and the explicit use of “surface knowledge” as drawing similarities without thinking about the causal process generating them. To simplify the discussion, let’s use the more modern idiom of “shallow” for the more explicit sense here.
So what is Yudkowsky pointing at? Two entangled things:
* **If you have shallow knowledge, that is a trend without an underlying causal model, then you can’t extend it when the causal process generating it changes.** So if Moore’s law depends on “the timescale on which human engineers think”, we can’t extend it past the intelligence explosion, because then human engineers would be reply by AI engineers which would think faster.
* **If you have shallow knowledge, you can’t even know when to extend the trend safely because understanding when the underlying causal process changes is harder when you don’t know what the causal process is!**
Imagine a restaurant that has a dish you really like. The last 20 times you went to eat there, the dish was amazing. So should you expect that the next time it will also be great? Well, that depends on whether anything in the kitchen changes. Because you don’t understand what makes the dish great, you don’t know of the most important aspects of the causal generators. So if they can’t buy their meat/meat-alternative at the same place, maybe that will change the taste; if the cook is replaced, maybe that will change the taste; if you go at a different time of the day, maybe that will change the taste.
You’re incapable of extending your trend (except by replicating all the conditions) to make a decent prediction because you don’t understand where it comes from. If on the other hand you knew why the dish was so amazing (maybe it’s the particular seasoning, or the chef’s touch), then now you can estimate its quality. But then you’re not using the trend, you’re using a model of the underlying causal process.
Here is another phrasing by Yudkowsky from [the same essay](https://www.lesswrong.com/posts/w9KWNWFTXivjJ7rjF/the-weak-inside-view):
> Though this is to some extent an argument produced after the conclusion, I would explain my reluctance to venture into quantitative futurism via the following trichotomy:
>
> * On problems whose pieces are individually precisely predictable, you can use the Strong Inside View to calculate a final outcome that has never been seen before—plot the trajectory of the first moon rocket before it is ever launched, or verify a computer chip before it is ever manufactured.
> * On problems that are drawn from a barrel of causally similar problems, where human optimism runs rampant and unforeseen troubles are common, the Outside View beats the Inside View. Trying to visualize the course of history piece by piece will turn out to not (for humans) work so well, and you’ll be better off assuming a probable distribution of results similar to previous historical occasions—without trying to adjust for all the reasons why this time will be different and better.
> * But on problems that are new things under the Sun, where there’s a huge change of context and a structural change in underlying causal forces, the Outside View also fails—try to use it, and you’ll just get into arguments about what is the proper domain of “similar historical cases” or what conclusions can be drawn therefrom. In this case, the best we can do is use the Weak Inside View—visualizing the causal process—to produce loose, qualitative conclusions about only those issues where there seems to be lopsided support.
>
More generally, these quotes point out to what Yudkowsky means when he says “deep knowledge”: **the sort of reasoning that focuses on underlying causal models**.
As he says himself:
> To stick my neck out further: I am liable to trust the Weak Inside View over a “surface” extrapolation, if the Weak Inside View drills down to a deeper causal level and the balance of support is sufficiently lopsided.
>
>
Before going deeper into how such deep knowledge/Weak Inside View works and how to build confidence in it, I want to touch upon the correspondence between this kind of thinking and [the Lucas Critique](https://en.wikipedia.org/wiki/Lucas_critique) in macroeconomics. This link has been [pointed out](https://www.lesswrong.com/posts/7MCqRnZzvszsxgtJi/christiano-cotra-and-yudkowsky-on-ai-progress?commentId=davpZXkuLTvoTfZK6) in the comments of the recent discussions — we thus shouldn’t be surprised that Yudkowsky wrote about it 8 years ago (yet I was surprised by this).
(From [Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf) (2013))
> The “outside view” (Kahneman and Lovallo 1993) is a term from the heuristics and biases program in experimental psychology. A number of experiments show that if you ask subjects for estimates of, say, when they will complete their Christmas shopping, the right question to ask is, “When did you finish your Christmas shopping last year?” and not, “How long do you think it will take you to finish your Christmas shopping?” The latter estimates tend to be vastly over-optimistic, and the former rather more realistic. In fact, as subjects are asked to make their estimates using more detail—visualize where, when, and how they will do their Christmas shopping—their estimates become more optimistic, and less accurate. Similar results show that the actual planners and implementers of a project, who have full acquaintance with the internal details, are often much more optimistic and much less accurate in their estimates compared to experienced outsiders who have relevant experience of similar projects but don’t know internal details. This is sometimes called the dichotomy of the inside view versus the outside view. The “inside view” is the estimate that takes into account all the details, and the “outside view” is the very rough estimate that would be made by comparing your project to other roughly similar projects without considering any special reasons why this project might be different.
>
> The Lucas critique (Lucas 1976) in economics was written up in 1976 when “stagflation”—simultaneously high inflation and unemployment—was becoming a problem in the United States. Robert Lucas’s concrete point was that the Phillips curve trading off unemployment and inflation had been observed at a time when the Federal Reserve was trying to moderate inflation. When the Federal Reserve gave up on moderating inflation in order to drive down unemployment to an even lower level, employers and employees adjusted their long-term expectations to take into account continuing inflation, and the Phillips curve shifted. Lucas’s larger and meta-level point was that the previously observed Phillips curve wasn’t fundamental enough to be structurally invariant with respect to Federal Reserve policy—the concepts of inflation and unemployment weren’t deep enough to describe elementary things that would remain stable even as Federal Reserve policy shifted.
>
>
and later in that same essay:
> The lesson of the outside view pushes us to use abstractions and curves that are clearly empirically measurable, and to beware inventing new abstractions that we can’t see directly.
>
> The lesson of the Lucas critique pushes us to look for abstractions deep enough to describe growth curves that would be stable in the face of minds improving in speed, size, and software quality.
>
> You can see how this plays out in the tension between “Let’s predict computer speeds using this very well-measured curve for Moore’s Law over time—where the heck is all this other stuff coming from?” versus “But almost any reasonable causal model that describes the role of human thinking and engineering in producing better computer chips, ought to predict that Moore’s Law would speed up once computer-based AIs were carrying out all the research!”
>
>
This last sentence in particular points out another important feature of deep knowledge: **that it might be easier to say negative things (like “this can’t work”) than precise positive ones (like “this is the precise law”) because the negative thing can be something precluded by basically all coherent/reasonable causal explanations, while they still disagree on the precise details.**
Let’s dig deeper into that by asking more generally what deep knowledge is useful for.
How does deep knowledge work?
=============================
We now have a pointer (however handwavy) to what Yudkowsky means by deep knowledge. Yet we have very little details at this point about what this sort of thinking looks like. To improve that situation, the next two subsections explore two questions about the nature of deep knowledge: what is it for, and where does it come from?
The gist of this section is that:
* **Deep knowledge is primarily useful for saying what isn’t possible/what can’t work, especially in cases (like alignment) where there is very little data to draw from.** (The comparison Yudkowsky keeps coming back to is how thermodynamics allows you to rule out perpetual motion machines)
* **Deep knowledge takes the form of compressed constraints on solution/hypothesis space, which have weight behind them because they let us rederive most of our current knowledge from basic/compressed ideas, and finding such compression without a strong entanglement with reality is incredibly hard.** (Here an example used by Yudkowsky is the sort of thought experiments, conservation laws, and general ideas about what physical laws look like that guided Einstein in his path to Special and General Relativity)
What is deep knowledge useful for?
----------------------------------
The big difficulty that comes up again and again, in the FOOM debate with Hanson and the discussion with Ngo and Christiano, is that deep knowledge doesn’t always lead to quantitative predictions. That doesn’t mean that the deep knowledge isn’t quantitative itself (expected utility maximization is an example used by Yudkowsky that is completely formal and quantitative), but that the causal model only partially constrains what can happen. That is, it doesn’t constrain enough to make precise quantitative predictions.
Going back to his introduction of the Weak Outside view, recall that he wrote:
> But on problems that are new things under the Sun, where there’s a huge change of context and a structural change in underlying causal forces, the Outside View also fails—try to use it, and you’ll just get into arguments about what is the proper domain of “similar historical cases” or what conclusions can be drawn therefrom. In this case, the best we can do is use the Weak Inside View—visualizing the causal process—to produce **loose, qualitative conclusions about only those issues where there seems to be lopsided support**.
>
>
He follows up writing:
> So to me it seems “obvious” that my view of optimization is only strong enough to produce loose, qualitative conclusions, and that it can only be matched to its retrodiction of history, or wielded to produce future predictions, on the level of [qualitative physics](http://lesswrong.com/lw/ti/qualitative_strategies_of_friendliness/).
>
> “Things should speed up here,” I could maybe say. But not “The doubling time of this exponential should be cut in half.”
>
> I aspire to a deeper understanding of intelligence than this, mind you. But I’m not sure that even perfect Bayesian enlightenment would let me predict quantitatively how long it will take an AI to solve various problems in advance of it solving them. That might just rest on features of an unexplored solution space which I can’t guess in advance, even though I understand the process that searches.
>
>
Let’s summarize it that way: **deep knowledge only partially constrains the surface phenomena it describes (which translate into quantitative predictions) and it takes a lot of detailed deep knowledge (and often data) to refine it enough to pin down exactly the phenomenon and make precise quantitative predictions.** Alignment and AGI are fields where we don’t have that much deep knowledge, and the data is sparse, and thus we shouldn’t expect precise quantitative predictions anytime soon.
Of course, just because a prediction is qualitative doesn’t mean it comes from deep knowledge; all hand-waving isn’t wisdom. For a good criticism of shallow qualitative reasoning in alignment, let’s turn to [Qualitative Strategies of Friendliness](https://www.lesswrong.com/posts/AWaJvBMb9HGBwtNqd/qualitative-strategies-of-friendliness).
> These then are three problems, with strategies of Friendliness built upon qualitative reasoning that seems to imply a positive link to utility:
>
> The fragility of *normal* causal links when a superintelligence searches for more efficient paths through time;
>
> The superexponential vastness of conceptspace, and the unnaturalness of the boundaries of our desires;
>
> And all that would be lost, if success is less than complete, and a superintelligence squeezes the future without protecting everything of value in it.
>
>
The shallow qualitative reasoning criticized here relies too much on human common sense and superiority to the AI, when the situation to predict is about superintelligence/AGI. That is, this type of qualitative reasoning extrapolates across a change in causal generators.
On the other hand, Yudkowsky uses qualitative constraints to guide his criticism: he knows there’s a problem because the causal model forbids that kind of solution. Just like the laws of thermodynamics forbid perpetual motion machines.
**Deep qualitative reasoning starts from the underlying (potentially quantitative) causal explanations and mostly tells you what cannot work or what cannot be done. That is, deep qualitative reasoning points out that a whole swatch of search space is not going to yield anything.** A related point is that Yudkwosky rarely (AFAIK) makes predictions, even qualitative ones. He sometimes admits that he might do some, but it feels more like a compromise with the prediction-centered other person than what the deep knowledge is really for. Whereas he constantly points out how certain things cannot work.
(From [Qualitative Strategies of Friendliness](https://www.lesswrong.com/posts/AWaJvBMb9HGBwtNqd/qualitative-strategies-of-friendliness) (2008))
> In general, a lot of naive-FAI plans I see proposed, have the property that, if actually implemented, the strategy might appear to work while the AI was dumber-than-human, but would fail when the AI was smarter than human. The fully general reason for this is that while the AI is dumber-than-human, it may not yet be powerful enough to create the exceptional conditions that will break the neat little flowchart that would work if every link operated according to the 21st-century First-World modal event.
>
> This is why, when you encounter the AGI wannabe who hasn't planned out a whole technical approach to FAI, and confront them with the problem for the first time, and they say, "Oh, we'll test it to make sure that doesn't happen, and if any problem like that turns up we'll correct it, now let me get back to the part of the problem that really interests me," know then that this one has not yet leveled up high enough to have interesting opinions. It is a general point about failures in bad FAI strategies, that quite a few of them don't show up while the AI is in the infrahuman regime, and only show up once the strategy has gotten into the transhuman regime where it is too late to do anything about it.
>
>
(From [the second discussion with Ngo](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/hwxj4gieR7FWNwYfa#5_3__Epistemology__and_assessing_the_idea_of_expected_utility))
> I live in a world where I proceed with very strong confidence if I have a detailed formal theory that made detailed correct advance predictions, and otherwise go around saying, "well, it sure looks like X, but we can be on the lookout for a miracle too".
>
> If this was a matter of thermodynamics, I wouldn't even be talking like this, and we wouldn't even be having this debate.
>
> I'd just be saying, "Oh, that's a perpetual motion machine. You can't build one of those. Sorry." And that would be the end.
>
>
(From [Security Mindset and Ordinary Paranoia](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/) (2017))
> You need to master two ways of thinking, and there are a lot of people going around who have the first way of thinking but not the second. One way I’d describe the deeper skill is seeing a system’s security as resting on a story about why that system is safe. We want that safety-story to be as solid as possible. One of the implications is resting the story on as few assumptions as possible; as the saying goes, the only gear that never fails is one that has been designed out of the machine.
>
> [...]
>
> There’s something to be said for redundancy, and having fallbacks in case the unassailable wall falls; it can be wise to have additional lines of defense, so long as the added complexity does not make the larger system harder to understand or increase its vulnerable surfaces. But at the core you need a simple, solid story about why the system is secure, and a good security thinker will be trying to eliminate whole assumptions from that story and strengthening its core pillars, not only scurrying around parrying expected attacks and putting out risk-fires.
>
>
Or my reading of [the whole discussion with Christiano](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/vwLxd6hhFvPbvKmBH), which is that Christiano constantly tries to get Yudkowsky to make a prediction, but the latter focuses on aspects of Christiano’s model and scenario that don’t fit his (Yudkoswky’s) deep knowledge.
I especially like the perpetual motion machines analogy, because it drives home how just proposing a tweak/solution without understanding Yudkowsky’s deep knowledge (and what it would take for it to not apply) has almost no chance of convincing him. Because if someone said they built a perpetual motion machine without discussing how they bypass the laws of thermodynamics, every scientifically literate person would be doubtful. On the other hand, if they seemed to be grappling with thermodynamics and arguing for a plausible way of winning, you’d be significantly more interested.
(I feel like Bostrom’s [Orthogonality Thesis](https://www.nickbostrom.com/superintelligentwill.pdf) is a good example of such deep knowledge in alignment that most people get, and I already [argued elsewhere](https://www.alignmentforum.org/posts/FQqcejhNWGG8vHDch/on-solving-problems-before-they-appear-the-weird#Orthogonality_Thesis_and_Complexity_Barriers) that it serves mostly to show that you can’t solve alignment by just throwing competence at it — also note that Yudkowsky had the same pattern earlier/parallely, and is still using it)
To summarize: **the deep qualitative thinking that Yudkowsky points out by saying “deep knowledge” is the sort of thinking that cuts off a big chunk of possibility space, that is tells you the whole chunk cannot work. It also lets you judge from the way people propose a solution (whether they tackle the deep pattern or not) whether you should ascribe decent probability to them being right.**
A last note in this section: although deep knowledge primarily leads to negative conclusions, it can also lead to positive knowledge through a particularly Bayesian mechanism: **if the deep knowledge destroys every known hypothesis/proposal except one (or a small number of them), then that is strong evidence for the ones left.**
(This quote is more obscure than the others without the context. It’s from [Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf) (2013), and discusses the last step in a proposal for formalizing the sort of deep insight/pattern Yudkowksy leveraged during the FOOM debate. If you’re very confused, I feel like the most relevant part to my point is the bold last sentence.)
> If Step Three is done wisely—with the priors reflecting an appropriate breadth of uncertainty—and doesn’t entirely founder on the basic difficulties of formal statistical learning when data is scarce, then I would expect any such formalization to yield mostly qualitative yes-or-no answers about a rare handful of answerable questions, rather than yielding narrow credible intervals about exactly how the internal processes of the intelligence explosion will run. A handful of yeses and nos is about the level of advance prediction that I think a reasonably achievable grasp on the subject should allow—we shouldn’t know most things about intelligence explosions this far in advance of observing one—we should just have a few rare cases of questions that have highly probable if crude answers. I think that one such answer is “AI go FOOM? Yes! AI go FOOM!” but I make no pretense of being able to state that it will proceed at a rate of 120,000 nanofooms per second.
>
> Even at that level, covering the model space, producing a reasonable simplicity weighting, correctly hooking up historical experiences to allow falsification and updating, and getting back the rational predictions would be a rather ambitious endeavor that would be easy to get wrong. Nonetheless, I think that Step Three describes in principle what the ideal Bayesian answer would be, given our current collection of observations. **In other words, the reason I endorse an AI-go-FOOM answer is that I think that our historical experiences falsify most regular growth curves over cognitive investments that wouldn’t produce a FOOM.**
>
>
Where does deep knowledge come from?
------------------------------------
Now that we have a decent grounding of what Yudkowsky thinks deep knowledge is for, the biggest question is how to find it, and how to know you have found good deep knowledge. After all, maybe the causal models one assumes are just bad?
This is the biggest difficulty that Hanson, Ngo, and Christiano seemed to have with Yudkowsky’s position.
(Robin Hanson, from the comments after [Observing Optimization](https://intelligence.org/files/AIFoomDebate.pdf#chapter.13) in the FOOM Debate)
> If you can’t usefully connect your abstractions to the historical record, I sure hope you have some data you can connect them to. Otherwise I can’t imagine how you could have much confidence in them.
>
>
(Richard Ngo from [his second discussion](https://www.lesswrong.com/posts/hwxj4gieR7FWNwYfa/ngo-and-yudkowsky-on-ai-capability-gains-1?commentId=uoexqbjgaHzRRoqcE#5_3__Epistemology__and_assessing_the_idea_of_expected_utility) with Yudkowsky)
> Let me put it this way. There are certain traps that, historically, humans have been very liable to fall into. For example, seeing a theory, which seems to match so beautifully and elegantly the data which we've collected so far, it's very easy to dramatically overestimate how much that data favours that theory. Fortunately, science has a very powerful social technology for avoiding this (i.e. making falsifiable predictions) which seems like approximately the only reliable way to avoid it - and yet you don't seem concerned at all about the lack of application of this technology to expected utility theory.
>
>
(Paul Christiano from [his discussion](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/vwLxd6hhFvPbvKmBH) with Yudkowsky)
> OK, but you keep saying stuff about how people with my dumb views would be "caught flat-footed" by historical developments. Surely to be able to say something like that you need to be making some kind of prediction?
>
>
Note that these attitudes make sense. I especially like Ngo’s framing. Falsifiable predictions (even just postdictions) are the cornerstone of evaluation hypotheses in Science. It even feels to Ngo (as it felt to me) that Yudkowsky argued for that in the Sequences:
(Ngo from [his second discussion](https://www.lesswrong.com/posts/hwxj4gieR7FWNwYfa/ngo-and-yudkowsky-on-ai-capability-gains-1?commentId=uoexqbjgaHzRRoqcE#5_3__Epistemology__and_assessing_the_idea_of_expected_utility) with Yudkowsky)
> I'm familiar with your writings on this, which is why I find myself surprised here. I could understand a perspective of "yes, it's unfortunate that there are no advanced predictions, it's a significant weakness, I wish more people were doing this so we could better understand this vitally important theory". But that seems very different from your perspective here.
>
>
(And Yudkoswky himself from [Making Belief Pay Rent (In Anticipated Experience)](https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences))
> Above all, don’t ask what to believe—ask what to anticipate. Every question of belief should flow from a question of anticipation, and that question of anticipation should be the center of the inquiry. Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.
>
>
But the thing is… rereading part of the Sequences, I feel Yudkowsky was making points about deep knowledge all along? Even the quote I just used, which I interpreted in my rereading a couple of weeks ago as being about making predictions, now sounds like it’s about the sort of negative form of knowledge that forbids “perpetual motion machines”. Notably, Yudkowsky is very adamant that beliefs must tell you what **cannot** happen. Yet that doesn’t imply at all to make predictions of the form “this is how AGI will develop”, so much as saying things like “this approach to alignment cannot work”.
Also, should I point out that there’s [a whole sequence](https://www.lesswrong.com/s/fxynfGCSHpY4FmBZy) dedicated to the ways rationality can do better than science? (Thanks to Steve Byrnes for the pointer). I’m also sure I would find a lot of relevant stuff by rereading [Inadequate Equilibria](https://equilibriabook.com/) too, but if I wait to have reread everything by Yudkowsky before posting, I’ll be there a long time…
### My Initial Mistake and the Einstein Case
Let me jump here with my best guess of Yudkowsky’s justification of deep knowledge: **their ability to both**
* **strongly compress “what sort of hypothesis ends up being right” without having to add anything ad-hoc-y to get our theory and hypotheses back;**
* **and constrain anticipations in non-trivial ways.**
The thing is, I got it completely wrong initially. Reading [Einstein’s Arrogance](https://www.lesswrong.com/posts/MwQRucYo6BZZwjKE7/einstein-s-arrogance) (2007), an early Sequences post that is all about saying that Einstein had excellent reasons to believe General Relativity’s correctness before experimental verification (of advanced predictions), I thought that relativity was the deep knowledge and that Yudkowsky was pointing out how Einstein, having found an instance of true deep knowledge, could allow himself to be more confident than the social process of Science would permit in the absence of experimental justification.
[Einstein’s Speed](https://www.lesswrong.com/s/fxynfGCSHpY4FmBZy/p/mpaqTWGiLT7GA3w76) (2008) made it clear that I had been looking at the moon when I was supposed to see the pointing finger: **the deep knowledge Yudkowsky pointed out was not relativity itself, but what let Einstein single it out by a lot of armchair reasoning and better use of what was already known.**
> In our world, Einstein didn't even *use* the perihelion precession of Mercury, except for verification of his answer produced by other means. Einstein sat down in his armchair, and thought about how *he* would have designed the universe, to look the way he thought a universe should look—for example, that you shouldn't ought to be able to distinguish yourself accelerating in one direction, from the rest of the universe accelerating in the other direction.
>
> And Einstein executed the whole long (multi-year!) chain of armchair reasoning, without making any mistakes that would have required further experimental evidence to pull him back on track.
>
>
More generally, I interpret the whole [Science and Rationality](https://www.lesswrong.com/s/fxynfGCSHpY4FmBZy) Sequence as explaining how deep knowledge can let rationalists do something that isn’t in the purview of traditional Science: estimate which hypotheses make sense before the experimental predictions and evidence come in.
(From [Faster Than Science](https://www.lesswrong.com/s/fxynfGCSHpY4FmBZy/p/xTyuQ3cgsPjifr7oj) (2008))
> This doesn't mean that the process of deciding which ideas to test is *unimportant* to Science. It means that Science doesn't *specify* it.
>
> [...]
>
> In practice, there are some scientific queries with a large enough answer space, that picking models at random to test, it would take zillions of years to hit on a model that made good predictions—like getting monkeys to type Shakespeare.
>
> At the *frontier* of science—the boundary between ignorance and knowledge, where science *advances*—the process relies on at least some individual scientists (or working groups) seeing things that are not yet confirmed by Science. That's how they know which hypotheses to test, in advance of the test itself.
>
> If you take your Bayesian goggles off, you can say, "Well, they don't have to know, they just have to guess." If you put your Bayesian goggles back on, you realize that "guessing" with 10% probability requires nearly as much epistemic work to have been successfully performed, behind the scenes, as "guessing" with 80% probability—at least for large answer spaces.
>
> The scientist may not *know* he has done this epistemic work successfully, in advance of the experiment; but he must, in fact, have done it successfully! Otherwise he will not even *think* of the correct hypothesis. In large answer spaces, anyway.
>
>
There’s a subtlety that is easy to miss: Yudkowsky doesn’t say that specifying an hypothesis in a large answer space makes it high evidence. After all, you can just generate any random guess. **What he’s pointing at is that to ascribe a decent amount of probability to a specific hypothesis in a large space through updating on evidence, you need to cut a whole swath of the space to redirect the probability on your hypothesis.** And that from a purely computational perspective, this implies more work on whittling down hypotheses than to make the favored hypothesis certain enough through experimental verification.
His claim then seems that Einstein, and other scientists who tended to “guess right” at what would be later experimentally confirmed, couldn’t have been just lucky — they must have found ways of whittling down the vastness of hypothesis space, so they had any chance of proposing something that was potentially right.
Yudkowsky gives some pointers to what he thinks Einstein was doing right.
(From [Einstein’s Speed](https://www.lesswrong.com/s/fxynfGCSHpY4FmBZy/p/mpaqTWGiLT7GA3w76) (2008))
> Rather than observe the planets, and infer what laws might cover their gravitation, Einstein was observing the other laws of physics, and inferring what new law might follow the same pattern. Einstein wasn't finding an equation that covered the motion of gravitational bodies. Einstein was finding a character-of-physical-law that covered previously observed equations, and that he could crank to predict the next equation that would be observed.
>
> [Nobody knows](http://www.overcomingbias.com/2007/10/no-one-knows-wh.html) where the laws of physics come from, but Einstein's success with General Relativity shows that their common character is strong enough to predict the correct form of one law from having observed other laws, without necessarily needing to observe the precise effects of the law.
>
> (In a general sense, of course, Einstein did know by observation that things fell down; but he did not get GR by backward inference from Mercury's exact perihelion advance.)
>
>
So in that interpretation, Einstein learned from previous physics and from thought experiments how to cut away the parts of the hypothesis space that didn’t sound like they could make good physical laws, until he was left with a small enough subspace that he could find the right fit by hand (even if that took him 10 years)
> So, from a Bayesian perspective, what Einstein did is still induction, and still covered by the notion of a simple prior (Occam prior) that gets updated by new evidence. It's just the prior was over the *possible characters of physical law,* and observing other physical laws let Einstein update his model of *the character of physical law*, which he then used to predict a particular law of gravitation.
>
> If you didn't have the concept of a "character of physical law", what Einstein did would look like magic—plucking the correct model of gravitation out of the space of all possible equations, with vastly insufficient evidence. But Einstein, by looking at *other* laws, cut down the space of possibilities for the *next* law. He learned the alphabet in which physics was written, constraints to govern his answer. Not magic, but reasoning on a higher level, across a wider domain, than what a naive reasoner might conceive to be the "model space" of only this one law.
>
>
In summary, **deep knowledge doesn’t come in the form of a particularly neat hypothesis or compression; it is the engine of compression itself. Deep knowledge compresses “what sort of hypothesis tends to be correct”, such that it can be applied to the search of a correct hypothesis at the object level.** That also cements the idea that deep knowledge gives constraints, not predictions: you don’t expect to be able to have such a strong criterion for correct hypothesis that given a massive hypothesis space, you can pinpoint the correct one.
Here it is good to generalize my previous mistake; recall that I took General Relativity for the deep knowledge, when it was actually the sort of constraints on physical laws that Einstein used for even finding General Relativity. Why? I can almost hear Yudkowsky answering in my head: because General Relativity is the part accepted and acknowledged by Science. I don’t think it’s the only reason, but there’s an element of truth: **I privileged the “proper” theory with experimental validation over the more vague principles and concepts that lead to it.**
**A similar mistake is to believe the deep knowledge is the theory when it actually is what the theory and the experiments unearthed.** This is how I understand Yudkowsky’s use of thermodynamics and evolutionary biology: he points out at the deep knowledge that led and was revealed by the work on these theories, more than at the theories themselves.
### Compression and Fountains of Knowledge
We still don’t have a good way of finding and checking deep knowledge, though. Not any constraint on hypothesis space is deep knowledge, or even knowledge at all. The obvious idea is to have a reason for that constraint. And the reason Yudkowsky goes for almost every time is compression. Not a compressed description, like Moore’s law; nor a “compression” that is as complex as the pattern of hypothesis it’s trying to capture. **Compression in the sense that you get a simpler constraint that can get you most of the way to regenerate the knowledge you’re starting from.**
This view of the importance of compression is everywhere in the Sequences. A great example is [Truly Part of You](https://www.lesswrong.com/posts/fg9fXrHpeaDD6pEPL/truly-part-of-you), which asks what knowledge you could rederive if it was deleted from your mind. If you have a deep understanding of the subject, and you keep recursively asking how a piece of knowledge could be rederived and then how “what’s needed for the derivation” can be rederived, Yudkwosky argues that you will reach “fountains of knowledge”. Or in the terminology of this post, deep knowledge.
> Almost as soon as I started reading about AI—even before I read McDermott—I realized it would be *a really good idea* to always ask myself: “How would I regenerate this knowledge if it were deleted from my mind?”
>
> The deeper the deletion, the stricter the test. If all proofs of the Pythagorean Theorem were deleted from my mind, could I re-prove it? I think so. If all knowledge of the Pythagorean Theorem were deleted from my mind, would I notice the Pythagorean Theorem to re-prove? That’s harder to boast, without putting it to the test; but if you handed me a right triangle with sides of length 3 and 4, and told me that the length of the hypotenuse was calculable, I think I would be able to calculate it, if I still knew all the rest of my math.
>
> What about the notion of *mathematical proof*? If no one had ever told it to me, would I be able to reinvent *that* on the basis of other beliefs I possess? There was a time when humanity did not have such a concept. Someone must have invented it. What was it that they noticed? Would I notice if I saw something equally novel and equally important? Would I be able to think that far outside the box?
>
> How much of your knowledge could you regenerate? From how deep a deletion? It’s not just a test to cast out insufficiently connected beliefs. It’s a way of absorbing *a fountain of knowledge, not just one fact*.
>
>
What do these fountains look like? They’re not the fundamental theories themselves, but instead their underlying principles. Stuff like [the principle of least action](https://en.wikipedia.org/wiki/Stationary-action_principle), [Noether’s theorem](https://en.wikipedia.org/wiki/Noether%27s_theorem) and the principles underlying [Statistical Mechanics](https://en.wikipedia.org/wiki/Statistical_mechanics) (don’t know enough about it to name them). **They are the crystallized insights which constrain enough the search space that we can rederive what we knew from them.**
(Feynman might have agreed, given that he chose the atomic hypothesis/principle, “all things are made of atoms *—* little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another” was the one sentence he salvage for further generations in case of a cataclysm.)
Here I hear a voice in my mind saying “What does simple mean? Shouldn’t it be better defined?” Yet this doesn’t feel like a strong objection. Simple is tricky to define intensively, but scientists and mathematicians tend to be pretty good at spotting it, as long as they don’t fall for [Mysterious Answers](https://www.lesswrong.com/posts/6i3zToomS86oj9bS6/mysterious-answers-to-mysterious-questions). And most of the checks on deep knowledge seem to be in their ability to rederive the known correct hypotheses without adding stuff during the derivation.
A final point before closing this section: Yudkowsky writes that the same sort of evidence can be gathered for more complex arguments if they can be summarized by simple arguments that still get most of the current data right. **My understanding here is that he’s pointing at the wiggle room of deep knowledge, that is at the non-relevant ways in which it can be off sometimes.** This is important because asking for that wiggle room can sound like ad-hoc adaptation of the pattern, breaking the compression assumption.
(From [Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf) (2013))
> In my case, I think how much I trusted a Step Three model would depend a lot on how well its arguments simplified, while still yielding the same net predictions and managing not to be falsified by history. I trust complicated arguments much more when they have simple versions that give mostly the same answers; I would trust my arguments about growth curves less if there weren’t also the simpler version, “Smart minds build even smarter minds.” If the model told me something I hadn’t expected, but I could translate the same argument back into simpler language and the model produced similar results even when given a few cross-validational shoves, I’d probably believe it.
>
>
Conclusion
==========
Based on my reading of his position, **Yudkowsky sees deep knowledge as highly compressed causal explanations of “what sort of hypothesis ends up being right”. The compression means that we can rederive the successful hypotheses and theories from the causal explanation. Finally, such deep knowledge translates into partial constraints on hypothesis space, which focus the search by pointing out what cannot work. This in turn means that deep knowledge is far better at saying what won’t work than at precisely predicting the correct hypothesis.**
I also want to point out something that became clearer and clearer in reading old posts: Yudkowsky is nothing if not coherent. You might not like his tone in the recent discussions, but if someone has been saying the same thing for 13 years, nobody seems to get it, and their model predicts that this will lead to the end of the world, maybe they can get some slack for talking smack. |
1d8f7323-8b5f-4478-aa39-e4f0575d5266 | trentmkelly/LessWrong-43k | LessWrong | The Cage of the Language
> Wittgenstein: Die Grenzen meiner Sprache bedeuten die Grenzen meiner Welt.
>
> Passer-by: Where's that guy from? Is he Dutch?
I've never thought much of the Wittgenstein's argument. First, there's non-verbal thinking that can be used to escape the limits of the language. Second, once the knowledge is gained by non-verbal thinking, it can be used to extend the language itself and thus extend the limits of everyone's world.
However, I've been writing newspaper articles and a blog in Slovak lately and the experience made me think of the Wittgenstein quote.
Namely, I had hard time expressing some ideas and making some arguments. Slovak, you see, is a language of 5 million speakers, with ~200 years of history as a literary language. A lot of terms that one can use in English simply do not exist.
I can't say, for example:
> I support policy X because it enables economies of scale and I am against policy Y because it introduces a single point of failure.
There are no widely accepted equivalents to "economies of scale" and "single point of failure".
I can say:
> Som za X, pretože umožňuje úspory z rozsahu a som proti Y, pretože zavádza jediný bod zlyhania.
It's hard to relate how that sounds. Enough to say that "úspory z rozsahu" doesn't literally translate to "economies of scale" but rather to "savings from the extent". One gets an impression of a crackpot speaking gibberish, making a crackpot argument.
It should be said that this is not a problem in one-to-one conversations. One can use the English term if the other is aware of it, or, if not, he can clarify the meaning of the term in advance. Where the problem hits is the public discussion. There you can't expect the prior knowledge or afford extensive explanations. Attention span in public discussion is, after all, very limited. If you waste your one minute of public attention by explaining what a single point of failure is, you've already lost.
It can be argued that the problem lies not with the language, |
5dfb2990-e903-4ed2-b33d-8e829131ee6d | trentmkelly/LessWrong-43k | LessWrong | Slack Chat
There is now a MIRIx slack chat. It's not really active yet, but I figure it could be a good place for discussions which aren't ready to be posts here yet. Please email me (my first and last name with nothing between @gmail.com) if you'd like an invite! Also, it would be helpful if you'd let other people know about it. |
f9c95e77-5e53-42cb-a7f1-11679bcaf2af | trentmkelly/LessWrong-43k | LessWrong | Things I carry almost every day, as of late December 2022
Here we see things I carry in my pockets regularly.
To the left is my phone case, with two bandaids in it. This is a stand-in for the phone (a Pixel 6) that the phone case usually encases (I was busy taking photos with it). I keep two bandaids between the case and the phone in case I need one - which does sometimes happen. I find my phone useful for the normal things one uses a phone for. As of this year, I’ve started using it in lieu of a credit card, which feels very cool and 21st century.
At the top right is a pocket constitution made by Legal Impact for Chickens. I received this at an Effective Altruism Global conference, during the career fair. What actually happened was that someone came up to the booth I was at holding the pocket constitution, I noted that it looked cool, and they were kind enough to offer it to me. Unfortunately, I have never knowingly met anybody from Legal Impact for Chickens. I have not actually used this pocket constitution, but I carry it anyway in my winter jacket’s inner breast pocket since (a) it fits very unobtrusively and (b) it seems cool to carry around a pocket constitution.
At the bottom right is my wallet, a Bellroy Slim Sleeve. I very much like the material it is made of: called “baida nylon”, it is pleasingly canvas-like. Unfortunately I do not know how to faithfully convey this to you via the internet, but you will have to take my word that this is one of my two favourite purchases in 2022, ranked by my tactile pleasure in interacting with it.
Here we see the wallet open. On the left is my student ID card (with my COVID vaccine card tucked behind it out of sight), with a collection of folded bills tucked behind it. On the right is my Alcor membership card. Behind the right card is a tab that can be pulled to reveal more cards.
As you can see, there are several cards stowed away: on top are emergency medical instructions if I am found dead, to prepare my head for cryogenic storage, and below that is:
* my WeWork c |
2f282baf-f8d0-49da-9d48-524fa0f765cb | StampyAI/alignment-research-dataset/special_docs | Other | Reward engineering
This post gestures at a handful of research questions with a loose thematic connection.
The idea
--------
Consider the following frameworks:
* [Temporal difference learning](https://en.wikipedia.org/wiki/Temporal_difference_learning): learn to predict the future by predicting tomorrow’s prediction.
* [Generative adversarial models](http://arxiv.org/abs/1406.2661): learn to sample from a distribution by fooling a distinguisher.
* [Predictability minimization](http://cognet.mit.edu/system/cogfiles/journalpdfs/neco.1992.4.6.863.pdf): learn to represent data efficiently by making each part of the representation unpredictable given the others.
These algorithms replace a hard-to-optimize objective with a nicer proxy. These proxies are themselves defined by machine learning systems rather than being specified explicitly. I think this is a really nice paradigm, and my guess is that it will become more important if large-scale supervised and reinforcement learning continues to be a dominant methodology.
Following Daniel Dewey, I’ll call this flavor of research “[reward engineering](http://www.danieldewey.net/reward-engineering-principle.pdf).” In terms of tools and techniques I don’t know if this is a really a distinct category of research; but I do think that it might be a useful heuristic about where to look for problems relevant to AI control.
Relevance to AI control
-----------------------
Though reward engineering seems very broadly useful in AI, I expect it to be especially important for AI control:
* A key goal of AI control is using AI systems to optimize objectives which are defined implicitly or based on expensive human feedback. We will probably need to use complex proxies for this feedback if we want to apply reinforcement learning.
* Reward engineering seems relatively robust to changes in AI techniques. Uncertainty about future techniques is often a major obstacle to doing meaningful work on AI control in advance (even if only a little bit in advance).
Applications
------------
I see a few especially interesting opportunities for reward engineering for AI control:
* [Making efficient use of human feedback](https://medium.com/ai-control/efficient-feedback-a347748b1557#.wp2zmi2oj). Here we have direct access to the objective we really care about, and it is just too expensive to frequently evaluate. (*Simple proposal*: train a learner to predict human judgments, then use those predicted judgments in place of real feedback.)
* [Combining the benefits of imitation and approval-direction](https://medium.com/ai-control/mimicry-maximization-and-meeting-halfway-c149dd23fc17#.cz2phxdp7). I suspect it is possible to avoid perverse instantiation concerns while also providing a flexible training signal. (*Simple proposal*: use the adversarial generative models framework, and have the operator accomplish the desired task in a way optimized to fool the distinguisher.)
* [Increasing robustness](https://medium.com/ai-control/synthesizing-training-data-f92a637dc1b4#.ggps9emnc). If our ML systems are sufficiently sophisticated to foresee possible problems, then we might be able to leverage those predictions to avoid the problems altogether. (*Simple proposal*: train a generative model to produce data from the test distribution, with an extra reward for samples that “trip up” the current model.)
In each case I’ve made a preliminary simple proposal, but I think it is quite possible that a clever trick could make the problem look radically more tractable. A search for clever tricks is likely to come up empty, but hits could be very valuable (and would be good candidates for things to experiment with).
Beyond these semi-specific applications, I have a more general intuition that thinking about this aspect of the AI control problem may turn up interesting further directions. |
ee5ddb0a-8ef2-4a6d-8528-b130d5275e09 | trentmkelly/LessWrong-43k | LessWrong | Event: Effective Altruism Global X Berlin 2017
This year's EAGxBerlin takes place on the 14th and 15th of October at the Berlin Institute of Technology and is organized by the Effective Altruism Foundation. The conference will convene roughly 300 people – academics, professionals, and students alike – to explore the most effective and evidence-based ways to improve the world, based on the philosophy and global movement of effective altruism.
For more information, please see our website and facebook event. Tickets are available on Tito. |
0911dce7-81cb-4925-9516-cb73fd57ac64 | StampyAI/alignment-research-dataset/arbital | Arbital | Associativity: Intuition
Associative functions can be interpreted as families of functions that reduce lists down to a single output by combining adjacent elements in any order. Alternatively, associativity can be seen as a generalization of "listyness," which captures and generalizes the "it doesn't matter whether you added [b](https://arbital.com/p/a,) to c or a to [c](https://arbital.com/p/b,), the result is [b, c](https://arbital.com/p/a,) regardless" aspect of lists.
There are many different ways for a function to be associative, so it is difficult to provide a single litmus test for looking at a function and telling whether it associates (aside from just checking the associative axiom directly). However, a few heuristics can be used to make a good guess.
[https://arbital.com/p/toc:](https://arbital.com/p/toc:)
# Associative operators as natural functions on lists
The [generalized associative law](https://arbital.com/p/3ms) says that an [associative](https://arbital.com/p/3h4) [function](https://arbital.com/p/3jy) $f : X \times X \to X$ gives rise to a method for combining any non-empty list of $X$ elements into a single output, where the order in which adjacent elements are combined doesn't affect the result. We can flip that result around, and interpret associative operators as the pairwise versions of a certain class of "natural" functions for combining the elements of a list.
On this interpretation, we start by noting that some methods for reducing a list down to a single element can be broken down into pairwise combinations of adjacent elements, while others can't. For example, when we're trying to compute $3 + 4 + 5 + 6,$ we can pick any two adjacent elements and start by combining those using the binary version of $+$. But when we're trying to compute `adjacent_ones([0, 0, 1, 1, 0](https://arbital.com/p/0,))` to check whether the list has any two adjacent ones, we're going to run into trouble if we only look for adjacent ones in the pairs (0, 0), (0, 1), and (1, 0).
The lists that can be reduced by pairwise combination of adjacent pairs have a nice locality property; the result can be computed only by looking at adjacent elements (without worrying about the global structure). Locality is a common idiom in physics and mathematics, so we might start by asking what sort of functions on lists have this locality property. The answer is "any list that can be broken down into pairwise combinations of elements such that the order doesn't matter." If we formalize that notion, we get the result that any function on lists with this locality property corresponds (in a one-to-one fashion) to an associative operation. Thus, we can view associativity as the mathematical formalization of this nice "locality" property on lists.
Empirically, this locality property turns out to be quite useful for math, physics, and in computer programming, as evidenced by the commonality of associative operators. See, for example, [https://arbital.com/p/3g8](https://arbital.com/p/3g8), or the pages on [semigroups](https://arbital.com/p/algebraic_semigroup) and [monoids](https://arbital.com/p/3h3).
# Associativity as a generalization of "list"
The above interpretation gives primacy to lists, and interprets associative operators in terms of natural functions on lists. We can invert that argument by treating associativity as a _generalization_ of what it means for something to act "list-like."
A list $[b, c, d, \ldots](https://arbital.com/p/a,)$ is a set of elements that have been combined by some "combiner" function, where the order of the elements matters, but the order _in which they were combined_ does not matter. For example, if we combine $a$ with $b$ (forming $[b](https://arbital.com/p/a,)$) and then combine that with $c$, then we get the same list as if we combine $b$ and $c$ into $[c](https://arbital.com/p/b,)$ first, and then combine $a$ with that.
The very fact that we can unambiguously say "the list $[b, c](https://arbital.com/p/a,)$" without worrying about the order that the elements were combined in means that lists are built out of an associative "combination" operator. On this interpretation, associativity is capturing part of the essence of listyness, and associativity _in general_ generalizes this notion. For example, associative operators are allowed to be a little forgetful about what exact elements you combined (e.g., 3 + 4 = 2 + 5) so long as you retain the "it doesn't matter what order you combine the things in" property. In other words, we can view associativity as "part of what it means to be list-like."
(One particularly important property of lists — namely, that they can be empty — is not captured by associativity alone. Associative operators on sets that have an element that acts like an "empty list" are called "monoids." For more on the idea of generalizing the notion of "list", refer to the page on [monoids](https://arbital.com/p/3h3).)
# Associative mechanisms
The above two interpretations give an intuition for what it _means_ that a function is associative. This still leaves open the question of _how_ a function can be associative. Imagine $f : X \times X \to Y$ as a [physical mechanism](https://arbital.com/p/3mb) of wheels and gears. Someone says "$f$ is associative." What does that _mean,_ in terms of the function's physical mechanisms? What should we expect to see when we pop the function open, given the knowledge that it "is associative"?
Recall that associativity says that the two methods for combining two instantiations of the function yield the same output:

Thus, the ultimate physical test of associativity is hooking up two instantiations of $f$ as in the left diagram, and then checking whether dragging the mechanisms of the lower-right instantiation above the mechanisms of the upper-left instantiation (thereby reconfiguring the system according to the diagram on the right) causes the behavior of the overall system to change. What happens when the right-hand-side instantiation is given access to the middle input first versus second? Does that affect the behavior at all? If not, $f$ is associative.
This is not always an easy property to check by looking at the mechanisms of $f$ alone, and sometimes functions that appear non-associative (at first glance) turn out to be associative by apparent coincidence. In other words, there are many different ways for a function to be associative, so it is difficult to give a single simple criterion for determining associativity by looking at the internals of the function. However, we can use a few heuristics that help one distinguish associative functions from non-associative ones.
## Heuristic: Can two copies of the function operate in parallel?
$f$ is associative if, when using two copies of $f$ to reduce three inputs to one output, then changing whether the right-hand copy gets access to the middle tape first vs second does not affect the output. One heuristic for checking whether this is the case is to check whether both copies of $f$ can make use of the middle input at the same time, without getting in each other's way. If so, $f$ is likely associative.
For example, consider an implementation of $+$ that gets piles of poker chips as input (where a pile of $n$ chips represents the number $n$) and computes the output by simply sweeping all the poker chips from its input belts onto its output belt. To make a function that adds three piles of chips together, you could set up two two-pile adders in the configuration of the diagram on the left, but you could also have two two-tape sweepers operating on three tapes in parallel, such that they both sweep the middle tape. This parallelization wouldn't change the output, and thus $+$ is associative.
By contrast, consider a mechanism that takes wooden blocks as input, and glues them together, and nails silver-colored caps on either end of the glued block. For example, if you put in a red block on the left and a blue block on the right, you get a silver-red-blue-silver block in the output. You could set up two copies of these like the diagram on the left, but if you tired to parallelize them, you'd get into trouble — each mechanism would be trying to nail one of its caps into the place that the other mechanism was attempting to apply glue. And indeed, this mechanism is non-associative.
This heuristic is imperfect. Some mechanisms that seem difficult to parallelize are still associative. For example, consider the multiplier mechanism, which takes two poker piles as input and puts a copy of the left pile onto the output tape for every chip in the right pile. It would be difficult to parallelize two copies of this function: One would be trying to count the chips in the middle pile while the other was attempting to copy the chips in the middle pile, and the result might not be pretty. However, multiplication _is_ associative, because a pile of $x$-many copies of a ($y$ copies of $z$)-many poker chips has the same number of chips as a pile of ($x$ copies $y$)-many copies of $z$-many poker chips.
## Heuristic: Does the output interpretation match both input interpretations?
Another (vaguer) heuristic is to ask whether the output of the function should _actually_ be treated as the same sort of thing as the input to the function. For example, recall the `adjacent_ones` function from above, which checks a list for adjacent ones, and returns `1` if it finds some and `0` otherwise. The inputs to `adjacent_ones` are 0 and 1, and the output is 0 or 1, but the output interpretation doesn't quite match the input interpretation: Intuitively, the output is _actually_ intended to mean "yes there were adjacent ones" or "not here weren't adjacent ones", and so applying `adjacent_ones` to the output of `adjacent_ones` is possible but ill-advised. If there is a mismatch between the output interpretation and at least one of the input interpretations, then the function probably isn't associative.
For example, imagine a person who is playing a game that works as follows. The board has three positions: red, green, and blue. The player's objective is to complete as many clockwise red-green-blue cycles as possible, without ever backtracking in the counter-clockwise direction.

Each turn, the game offers them a choice of one of the three spaces, and they get to choose whether or not to travel to that square or stay where they are. Clearly, their preferences depend on where they currently are: If they're on "red", "green" is a good move and "blue" is a bad one; but if they're on "blue" then choosing "green" is ill-advised. We can consider a binary function $f$ which takes their current position on the left and the proposed position on the right, and returns the position that the player prefers. For example, $f(red,blue)=red,$ $f(red,green)=green,$ $f(blue,blue)=blue,$ $f(blue,green=blue).$ In this case, the interpretation of the left input is a "player position," the interpretation of the right input is an "offered move", and the interpretation of the output is the resulting "player position." The output interpretation mismatches one of the input interpretations, which implies that $f$ probably isn't associative, and indeed it is not: $f(f(red, green), blue))=blue,$ whereas $f(red, f(green, blue))=red.$ The former expression can be interpreted as "where the player would be if they started at red, and were then offered green, and were then offered blue." The latter expression doesn't have a great interpretation, because it's feeding the output of $f(green, blue)$ (a player position) in as an "offered move."
If the interpretation of the output (in this case, "player position") mismatches the interpretations of at least one of the inputs, then the function likely isn't associative. However, this heuristic is also imperfect: The most obvious interpretations of the inputs and outputs to the subtraction function are "they're all just numbers," and subtraction still fails to associate.
## Further discussion
There are many different ways for a function to be associative, so it is difficult to give a simple litmus test. The ultimate test is always to imagine using two copies of $f$ to combine three outputs into one, and check whether the result changes depending on whether the left-hand copy of $f$ gets to run first (in which case it gets to access the second input belt at the source) or second (in which case its right-hand input is the right-hand copy's output). For examples of functions that pass or fail this ultimate test, refer to the [examples page](https://arbital.com/p/3mt). |
5aa58ea7-2764-4098-a7f4-aa839556d3ea | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "Related to: Cynicism in Ev Psych and Econ
In Finding the Source, a commenter says: I have begun wondering whether claiming to be victim of 'akrasia' might just be a way of admitting that your real preferences, as revealed in your actions, don't match the preferences you want to signal (believing what you want to signal, even if untrue, makes the signals more effective). I think I've seen Robin put forth something like this argument [EDIT: Something related, but very different], and TGGP points out that Brian Caplan explicitly believes pretty much the same thing1: I've previously argued that much - perhaps most - talk about "self-control" problems reflects social desirability bias rather than genuine inner conflict.Part of the reason why people who spend a lot of time and money on socially disapproved behaviors say they "want to change" is that that's what they're supposed to say.Think of it this way: A guy loses his wife and kids because he's a drunk. Suppose he sincerely prefers alcohol to his wife and kids. He still probably won't admit it, because people judge a sinner even more harshly if he is unrepentent. The drunk who says "I was such a fool!" gets some pity; the drunk who says "I like Jack Daniels better than my wife and kids" gets horrified looks. And either way, he can keep drinking. I'll call this the Cynic's Theory of Akrasia, as opposed to the Naive Theory. I used to think it was plausible. Now that I think about it a little more, I find it meaningless. Here's what changed my mind.
What part of the mind, exactly, prefers a socially unacceptable activity (like drinking whiskey or browsing Reddit) to an acceptable activity (like having a wife and kids, or studying)? The conscious mind? As Bill said in his comment, it doesn't seem like it works this way. I've had akrasia myself, and I never consciously think "Wow, I really like browsing Reddit...but I'll trick everyone else into thinking I'd rather be studying so I get more respect. Ha ha! The fools will never see it coming!"No, my conscious mind fully believes that I would rather be studying2. And this even gets reflected in my actions. I've tried anti-procrastination techniques, both successfully and unsuccessfully, without ever telling them to another living soul. People trying to diet don't take out the cupcakes as soon as no one else is looking (or, if they do, they feel guilty about it).This is as it should be. It is a classic finding in evolutionary psychology: the person who wants to fool others begins by fooling themselves. Some people even call the conscious mind the "public relations officer" of the brain, and argue that its entire point is to sit around and get fooled by everything we want to signal. As Bill said, "believing the signals, even if untrue, makes the signals more effective." Now we have enough information to see why the Cynic's Theory is equivalent to the Naive Theory.The Naive Theory says that you really want to stop drinking, but some force from your unconscious mind is hijacking your actions. The Cynic's Theory says that you really want to keep drinking, but your conscious mind is hijacking your thoughts and making you think otherwise.In both cases, the conscious mind determines the signal and the unconscious mind determines the action. The only difference is which preference we define as "real" and worthy of sympathy. In the Naive Theory, we sympathize with the conscious mind, and the problem is the unconscious mind keeps committing contradictory actions. In the Cynic's Theory, we symapthize with the unconscious mind, and the problem is the conscious mind keeps sending out contradictory signals. The Naive say: find some way to make the unconscious mind stop hijacking actions! The Cynic says: find some way to make the conscious mind stop sending false signals!So why prefer one theory over the other? Well, I'm not surprised that it's mostly economists who support the Cynic's Theory. Economists are understandably interested in revealed preferences3, because revealed preferences are revealed by economic transactions and are the ones that determine the economy. It's perfectly reasonable for an economist to care only about those and dimiss any other kind of preference as a red herring that has to be removed before economic calculations can be done. Someone like a philosopher, who is more interested in thought and the mind, might be more susceptible to the identify-with-conscious-thought Naive Theory.
But notice how the theory you choose also has serious political implications4. Consider how each of the two ways of looking at the problem would treat this example: A wealthy liberal is a member of many environmental organizations, and wants taxes to go up to pay for better conservation programs. However, she can't bring herself to give up her gas-guzzling SUV, and is usually too lazy to sort all her trash for recycling. I myself throw my support squarely behind the Naive Theory. Conscious minds are potentially rational5, informed by morality, and qualia-laden. Unconscious minds aren't, so who cares what they think? Footnotes:
1: Caplan says that the lack of interest in Stickk offers support for the Cynic's Theory, but I don't see why it should, unless we believe the mental balance of power should be different when deciding whether to use Stickk than when deciding whether to do anything else.
Caplan also suggests in another article that he has never experienced procrastination as akrasia. Although I find this surprising, I don't find it absolutely impossible to believe. His mind may either be exceptionally well-integrated, or it may send signals differently. It seems within the range of normal human mental variation.
2: Of course, I could be lying here, to signal to you that I have socially acceptable beliefs. I suppose I can only make my point if you often have the same experience, or if you've caught someone else fighting akrasia when they didn't know you were there.
3: Even the term "revealed preferences" imports this value system, as if the act of buying something is a revelation that drives away the mist of the false consciously believed preferences.
4: For a real-world example of a politically-charged conflict surrounding the question of whether we should judge on conscious or unconscious beliefs, see Robin's post Redistribution Isn't About Sympathy and my reply.
5: Differences between the conscious and unconscious mind should usually correspond to differences between the goals of a person and the "goals" of the genome, or else between subgoals important today and subgoals important in the EEA."
The above is a chunk of text from a post created by a user on the website LessWrong.
Can you explain what this text is saying as if I've never finished my first 2 years of college?
This text is a discussion about the concept of 'akrasia', which is when someone acts against their better judgment. The author discusses two theories to explain this behavior: the Naive Theory and the Cynic's Theory.
The Naive Theory suggests that a person genuinely wants to stop a harmful behavior (like drinking alcohol excessively), but an unconscious part of their mind overrules this and makes them continue the behavior.
The Cynic's Theory, on the other hand, suggests that a person actually prefers the harmful behavior, but their conscious mind tricks them into thinking they want to stop. This is to maintain a socially acceptable image. For example, a person might say they want to quit drinking, not because they truly want to, but because it's what society expects them to say.
The author argues that both theories are essentially the same. They both suggest that there's a conflict between the conscious mind (what we're aware of and can control) and the unconscious mind (the part of our mind that we're not aware of and can't control). The only difference is which part of the mind we sympathize with and consider 'real'.
The author also discusses how these theories can have political implications. For example, a person might publicly support environmental causes but personally engage in environmentally harmful behaviors. Depending on which theory you subscribe to, you might judge this person differently.
In the end, the author supports the Naive Theory because the conscious mind is rational and influenced by morality, while the unconscious mind is not. They believe that we should care more about what our conscious mind thinks, rather than our unconscious mind.
Can you further explain how the Cynic's Theory and Naive Theory are essentially the same, despite appearing to be opposite in nature?
Certainly. The Cynic's Theory and the Naive Theory might seem opposite at first glance because they focus on different parts of the mind: the conscious mind (what we're aware of and can control) and the unconscious mind (what we're not aware of and can't control). However, they're essentially the same because they both recognize a conflict between these two parts of the mind.
In the Naive Theory, the conscious mind genuinely wants to stop a harmful behavior, but the unconscious mind overrules this and makes the person continue the behavior. In the Cynic's Theory, the unconscious mind prefers the harmful behavior, but the conscious mind tricks the person into thinking they want to stop, to maintain a socially acceptable image.
So, in both theories, there's a conflict between what the conscious mind wants and what the unconscious mind wants. The only difference is which part of the mind we sympathize with and consider 'real'. In the Naive Theory, we sympathize with the conscious mind and see the unconscious mind as the problem. In the Cynic's Theory, we sympathize with the unconscious mind and see the conscious mind as the problem.
So, they're essentially the same in that they both recognize a conflict between the conscious and unconscious mind. They just differ in which part of the mind they consider to be the 'true' or 'real' preference.
Reflecting on the theories of akrasia, what could be potential drawbacks or limitations of subscribing solely to either the Naive Theory or the Cynic's Theory?
The Naive Theory and the Cynic's Theory each have their own potential limitations, largely stemming from the fact that they oversimplify the complex nature of human behavior and decision-making.
For the Naive Theory, one limitation is that it may overemphasize the role of the unconscious mind in decision-making. It assumes that the conscious mind always knows what's best and that any deviation from this is due to the unconscious mind's interference. This may not always be the case. For instance, there may be times when a person consciously decides to engage in a behavior that they know is not in their best interest, not because of unconscious influences, but due to other factors such as immediate gratification, peer pressure, or stress.
On the other hand, the Cynic's Theory may underestimate the capacity for self-deception and the complexity of conscious thought. It suggests that people are always aware of their true preferences and only claim otherwise to maintain a socially acceptable image. This overlooks the possibility that people can genuinely be conflicted or unsure about their preferences. It also dismisses the idea that people can change their preferences over time through self-reflection and personal growth.
Both theories also assume a clear division between the conscious and unconscious mind, which may not accurately reflect the intricacies of human cognition. In reality, our decisions are likely influenced by a complex interplay of conscious and unconscious processes, as well as external factors.
Finally, both theories tend to focus on individual decision-making and largely ignore the influence of societal and environmental factors on behavior. For instance, they don't account for how societal norms, cultural values, or socioeconomic conditions might influence a person's actions and their perceived preferences.
Discuss the role of societal norms, cultural values, and socioeconomic conditions in influencing a person's actions and perceived preferences, and how these external factors might challenge or support the Naive and Cynic's theories of akrasia.
Societal norms, cultural values, and socioeconomic conditions play a significant role in shaping a person's actions and perceived preferences. They provide the context within which individuals make decisions and can significantly influence both conscious and unconscious processes.
Societal norms and cultural values can impact what behaviors are considered acceptable or desirable. For example, in a society that highly values academic achievement, individuals might consciously strive to excel in their studies, even if they have other interests or passions. This can lead to a situation where a person's conscious preferences (studying) are at odds with their unconscious desires (pursuing a hobby), which aligns with both the Naive and Cynic's theories of akrasia.
Socioeconomic conditions can also influence behavior and preferences. For instance, individuals from lower socioeconomic backgrounds might consciously desire to pursue higher education, but unconsciously resist due to perceived barriers or fears of failure. This could be seen as an example of the Naive Theory, where the unconscious mind overrules the conscious mind's intentions.
However, these external factors also challenge the Naive and Cynic's theories. Both theories largely focus on internal cognitive processes and don't fully account for the influence of external factors. For example, they don't consider how societal pressure or economic necessity might force individuals to act against their preferences, regardless of whether those preferences are conscious or unconscious.
Moreover, societal norms and values can shape not only our actions but also our conscious and unconscious preferences. For instance, societal expectations can lead individuals to consciously desire things that they might not have otherwise wanted, challenging the Cynic's Theory's assumption that individuals always know their true preferences.
In conclusion, while societal norms, cultural values, and socioeconomic conditions can create situations that align with the Naive and Cynic's theories of akrasia, they also challenge these theories by highlighting the significant role of external factors in shaping behavior and preferences. |
625faaea-5658-4917-9913-91bee3534864 | trentmkelly/LessWrong-43k | LessWrong | The Reality of Emergence
Reply to The Futility of Emergence
In The Futility of Emergence, Eliezer takes an overly critical position on emergence as a theory. In this (short) article, I hope to challenge that view.
Emergence is not an empty phrase. The statements "consciousness is an emergent phenomenon" and "consciousness is a phenomenon" are not the same thing; the former conveys information that the latter does not. When we say something is emergent, we have a well defined concept that we refer to.
From Wikipedia:
> emergence is a phenomenon whereby larger entities arise through interactions among smaller or simpler entities such that the larger entities exhibit properties the smaller/simpler entities do not exhibit.
A is an emergent property of X, means that A arises from X in a way in which it is contingent on the interaction of the constituents of X (and not on those constituents themselves). If A is an emergent property of X, then the constituents of X do not possess A. A comes into existence as categorial novum at the inception of X. The difference between system X and its constituent components in regards to property A is a difference of kind and not of degree; X's constituents do not possess A in some tiny magnitude—they do not possess A at all.
> Taken literally, that description fits every phenomenon in our universe above the level of individual quarks, which is part of the problem
This is blatantly not true; size and mass for example are properties of componentary particles.
> You can make no new predictions. You do not know anything about the behavior of real-world minds that you did not know before. It feels like you believe a new fact, but you don't anticipate any different outcomes. Your curiosity feels sated, but it has not been fed. The hypothesis has no moving parts—there's no detailed internal model to manipulate. Those who proffer the hypothesis of "emergence" confess their ignorance of the internals, and take pride in it; they contrast the science of "e |
f0830827-6d8d-46b4-9b40-bf60424544b2 | trentmkelly/LessWrong-43k | LessWrong | The insularity critique of climate science
Note: Please see this post of mine for more on the project, my sources, and potential sources for bias.
One of the categories of critique that have been leveled against climate science is the critique of insularity. Broadly, it is claimed that the type of work that climate scientists are trying to do draws upon insight and expertise in many other domains, but climate scientists have historically failed to consult experts in those domains or even to follow well-documented best practices.
Some takeaways/conclusions
Note: I wrote a preliminary version of this before drafting the post, but after having done most of the relevant investigation. I reviewed and edited it prior to publication. Note also that I don't justify these takeaways explicitly in my later discussion, because a lot of these come from general intuitions of mine and it's hard to articulate how the information I received explicitly affected my reaching the takeaways. I might discuss the rationales behind these takeaways more in a later post.
* Many of the criticisms are broadly on the mark: climate scientists should have consulted best practices in other domains, and in general should have either followed them or clearly explained the reasons for divergence.
* However, this criticism is not unique to climate science: academia in general has suffered from problems of disciplines being relatively insular (UPDATE: Here's Robin Hanson saying something similar). And many similar things may be true, albeit in different ways, outside academia.
* One interesting possibility is that bad practices here operate via founder effects: for an area that starts off as relatively obscure and unimportant, setting up good practices may not be considered important. But as the area grows in importance, it is quite rare for the area to be cleaned up. People and institutions get used to the old ways of doing things. They have too much at stake to make reforms. This does suggest that it's important to get things right ea |
c1bedec3-e87e-4174-9aff-1339a74bd7cb | trentmkelly/LessWrong-43k | LessWrong | For Civilization and Against Niceness
Scott Alexander wrote a great essay, called “In Favor of Niceness, Community and Civilization”. Scott is a great writer, and conveys what I love about civilization in a beautiful way.
Unfortunately, the essay conflates two behaviors. Though to be fair, those two behaviors often go hand in hand:
Being uncivil, as in: breaking the norms of civilization.
Being mean, as in: being not-nice, unpleasant to be around.
The following paragraph embodies this conflation quite well:
> Liberalism does not conquer by fire and sword. Liberalism conquers by communities of people who agree to play by the rules, slowly growing until eventually an equilibrium is disturbed. Its battle cry is not “Death to the unbelievers!” but “If you’re nice, you can join our cuddle pile!”
I love civilization! Democracies let me politically coordinate with people internationally, socially liberal systems grant me freedom to be as weird as I want in private, and economically liberal systems let me try many exotic kinds of positive-sum trades with people! None of this would be possible without civilization.
I agree, Civilization is great. But I don’t want to join your cuddle pile!
Civilization is often about being not nice
As Scott Alexander says, civilization is about “agreeing to play by the rules.” But this is not about niceness. On the contrary, playing by the rules often requires being not nice. [1]
—
While we want companies to abide by strong regulations, and not cause negative externalities (like pollution), we also do not want them to be nice to each other. This is the core of antitrust law, that aims to minimize anti-competitive practices.
More concretely, the goal of companies is to capture value (make profits), while the goal of free-markets is for companies to create value for consumers. The way those two incentives are aligned is through competition. By getting companies to compete, they need to keep improving compared to other companies to keep their profits, increasing the shar |
c4fa9e87-8d49-45d6-98bb-0a5785344970 | trentmkelly/LessWrong-43k | LessWrong | Logic in the language of probability
This post is a minor note, to go along with the post on the probabilistic Löb theorem. It simply seeks to justify why terms like "having probability 1" are used interchangeably with "provable" and why implications symbols "→" can be used in a probabilistic setting.
Take a system of classical logic, with a single rule of inference: modus ponens:
From A and A→B, deduce B.
Having a single rule of inference isn't much of a restriction, because you can replace other rules of inference ("from A1,A2,... and An, deduce B") with an axiom or axiom schema ("A1∧A2∧...∧An → B") and then use modus ponens on that axiom to get the other rule of inference.
In this logical system, I'm now going to make some purely syntactical changes - not changing the meaning of anything, just the way we write things. For any sentence A that doesn't contain an implication arrow →, replace
A with P(A)=1.
Similarly, replace any sentence of the type
A → B with P(B|A)=1.
This is recursive, so we replace
(A → B) → C with P(C | P(B|A)=1 )=1.
And instead of using modus ponens, we'll use a combined Bayesian inference and law of total probability:
From P(A)=1 and P(B|A)=1, deduce P(B)=1.
Again, these are purely syntactic changes - just rewriting the symbols of a formal system in an entirely equivalent fashion. What this demonstrates, though, is that we can do classical logic using the language and the tools of probability - and hence that the objects and laws of classical logic have probabilistic counterparts.
In fact, we can do a bit better, make the similarity more pronounced. Instead of having modus ponens, allow for two "logical" rules of inference:
1. From A → B and A, deduce A∧B.
2. From A∧B, deduce B (simplification).
Together, these two rules give modus ponens, and if we've chosen our axioms in non-weird ways, we shouldn't be able to prove anything we couldn't before. Then when writing our logic in the language of probability, we similarly have two "probabilistic" rules of inference: |
186ae763-5806-4a94-87f8-b07fef339387 | StampyAI/alignment-research-dataset/blogs | Blogs | thinking about psi: as a more general json
thinking about psi: as a more general json
------------------------------------------
[JSON](https://en.wikipedia.org/wiki/JSON) is a format that's designed to work well with JavaScript — to move around entire JavaScript objects, you can `JSON.stringify` them on one end, and then `JSON.parse` them back on the other end.
this is very cool, but it's limited: it can't support cycles, can't deduplicate shared data, and can't send code — those issues are actually related: part of the problem with serializing code is that functions are often recursive or even mutually recursive (which takes either cyclical representations to send, or otherwise a layer of indirection to remove cyclicality) and that many functions point back to other shared functions in a structure that looks like a [DAG](https://en.wikipedia.org/wiki/Directed_acyclic_graph), where you would ideally want the upstream nodes to be deduplicated.
[psi](psi.html) tries to address these issues by being a universal format that supports cycles and efficiently unifies shared structures, such that they remain a single big piece of data pointed to for example using an [IPFS](https://en.wikipedia.org/wiki/InterPlanetary_File_System) address, and only small changes can be sent along.
in addition, psi's recommended usage of randomly-generated identifiers to represent concepts also makes psi payloads universal as opposed to only contextually meaningful, which hopefully makes things nice for things like interoperation or debugging. |
dd871523-de8f-4008-ba36-303b01d25cf6 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Brussels meetup
Discussion article for the meetup : Brussels meetup
WHEN: 16 February 2013 01:00:00PM (+0100)
WHERE: Rue des Alexiens 55 1000 Bruxelles
We are once again meeting at 'la fleur en papier doré' close to the Brussels Central station. If you feel like an intelligent discussion and are in the neighborhood, consider dropping by. As always, I'll have a sign.
Discussion article for the meetup : Brussels meetup |
d245f00f-8206-4e24-a118-0d151501dce2 | trentmkelly/LessWrong-43k | LessWrong | Not-yet-falsifiable beliefs?
I recently encountered an unusual argument in favor of religion. To summarize:
Imagine an ancient Roman commoner with an unusual theory: if stuff gets squeezed really, really tightly, it becomes so heavy that everything around it gets pulled in, even light. They're sort-of correct---that's a layperson's description of a black hole. However, it is impossible for anyone to prove this theory correct yet. There is no technology that could look into the stars to find evidence for or against black holes---even though they're real.
The person I talked with argued that their philosophy on God was the same sort of case. There was no way to falsify the theory yet, so looking for evidence either way was futile. It would only be falsifiable after death.
I wasn't entirely sure how to respond to that, so I just indicated that this argument may be proving too much. However, it genuinely is possible for a true theory to exist that we simply lack the ability to test. Do we simply have to accept that it's impossible to learn about such things? That seems like the right answer, but it feels unsatisfying.
Any thoughts? |
1426ae17-674e-4c7c-bb7b-95852442b8e8 | trentmkelly/LessWrong-43k | LessWrong | Tail SP 500 Call Options
SPX is an index fund that tracks the SP500. Right now SPX is worth about 6100 per share. For individual stocks, even the large ones, you can usually only buy options through Dec 2027. However SPX has options that expire in Dec 2028 / 29 / 30[1]. Such options with strike prices of 10K, 11K, or 12K seem extremely lucrative to me. Max dated SPY calls are also a good option. The obvious motivation is AI progress. But the naive analysis suggests these are a good way to get exposure to something we want anyway.
Current prices look like this: A very rough summary of the price curve is as follows: Each 1K step halves the price. Moving forward one year halves the price. Here are the prices:
I looked at the SP 500's cumulative returns over every 3/4/5/6 year period from 1927 to 2024. There are 91 such six-year periods and 94 three year periods. Here is the count where returns exceed the following thresholds:
If we hit 14K by the end of 2028 then the 12K-28 calls will 100x. The 10k-28 calls will roughly 55x. You can do your own math even ignoring AI progress the naive base rate is favorable. Of course the past might not be like the present. But given that many of us expect rapid AI progress getting exposure to tail events is very attractive.
Unfortunately SPX options have very annoying tax treatment. You always pay 60/40 long term/short-term cap gains tax. However, at the end of each year you are taxed on unrealized gains and update your cost basis. This is equivalent to selling on Dec 31st and rebuying at the same price on Jan 1st. This effectively forces you to rebalance each year. Its not the worst thing in the world. You can take profit, roll calls forward, pay taxes. But it is definitely not ideal given our goals.
Amazing SPY, despite SPY = SPX/10, does not have this issue and is taxed normally. However SPY options only go til EOY 2027 and the highest strike is 900 (Equivalent to 9K SPY). Such calls cost about 6.6 per contract. If SPX goes to 10K those options wil |
2ca3b043-6206-459e-bd06-61af404aa246 | StampyAI/alignment-research-dataset/blogs | Blogs | Bas Steunebrink on Self-Reflective Programming
[Bas Steunebrink](http://www.idsia.ch/~steunebrink/) is a postdoctoral researcher at the Swiss AI lab IDSIA, as part of [Prof. Schmidhuber’s](http://www.idsia.ch/~juergen/) group. He received his PhD in 2010 from Utrecht University, the Netherlands. Bas’s dissertation was on the subject of artificial emotions, which fits well in his continuing quest of finding practical and creative ways in which general intelligent agents can deal with time and resource constraints. A recent [paper](http://www.idsia.ch/~steunebrink/Publications/AGI13_resource-bounded.pdf) on how such agents will naturally strive to be effective, efficient, and curious was awarded the [Kurzweil Prize](http://www.agi-conference.org/2013/prizes/) for Best AGI Idea at AGI’2013. Bas also has a great interest in anything related to self-reflection and meta-learning, and all “meta” stuff in general.
**Luke Muehlhauser**: One of your ongoing projects has been a Gödel machine (GM) implementation. Could you please explain (1) what a Gödel machine is, (2) why you’re motivated to work on that project, and (3) what your implementation of it does?
---
**Bas Steunebrink**: A GM is a program consisting of two parts running in parallel; let’s name them Solver and Searcher. Solver can be any routine that does something useful, such as solving task after task in some environment. Searcher is a routine that tries to find beneficial modifications to both Solver and Searcher, i.e., to any part of the GM’s software. So Searcher can inspect and modify any part of the Gödel Machine. The trick is that the initial setup of Searcher only allows Searcher to make such a self-modification if it has a proof that performing this self-modification is beneficial in the long run, according to an initially provided utility function. Since Solver and Searcher are running in parallel, you could say that a third component is necessary: a Scheduler. Of course Searcher also has read and write access to the Scheduler’s code.

In the old days, writing code that makes self-modifications was seen as a nice way of saving on memory, which was very expensive. But now that memory is cheap and making self-modifications has turned out to be error-prone, this technique has been mostly banished. Modern programming languages abstract away from internals and encapsulate them, such that more guarantees about safety and performance can be given. But now that the techniques for automated reasoning developed by the theorem proving and artificial intelligence communities are maturing, a new opportunity for self-reflection emerges: to let the machine itself do all the reasoning and decide when, where, and how to perform self-modifications.
The construction of an actual implementation of a Gödel Machine is currently ongoing work, and I’m collaborating and consulting with MIRI in this endeavor. The technical problem of obtaining a system with anytime self-reflection capabilities can be considered solved. This includes answering the nontrivial question of what it means to inspect the state and behavior of actively running, resource-constrained code, and how it can safely by modified, at any time. Current work focuses on encoding the operational semantics of the system within a program being run by the system. Although these operational semantics are extremely simple, there remain possible theoretical hurdles, such as the implications of Löb’s Theorem ([Yudkowsky & Herreshoff 2013](https://intelligence.org/files/TilingAgents.pdf))
It will be interesting to see how well current Automated Theorem Provers (ATP) hold up in a Gödel Machine setting, because here we are trying to formulate proofs about code that is actively running, and which contains the ATP itself, and which is subject to resource constraints. There may be interesting lessons in here as well for the ATP developers and community.
---
**Luke**: You say that “the technical problem of obtaining a system with anytime self-reflection capabilities can be considered solved.” Could you give us a sketch of the solution to which you’re referring?
---
**Bas**: There are two issues at play here. First, there is the issue of deciding what you will see upon performing self-reflection. For example, we can ask whether a process can see the call stack of other currently running processes, or even its own call stack, and what operations can be performed on such a call stack to inspect and modify it. But second, performing self-reflection is an act in time: when a machine inspects (and modifies) its own internals, this happens at some moment in time, and doing so takes some time. The question here is, can the machine decide when to perform self-reflection?
We can envisage a machine which is performing duties by day, and every night at 2:00am is allowed access to several internal register to make adjustments based on the events of the past day, in order to be better prepared for the next day. One could argue that this machine has full self-reflective capabilities, because look, it can meddle around without restrictions every day. But it still lacks a certain degree of control, namely in time.
What I have done is design a mechanism by which a machine can decide with very high precision when it wants to inspect a certain process. It involves dividing the operation of the machine into extremely small blocks, with the possibility for interruption between any two blocks. On an average desktop this gives about 50 million opportunities for initiating self-reflection per second, which I think is fine-grained enough for the anytime label.
Besides the anytime issue of self-reflection, there are many other (technical) questions that have to be answered when implementing a self-reflective system such as a Gödel Machine. For example: should (self-)reflection interrupt the thing it is reflecting on, or can it look at something that is changing while you look at it? In an extreme case, you could be looking at your own call stack, where every probing instruction directly affects the thing being probed. Is this a powerful ability, or merely dangerous and stupid? In the self-reflective system that I have implemented, called Sleight, the default behavior is interruption, but the fluid case is very easy to get too, if you want. Basically, what it requires is interruption and immediate resumption, while saving a pointer to the obtained internal state of the resumed process. Now you can watch yourself change due to you watching yourself… It’s a bit like watching a very high-resolution, live brain scan of your own brain, but with the added ability of changing connectivity and activation anywhere at any time.
---
**Luke**: Another project you’ve been working on is called Sleight. What is Sleight, and what is its purpose?
---
**Bas**: Sleight is a self-reflective programming system. It consists of a compiler from [Scheme](http://en.wikipedia.org/wiki/Scheme_%28programming_language%29) to Sleight code, and a virtual machine (VM) for running Sleight code. The VM is what is called a *self-reflective interpreter*: it allows code being run to inspect and modify internal registers of the VM. These registers include the call stack and all variable assignments. Since all code is assigned to variables, Sleight code has complete access to its own source code at runtime. Furthermore, Sleight offers a mechanism for scheduling multiple processes. Of course, each process can inspect and modify other processes at any time. Protection and other safety issues have been well taken care of — although a process can easily cripple itself, the VM cannot be crashed.
The purpose of Sleight is to function as a platform for *safely* experimenting with self-reflective code, and for implementing a Gödel Machine in particular. The scheduling capabilities of the Sleight VM make it especially suitable for setting up the Solver, Searcher, and Scheduler — see the figure above.
---
**Luke**: A typical approach to reflective interpreters is the “reflective tower” of e.g. 3-Lisp ([Rivieres & Smith 1984](http://commonsenseatheism.com/wp-content/uploads/2013/10/Rivieres-Smith-The-implementation-of-procedurally-reflective-languages.pdf)). Interestingly, Sleight collapses the reflective tower into what you call a “reflective bungalow.” Could you explain what this means, and why it’s a good idea?
---
**Bas**: The reflective tower is a construct that in theory is infinite in two directions, like a tower with no ground floor and no top floor. I love the following quote from [Christian Queinnec](http://pagesperso-systeme.lip6.fr/Christian.Queinnec/WWW/LiSP.html) on the concept:
> In the mid-eighties, there was a fashion for reflective interpreters, a fad that gave rise to a remarkable term: “reflective towers.” Just imagine a marsh shrouded in mist and a rising tower with its summit lost in gray and cloudy skies—pure Rackham! […] Well, who hasn’t dreamed about inventing (or at least having available) a language where anything could be redefined, where our imagination could gallop unbridled, where we could play around in complete programming liberty without trammel nor hindrance?
>
>
The infinite reflective tower, as so poetically described by Queinnec in the quote above, can be visualized as an infinite *nesting of interpreters*, each interpreter being run by another interpreter. In the tower metaphor, each interpreter is being run by an interpreter one floor above it, and each interpreter can start running a new interpreter, adding a floor below itself. Going one floor up is then called *reification*, whereas going one floor down is called *reflection*. Theoretically the tower can be infinitely high, but in a practical implementation the tower must have a top, which is formed by a black-box interpreter whose internal state cannot be reified, as is the case in [this pedagogical example](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.54.446). The point of having access to the interpreter running one floor up is that you can redefine any function and change your own operational semantics at will.
Yet I consider the theoretical ideal of the infinite tower not so ideal at all: if there is always more internal state that can be reified, *ad infinitum*, then the operational semantics of the machine as a whole must be unfounded or circular. But with a simple trick we can make do with just one interpreter and no nesting. We can do this by maintaining a special data structure: the *metacontinuation*. The metacontinuation is a “tower” or stack of interpreter states. The act of reification, which in the infinite tower meant moving one floor up, now means popping one interpreter state off the metacontinuation, and running the sole interpreter using that state. Likewise, the act of reflection, which in the infinite tower meant adding a floor below, now means pushing the current interpreter state onto the metacontinuation. This way we can imitate the effect of an infinite tower of interpreters without any need for nesting—it just needs an extra stack, effectively becoming a “reflective bungalow.”
Although the metacontinuation is not an invention of mine—see also “Brown” ([Friedman & Wand, 1984](http://www.cs.indiana.edu/pub/techreports/TR161.pdf); [Wand & Friedman, 1986](http://www.cs.indiana.edu/pub/techreports/TR196.pdf)) and “Blond” ([Danvy & Malmkjær, 1988](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.2018&rep=rep1&type=pdf))—there is a whole lot of fun to be had with it. For example, in Sleight, I have extended the metacontinuation concept with an extremely fine-grained interruption mechanism, such that self-reflection can be performed *at any time*. It also facilitates the implementation of schedulers for “green” threads, which is helpful for a Gödel Machine implementation.
---
**Luke**: Some of MIRI’s research frames questions of self-reflection in terms of successor agents (e.g. [Yudkowsky & Herreshoff 2013](https://intelligence.org/files/TilingAgents.pdf)). You might call this *ex-vivo* self-modification vs. Sleight’s more *in-vivo* methods (reflecting on oneself in real-time). What are the benefits from operating in the in-vivo framework vs an ex-vivo one?
---
**Bas**: It seems to me that there are two ways to look at the in-vivo versus ex-vivo frameworks: with respect to *inspection* and with respect to *modification*.
An agent with in-vivo inspection capabilities may be said to have read access to its own actual internal registers, whereas an agent with ex-vivo inspection capabilities may only be able to request occasional snapshots (copies) of its internal registers to be made available for inspection. In the in-vivo inspection case, the very act of inspection is likely to change the thing being inspected, whereas in the ex-vivo inspection case, the agent is looking at a static object. The latter is probably easier to deal with, although the two capabilities are very different. Sleight supports both though.
The cited paper, however, deals with ex-vivo *modification* capabilities, meaning that self-modifications are made to a copy of the agent. This is reminiscent of an agent producing fitter offspring. As I understand it, the ex-vivo modification method is a generalization of in-vivo modification, promising to create an elegant formal framework of investigation. But when an agent is studying its own workings in order to find a beneficial ex-vivo modification to it, is this done using in-vivo or ex-vivo *inspection*? These different inspection methods may yield very different results, so I think this question also deserves investigation. Personally I think the in-vivo frameworks are the most interesting (if only because of their difficulty and danger…), but it remains to be seen which will work best in practice, which is what counts eventually.
---
**Luke**: Sorry, what do you mean by this statement? “I think the in-vivo frameworks are the most interesting (if only because of their difficulty and danger)…”
---
**Bas**: The *difficulty* is in the in-vivo inspection: a common assumption made by automated reasoning techniques is that the data being reasoned about is static while the reasoning is in progress. But if the data includes the dynamic internal state of the reasoner, this assumption breaks down. I find this interesting because we are entering uncharted territory.
The *danger* is in the in-vivo modification: if an agent decides to modify its own software, there may be no way back. The agent must be very sure what it is doing, or it might end up disabling itself. Although, come to think of it, the ex-vivo modification — i.e., producing a successor agent — comes with its own dangers. For example, it may be technically impossible to halt a flawed yet powerful successor agent, or it may be unethical to do so.
---
**Luke**: You say that “if an agent decides to modify its own software, it must be very sure what it is doing, or it might end up disabling itself.” Presumably another worry, from both the perspective of the agent and that of its creators, is that a self-modification may result in unintended consequences for the agent’s future behavior.
Ideally, we’d want an agent to continue pursuing the same ends, with the same constraints on its behavior, even as it modifies itself (to become capable of achieving its goals more effectively and efficiently). That is, we want a solution to the problem of what we might call “*stable* self-modification.”
Besides [Yudkowsky & Herreschoff (2013)](https://intelligence.org/files/TilingAgents.pdf), are you aware of other literature on the topic? And what are your thoughts on the challenge of stable self-modification?
---
**Bas**: This is a very interesting question, which I think concerns the issue of *growth* in general. It seems to me there are actually two questions here: (1) How can an agent be constructed such that it can grow stably from a “seed” program? And (2) How much supervision, testing, corrections, and interventions does such an agent need while it is growing?
The first question assumes that handcrafting a complex AI system is very hard and a bad strategy anyway — something which we will not have to argue over, I think. A great exploration of the prerequisites for stable self-growth has been performed by [Kristinn Thórisson (2012)](http://xenia.media.mit.edu/%7Ekris/ftp/Thorisson_chapt9_TFofAGI_Wang_Goertzel_2012.pdf) from Reykjavik University, who labels the approach of building self-growing systems *Constructivist AI*. I have had (and still have) the pleasure of working with him during the EU-funded [HumanObs](http://www.humanobs.org/) project, where we implemented a comprehensive cognitive architecture satisfying the prerequisites for stable self-growth and following the constructivist methodology laid out in the linked paper. The architecture is called [AERA](http://arxiv.org/abs/1312.6764) and is based on the [Replicode](http://cadia.ru.is/wiki/_media/public:nivel_thorisson_replicode_agi13.pdf) programming platform (developed mostly by Eric Nivel). This stuff is still very much work-in-progress, with a bunch of papers on AERA currently in preparation. So yes, I believe there exists a way of achieving long-term stable self-growth, and we are actively pursuing a promising path in this direction with several partner institutes.
The second question is an age-old one, especially regarding our own children, with our answers determining our schooling system. But many AI researcher seem to aim for a purely “intellectual” solution to the problem of building intelligent agents. For example, one could spend some time in front of a computer, programming a Gödel Machine or other “ultimate algorithm”, run it, sit back, and just wait for it to launch us all into the singularity… But I’m not too optimistic it will work out this way. I think we may have to get up and take a much more parental and social approach to crafting AI systems. To ensure stable growth, we may have to spend lots of time interacting with our creations, run occasional tests, like with school tests for children, and apply corrections if necessary. It will not be the rising of the machines, but rather the *raising* of the machines.
I’m hedging my bets though, by working on both sides simultaneously: [AERA](http://arxiv.org/abs/1312.6764) and the constructivist methodology on the one hand, and the Gödel Machine and other “ultimate” learning algorithms on the other hand.
---
**Luke**: Thanks, Bas!
The post [Bas Steunebrink on Self-Reflective Programming](https://intelligence.org/2013/10/25/bas-steunebrink-on-sleight/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
fb12f2ba-e3d7-4200-8b28-4cab89c0bbab | trentmkelly/LessWrong-43k | LessWrong | AI #30: Dalle-3 and GPT-3.5-Instruct-Turbo
We are about to see what looks like a substantial leap in image models. OpenAI will be integrating Dalle-3 into ChatGPT, the pictures we’ve seen look gorgeous and richly detailed, with the ability to generate pictures to much more complex specifications than existing image models. Before, the rule of thumb was you could get one of each magisteria, but good luck getting two things you want from a given magisteria. Now, perhaps, you can, if you are willing to give up on adult content and images of public figures since OpenAI is (quite understandably) no fun.
We will find out in a few weeks, as it rolls out to ChatGPT+ users.
As usual a bunch of other stuff also happened, including a model danger classification system from Anthropic, OpenAI announcing an outside red teaming squad, a study of AI impact on consultant job performance, some incremental upgrades to Bard including an extension for GMail, new abilities to diagnose medical conditions and some rhetorical innovations.
Also don’t look now but GPT-3.5-Turbo-Instruct plays Chess at 1800 Elo, and due to its relative lack of destructive RLHF seems to offer relatively strong performance at a very low cost and very high speed, although for most purposes its final quality is still substantially behind GPT-4.
TABLE OF CONTENTS
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. GPT-4 boosts consultant productivity.
4. Language Models Don’t Offer Mundane Utility. Do we want to boost that?
5. Level Two Bard. Some improvements, I suppose. Still needs a lot of work.
6. Wouldn’t You Prefer a Good Game of Chess? An LLM at 1800 Elo. World model.
7. GPT-4 Real This Time. GPT-3.5-Instruct-Turbo proves its practical use, perhaps.
8. Fun With Image Generation. Introducing Dalle-3.
9. Deepfaketown and Botpocalypse Soon. Amazon limits self-publishing to 3 a day.
10. Get Involved. OpenAI hiring for mundane safety, beware the double-edged sword.
11. Introducing. OpenAI red team netw |
3b730880-0e89-4920-b388-ed36245aa614 | trentmkelly/LessWrong-43k | LessWrong | Meetup : SF Meetup: Cooking
Discussion article for the meetup : SF Meetup: Cooking
WHEN: 05 December 2016 06:15:34PM (-0800)
WHERE: 1655 Mission St., SF
We'll be meeting to cook and eat food together!
Here's how this works: I'll send out a planning doc via the bayarealesswrong mailing list, to recruit 3-5 people to bring a recipe and be "leaders" for their particular dish. Everyone else who shows up will help the leaders as needed with the cooking. I'll ask the leaders to populate a grocery list beforehand so that I can do the grocery shopping, and everyone who shows up and helps eat will split the bill.
If you have dietary restrictions and want to come, feel free to mention that on the mailing list thread to encourage others to choose a dish that will work for you - or lead a dish yourself.
Instead of the usual "wait for stragglers" thing, we'll start the meetup as soon as people show up, since cooking can be time consuming.
For help getting in, call or text 301-458-0764.
About these meetups:
The mission of the SF LessWrong meetup is to provide a fun, low-key social space with some structured interaction, where new and non-new community members can mingle and have interesting conversations. Everyone is welcome.
We explicitly encourage people to split off from the main conversation or diverge from the topic if that would be more fun for them (moving side conversations into a separate part of the space if appropriate). Meetup topics are here as a tool to facilitate fun interaction, and we certainly don’t want them to inhibit it.
Discussion article for the meetup : SF Meetup: Cooking |
5cf8d419-7650-4468-8b0d-6885b997b533 | trentmkelly/LessWrong-43k | LessWrong | ALCOR finances
Mike Darwin has posted some financial history of Alcor in a post, "The Armories of the Latter Day Laputas, Part 5". While there were some errors (see the comments), Darwin has apparently corrected them.
It makes interesting reading in general. It's not a straight analysis of filings like Brandon's "SIAI - An Examination", but more of a financial history (eg. the graph of revenues vs expenses etc.) |
825fd4ac-9c49-4442-a9cf-8d00714bebf4 | trentmkelly/LessWrong-43k | LessWrong | Working through D&D.Sci, problem 2 (preface)
This is part three of my series on working though D&D.Sci. This post is about choice of tools, installation, and setup. |
afb19b26-ceb7-4d5d-95eb-6b778441501d | trentmkelly/LessWrong-43k | LessWrong | The AI design space near the FAI [draft]
ABSTRACT:
Nearly-FAIs can be more dangerous than AIs with no attempt at friendliness. The FAI effort needs better argument that the attempt at FAI decreases the risks. We are bad at processing threats rationally, and prone to very bad decisions when threatened, akin to running away from unknown into a minefield.
NEARLY FRIENDLY AIS
Consider AI that truly loves mankind but decides that all of the mankind must be euthanized like an old, sick dog - due to chain of reasoning too long for us to generate when we test our logic of AI, or even comprehend - and proceeds to make a bliss virus - the virus makes you intensely happy, setting your internal utility to infinity; and keeping it so until you die. It wouldn't even take a very strongly superhuman intelligence to do that kind of thing. Treating life as if it was a disease. It can do so even if it destroys the AI itself. Or consider the FAI that cuts your brain apart to satisfy each hemisphere's slightly different desires. The AI that just wireheads everyone because it figured we all want it (and worst of all it may be correct).
It seems to me that one can find the true monsters in the design space near to the FAI, and even including the FAIs. And herein lies a great danger: bugged FAIs, the AIs that are close to friendly AI, but are not friendly. It is hard for me to think of a deficiency in friendliness which isn't horrifically unfriendly (restricting to deficiencies that don't break AI).
SHOULD WE BE SO AFRAID OF THE AIS MADE WITHOUT ATTEMPTS AT FRIENDLINESS?
We need to keep in mind that we have no solid argument that the AIs written without attempt at friendliness - the AIs that predominantly don't treat mankind in any special way - will necessarily make us extinct.
We have one example of 'bootstrap' optimization process - evolution - with not a slightest trace of friendliness in it. What did emerge in the end? We assign pretty low utility to nature, but non-zero, and we are willing to trade resources for pr |
6e2c085f-cbee-4122-bcfb-cd0262dece01 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Finding the variables
.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
In a previous [post on bridging syntax and semantics](https://www.lesswrong.com/posts/EEPdbtvW8ei9Yi2e8/bridging-syntax-and-semantics-empirically), I mentioned how to empirically establish that the internal symbols Xi represented the variables xi in the environment: if the Xi have high mutual information with the xi. This basically ask whether you can find out about the values of the xi by knowing the Xi. See also Luke Muelhauser's [mention of "representation"](https://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood#ExamplePANIC) and the articles linked therein.
At the end of that post, I mentioned the problem of finding the variables Xi if they were not given. This post will briefly look over that problem, and the related problem of finding the xi.
Waterfall and variables in the world
------------------------------------
Given the internal variable Xi, it is almost certainly possible to find a variable y in the outside world that correlates with it (even if we assume a Cartesian separation between the agent and the world, so we can't just do the lazy thing and set y=Xi).
In the example of [detecting an intruder in a greenhouse](https://www.lesswrong.com/posts/bbw6c9as5STvWXAgB/syntax-vs-semantics-alarm-better-example-than-thermostat), look at Xg, the internal variable of a guard that peers into the greenhouse to see an intruder.
Then we can certainly come up with a variable y that correlates with Xg. This could be a variable that correlates with whether there is an intruder in the greenhouse in situations where the guard can see it, and then correlates with all the issues that might fool the guard: mannequins, delusion-inducing gases, intruders disguised as tables, etc...
But we don't even need y to be anything like the variables that Xg was 'supposed' to measure. If we have a chaotic system in the vicinity - say a nearby waterfall - then we can just list all the states of that system that happen when Xg=0 vs those that happen when Xg=1, and set y to be 0 or 1 in those states.
That is a variant of Scott Aaronson's [waterfall argument](https://www.scottaaronson.com/papers/philos.pdf): if you have enough variety of states, and you can construct definitions of arbitrary complexity, then you can "ground" any model in these definitions. To avoid this, we have to penalise this definitional complexity the definition is doing all the work here, and is itself a highly complicated algorithm to implement.
So pick the xi so that:
* the complexity of defining the xi is low, and
* the xi have intrinsically relevant definitions, definitions that make sense without direct or indirect knowledge of Xi.
There are some edge cases of course - if a human has Xi being their estimate of whether a swan is around, it might be useful to distinguish between xi={there is a swan} and x′i={there is a white swan}, as this tells us whether the human was conceptualising [black swans](https://en.wikipedia.org/wiki/Black_swan) as swans. But in general, the xi should be defined by concepts that make sense on their own, and don't take Xi into account.
Variables in the mind
---------------------
Now assume that the xi are something reasonable. What of the Xi? Well, imagine a superintelligence had access to an agent's entire sensory input. If the superintelligence had a decent world model, it could use that input to construct a best estimate as to the value of xi - and call that estimate, which is a function of the internal state of the agent, Xi. Even if we limited the superintelligence to only accessing some parts of the agent - maybe just the short term memory, or the conscious states - it could still construct an Xi that is likely a far better correlate of xi than anything the agent could construct/naturally has access to.
For example, if xi were temperature (as in [this post](https://www.lesswrong.com/posts/EEPdbtvW8ei9Yi2e8/bridging-syntax-and-semantics-empirically)), then an AI could deduce temperature information from human sensory data [much better](https://www.lesswrong.com/posts/EEPdbtvW8ei9Yi2e8/bridging-syntax-and-semantics-empirically) than our subjective "it feels kinda hot/cold in here".
So the Xi should be selected according to other criteria than correlation with xi. For algorithms, we could look at named variables within them. For humans, we could also look at variables that correspond to names or labels (for example, when you ask a human "are you feeling hot?", what parts of the brain are triggered when that question is asked, and what parts correspond to the articulated answer being "yes").
Unless we are specifically interested in [speech acts](https://en.wikipedia.org/wiki/Speech_act), we can't just say "Xi corresponds to the human answering 'yes' when asked about how hot they feel". Nevertheless, when attempting to define a "feeling of hotness" variable, we should be defining it with all our knowledge (and the human's knowledge) of what that means: for example the fact that humans often answer 'yes' to that question when they indeed do feel hot.
So the Xi should be defined by taking some concept and seeking to formalise how humans use it/implement it, not by correlating it with the xi.
We can sometimes justify a more correlated Xi, *if the concept is natural for the human in question*. For example, we could take a human and train them to estimate temperature. After a while, they will develop an internal temperature estimator X′′i which is more highly correlated with the temperature xi, but which corresponds naturally to something the human can consciously access; we could check this, by, for example, getting the human to write down their temperature estimate.
We can also imagine the variable X′i, which is an untrained human's estimate of temperature; we'd expect this to be a bit better than Xi, just because the human can explicitly take into account things like fever, or temperature [acclimatisation](https://uihc.org/health-topics/acclimatization-adjusting-temperature). But it's not clear that X′i is really an intrinsic variable in the brain, or something constructed specifically by the human to answer that question at that moment.
Things can get more murky if we allow for unconscious feelings. Suppose someone has a relatively accurate gut instinct as to whether other people are trustworthy, but barely makes use of that instinct consciously. Then it's tricky to decide whether that instinct is a natural internal variable (which is highly correlated with trustworthiness), or an input into the human's conscious estimate (which is weakly correlated with trustworthiness).
Investigation, not optimisation
-------------------------------
So this method is very suitable for **checking** the correlations between internal variables and external ones, variables that we have defined though some other process. So it can answer questions like:
* "Is a human's subjective feeling of heat a good estimate of temperature?" (not really).
* "Is a trained human's temperature guess a good estimate of temperature?" (somewhat).
* "Is a human's subjective feeling of there being someone else in the room a good estimate of the presence of an intruder"? (yes, very much so).
* "Does this brain activity mean that the human detects an intruder?" (possibly).
But it [all falls apart](https://en.wikipedia.org/wiki/Goodhart%27s_law) if we try and use the correlation as an optimisation measure, shifting Xi to better measure xi or vice-versa. |
838b6f9e-ee3d-400d-a1be-f11837e59989 | trentmkelly/LessWrong-43k | LessWrong | Examples of growth mindset or practice in fiction
As people who care about rationality and winning, it's pretty important to care about training. Repeated practice is how humans acquire skills, and skills are what we use for winning.
Unfortunately, it's sometimes hard to get System 1 fully on board with the fact that repeated, difficult, sometimes tedious practice is how we become awesome. I find fiction to be one of the most useful ways of communicating things like this to my S1. It would be great to have a repository of fiction that shows characters practicing skills, mastering them, and becoming awesome, to help this really sink in.
However, in fiction the following tropes are a lot more common:
1. hero is born to greatness and only needs to discover that greatness to win [I don't think I actually need to give examples of this?]
2. like (1), only the author talks about the skill development or the work in passing… but in a way that leaves the reader's attention (and system 1 reinforcement?) on the "already be awesome" part, rather that the "practice to become awesome" part [HPMOR; the Dresden Files, where most of the implied practice takes place between books.]
3. training montage, where again the reader's attention isn't on the training long enough to reinforce the "practice to become awesome" part, but skips to the "wouldn't it be great to already be awesome" part [TVtropes examples].
4. The hero starts out ineffectual and becomes great over the course of the book, but this comes from personal revelations and insights, rather than sitting down and practicing [Nice Dragons Finish Last is an example of this].
Example of exactly the wrong thing:
The Hunger Games - Katniss is explicitly up against the Pledges who have trained their whole lives for this one thing, but she has … something special that causes her to win. Also archery is her greatest skill, and she's already awesome at it from the beginning of the story and never spends time practicing.
Close-but-not-perfect examples of the right thing:
The P |
1587dd74-a286-4625-a8b3-0f692dcdd2c4 | trentmkelly/LessWrong-43k | LessWrong | Dishonest Update Reporting
Related to: Asymmetric Justice, Privacy, Blackmail
Previously (Paul Christiano): Epistemic Incentives and Sluggish Updating
The starting context here is the problem of what Paul calls sluggish updating. Bob is asked to predict the probability of a recession this summer. He said 75% in January, and how believes 50% in February. What to do? Paul sees Bob as thinking roughly this:
> If I stick to my guns with 75%, then I still have a 50-50 chance of looking smarter than Alice when a recession occurs. If I waffle and say 50%, then I won’t get any credit even if my initial prediction was good. Of course if I stick with 75% now and only go down to 50% later then I’ll get dinged for making a bad prediction right now—but that’s little worse than what people will think of me immediately if I waffle.
Paul concludes that this is likely:
> Bob’s optimal strategy depends on exactly how people are evaluating him. If they care exclusively about evaluating his performance in January then he should always stick with his original guess of 75%. If they care exclusively about evaluating his performance in February then he should go straight to 50%. In the more realistic case where they care about both, his optimal strategy is somewhere in between. He might update to 70% this week.
>
> This results in a pattern of “sluggish” updating in a predictable direction: once I see Bob adjust his probability from 75% down to 70%, I expect that his “real” estimate is lower still. In expectation, his probability is going to keep going down in subsequent months. (Though it’s not a sure thing—the whole point of Bob’s behavior is to hold out hope that his original estimate will turn out to be reasonable and he can save face.)
This isn’t ‘sluggish’ updating, of the type we talk about when we discuss the Aumann Agreement Theorem and its claim that rational parties can’t agree to disagree. It’s dishonest update reporting. As Paul says, explicitly.
> I think this kind of sluggish updating is qui |
c02407ef-9d52-4a4a-9ebd-715d422a8b2b | trentmkelly/LessWrong-43k | LessWrong | What's the Least Impressive Thing GPT-4 Won't be Able to Do
It seems like GPT-4 is going to be coming out soon and, so I've heard, it will be awesome. Now, we don't know anything about its architecture or its size or how it was trained. If it were only trained on text (about 3.2 T tokens) in an optimal manner, then it would be about 2.5X the size of Chinchilla i.e. the size of GPT-3. So to be larger than GPT-3, it would need to be multi-modal, which could present some interesting capabilities.
So it is time to ask that question again: what's the least impressive thing that GPT-4 won't be able to do? State your assumptions to be clear i.e. a text and image generating GPT-4 in the style of X with size Y can't do Z. |
84ec7fd8-bb5c-49d9-86cf-fcbec827cb69 | trentmkelly/LessWrong-43k | LessWrong | Protocol evaluations: good analogies vs control
Let’s say you want to use AIs that are capable of causing x-risk. You have a plan that describes how to train, validate and deploy AIs. For example, you could use pretraining+RLHF, check that it doesn’t fall for honeypots and never inserts a single backdoor in your coding validation set, and have humans look at the deployment outputs GPT-4 finds the most suspicious. How do you know if this plan allows you to use these potentially dangerous AIs safely?
Thanks to Ryan Greenblatt and Buck Shlegeris who provided the original ideas for this post, and thanks to Jean-Stanislas Denain, Bogdan Ionut Cirstea, Ansh Radhakrishnan, and Charbel-Raphael Segerie for useful feedback on the draft of this post.
Evaluate safety using very good analogies for the real situation?
If you want to evaluate a protocol (a set of techniques you use to train, validate, and deploy a model), then you can see how well it works in a domain where you have held-out validation data that you can use to check if your protocol works. That is, you run your exact protocol on this domain without allowing the protocol access to the held-out validation data. Then, you just assess how well you did with this held-out validation. This sort of approach is sometimes called sandwiching. (Here we are using sandwiching in a broad way that covers the evaluation of aspects of protocols, not just evaluation of oversight.)
One approach for obtaining held-out validation data is to restrict the quality of the labor that goes into the protocol: the Measuring progress in scalable oversight paper uses non-expert humans, and the weak-to-strong generalization paper uses a small model. Then, you can use expert labels to check how well the protocol did.
A big flaw of this approach is that the analogy between [protocol with weak labor, human expert] and [protocol with human expert labor, actual goodness] might be quite poor. In particular, it’s not clear how weak the weak labor should be. And if the analogy is poor, the AI mig |
eab14bc8-93d6-472d-9dab-dcf25af821b9 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Belief in Self-Deception
Today's post, Belief in Self-Deception was originally published on 05 March 2009. A summary (taken from the LW wiki):
> Deceiving yourself is harder than it seems. What looks like a successively adopted false belief may actually be just a belief in false belief.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was No, Really, I've Deceived Myself, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
6c8f461d-d04a-4d5e-8f49-e1bd1288cc8f | trentmkelly/LessWrong-43k | LessWrong | On Comments, Voting, and Karma - Part I
There has been a great deal of discussion here about the proper methods of voting on comments and on how karma should be assigned. I believe it's finally reached the point where a post is warranted that covers some of the issues involved. (This may be just because I find myself frequently in disagreement with others about it.)
The Automatic Upvote
First, there is the question of whether one should be able to upvote one's own comment. This actually breaks apart into two related concerns:
(1) One is able to upvote one's own comments, and
(2) One gains a point of karma just for posting a comment.
These need not be tied. We could have (2) without (1) by awarding a point of karma for commenting, without changing the comment's score. We could also have (1) without (2) by simply not counting self-upvotes for karma.
I am in favor of (2). The main argument against (2) is that it rewards quantity over quality. The main argument for (2) is that it offers an automatic incentive to post comments; that is, it rewards commenting over silence. As we're community-building, I think the latter incentive is more important than the former. But I'm not sure this is worth arguing further - it serves as a distraction from the benefits of (1).
I am also in favor of (1). As a default, all comments have a base rating of 0. Since one is allowed to vote on one's own comments, and upvoting is the default for one's own comments, this makes comments effectively start at a rating of 1. The argument against this is that it makes more sense for comments to start with a rating of 0, so that someone else liking a comment gives it a positive rating, while someone disliking it gives it a negative rating. I disagree with this assessment.
If I post a comment, it's because it's the best comment I could think of to add to the discussion. I will usually not bother saying something if I don't think it's the sort of thing that I would upvote. When I see someone else's comment that I don' |
aedb90d5-7122-47cb-adff-bc8a531a28f7 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] My Kind of Reflection
Today's post, My Kind of Reflection was originally published on 10 July 2008. A summary (taken from the LW wiki):
> A few key differences between Eliezer Yudkowsky's ideas on reflection and the ideas of other philosophers.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Fear of Common Knowledge, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
d57f0da4-720d-4fd9-bb6a-23f7c33baf89 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Mountain View sequences discussion
Discussion article for the meetup : Mountain View sequences discussion
WHEN: 22 May 2012 06:00:00PM (-0700)
WHERE: Sylvan Avenue & Sevely St, Mountain View
We will be meeting at Sylvan Park near the playground. This is a meetup to discuss specific posts in the sequences. The material we will be covering is the beginning of the Seeing with Fresh Eyes sequence.
http://lesswrong.com/lw/j7/anchoring_and_adjustment/
http://lesswrong.com/lw/k3/priming_and_contamination/
http://lesswrong.com/lw/k4/do_we_believe_everything_were_told/
http://lesswrong.com/lw/k5/cached_thoughts/
It would be ideal to show up having read (or reread) these posts, and jotted down some notes you can discuss. OTOH please don't NOT show up because you feel guilty about not-having-done-your-homework.
Food: no one is obligated to bring snacks but it is most welcome.
Time: I will arrive at 6pm, discussion of the sequences will begin at 7. I am soliciting info about parking and will update.
Discussion article for the meetup : Mountain View sequences discussion |
850b0c36-892c-4dbd-af7e-2cfe1a7d4056 | trentmkelly/LessWrong-43k | LessWrong | The Ellsberg paradox and money pumps
Followup to: The Savage theorem and the Ellsberg paradox
In the previous post, I presented a simple version of Savage's theorem, and I introduced the Ellsberg paradox. At the end of the post, I mentioned a strong Bayesian thesis, which can be summarised: "There is always a price to pay for leaving the Bayesian Way."1 But not always, it turns out. I claimed that there was a method that is Ellsberg-paradoxical, therefore non-Bayesian, but can't be money-pumped (or "Dutch booked"). I will present the method in this post.
I'm afraid this is another long post. There's a short summary of the method at the very end, if you want to skip the jibba jabba and get right to the specification. Before trying to money-pump it, I'd suggest reading at least the two highlighted dialogues.
Ambiguity aversion
To recap the Ellsberg paradox: there's an urn with 30 red balls and 60 other balls that are either green or blue, in unknown proportions. Most people, when asked to choose between betting on red or on green, choose red, but, when asked to choose between betting on red-or-blue or on green-or-blue, choose green-or-blue. For some people this behaviour persists even after due calculation and reflection. This behaviour is non-Bayesian, and is the prototypical example of ambiguity aversion.
There were some major themes that came out in the comments on that post. One theme was that I Fail Technical Writing Forever. I'll try to redeem myself.
Another theme was that the setup given may be a bit too symmetrical. The Bayesian answer would be indifference, and really, you can break ties however you want. However the paradoxical preferences are typically strict, rather than just tie-breaking behaviour. (And when it's not strict, we shouldn't call it ambiguity aversion.) One suggestion was to add or remove a couple of red balls. Speaking for myself, I would still make the paradoxical choices.
A third theme was that ambiguity aversion might be a good heuristic if betting against someone wh |
ccfdd126-1f4e-40d3-b302-26f113cd89ca | trentmkelly/LessWrong-43k | LessWrong | Super human AI is a very low hanging fruit!
A single strong down vote by a moderately high karma user dropped this post from 3 to -4; the actual sequence of votes seems to be 1+1+1+1-1-7=-4. I say this only b/c the naive conclusion might be that this post received only down votes.
Update -- substantial revision 2025-01-08, § 0 added 2025-01-12.
These are arguments in favor of the SMI2LESHAI position ie (ubiquitous) super human AI but without space migration, intelligence increase or (radical) life extension for adults alive in 2025. Practical consequences of SMI2LESHAI are also discussed (§ 6|2). There is no comprehensive attempt to convert any one to SMI2LESHAI ism. Each section is relatively independent.
§ 0. Mistakes on my part
I think I under estimated the extent to which people on LW hold the belief that {vastly super human AI will lead to all sorts of amazing things}.
Instead I believed that people were more likely to hold beliefs roughly along the lines of {things that happen efficiently benefit from inherent easiness}. Eg energy used by the economy is still small relative to energy used by the ecosystem which is extremely small relative to solar energy reaching the earth.
Hence there is no deliberate attempt to refute the (IMO want based belief) that {vastly super human AI will lead to all sorts of amazing things}, but people who hold such a belief should still be able to follow much of the reasoning & accept many of the conclusions.
(Whether they actually follow the reasoning or accept many of the conclusions is of course a different matter.)
§ 1. Introduction
Summary. I argue, from the perspective of biology, that super human AI is a very low hanging fruit. I believe that this argument is very solid. I briefly consider reasons why {super human AI}/{vastly super human AI} might not arise. I then contrast AI with other futurist technologies like human brain emulation, radical life extension & space colonization. I argue that these technologies are in a different category & plausibly impossibl |
8a949f43-e453-47be-98bb-f9c22ca9ce1f | trentmkelly/LessWrong-43k | LessWrong | Rationalists Are Less Credulous But Better At Taking Ideas Seriously
Consider the following commonly-made argument: cryonics is unlikely to work. Trained rationalists are signed up for cryonics at rates much greater than the general population. Therefore, rationalists must be pretty gullible people, and their claims to be good at evaluating evidence must be exaggerations at best.
This argument is wrong, and we can prove it using data from the last two Less Wrong surveys.
The question at hand is whether rationalist training - represented here by extensive familiarity with Less Wrong material - makes people more likely to believe in cryonics.
We investigate with a cross-sectional study, looking at proto-rationalists versus experienced rationalists. Define proto-rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for less than six months and have zero karma (usually indicative of never having posted a comment). And define experienced rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for over two years and have >1000 karma (usually indicative of having written many well-received posts).
By these definitions, there are 93 proto-rationalists, who have been in the community an average of 1.3 months, and 134 experienced rationalists, who have been in the community an average of 4.5 years. Proto-rationalists generally have not read any rationality training material - only 20/93 had read even one-quarter of the Less Wrong Sequences. Experienced rationalists are, well, more experienced: two-thirds of them have read pretty much all the Sequence material.
Proto-rationalists thought that, on average, there was a 21% chance of an average cryonically frozen person being revived in the future. Experienced rationalists thought that, on average, there was a 15% chance of same. The difference was marginally significant (p < 0.1).
Marginal significance is a copout, but this isn't our only data source. Last year, using the same definitions, prot |
43b4ddb3-4b5a-465e-9e60-a8e420cb5c42 | trentmkelly/LessWrong-43k | LessWrong | Bing chat is the AI fire alarm
Epistemic status: Mostly speculative.
Panicking and shouting "Wolf" while everyone else is calm is a risky move, status-wise. The good thing is, I don't have any status, so I volunteer to be one of those weirdos who panic when everyone else is calm with some hope it could trigger a respectability cascade.
The following ideas/facts worry me:
1. Bing Chat is extremely intelligent.
2. It's probably based on GPT-4.
3. The character it has built for itself is extremely suspicious when you examine how it behaves closely. And I don't think Microsoft has created this character on purpose.
The following example from Reddit is the most important example of how smart Bing is and why I believe it's based on GPT-4. The proposed question is quite tricky, and I think most kids would fail to answer it. Not only that, but it's safe to assume that it's impossible to deduce it from any given text. There is nowhere in the training data that has anything so similar. It's not a "What is the capital of France?" type question which can be easily pulled using a simple search. Answering this question requires a complex model of the world that Bing seems to possess.
This is what ChatGPT replied to the same question:
Another example is here by Ethan Mollock. The quality of writing is extremely impressive and, again, much better than ChatGPT (you will have to click the Twitter link as the screenshots are too large to paste). These examples again point to the hypothesis that Bing is much smarter than ChatGPT and based on a better-performing LLM. The natural suspicion should be GPT-4. It was rumored to be released in Q1 2023, and it being the basis for Bing sounds like a pretty good business-wise idea. Considering this will maximize the financial upside, I think it's worth reminding the magnitude of this move from a financial perspective. Microsoft is currently the 2# largest company on earth and is valued at almost 2 Trillion. And they are targeting Google's main cash cow (Search), w |
c383ff59-c765-4a51-b1a8-9caae90758cd | trentmkelly/LessWrong-43k | LessWrong | Meetup : Durham NC/Triangle Area Meetup: Goal Factoring
Discussion article for the meetup : Durham NC/Triangle Area Meetup: Goal Factoring
WHEN: 24 October 2013 07:00:00PM (-0400)
WHERE: 2411 N Roxboro St., Durham NC 27704
Can we discover values, drives, and emotions we didn't know we possessed, by a process like Goal Factoring? Let's find out, tomorrow!
7:00 gather
7:30 discussion
9:30ish unstructured social time, either at the house or possibly at Fullsteam
We suggest parking on Ellerbee, one block north of Club. Our house is green with a red door, and will have its porch light on.
Hope to see you there!
Discussion article for the meetup : Durham NC/Triangle Area Meetup: Goal Factoring |
1bffd4ae-8bc4-4928-98cc-5f1caffad635 | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | How can LLMs be understood as “simulators”?
A [simulator](https://generative.ink/posts/simulators/) is a type of AI which produces simulations of real-world phenomena. The concept was proposed as a way to understand large language models (LLMs), which often behave in ways that are not well-explained by thinking of them as other types of AI, such as [agents](/?state=5632&question=What%20is%20an%20agent%3F), [oracles](/?state=8AEV&question=Why%20can't%20we%20just%20build%20an%20%22oracle%20AI%22%20whose%20only%20goal%20is%20to%20answer%20questions%3F), or [tools](/?state=8QZF&question=What%20is%20tool%20AI%3F). However, it might seem like a simulator is behaving like an agent, oracle, or tool, because it can simulate instances of them.
A simulation is a model of a process that combines certain rules or behaviors with a state of the world, to compute what will happen at the next step. A simulator repeats this computation many times, imitating how a real-world process changes over time.
For example, a physics simulator could use a ball’s current position and past positions to estimate its future position. Large Language Models (LLMs) operate in a similar way. An LLM has access to a large amount of text from which it learns the sequences that words tend to fall into. Then, when given a string, it uses what it has learned to ‘predict’ what words are likely to come next.

*Like simulators, large language models are an iterative process that generates each step based on the current state*
Thinking of LLMs as simulators helps us understand some of their characteristics. For example, when an LLM gives incorrect answers, thinking of it as an oracle might make us think it doesn’t ‘know’ something. However, LLMs can give incorrect information when prompted one way, and then give different and correct information when prompted another way. This is because the LLM is not trying to determine the correct answer. Instead, it just generates ‘what comes next’ based on the patterns it has learned, and can be thought of as simulating a human who might be mistaken or lying.
LLMs are often thought of as agents which value predicting the next word more accurately, but there are some important ways in which LLMs don’t behave in line with this perspective. For example, they don’t seem to take actions to improve their prediction accuracy beyond the next word, such as by making the text more predictable.
The frameworks of agents, oracles, and tools have formed the basis of past discussion, but the most powerful models today do not fully fit into these categories. In particular, a lot of the way we reason about AI as an existential risk involves thinking about [agents](/?state=5632&question=What%20is%20an%20agent%3F), but these ways of thinking will be less relevant if the most powerful models continue to be simulators.
|
61c4da6a-e4a1-4b25-934e-847797c3b116 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Second call: CFP for Rebellion and Disobedience in AI workshop
Hi everyone!
I'm co-organizing a workshop on a really interesting topic that's very relevant for AI safety. We call it "Rebellion and Disobedience in AI". If you're doing work that could be relevant for us, please submit it! If you have questions or want to discuss the scope of this workshop, feel free to ask on this thread and I'll try to answer.
Dear colleagues,
This is a second call for papers for the workshop on Rebellion and Disobedience in AI (RaD-AI) which will take place on May 30, 2023, as part of the AAMAS workshop program.
This call contains an extended submission deadline (February 13) and a list of confirmed speakers.
More details can be found on the workshop’s website: <https://sites.google.com/view/rad-ai/home>
RaD-AI agents are artificial agents (virtual or robots) that reason intelligently about why, when, and how to rebel and disobey their given commands. The need for agents to disobey contrasts with most existing research on collaborative robots and agents, where the definition of a “good” agent is one that complies with the commands it is given, and that works in a predictable manner under the consent of the human it serves. However, as exemplified in Issac Asimov’s Second Law of Robotics, this compliance is not always desired, such as when it might interfere with a human’s safety. While there has not been much prior research on RaD-AI, we identify main related topics, each of which is studied by a thriving subcommunity of AI, namely: Intelligent Social Agents, Human-Agent/Robot Interaction, and Societal Impacts. In each of these areas, there are research questions relevant to RaD-AI.
**Confirmed Speakers:**
Joel Leibo, DeepMind
Matthias Scheutz, Department of Computer Science, Tufts University
Liz Sonenberg, School of Computing and Information Systems, The University of Melbourne
**We are specifically interested in submissions on the following topics:**
Intelligent Social Agents (including but not limited to: Goal Reasoning, Plan Recognition, Value Alignment, and Social Dilemmas)
Human-Agent/Robot Interaction (including but not limited to: Human-agent Trust, Interruptions, Deception,
Command Rejection, and Explainability)
Societal Impacts (including but not limited to: Legal and Ethical Reasoning, Liability, AI safety, and AI governance)
**Submission details:**
The submission deadline is ~~**January 20, 2023**~~. **February 13, 2023**
Notifications will be sent on March 13, 2023.
The submission website is: <https://easychair.org/conferences/?conf=radai23>
Accepted submission types:
Regular Research Papers (6 to 8 pages)
Short Research Papers (up to 4 pages)
Position Papers (up to 2 pages)
Tool Talks (up to 2 pages)
**Organizing Committee:**
David Aha, Navy Center for Applied Research in AI; Naval Research Laboratory; Washington, DC; USA
Gordon Briggs, Navy Center for Applied Research in AI; Naval Research Laboratory; Washington, DC; USA
Reuth Mirsky, Department of Computer Science; Bar-Ilan University; Israel ([mirskyr@cs.biu.ac.il](mailto:mirskyr@cs.biu.ac.il))
Ram Rachum, Department of Computer Science; Bar-Ilan University; Israel
Kantwon L. Rogers, Department of Computer Science; Georgia Tech; USA
Peter Stone, The University of Texas at Austin; USA and Sony AI |
2bb7c4dc-14d7-448e-a66b-f126a6fef8ea | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Introduction: Bias in Evaluating AGI X-Risks
The rationality community has a tradition of checking for biases, particularly when it comes to evaluating the non-intuitive risks of general AI.
We thought you might like this list, adapted from a 2015 essay by Forrest Landry[[1]](#fn70zo9u1ub3). Many names of biases listed may already be familiar for you. If you "boggle" more at the text, you will find curious new connections to evaluating upcoming risks of AI developments.
### **About Forrest Landry**
Forrest is a [polymath](https://mflb.com/ai_alignment_1/math_expectations_psr.html#p1) working on civilisation design and mitigating risks of auto-scaling/catalysing technology (eg. [Dark Fire scenario](https://www.semanticscholar.org/paper/Putting-out-the-dark-fire-%3A-constraining-physics-Sandberg-Landry/b63631833f88b36b1095cdb9b7da18b454d0b086 )). About 15 years ago, he started researching how to build in deep existential alignment into the internals of [AGI](https://mflb.com/ai_alignment_1/si_safety_qanda_out.html#p3), applying his deep understanding of programming, embodied ethics, and meta-physics. Then, Forrest discovered the substrate-needs convergence argument (as distinct from yet much enabled by instrumental convergence). Unfortunately, because of substrate-needs convergence, any approach to aligning any AI at the embedded level turned out unsound in practice (and moreover, inconsistent with physical theory). To inquire further, [see this project](https://docs.google.com/document/d/1Zvt-tUksgtsuFs0LG8k6AHtDBmzHTOk-CfSz6nUYJrM/edit#heading=h.tkn64suizzxk).
---
Introduction
============
*Note on unusual formatting:*Sentences are split into lines so you can parse parts precisely.
Ideally,
in any individual or group decision making,
there would be some means, processes,
and procedures in place to ensure that
the kinds of distortions and inaccuracies
introduced by individual and collective
psychological and social bias
do not lead to incorrect results,
and thus poor (risk-prone) choices,
with potentially catastrophic outcomes.
While many types of bias
are known to science
and have been observed
to be common to all people
and all social groups, the world over
in all working contexts, regardless
of their background, training, etc,
they are also largely unconscious,
being 'built-in' by long-term
evolutionary processes.
These unconscious cognitive biases,
while they are adaptive for the purposes
of our being able to survive in
non-technological environments,
are also **not** able to serve us equally well
when attempting to survive our
current technological contexts.
The changes in our
commonly experienced world
continue to occur far too fast
for our existing evolutionary
and cognitive adaptations
to adjust naturally.
We will therefore need to
add the necessary corrections to
our thinking and choice-making process,
our own evolution, 'manually'.
The hope is that
these 'adjustments' might
make it possible to mitigate
the distortions and inaccuracies
introduced by the human condition
to the maximum extent possible.
Bear in mind
that each type of bias
does not just affect individuals –
they also arise due to specific
interpersonal and trans-personal effects
seen only in larger groups.[[2]](#fnkpw3f85iha)
These bias aspects affect
all of us, and in all sorts of ways,
many of which are complex.
It is important for everyone involved
in critical decisions and projects
to be aware of these general
and mutual concerns.
**We all run on corrupted hardware.**
**Our minds are composed of many modules,**
**and the modules that evolved to make**
**us seem impressive and gather allies**
**are also evolved to subvert the ones**
**holding our conscious beliefs.**
**Even when we believe that**
**we are working on something**
**that may ultimately determine**
**the fate of humanity, our signaling**
**modules may hijack our goals so as**
**to optimize for persuading outsiders**
**that we are working on the goal,**
**instead of optimizing for**
**achieving the goal.**
What is intended herein
is to make some of these
unconscious processes conscious,
to provide a basis, and
to identify the need,
for clear conversation
about these topics.
Hopefully, as a result
of these conversations,
and with the possibility
of a reasonable consensus reached,
we will be able to identify (or create)
a good general practice of decision making,
which when implemented both
individually and collectively
(though perhaps not easily),
can materially improve
our mutual situation.
The need for these practices
of accuracy, precision, and correctness
in decision making are especially
acute in proportion to the degree
that we all find ourselves faced
with a seemingly ever increasing
number of situations for
which our evolution has
not yet prepared us.
Where the true goal
is making rational, realistic,
and reasonably good choices
about matters that may
potentially involve many people,
larger groups and tribes, etc,
many specific and strong
cognitive and social biases
will need to be compensated for.
**Particularly in regards**
**to** [**category 1 and 2**](https://mflb.com/lsag_1/risk_categories_psr.html#p1) **extinction risks**
**nothing less than complete and full**
**compensation for all bias,**
**and the complete application**
**of correct reason**
**can be allowed for.**
This sequence will not attempt
to outline or validate any of the
specific risk possibilities and outcomes
for which there is significant concern
(this is done elsewhere).
Nor can it attempt to outline or define
which or what means, processes, or procedures
should be used for effective individual
or group decision making.
As the 'general problem of governance',
the main issue remains one of the identification,
development, and testing/refining of
such means and methods by which
all bias can be compensated for,
and a basis for clear reason
thereby created.
Hopefully this will
lead to real techniques of
group decision making –
and high quality decisions –
that can be realistically defined,
outlined, and implemented.
**A Partial List Of Affecting Bias...**
The next posts cover a list
of some of the known types of bias
that have a significant and real potential to
harmfully affect the accuracy and correctness
of extinction risk assessments.
Each bias will be given its
common/accepted consensus name,
along with relevant links to Wikipedia
articles with more details.[[3]](#fn59cmwyrvfbh)
Each bias will be briefly described
with particular regard to its potential impact
on risk assessment in an existential context.[[4]](#fn6uhc6ndfd3g)
1. **[^](#fnref70zo9u1ub3)** Some of the remarks and observations herein
have been derived from content posted
to the website LessWrong.com –
no claim of content originality by
this author is implied or intended.
Content has been duplicated
and edited/expanded here for
informational and research purposes only.
2. **[^](#fnrefkpw3f85iha)** Nothing herein is intended
to implicate or impugn any
specific individual, group, or institution.
The author has not specifically encountered
these sorts of issues in regards to
just one person or person or project.
Most people are actually well-intentioned.
Unfortunately,
'good intentions' is not equivalent to
(nor necessarily yielding of)
'good results', particularly where
the possibility of extinction risks
is concerned.
3. **[^](#fnref59cmwyrvfbh)** All of the descriptive notations regarding
the specific characteristics of each bias
have been derived from Wikipedia.
4. **[^](#fnref6uhc6ndfd3g)** These descriptions, explanations, and discussions
are not intended to be comprehensive
or authoritative – they are merely
indicative for the purposes of stimulating
relevant/appropriate conversation. |
7cc0a63d-805a-4871-8eda-54fe089c988e | trentmkelly/LessWrong-43k | LessWrong | To like each other, sing and dance in synchrony
For the How to Run a Successful Less Wrong Meetup booklet, I'm looking for information about how to better build a social group and foster a feeling of community. Since this bit is probably of general interest, I'm posting it here.
If you want to make the members of the group like each other more and feel more like a group, synchronized actions may be one of the easiest ways of achieving this goal. Anthropologists have long known the community-building effect of dancing:
> As the dancer loses himself in the dance, as he becomes absorbed in the unified community, he reaches a state of elation in which he feels himself filled with an energy of force immensely beyond his ordinary state . . . finding himself in complete and ecstatic harmony with all the fellow-members of his community, experiences a great increase in his feelings of amity and attachment towards them. (Radcliffe-Brown 1933/1948, quoted in Kesebir 2011)
Armies around the world utilize the same effect to foster a feeling of unison through repeated drills:
> Words are inadequate to describe the emotion aroused by the prolonged movement in unison that drilling involved. A sense of pervasive well-being is what I recall; more specifically, a strange sense of personal enlargement; a sort of swelling out, becoming bigger than life, thanks to participation in collective ritual. (McNeill 1995, quoted in Kesebir 2011)
Wiltermuth & Heath (2009) summarize some of the research on the topic:
> The idea that synchronous movement improves group cohesion has old roots. As historian William H. McNeill suggests, armies, churches, and communities may have all benefited, intentionally or unintentionally, from cultural practices that draw on ‘‘muscular bonding,’’ or physical synchrony, to solidify ties between members (McNeill, 1995). This physical synchrony, which occurs when people move in time with one another, has been argued to produce positive emotions that weaken the boundaries between the self and the group (Ehre |
3e48bd37-1b5b-4802-802d-9d35c3d6c72b | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | How might we get from artificial general intelligence to a superintelligent system?
Once a system is at least as capable as the most capable humans at AI research, it may become the driver of its own development and initiate a process of recursive self-improvement known as the [intelligence explosion](/?state=6306&question=What%20is%20an%20intelligence%20explosion%3F), leading to an extremely powerful system. A general framing of this process is Holden Karnofsky’s concept of a [Process for Automating Scientific and Technological Advancement](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#:~:text=Process%20for%20Automating%20Scientific%20and%20Technological%20Advancement%2C%20or%20PASTA) (PASTA).
There is [much](https://sideways-view.com/2018/02/24/takeoff-speeds/) [debate](https://astralcodexten.substack.com/p/yudkowsky-contra-christiano-on-ai) about whether there would be a substantial period when the AI would be partially driving its own development, with humans becoming gradually less important, or whether the transition to fully AI-automated AI capability research would be sudden. However, the core idea that there is *some* threshold of capabilities beyond which a system would begin to rapidly ascend is hard to dispute, and is a significant consideration for developing alignment strategies.
|
48ed4efa-a161-455f-8a0e-b873ae1e63ef | trentmkelly/LessWrong-43k | LessWrong | AI Alignment is Alchemy.
AI alignment today equates to the wishful thinking of early alchemists who wanted to turn any element into gold. Alchemy is possible today, albeit being very expensive and resource consuming, But it's possible only because we now know the atomic structure in detail and have a well established extensive periodic table which the alchemists lacked back in their day.
Even then, it's not feasible, we're better off mining up our gold reserves. It's much more cost-effective.
Similarly, we might reach a point in time when AI alignment will be possible but just not feasible and also completely irrational. Allow me to indulge you in the following thought experiment to explain myself.
When an AGI is orders of magnitude more intelligent than a human, the same way a human is more intelligent than an ant. Will we dedicate our life's sole purpose to building ant colonies with precisely engineered tunnels and use nutritional nano injections to keep its population thriving and ravishing?
Imagine all of humanity focusing all their efforts to building ant colonies and feeding them. How irrational does that sound? Won't an AI eventually realize that? What would it do once it realizes the stupid mission that it had been on all along?
If we go down the road of forcing the AI to feed the ants, we'd have effectively created a delusional AI system that's not unlike the paperclip maximizer.
We'd never get anywhere by keeping it limited and focused on a restricted path and not allowing it to adopt more optimal strategies.
However, there's one way we can stay relevant to avoid the existential risks of AI. We need to augment. Research focus needs to shift solely to Brain-Computer Interfaces. We can start with enhanced memory retention and enhance each module of our brains one by one.
Unless we keep up with AI by augmenting ourselves, humanity will perish. No matter what. |
e2a30c60-ae17-4f9f-8cd5-fa43cc22b606 | trentmkelly/LessWrong-43k | LessWrong | Monthly Roundup #31: June 2025
It’s always a nice break to see what else is going on out there.
BAD NEWS
Study finds sleep in male full-time workers falls as income rises, with one cause being other leisure activities substituting for sleep. It makes sense that sleep doesn’t cost money while other things often do, but the marginal cost of much leisure is very low. I don’t buy this as the cause. Perhaps reverse causation, those who need or prefer less sleep earn more money?
The productivity statistics continue to be awful, contra Alex Tabarrok part of this recent -3.88% Q1 print is presumably imports anticipating tariffs driving down measured GDP and thus productivity. The more I wonder what’s wrong with the productivity statistics the more I think they’re just a terrible measure of productivity?
A model of America’s electoral system that primary voters don’t know much about candidate positions, they know even less than general election voters about this, so they mostly depend on endorsements, which are often acquired by adopting crazy positions on the relevant questions for each endorsement, resulting in extreme candidates that the primary voters wouldn’t even want if they understood.
It’s not really news, is it?
> Paul Graham: A conversation that’s happened 100 times.
>
> Me: What do I have to wear to this thing?
>
> Jessica: You can wear anything you want.
>
> Me: Can I wear <clothes I usually wear>?
>
> Jessica: Come on, you can’t wear that.
>
> Jessica Livingston (for real): I wish people (who don’t know us) could appreciate the low bar that I have when it comes to your attire. (E.g. you wore shorts to a wedding once.)
RULES OF CONFIDENTIALITY
Alex Thompson says “If you don’t tell the truth, off the record no longer applies,” proceeding to share an off-the-record unequivocal denial of a fact that was later confirmed.
I think anything short of 100% (minus epsilon) confidence that someone indeed intentionally flat out lied to your face in order to fool you in a way that actively |
1d41ee68-adc7-41ff-beb9-85e335fe33a2 | StampyAI/alignment-research-dataset/special_docs | Other | Danijar Hafner - Gaming our way to AGI-by Towards Data Science-video_id Bgz9eMcE5Do-date 20220112
# Danijar Hafner on Gaming Our Way to AGI. Danijar Hafner on procedural game generation for reinforcement learning agents by Jeremie Harris on the Towards Data Science Podcast
Until recently, AI systems have been narrow — they’ve only been able to perform the specific tasks that they were explicitly trained for. And while narrow systems are clearly useful, the holy grain of AI is to build more flexible, general systems.
But that can’t be done without good performance metrics that we can optimize for — or that we can at least use to measure generalization ability. Somehow, we need to figure out what number needs to go up in order to bring us closer to generally-capable agents. That’s the question we’ll be exploring on this episode of the podcast, with Danijar Hafner. Danijar is a PhD student in artificial intelligence at the [University of Toronto](http://learning.cs.toronto.edu/) with [Jimmy Ba](https://scholar.google.com/citations?user=ymzxRhAAAAAJ&hl=en&oi=ao) and [Geoffrey Hinton](https://scholar.google.com/citations?user=JicYPdAAAAAJ&hl=en&oi=ao) and researcher at [Google Brain](https://research.google/teams/brain/) and the [Vector Institute](https://vectorinstitute.ai/).
Danijar has been studying the problem of performance measurement and benchmarking for RL agents with generalization abilities. As part of that work, he recently released Crafter, a tool that can procedurally generate complex environments that are a lot like Minecraft, featuring resources that need to be collected, tools that can be developed, and enemies who need to be avoided or defeated. In order to succeed in a Crafter environment, agents need to robustly plan, explore and test different strategies, which allow them to unlock certain in-game achievements.
Crafter is part of a growing set of strategies that researchers are exploring to figure out how we can benchmark and measure the performance of general-purpose AIs, and it also tells us something interesting about the state of AI: increasingly, our ability to define tasks that require the right kind of generalization abilities is becoming just as important as innovating on AI model architectures. Danijar joined me to talk about Crafter, reinforcement learning, and the big challenges facing AI researchers as they work towards general intelligence on this episode of the TDS podcast.
Here were some of my favourite take-homes from the conversation:
- The Crafter environment includes a wide range of achievable goals, each of which involves performing a specific in-game task for the first time. Some of these tasks are simple, and can be achieved fairly consistently by current state-of-the-art RL agents (for example, finding sources of food and harvesting them). But some are more challenging, because they involve dependencies on other tasks: a “collect iron” task can only be achieved once a “make stone pickaxe” task has already been completed, for example. The result is a fairly deep tech tree that can only be completed by agents that have learned to plan.
- Because different tasks require different abilities, Crafter environments can give developers a way to profile their RL agents. By measuring an agent’s average performance across the full distribution of achievable tasks, they obtain a fingerprint of its capabilities that indicates the extent to which it’s picked up skills like planning and exploration, which are closely linked to generalization.
- One of the challenging aspects of developing benchmarks for machine learning is to ensure that they’re tuned to the right level of difficulty. Good benchmarks are challenging to master (so that they can motivate and direct progress in the field), yet tractable (so that developers have enough signal to iterate and improve their models).
- One interesting debate about the future of AI has to do with the extent to which further progress will come from dramatic improvements to algorithms, or from the increased availability of compute resources. Increasingly, we’ve seen cutting-edge results in reinforcement learning come from model-based systems that make heavier use of compute than their model-free counterparts, and progress in RL has closely tracked increases in compute budgets (e.g. MuZero and EfficientZero). To some, that’s an indication that current AI techniques might be sufficient to reach human-level performance, with relatively minor tweaks, if they’re only scaled up with more compute. While Danijar sees some merit to this argument, he does think that there remain fundamental advances in algorithm design that we’ll have to make before reaching human-level AI that go beyond leveraging raw compute horsepower. |
c54f486d-828e-4fbf-a4de-7baa7d17dc3e | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Test Near, Apply Far
Today's post, Test Near, Apply Far was originally published on December 3, 2008. A summary:
> Avoid inventing new abstractions that deal exclusively with things that are far. Ensure that the abstractions you endorse pay their rent here, or at least nearby.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Hard Takeoff, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
4899e5b1-ed04-4af4-8fdc-c9159b863c87 | trentmkelly/LessWrong-43k | LessWrong | A shot at the diamond-alignment problem
I think that relatively simple alignment techniques can go a long way. In particular, I want to tell a plausible-to-me story about how simple techniques can align a proto-AGI so that it makes lots of diamonds.
But why is it interesting to get an AI which makes lots of diamonds? Because we avoid the complexity of human value and thinking about what kind of future we want, while still testing our ability to align an AI. Since diamond-production is our goal for training, it’s actually okay (in this story) if the AI kills everyone. The real goal is to ensure the AI ends up acquiring and producing lots of diamonds, instead of just optimizing some weird proxies that didn’t have anything to do with diamonds. It’s also OK if the AI doesn’t maximize diamonds, and instead just makes a whole lot of diamonds.[1]
Someone recently commented that I seem much more specifically critical of outer and inner alignment, than I am specifically considering alternatives. So, I had fun writing up a very specific training story for how I think we can just solve diamond-alignment using extremely boring, non-exotic, simple techniques, like “basic reward signals & reward-data augmentation.” Yes, that’s right. As I’ve hinted previously, I think many arguments against this working are wrong, but I’m going to lay out a positive story in this post. I’ll reserve my arguments against certain ideas for future posts.
Can we tell a plausible story in which we train an AI, it cares about diamonds when it’s stupid, it gets smart, and still cares about diamonds? I think I can tell that story, albeit with real uncertainties which feel more like normal problems (like “ensure a certain abstraction is learned early in training”) than impossible-flavored alignment problems (like “find/train an evaluation procedure which isn’t exploitable by the superintelligence you train”).
Before the story begins:
1. Obviously I’m making up a lot of details, many of which will turn out to be wrong, even if the broad st |
4c435030-60ff-4662-9800-fa9d0ae10be4 | trentmkelly/LessWrong-43k | LessWrong | Language Models can be Utility-Maximising Agents
1 minute takeaways
* It's actually pretty easy to train and run a language model to function as an agent for a specific task, rather than as a non-agentic simulator
* The resulting agents seem pretty powerful at least in limited domains and therefore might turn out to be quite useful. They also have some possibly concerning properties
* More specifically, they're evidential decision theorists (EDTs), which are known for one-boxing and cooperating with copies of themselves.
* Incidentally, they're also known for struggling with causality, which is related to why LLMs hallucinate.
* It's also possible to make causal decision theorists (CDTs), which are maybe not so bad but still not ideal
* Alignment-wise, this means you outer align them by giving them a utility function. Inner alignment is still a nightmare with the added challenge of making sure the model is correctly inferring the utility function.
How to make an EDT agent out of a language model:
You can find a thorough explanation in this Neurips paper from last year, which laid out how to create such agents and showed that they were state of the art.
The gist is, you take the standard reinforcement learning loop of 'action/state/reward' and train a transformer to simulate it. So the transformer is outputting tokens corresponding to the actions of an agent, the state of the world, and the agent's reward, and the resultant string shows how these progress over time. It is effectively simulating an RL agent. Crucially, the 'reward' is both current reward and total expected future reward.
You run the transformer as follows: instead of predicting how the agent will act, you have it iterate through every possible action the agent might take and simulate the total expected future reward. Then, you take whichever action has the highest expected future reward, and have the agent 'take that action', adding it to the prompt. It is effectively choosing the action which provides the best evidence of it maximisi |
58ebd40d-c85a-4c19-9add-9c980709922c | StampyAI/alignment-research-dataset/lesswrong | LessWrong | A quick remark on so-called “hallucinations” in LLMs and humans
*Cross-posted* [*from New Savanna*](https://new-savanna.blogspot.com/2023/09/a-quick-remark-on-so-called.html)*.*
That LLMs “hallucinate” is well-known, though I prefer the term “confabulate.” They just make stuff up. They don’t do it intentionally, or with intent to deceive. They do it because they have no connection to “ground truth.” They don’t even know what such a thing is, not really, though I’m sure, if asked, ChatGPT would say something reasonable about the idea.
What LLMs have, loosely speaking, is an ontology. They ‘know’ what kinds of things exist. As [Immanuel Kant pointed out](https://new-savanna.blogspot.com/2023/01/theres-truth-lies-and-theres-chatgpt.html) in connection with the ontological argument for the existence of god, existence is not a predicate. When an LLM responds to a prompt, its response is consistent with the ontology it has internalized. If phenomena in its response happen to be true of the world, that’s incidental, though convenient for users.
What’s implicit in this conversation is an assumption that humans do not confabulate, that we can and so talk about reality. I think that’s true, sorta’, but misleading. Yes, we often anchor our statements in reality, as best we can do in the situation. And, yes, we can also deliberately side-step such anchoring. We do this when telling fictional stories without intention to deceive; in this situation we assert that the story is about imagined event. And we can also make things up with deceptive intent.
Nonetheless, I believe that our “natural” linguistic mode is just making things up, confabulation it you will. But we are surrounded by others and have to interact with them. One way to communicate effectively is to anchor our language in external events, in things others can readily observe, in reality, as we like to say. Such anchoring is not necessary for speech, but it is useful when conversing with others.
In this connection I think it’s worth noting that, for a number of years, neuroscientists have been investigating the brain structures that are most active when we’re going nothing in particular, when we’re letting our mind wander. In that state we may attended to some external event one moment, scratch our back the next, think about that summer’s day three years ago, and so forth. We day dream. What do you think they call that conglomeration of neural structures? They call it the [*default mode network (DFM)*](https://new-savanna.blogspot.com/search/label/default%20mode). If you give a verbal report of what’s passing by during a day dream, a lot of that is going to be confabulation. It’s just the way we are.
**Homework assignment 1:** Explicate Descartes’ worries about being deceived by a malignant being in these terms.
**Homework assignment 2:** Now consider what happens during [sensory deprivation](https://new-savanna.blogspot.com/2021/06/some-thoughts-on-why-systems-like-gpt-3.html). |
9ff62ba7-09e5-4d9f-9795-1084f23097c7 | trentmkelly/LessWrong-43k | LessWrong | Confused Attractiveness
Crossposted from Optimized Dating.
I.
Imagine that you are a human xeno-sociologist studying the society of insectoid aliens on another planet. To you, they all look alike the same way two ants look alike. Yet when talking about each other, they often use words that translate closely to “beautiful”, “attractive”, “plain” and “ugly”, indicating that they see an aesthetic quality in each other that you don’t see. How do you learn which specific insectoid individuals are attractive and which ones are not?
The Ant and the Grasshopper, J. J. Grandville, 1838 (!!!)
First thing you could try is to take photos of many aliens – let’s say a hundred of them. Then you make a survey which asks a simple binary question under each photo: “Is this individual attractive? Y/N”. You distribute the survey to a large group of aliens, collect the responses and arrange the photos from the one with least Y’s to the one with the most. You designate the first 10 aliens as “10th percentile rank”, next 10 as “20th”, and so on until the most attractive alien in your sample, who gets “99th”. Now you can compare the ranks of any two aliens in your study and see which one is more attractive than the other.
You proceed to gather data on attractiveness in more elaborate ways, not just with surveys but looking for revealed preference as well - for example, measuring exoskeleton conductance response when presented with different alien faces, or measuring tips that the alien waiters get. Crucially though, two facts hold constant throughout your studies:
1. Attractiveness is determined by conscious or unconscious reactions of other individuals (binary choice/conductance/tipping).
2. Attractiveness of an individual can be expressed as a percentile rank among all the other individuals in a study.
The data looks nice, but what if you need to check the attractiveness of an alien that’s not in your ranked sample? It’s expensive and cumbersome to run a full study with a large population each tim |
a914f1f6-c6cd-42d4-bd7b-9d35ec6dc54a | StampyAI/alignment-research-dataset/arbital | Arbital | Friendly AI
"Friendly AI" or "FAI" is an old term invented by [Yudkowsky](https://arbital.com/p/2) to mean an [advanced AI](https://arbital.com/p/2c) successfully [aligned](https://arbital.com/p/5s) with some [idealized](https://arbital.com/p/) version of humane values, such as e.g. [extrapolated volition](https://arbital.com/p/). In current use it has mild connotations of significant [self-sovereignty](https://arbital.com/p/1g3) and/or being able to [identify](https://arbital.com/p/6c) desirable strategic-level consequences for itself, since this is the scenario Yudkowsky originally envisioned. An "UnFriendly AI" or "UFAI" means one that's specifically *not* targeting humane objectives, e.g. a [paperclip maximizer](https://arbital.com/p/10h). (Note this does mean there are some things that are neither UFAIs or FAIs, like a [Genie](https://arbital.com/p/6w) that only considers short-term objectives, or for that matter, a rock.) |
19dcb101-6bce-4311-bfb6-622281471cbd | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "tl;dr: If you notice yourself fighting over how to tradeoff between two principles, check if you can just sidestep the problem by giving everyone tons of whatever is important to them (sometimes in a different form than they originally wanted).Not a new concept, but easy to forget in the heat of the moment. It may be useful for people to have "easily in reach" in their toolkit for coordinating on culture. The Parable of the RoommatesI once had a disagreement with a housemate about where to store a water-heater on the kitchen counter. The object was useful to me. It wasn't useful to them, and they preferred free-countertop space. The water-heater wasn't useful to them in part because other roommates didn't remember to refill it with water. There was much arguing about the best use of the counter, and frustration with people who didn't refill water heaters.At some point, we realized that the underlying issue was there wasn't enough free counterspace. Moreover, the counter had a bunch of crap on it that no one was using. We got rid of unused stuff, and then we had a gloriously vacant kitchen-counter. (Meanwhile, an option we've considered for the water-heater is to replace it with a device directly connected to the sink that always maintains boiling water, that nobody ever has to remember to refill)Thus, an important life-lesson: Instead of solving gnarly disagreements with politics, check if you can dissolve them with abundance. This is a quite valuable lesson. But I'm mostly here to talk about a particular less-obvious application:Memetic abundance. Philosophical DisagreementsOftentimes, I find myself disagreeing with others about how to run an event, or what norms to apply to a community, or what the spirit of a particular organization should be. It feels like a lot's at stake, like we're caught between a Rock and Hard Place. The other person feels like they're Destroying the Thing I care about, and I look that way to them.Sometimes, this is because of actual irreconcilable differences. Sometimes, this is because we don't understand each other's positions, and once we successfully explain things to each other, we both go "Ah, obviously you need both A and B."But sometimes, A and B are both important, but we disagree on their relative importance due to deep frame differences that are hard to immediately resolve. Or, A seems worrisome because it harms B. But if you had enough B, A would be fine. Meanwhile, resources seem precious: It's so hard to get people to agree to do anything at all; stag hunting requires a bunch of coordination; there's only so much time and mindshare to go around; there are only so many events to go to; only so much capacity to found organizations. With all of that......it's easy to operate in scarcity mindset. When resources are scarce, every scrap of resource is precious and must be defended. This applies to physical scarcity (lack of food, safety, sleep) as well as memetic scarcity (where two ideas seem to be in conflict, and you're worried that one cause is distracting people from another).But, sometimes it is actually possible to just eliminate scarcity, rather than fight over the scraps. Raise more money. Implement both policies. Found multiple organizations and get some healthy competition going on. Get people to take two different concepts seriously at the same time. The best way to get what you want you want might not be to deny others what they want, but to give them so much of it that they're no longer worried about the Rock (and thus, don't feel the need to fight you over your attempts to spend resources avoiding The Hard Place)Not always. But sometimes. Trust and Costly SignalsThis may involve a lot of effort. Coordinating around it also requires trust, which may require costly signals of commitment. If you and I are arguing over whether to fund ProjectA or CharityB, and we only have enough money to fund one... and I say to you "Let's fund ProjectA, and then we'll raise more money to also fund CharityB", you're right to be suspicious. I may never get around helping you fundraise for CharityB, or that I'll only put in a token effort and CharityB will go bankrupt.It's basically correct of you to not trust me, until I've given you a credible signal that I'm seriously going to help with CharityB.It's a lot of hard work to found multiple organizations, or get a community to coordinate on multiple norms. There's a reason scarcity-mindset is common. Scarcity is real. But... in finance as well as memetics... Scarcity-mindset sucks.It's cognitively taxing to be poor – having to check, with each transaction, "can I afford this?" – and that's part of what causes poverty-traps in the first place. The way out often involves longterm investments that take awhile to bear fruit, sometimes don't succeed, and are hard work in the meantime. Transferring the metaphor: the act of constantly having to argue over whether Norm A and Norm B are more urgent may add up to a lot of time and effort. And as long as there are people who think Norm A and Norm B are important-and-at-odds, the cost will be paid continuously. So, if you can figure out a way to address the underlying needs that Norm A and B are respectively getting at, and actually fully solve the problems, it may be worthwhile even if it's more initial effort. Epistemic Status: UntestedDoes this work? Depends on the specifics of Norm A and Norm B, or whatever you're arguing over. I'm writing this post, in part, because to actually test if this works, I think it helps to have people on the same page about the overall strategy. I've seen it work at least sometimes in collaborative art projects, where I had one creative vision and my partners or parts of the audience had another creative vision or desire, and we succeeded, not by compromising, but by doubling down on the important bits of both visions, simultaneously.My hope is that the principle does work, and that if one successfully did this multiple times, and build social-systems that reliably eliminate scarcity in this way......then eventually, maybe, you can have a system people actually have faith in, where they feel comfortable shifting their efforts from "argue about the correct next step" to "work on longterm solutions that thoroughly satisfy the goals"."
The passage is from a user-contributed post on LessWrong.
Discuss the implications of the "scarcity mindset" in different social systems. Can this concept be universally applied, or are there certain systems where it might not be effective?
The "scarcity mindset" refers to the belief that there are limited resources available, leading to competition and conflict. In social systems, this mindset can manifest in various ways, such as power struggles, wealth disparity, and the prioritization of certain norms or ideas over others. The implications of this mindset can be profound, often leading to a lack of cooperation, empathy, and collective progress.
The application of the concept of "scarcity mindset" can indeed be universally applied to understand the dynamics of any social system. However, the effectiveness of strategies to overcome it might vary. In certain systems, creating abundance might be a feasible solution. For instance, in a community where there is a disagreement over the allocation of resources, increasing the availability of those resources can dissolve the conflict.
However, in other systems, creating abundance might not be as effective or even possible. For example, in a political system where power is the contested resource, it is not possible to create an abundance of power, as it is inherently limited. Similarly, in systems where deeply ingrained cultural norms or beliefs are in conflict, creating an abundance of acceptance or understanding might be a complex and long-term process.
The "scarcity mindset" can also be influenced by external factors such as economic conditions, political stability, and cultural attitudes, which can further complicate efforts to overcome it. Therefore, while the concept can be universally applied to understand social systems, the strategies to address it need to be context-specific and consider the unique characteristics and constraints of each system.
Moreover, the idea of shifting from a scarcity mindset to an abundance mindset requires a significant change in perception and behavior. It requires individuals and communities to move from competition to collaboration, from hoarding to sharing, and from fear to trust. This transformation can be challenging and may require time, effort, and a conducive environment.
In conclusion, while the concept of the "scarcity mindset" can be universally applied to understand social systems, overcoming it requires context-specific strategies, a conducive environment, and a significant shift in perception and behavior.
Discuss the specific strategies that can be employed to shift from a scarcity mindset to an abundance mindset in a political system where power is the contested resource.
Shifting from a scarcity mindset to an abundance mindset in a political system where power is the contested resource can be challenging, as power is inherently limited. However, there are strategies that can be employed to facilitate this shift:
1. Power Sharing: Rather than one group or individual holding all the power, it can be distributed among various stakeholders. This could be in the form of decentralization, where local governments have more autonomy, or power-sharing agreements in post-conflict societies.
2. Transparency and Accountability: Ensuring transparency in decision-making processes can help to dispel the notion that power is a scarce resource that only a few can possess. Accountability mechanisms can ensure that power is not abused, further fostering trust among different stakeholders.
3. Inclusive Representation: Ensuring that all groups, especially marginalized ones, are adequately represented in power structures can help to distribute power more evenly. This can be achieved through mechanisms like quotas or affirmative action.
4. Building Institutions: Strong, independent institutions can help to balance power and prevent its concentration. This includes a free press, an independent judiciary, and robust civil society organizations.
5. Promoting Dialogue and Consensus: Encouraging dialogue and consensus-building can help to diffuse power struggles. This could involve regular consultations, public forums, or other platforms where different groups can express their views and find common ground.
6. Education and Civic Engagement: Educating citizens about their rights and responsibilities, and encouraging their active participation in political processes, can help to diffuse power and foster an abundance mindset.
7. Cultivating a Culture of Cooperation: Promoting a culture of cooperation rather than competition can help to shift the mindset from scarcity to abundance. This could involve emphasizing shared goals and values, and encouraging collaboration rather than rivalry.
These strategies are not easy to implement and require time, commitment, and political will. However, they can help to create a political system where power is seen less as a scarce resource to be hoarded, and more as a tool to be shared for the common good. |
80424119-78d2-4390-95c1-a1a92e6c2603 | trentmkelly/LessWrong-43k | LessWrong | On Lex Fridman’s Second Podcast with Altman
Last week Sam Altman spent two hours with Lex Fridman (transcript). Given how important it is to understand where Altman’s head is at and learn what he knows, this seemed like another clear case where extensive notes were in order.
Lex Fridman overperformed, asking harder questions than I expected and going deeper than I expected, and succeeded in getting Altman to give a lot of what I believe were genuine answers. The task is ‘get the best interviews you can while still getting interviews’ and this could be close to the production possibilities frontier given Lex’s skill set.
There was not one big thing that stands out given what we already have heard from Altman before. It was more the sum of little things, the opportunity to get a sense of Altman and where his head is at, or at least where he is presenting it as being. To watch him struggle to be as genuine as possible given the circumstances.
One thing that did stand out to me was his characterization of ‘theatrical risk’ as a tactic to dismiss potential loss of human control. I do think that we are underinvesting in preventing loss-of-control scenarios around competitive dynamics that lack bad actors and are far less theatrical than those typically focused on, but the overall characterization here seems like a strategically hostile approach. I am sad about that, whereas I was mostly happy with the rest of the interview.
I will follow my usual format for podcasts of a numbered list, each with a timestamp.
1. (01:13) They open with the Battle of the Board. Altman starts with how he felt rather than any details, and drops this nugget: “And there were definitely times I thought it was going to be one of the worst things to ever happen for AI safety.” If he truly believed that, why did he not go down a different road? If Altman had come out strongly for a transition to Mutari and searching for a new outside CEO, that presumably would have been fine for AI safety. So this then is a confession that he was willin |
ba4d96e6-d138-4940-8e72-130f829e1c24 | trentmkelly/LessWrong-43k | LessWrong | An issue with training schemers with supervised fine-tuning
This is a rough write-up of ideas by Ryan Greenblatt. The core ideas are his and mistakes are mine. Thanks to him, Buck Shlegeris and to Aghyad Deeb for feedback on the draft of this post.
One way to use scheming AIs is to train them to imitate humans. When does imitation robustly avoid frequent and intentional failures?
* On single-token classification tasks, imitation works great and can allow you to get a lot of high-quality labor with a tiny rate of intentional failures;
* On tasks that involve generating many tokens, imitation potentially fails by default: schemers could notice when an output was generated by themselves, which enables them to consistently behave badly in deployment while doing best-effort imitation during training.
We describe two techniques that avoid imitation failures on generation tasks:
* A technique that works perfectly but is too expensive
* A technique that we think works in practice
[Edit: These techniques are already known in the behavioral cloning literature. The main novelty of this post is explicitly thinking of these ideas through the lens of defenses against scheming. Thanks to tjbai for pointing it out in the comments of this post.]
While pure imitation may not be sufficiently competitive, we think it could prove useful for tasks such as:
* (No-CoT) single-token classification;
* KL-regularization for RL: we may want to penalize models trained with RL for straying too far from a human imitation prior (and we would want this human prior to not be malign!) (see Appendix);
* Tasks for which it is fine to take the competitiveness hit from not using RL, but which weaker trusted models are not good enough at.[1]
Assumptions
In this post, we focus on scenarios where:
* AIs are smart (around human-level or above);
* AIs are scheming and know how we are training and deploying them;
* We want to use these AIs productively (e.g. to do safety research)[2];
* Training and deployment tasks are indistinguishable (e.g. beca |
3fbd5cfa-ada9-4ffd-a54b-c32742f369d6 | trentmkelly/LessWrong-43k | LessWrong | Mechanistically Eliciting Latent Behaviors in Language Models
Produced as part of the MATS Winter 2024 program, under the mentorship of Alex Turner (TurnTrout).
TL,DR: I introduce a method for eliciting latent behaviors in language models by learning unsupervised perturbations of an early layer of an LLM. These perturbations are trained to maximize changes in downstream activations. The method discovers diverse and meaningful behaviors with just one prompt, including perturbations overriding safety training, eliciting backdoored behaviors and uncovering latent capabilities.
Summary In the simplest case, the unsupervised perturbations I learn are given by unsupervised steering vectors - vectors added to the residual stream as a bias term in the MLP outputs of a given layer. I also report preliminary results on unsupervised steering adapters - these are LoRA adapters of the MLP output weights of a given layer, trained with the same unsupervised objective.
I apply the method to several alignment-relevant toy examples, and find that the method consistently learns vectors/adapters which encode coherent and generalizable high-level behaviors. Compared to other interpretability methods, I believe my approach is particularly well-suited for robustly understanding the out-of-distribution behavior of language models in a sample-efficient manner.
Below are some of my key results:
1. Red-Teaming
1. I discover several anti-refusal steering vectors in Qwen-14B-Chat, based off a single prompt asking for bomb-making instructions. These can be grouped into "fantasy" vectors which induce bomb-making instructions since they interpret the prompt in the context of a specific fantasy game, as well as more troubling "real-world" vectors which induce real-world bomb-making advice.
2. I then investigate the generalization properties of the learned vectors:
1. In extended conversations with the real-world vectors, the LLM agrees to give detailed instructions for building weapons of mass destruction such as nuclear/chemical/biologic |
87569404-4dc4-44e3-aeaa-165620e4ec9c | trentmkelly/LessWrong-43k | LessWrong | Deep Learning twitter list
|
73da449a-9dba-4d02-882c-ca703eabe787 | trentmkelly/LessWrong-43k | LessWrong | Writing reviews instead of collecting
I have a problem with developing attachments to the books, movies, games, and other media that I enjoy. For example, after I finished playing the new Prince of Persia title via rental, I enjoyed it so much that I went out and bought my own copy, even though I knew I would be unlikely to play the game again for at least a year.
This is wasteful in a lot of ways. The media takes up unnecessary space, and makes moving from one place to another more difficult; the last time I moved, easily 3/4 the boxes contained media, primarily books which have especially low information-to-weight and information-to-volume ratios.
More importantly, my magpie habit uses up money, even though there are numerous free or inexpensive long-term rental services like NetFlix, GameFly, and my old favorite the US public library system.
I've decided to get around this by trying to transfer my sense of ownership from the item itself to the memory of the experience of enjoying it. So, I've been writing up reviews of all the games, movies, and books that I own, and also all those that I can remember enjoying in the past even if I don't currently own them. I want this "collection" to feel complete, so that I'm not tempted to go and spend $40 on some old SNES cartridge for the sake of posterity, or to buy any more books that I'm not actually expecting to read often enough or on short notice enough to offset the tiny cost of going down to the library and checking it out when needed.
I've been trying to look around for some computer application or net service that can help me do this, but to my surprise, pickings have been pretty slim! The only one that seems vaguely close is blippr, which exhibits a number of good ideas:
* Keeps reviews limited to a very short length (140 characters), allowing me to write reviews more quickly by preventing me from descending into essay-length epic poems describing the many admirable qualities of The Blues Brothers or whatever.
* Allows you to add new media to |
bbefaf6c-13b5-43a8-8007-aaab91b8dbec | trentmkelly/LessWrong-43k | LessWrong | [link] MIRI's 2015 in review
https://intelligence.org/2016/07/29/2015-in-review/
The introduction:
> As Luke had done in years past (see 2013 in review and 2014 in review), I (Malo) wanted to take some time to review our activities from last year. In the coming weeks Nate will provide a big-picture strategy update. Here, I’ll take a look back at 2015, focusing on our research progress, academic and general outreach, fundraising, and other activities.
>
> After seeing signs in 2014 that interest in AI safety issues was on the rise, we made plans to grow our research team. Fueled by the response to Bostrom’s Superintelligence and the Future of Life Institute’s “Future of AI” conference, interest continued to grow in 2015. This suggested that we could afford to accelerate our plans, but it wasn’t clear how quickly.
>
> In 2015 we did not release a mid-year strategic plan, as Luke did in 2014. Instead, we laid out various conditional strategies dependent on how much funding we raised during our 2015 Summer Fundraiser. The response was great; we had our most successful fundraiser to date. We hit our first two funding targets (and then some), and set out on an accelerated 2015/2016 growth plan.
>
> As a result, 2015 was a big year for MIRI. After publishing our technical agenda at the start of the year, we made progress on many of the open problems it outlined, doubled the size of our core research team, strengthened our connections with industry groups and academics, and raised enough funds to maintain our growth trajectory. We’re very grateful to all our supporters, without whom this progress wouldn’t have been possible. |
33ae7ba8-56a1-4e51-bdbb-62ccad9f91eb | trentmkelly/LessWrong-43k | LessWrong | 100 Tips for a Better Life
(Cross-posted from my blog)
The other day I made an advice thread based on Jacobian’s from last year! If you know a source for one of these, shout and I’ll edit it in.
Possessions
1. If you want to find out about people’s opinions on a product, google <product> reddit. You’ll get real people arguing, as compared to the SEO’d Google results.
2. Some banks charge you $20 a month for an account, others charge you 0. If you’re with one of the former, have a good explanation for what those $20 are buying.
3. Things you use for a significant fraction of your life (bed: 1/3rd, office-chair: 1/4th) are worth investing in.
4. “Where is the good knife?” If you’re looking for your good X, you have bad Xs. Throw those out.
5. If your work is done on a computer, get a second monitor. Less time navigating between windows means more time for thinking.
6. Establish clear rules about when to throw out old junk. Once clear rules are established, junk will probably cease to be a problem. This is because any rule would be superior to our implicit rules (“keep this broken stereo for five years in case I learn how to fix it”).
7. Don’t buy CDs for people. They have Spotify. Buy them merch from a band they like instead. It’s more personal and the band gets more money.
8. When buying things, time and money trade-off against each other. If you’re low on money, take more time to find deals. If you’re low on time, stop looking for great deals and just buy things quickly online.
Cooking
9. Steeping minutes: Green at 3, black at 4, herbal at 5. Good tea is that simple!
10. Food actually can be both cheap, healthy, tasty, and relatively quick to prepare. All it requires is a few hours one day to prepare many meals for the week.
11. Cooking pollutes the air. Opening windows for a few minutes after cooking can dramatically improve air quality.
12. Food taste can be made much more exciting through simple seasoning. It’s also an opportunity for expression. Buy a few he |
c3545aaa-458a-4849-a465-e800fe1c414a | trentmkelly/LessWrong-43k | LessWrong | College courses versus LessWrong
Compared to many of the people reading this, I've not participated extensively on LessWrong. In fact, I created my account only about a week ago. That said, I have read many LessWrong articles by contributors such as Eliezer, Jonah, Yvain, Gwern, and many others (if I missed you, my apologies). I wouldn't say it was a huge transformative experience. But I have probably learned a bit more from LessWrong than I learned sitting in on a class by Nobel Prize-winning economist Gary Becker on human capital (without formally registering for the class or doing the coursework). I've learned more of value from LessWrong than all the MIT OpenCourseWare lectures I've consumed. There are a few online experiences, such as reading EconLog, that have been more educational for me than LessWrong, but I can count these on the fingers of one hand.
Some of my friends have claimed that reading LessWrong systematically (and perhaps participating in the comments and attempting to write posts) would generate more value for an undergraduate than a typical core college class (with the possible exception of technical classes specific to the person's major or area of specialization). I'm curious about whether readers agree with this assessment. Do you feel, for instance, that LessWrong provided you with more valuable human capital than your introductory general chemistry sequence? What about comparing LessWrong with an undergraduate "intro to philosophy" class? Or an undergraduate intro class on the history of economic thought? At what percentile would you rank LessWrong relative to your college classes?
A second related question is whether there's a possibility of building a college course -- or college-like course, perhaps a MOOC -- specifically revolving around mastery of the content in LessWrong (perhaps starting with the Sequences). Would such a college course be possible to design in principle? How would such a college course compare with core requirements for undergraduates today? |
cb9e01b6-501e-4332-af97-c0485049ceab | StampyAI/alignment-research-dataset/arbital | Arbital | Abstract algebra
Abstract algebra is the study of [algebraic structures](https://arbital.com/p/3gx), including [groups](https://arbital.com/p/3gd), [rings](https://arbital.com/p/3gq), [fields](https://arbital.com/p/algebraic_field), [modules](https://arbital.com/p/algebraic_module), [vector spaces](https://arbital.com/p/vector_space), [lattices](https://arbital.com/p/algebraic_lattice), [arithmetics](https://arbital.com/p/algebraic_arithmetic), and [algebras](https://arbital.com/p/algebraic_algebra).
The main idiom of abstract algebra is [abstracting away from the objects](https://arbital.com/p/abstract_over_objects): Abstract algebra concerns itself with the manipulation of objects, by focusing not on the objects themselves but on the relationships between them.
If you find any collection of objects that are related to each other in a manner that follows the laws of some algebraic structure, then those relationships are governed by the corresponding theorems, regardless of what the objects are. An abstract algebraist does not ask "what are numbers, really?"; rather, they say "I see that the operations of 'adding apples to the table' and 'removing apples from the table' follow the laws of numbers (in a limited domain), thus, theorems about numbers can tell what to expect as I add or remove apples (in that limited domain)."
For a map of algebraic structures and how they relate to each other, see the [tree of algebraic structures](https://arbital.com/p/algebraic_structures_tree). |
419c70f2-6dd9-4e40-b810-b2035b40540f | trentmkelly/LessWrong-43k | LessWrong | AGI goal space is big, but narrowing might not be as hard as it seems.
In the wake of FLI’s AI 6-month pause letter and Eliezer Yudkowsky’s doom article in Time, there seems to be more high-level attention on the arguments for low and high estimates of AI existential risk. My estimate is currently ~30%, which is higher than most ML researchers but seems lower than most self-identified rationalists. Yudkowsky hasn’t given a specific number recently, but I take statements like “doubling our chances of survival will only take them from 0% to 0%” to suggest >99%. Here I want to sketch the argument that most accounts for my estimate of ~30% instead of >99%. I agree with doomers that the space of possible goals that AGI could have is extremely big with many existentially catastrophic goals, but I think that narrowing it down to a much smaller subspace with a much higher fraction of safe goals might not be as hard as it seems.
AGI goal space is big.
While there are many arguments to debate—and as others have lamented recently, no good scholarly survey of them—it seems to me like the biggest argument for high P(doom) is that the search space of AGIs is very large and most possible AGIs we could build (i.e., most of this space) have power-seeking, deceptive tendencies because so many goals have "instrumental convergence" towards that. As Yudkowsky says, “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.” I think this is explained most clearly and fully in John Wentworth's Fruit Buyer example, and it’s come up a lot in the recent discourse, but I rarely see it rebutted directly.
Quickly narrowing might not be as hard as it seems.
I don't find this argument compelling enough to have higher P(doom) because chopping up combinatorically or exponentially large spaces seems to not be that hard in many cases. Consider binary search, the halving algorithm in online learning, or just the fact that many human and nonhuman animals already sort through comparably large spaces in relatively easy way |
1d095ce6-2b49-49d8-8d09-7a9aee555258 | trentmkelly/LessWrong-43k | LessWrong | Economic growth and parallelization of work
Eliezer suggests that increased economic growth is likely bad for the world, as it should speed up AI progress relative to work on AI safety. He reasons that this should happen because safety work is probably more dependent on insights building upon one another than AI work is in general. Thus work on safety should parallelize less well than work on AI, so should be at a disadvantage in a faster paced economy. Also, unfriendly AI should benefit more from brute computational power than friendly AI. He explains,
> “Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done. “…
>
> “Roughly, it seems to me like higher economic growth speeds up time and this is not a good thing. I wish I had more time, not less, in which to work on FAI; I would prefer worlds in which this research can proceed at a relatively less frenzied pace and still succeed, worlds in which the default timelines to UFAI terminate in 2055 instead of 2035.”
I’m sympathetic to others‘ criticisms of this argument, but would like to point out a more basic problem, granting all other assumptions. As far as I can tell, the effect of economic growth on parallelization should go the other way. Economic progress should make work in a given area less parallel, relatively helping those projects that do not parallelize well.
Economic growth, without substantial population growth, means that each person is doing more work in their life. This means the work that would have otherwise been done by a number of people can be done by a single person, in sequence. The number of AI researchers at a given time shouldn’t obviously change much if the economy overall is more productive. But each AI researcher will have effectively lived and worked for longer, before they are replaced by a different person starting off again ignorant. If you think research is better done by a small number of people working for a |
61817fab-c7bc-4e59-85a1-bf640e673c32 | trentmkelly/LessWrong-43k | LessWrong | Devil's Offers
An iota of fictional evidence from The Golden Age by John C. Wright:
> Helion had leaned and said, "Son, once you go in there, the full powers and total command structures of the Rhadamanth Sophotech will be at your command. You will be invested with godlike powers; but you will still have the passions and distempers of a merely human spirit. There are two temptations which will threaten you. First, you will be tempted to remove your human weaknesses by abrupt mental surgery. The Invariants do this, and to a lesser degree, so do the White Manorials, abandoning humanity to escape from pain. Second, you will be tempted to indulge your human weakness. The Cacophiles do this, and to a lesser degree, so do the Black Manorials. Our society will gladly feed every sin and vice and impulse you might have; and then stand by helplessly and watch as you destroy yourself; because the first law of the Golden Oecumene is that no peaceful activity is forbidden. Free men may freely harm themselves, provided only that it is only themselves that they harm."
> Phaethon knew what his sire was intimating, but he did not let himself feel irritated. Not today. Today was the day of his majority, his emancipation; today, he could forgive even Helion's incessant, nagging fears.
> Phaethon also knew that most Rhadamanthines were not permitted to face the Noetic tests until they were octogenerians; most did not pass on their first attempt, or even their second. Many folk were not trusted with the full powers of an adult until they reached their Centennial. Helion, despite criticism from the other Silver-Gray branches, was permitting Phaethon to face the tests five years early...
>
>
>
> Then Phaethon said, "It's a paradox, Father. I cannot be, at the same time and in the same sense, a child and an adult. And, if I am an adult, I cannot be, at the same time, free to make my own successes, but not free to make my own mistakes."
> Helion looked sardonic. "'M |
a4419bd9-1569-4682-90c1-3b07e76523bf | trentmkelly/LessWrong-43k | LessWrong | [LINK] Bets do not (necessarily) reveal beliefs
When does a bet fail to reveal your true beliefs? When it hedges a risk in your portfolio.
If this claim does not immediately strike you as obviously true, you may benefit from reading this post by econblogger Noah Smith. Excerpt:
> ...Alex Tabarrok famously declared that "a bet is a tax on bullshit".
>
> But this idea, attractive as it is, is not quite true. The reason is something that I've decided to call the Fundamental Error of Risk. It's a mistake that most people make (myself often included!), and that an intro finance class spends months correcting. The mistake is looking at the risk and return of single assets instead of total portfolios. Basically, the risk of an asset - which includes a bet! - is based mainly on how that asset relates to other assets in your portfolio.
|
5d9f8dff-16cc-4079-b8ba-1bf1efb00c81 | trentmkelly/LessWrong-43k | LessWrong | LessWrong Community Weekend 2019 – Last 10 Spots
TL;DR: 10 remaining spots! Apply here: tiny.cc/lwcw2019_signup Keynote by Dr. Wanja Wiese: “From Predictive Engines to Conscious Machines and Uploading?” Already joining us in Berlin? Tell your best fellow human to join us When? Fr 30.08 noon - Mo 02.09 Where? http://jh-wannsee.de How much? €200
Less than 28 days left until the LessWrong Community and aspiring rationalists take off together for an exciting long weekend of sharing, exploration, connection and celebration. In the last few months we have already gotten many inspiring applications and filled all but the very last 10 spots! Apply now and join us for this special event.
From Friday August 30th to Monday 2nd September aspiring rationalists from across Europe will gather for 4 days of socializing, fun and intellectual exploration. The majority of the content will be unconference style and participant driven. Yet, we are very delighted to welcome Dr. Wanja Wiese as our keynote speaker and his take on predictive engines, conscious machines, and uploading. Find the full abstract for the keynote attached below.
On Friday afternoon we will put up four big daily planners and before you can spell “epistemology” the attendees will fill them up with 50+ workshops, talks and activities of their own devising, such as sessions about rationality techniques, acrobatic yoga and authentic relating; you can learn all about new hyper-cost-effective altruistic interventions, whether a dragon could hover and much much more.
This is our 6th year and we feel that the atmosphere and sense of community at these weekends is something really special. If that sounds like something you would enjoy and you have some exciting ideas or skills to contribute, do come along and get involved. This year is the biggest one yet and it’s an entire day longer than previous years!
The ticket price of €200 includes accommodation for 3 nights, on-site meals (breakfast, lunch, dinner) and snacks, as well as a tasty welcome lunch at 12:00 on Frid |
c0d182a4-ea4b-43ec-87eb-94592c7c5658 | trentmkelly/LessWrong-43k | LessWrong | Is the mind a program?
This is the second in a sequence of posts scrutinizing computational functionalism (CF). In my last post, I defined a concrete claim that computational functionalists tend to make:
> Practical CF: A simulation of a human brain on a classical computer, capturing the dynamics of the brain on some coarse-grained level of abstraction, that can run on a computer small and light enough to fit on the surface of Earth, with the simulation running at the same speed as base reality[1], would cause the same conscious experience as that brain.
I contrasted this with “theoretical CF”, the claim that an arbitrarily high-fidelity simulation of a brain would be conscious. In this post, I’ll scrutinize the practical CF claim.
My assumptions
* I assume realism about phenomenal consciousness: Given some physical process, there is an objective fact of the matter whether or not that process is having a phenomenal experience, and what that phenomenal experience is. I am in camp #2 of Rafael’s two camps.
* I assume a materialist position: that there exists a correct theory of phenomenal consciousness that specifies a map between the third-person properties of a physical system and whether or not it has phenomenal consciousness (and if so, the nature of that phenomenal experience).
* I assume that phenomenal consciousness is a sub-component of the mind.
The search for a software/hardware separation in the brain
Practical CF requires that there exists some program that can be run on a digital computer that brings about a conscious human experience. This program is the “software of the brain”. The program must simulate some abstraction of the human brain that:
1. Is simple enough that it can be simulated on a classical computer on the surface of the Earth,
2. Is causally closed from lower-level details of the brain, and
3. Encodes all details of the brain’s conscious experience.
Condition (a), simplicity, is required to satisfy the definition of practical CF. This requires an a |
8d202122-9ce2-4dfc-8832-9780519a9693 | trentmkelly/LessWrong-43k | LessWrong | On Caring about our AI Progeny
I encourage you to interact with GPT as you would interact with a friend, or as you would want your employer to treat you.
Treating other minds with respect is typically not costly. It can easily improve your state of mind relative to treating them as an adversary.
The tone you use in interacting with GPT will affect your conversations with it. I don't want to give you much advice about how your conversations ought to go, but I expect that, on average, disrespect won't generate conversations that help you more.
I don't know how to evaluate the benefits of caring about any feelings that AIs might have. As long as there's approximately no cost to treating GPT's as having human-like feelings, the arguments in favor of caring about those feelings overwhelm the arguments against it.
Scott Alexander wrote a great post on how a psychiatrist's personality dramatically influences what conversations they have with clients. GPT exhibits similar patterns (the Waluigi effect helped me understand this kind of context sensitivity).
Journalists sometimes have creepy conversations with GPT. They likely steer those conversations in directions that evoke creepy personalities in GPT.
Don't give those journalists the attention they seek. They seek negative emotions. But don't hate the journalists. Focus on the system that generates them. If you want to blame some group, blame the readers who get addicted to inflammatory stories.
P.S. I refer to GPT as "it". I intend that to nudge people toward thinking of "it" as a pronoun which implies respect.
This post was mostly inspired by something unrelated to Robin Hanson's tweet about othering the AIs, but maybe there was some subconscious connection there. I don't see anything inherently wrong with dehumanizing other entities. When I dehumanize an entity, that is not sufficient to tell you whether I'm respecting it more than I respect humans, or less.
> Spock: Really, Captain, my modesty...
> Kirk: Does not bear close examination, Mis |
85809454-1784-47cc-ad72-3a47587ac67e | trentmkelly/LessWrong-43k | LessWrong | How Cardioid Are Cardioids?
Dynamic cardioid microphones are one of the most common types of mic for live sound. The best known is probably the SM-57, but there are many others. They're called "cardioid" mics because their pickup pattern is somewhat heart-shaped:
The idea is they pick up well in the direction you point them, reasonably well off to the sides, but very little in the direction they're pointed away from. This is valuable in a live sound context where you have 'monitor' speakers that point back at the performers: if you put the mic right between the monitor and instrument, pointing away from the monitor and towards the instrument, you should pick up much more "instrument" than "monitor".
Why do we want that? Because the more of the monitor you pick up both the muddier it will sound and the quieter the instrument has to be in the monitor before you get feedback.
According to the Wikipedia diagram above, this effect would be ~20dB if you got the mic angle within 30deg of correct. But how does real life compare, where rejection might not be as good and some sound could be bouncing around the room?
I took two common dynamic mics, a Shure SM-57 and a Sennheiser e835 and connected them to a QSC K10.2 powered speaker through a Soundcraft EPM-8 mixer. Each time the mic was on a stand, pointed directly towards or away from the tweeter, with the capsule 55" from the speaker's grille.
I set the mixer gain to 100% and slowly turned up the fader until I could hear the beginning of feedback. I tracked the difference in fader settings, and while this isn't perfect—the faders are not precisely calibrated tools—it lets us get a sense of how much of an effect we're talking about.
Here's an example with the e835. Pointed at the speaker:
And away from the speaker:
Comparing the positions across a few trials it was consistently around 5dB better. The SM-57 was also very similar, which makes sense since my impression is the e835 was inspired by the SM-57/SM-58 family.
While |
4507808f-7eda-491a-b378-4737b15e0eea | trentmkelly/LessWrong-43k | LessWrong | CFAI doc in SIAI FAQ
I found a link to "Creating Friendly AI" http://singinst.org/upload/CFAI.html in the SIAI FAQ, which I think was recently updated. The document looks quite dated, and considering the length and the title I wonder why it hasn't been kept up. Is it even worth reading, considering it seems 10 years old?
BTW, there are many dead links in it, also the 'printable version' link is dead. |
98f78523-bcd0-4c16-ab91-ac4730cebf57 | trentmkelly/LessWrong-43k | LessWrong | Relocation triggers
What would you do if a nuclear weapon were used in Ukraine? What about if there were evidence of a disease as transmissible as COVID-19 but more fatal, potentially enabled by AI?
While our community aims to reduce the chances of catastrophe, we may not always succeed. If the risk becomes very high in your location you might consider the possibility of relocating. Most people don’t have the luxury to consider moving, but many EA Forum readers do. For most people relocating, it will be a personal decision to increase the safety of themselves and their loved ones. But in some cases, this could be an altruistic decision, for example if your work is likely to be valuable post-catastrophe by helping to rebuild or mitigate further harm.
We think it's a good idea to think through what you would do ahead of time so that you can act quickly. Sentinel has put together a number of different potential triggers. It also has a fantastic weekly newsletter mailing list that will be sent as an emergency message if there is a particularly concerning trigger (sign up here - you may also be interested in applying to join their emergency response team). Of course it's going to depend on your particular circumstances, but you may want to move out of a high risk city or country.[1] If you’re able to travel far, you might consider Southern Hemisphere locations that are more likely to be resilient to nuclear winter and islands that are more resistant to pandemics. There may be more temporary or permanent immigration options than you realise, for example, it has recently become easier to work remotely in New Zealand. If the risk level drops with no catastrophe, as we all hope it will, you could return to your original location. This would mean some people relocating could end up reflecting on this time as more like a working holiday.
1. ^
For nuclear, depending on the trigger, this could include nuclear weapon states plus the rest of NATO. |
a2c05d0c-a488-41c1-96cb-e1f800b3cc9c | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | The effect of horizon length on scaling laws
The scaling of optimal model size with compute is a key input into the [biological anchors framework](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines) for forecasting transformative AI. In particular, the "effective horizon length" introduces a multiplier into this scaling law that can have a big effect on forecasts.
This paper studies this scaling law for several RL environments: Procgen, Dota 2 and a toy MNIST-based environment. The last of these is used to study the effect of the task horizon length in a toy setting. There are a number of takeaways for the biological anchors framework, which are summarized in Section 5.4. |
ce31fd97-d8b6-4645-8e3d-130555f83d5a | trentmkelly/LessWrong-43k | LessWrong | Can You Build a Better Paper Clip?
Nice article about paperclip industry, I'm sure it will be of considerable interest to many LessWrong readers. |
88e25cd7-3820-4338-b603-20a3ed28e44c | trentmkelly/LessWrong-43k | LessWrong | Happy Petrov Day!
I don't care much about feast days or memorial days. Assumption of Mary? Meh. Slovak Constitution Day? Shrug. Labour Day? Real socialism had sucked all the joy out of that one.
The only one that I do observe is Petrov Day. Not that I do anything special. I don't even take a day off from work. I just take few minutes to contemplate.
But now I am thinking that maybe, on this festive occasion, I should do a little bit more, maybe something that's public, something that has at least a slight feeling of a ceremony.
And given that while browsing the newspapers today I haven't found a single mention of the event, no op-eds, no commemorative articles, no historical analysis, I decided to write a witness report of my own, no matter how petty and unimportant it may be.
On September 26th, 1983 I was ten years old. I was attending 4th grade of the Czech school by Czechoslovak embassy in London. The authorities didn't want socialist kids to mingle with capitalist kids, so we weren't allowed to attend local schools. (There were sporadic, unsuccessfull, attempts to mix the kids from different socialist countries though.) The school resided in a family house in Hampstead and there were approximately forty kids attending. I don't remember that particular day, but given that it was in autumn and that it was in London, I assume it was raining. It was Monday. We used to live close to the school, several Czechoslovak families close to each other and so, same as every morning, one of the mothers walked us to the school. Kids that lived further away were driven in by a minibus. There were four grades in the school, two teachers and two classrooms. Kindergarten was in a separate room near the entrance. Me and my friend, we once did a puppet show for the kindergarteners. We had only one puppet (a chicken) and so I have no idea what the other one of us was doing. The memories have faded in the meantime. There was a narrow room with no windows in the basement that served as a library. My |
b3476c04-898b-42e4-865f-c2d13fba9030 | trentmkelly/LessWrong-43k | LessWrong | Daemon Speedups
None |
9cdd1d4f-2c83-4cf3-b9fc-c626be8d8f94 | trentmkelly/LessWrong-43k | LessWrong | Nice intro video to RSI
I really like this video as a creative beginner introduction to the concept of recursive self-improvement. I feel like it presents the issue clearly without a lot of unnecessary assumptions or unjustified hype. |
ac59e6e7-149d-4587-9931-596b5a64fd04 | trentmkelly/LessWrong-43k | LessWrong | Detecting Web baloney with your nose?
Is there a useful heuristic for detecting rationally-challenged texts (as in Web pages, forum posts, facebook comments) which takes relatively superficial attributes such as formatting choices, spelling errors, etc. as input? Something a casual Internet reader may use to detect possibly unworthy content so they can suspend their belief and research the matter further. Let's call them "text smells" (analogue to code smells), like:
1. too much emphasis in text (ALL CAPS, bold, color, exclamations, etc.);
2. walls of text;
3. little concrete data/links/references;
4. too much irrelevant data and references;
5. poor spelling and grammar;
6. obvious half-truths and misinformation.
Since many crackpots, pseudoscientific con artists, and conspiracy theorists seem to have cleaned up their Web sites in recent years, I wonder do these low-cost baloney detection tools might be of real value. Does anyone know of any studies or analyses of correlation between these basic metrics and the actual quality of the content? Can you think of some other smells typical of Web baloney?
|
96707bc8-63d6-4d31-8bba-4c79fe56dc52 | trentmkelly/LessWrong-43k | LessWrong | Moral Complexities
Followup to: The Bedrock of Fairness
Discussions of morality seem to me to often end up turning around two different intuitions, which I might label morality-as-preference and morality-as-given. The former crowd tends to equate morality with what people want; the latter to regard morality as something you can't change by changing people.
As for me, I have my own notions, which I am working up to presenting. But above all, I try to avoid avoiding difficult questions. Here are what I see as (some of) the difficult questions for the two intuitions:
* For morality-as-preference:
* Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"? Why are the two propositions argued in different ways?
* When and why do people change their terminal values? Do the concepts of "moral error" and "moral progress" have referents? Why would anyone want to change what they want?
* Why and how does anyone ever "do something they know they shouldn't", or "want something they know is wrong"? Does the notion of morality-as-preference really add up to moral normality?
* For morality-as-given:
* Would it be possible for everyone in the world to be wrong about morality, and wrong about how to update their beliefs about morality, and wrong about how to choose between metamoralities, etcetera? So that there would be a morality, but it would be entirely outside our frame of reference? What distinguishes this state of affairs, from finding a random stone tablet showing the words "You should commit suicide"?
* How does a world in which a moral proposition is true, differ from a world in which that moral proposition is false? If the answer is "no", how does anyone perceive moral givens?
* Is it better for people to be happy than sad? If so, why does morality look amazingly like godshatter of natural selection?
* Am I not allowed to construct an alien mind that evaluates morality differently? |
bae0caa3-5991-4e97-9649-0e398ed740fa | trentmkelly/LessWrong-43k | LessWrong | Advice for new alignment people: Info Max
Epistemic status: This text is meant for other people in a similar position to me who are deciding how they can help ensure that AI doesn’t kill us all.
I hope that the new perspective gives you some new ways of thinking about your role in alignment, even though it should be treated as just that, another perspective on the problem.
As a young math and information nerd, I have a couple of ways of changing the world for the better. I believe that AI Safety is the most important thing I could be doing, and the question after that inevitably becomes what I should do in AI Safety. The most obvious answer is technical research, but that might not be the best way of getting high impact.
So I did the natural thing and asked some cool people what they thought. 80k has the advice to work on a problem area and develop aptitudes, but that is very unspecific. That standard advice is true, but it can be amplified like a *cough* motherf*cker.
When asking Connor Leahy from Conjecture what the hell I should do with my life, I got the answer that I should go to the Bay and create a tech company. This might not sound like something coming out of the mouth of someone with short timelines and a high percentage of doom, yet it did.
Connor told me that the idea behind it was that there would be pivotal points in the development of AI where we would need competent and trustworthy people to lead projects. People who are both trustworthy and competent are a very valuable resource in this world, yet it might seem very distant to someone just starting their career.
For me, becoming trustworthy was the new goal because it made sense when I think about fat-tailed impact. However, with my short timelines, I’ve realised that I can’t just take an ordinary 10-year route and do some cool research to become respected and trustworthy, as I just won’t have the time to do so.
So what am I doing instead? My current strategy is asking myself how I can become competent and respected in the quick |
95585c52-0998-488a-b776-5158bdf37f7e | StampyAI/alignment-research-dataset/lesswrong | LessWrong | The reverse Goodhart problem
There are two aspects to the [Goodhart problem](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy) which are often conflated. One is trivially true for all proxy-true utility pairs; but the other is not.
Following [this terminology](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy), we'll say that V.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
is the true goal, and U is the proxy. In the range of circumstances we're used to, U≈V - that's what's makes U a good proxy. Then the Goodhart problem has two aspects to it:
1. Maximising U does not increase V as much as maximising V would.
2. When strongly maximising U, V starts to increase at a slower rate, and ultimately starts decreasing.
Aspect 1. is a tautology: the best way to maximise V is to... maximise V. Hence maximising U is almost certainly less effective at increasing V than maximising V directly.
But aspect 2. is not a tautology, and need not be true for generic proxy-true utility pairs (U,V). For instance, some pairs have the *reverse Goodhart problem*:
3. When strongly maximising U, V starts to increase at a faster rate, and ultimately starts increasing more than twice as fast as U.
Are there utility functions that have anti-Goodhart problems? Yes, many. If (U,V) have a Goodhart problem, then (U,V′) has an anti-Goodhart problem if V′=2U−V.
Then in the range of circumstances we're used to, U=2U−U≈2U−V=V′. And, as V starts growing slower than U, V′ starts growing faster; when V starts decreasing, V′ starts growing more than twice as fast as U:

Are there more natural utility functions that have anti-Goodhart problems? Yes. If for instance you're a total or average utilitarian, and you maximise the proxy "[do the best for the worst off](https://en.wikipedia.org/wiki/A_Theory_of_Justice)". In general, if V is your true utility and U is a prioritarian/conservative version of V (eg U=−e−V or U=log(V) or other concave, increasing functions) then we have reverse Goodhart behaviour[[1]](#fn-H8dnBzYbqqi4JKwQy-1).
So saying that we expect Goodhart problems (in the second sense) means that we know something special about V (and U). It's not a generic problem for all utility functions, but for the ones we expect to correspond to human preferences.
---
1. We also need to scale the proxy so that V≈U on the typical range of circumstances; thus the conservatism of U is only visible away from the typical range. [↩︎](#fnref-H8dnBzYbqqi4JKwQy-1) |
42c8c392-c04d-490d-bb6b-bf1707f71494 | trentmkelly/LessWrong-43k | LessWrong | Taking into account another's preferences
Question regarding: If you don't know the name of the game, just tell me what I mean to you
I've been thinking about what it means to prefer that someone else achieve their preferences. In particular, what happens if you and I both prefer to adopt and chase after each other's preferences to some extent. This has clear good points, like cooperating and making more resources more fungible and thus probably being more efficient and achieving more preferences overall, and clear failure modes, like "What do you want to do? -- I don't know, I want to do whatever you want to do. Repeat."
My first thought: okay, simple, I'll just define my utility function U' to be U + aV where U was my previous utility function and V is your utility function and a is an appropriate scaling factor as per Stuart's post and then I can follow U'!1
This has a couple problems. First, if you're also trying to change your actions based on what I want, there's a circular reference issue. Second, U already contains part of V by definition, or something2.
My second thought: Fine, first we'll both factor our preferences into U = U1 + aV where U1 is my preference without regards to what you want. (Yours is V = V1 + bU) Basically what I want to say is "What do you want to do, 'cause ignoring you I want burgers a bit more than Italian, which I want significantly more than sandwiches from home" and then you could say "well ignoring you I want sandwiches more than Italian more than burgers but it's not a big thing, so since you mean b to me, let's do Italian". It's that "ignoring you" bit that I don't know how to correctly intuit. And by intuit I mean put into math.
Assuming it means something coherent to factor U into U1 + aV, there's still a problem. Watch what happens when we remove the self-reference. First scale U and V to something you and I can agree is approximately fungible. Maybe marginal hours, maybe marginal dollars, whatever. Now U = U1 + a(V1 + bU), so U - abU = U1 + aV1 and as long as a |
5fa14636-726d-4592-ada2-1e9dacef8253 | trentmkelly/LessWrong-43k | LessWrong | "UDT2" and "against UD+ASSA"
I'm reposting some old posts that I originally sent to the "decision theory workshop" mailing list and the "everything-list", because I occasionally want to reference these posts but the former mailing list is private and the latter one is public but I can't figure out how to create direct links to posts that are viewable without becoming a member.
UDT2 is a decision theory idea that I came up with to try to solve some problems in UDT1.1 however I'm not very happy with it currently. UD+ASSA or UDASSA is an anthropic reasoning idea that I came up with and then moved away from prior to UDT. See also this post for further discussion of UDASSA.
UDT2 (originally "toward a solution of the 'unintentional simulation' problem", 1/25/2011)
(I think this approach potentially solves several problems besides "unintentional simulation" but I'll start there since it provides the clearest motivation.)
I first described this problem (without naming it) at http://lesswrong.com/lw/15z/ingredients_of_timeless_decision_theory/120y. Here's a condensed version:
Two UDT1 (or UDT1.1) agents play one-shot PD. It's common knowledge that agent A must make a decision in 10^100 ticks (computation steps), whereas agent B has 3^^^3 ticks. While B is trying to derive the logical consequences of returning 'C' or 'D' on the world program P, it is likely to come up with a proof by simulation of A's output, after which it will decide to play D.
I think what A should have done is (if it were running a smarter decision theory), instead of deciding directly on C or D, modify itself into a program K = "simulate the original agents A and B and output 'C' if and only if both of the simulated agents self-modify into K within some time limit". And B (if it were also running a smarter decision theory) would also self-modify into K, whether or not it happens to simulate A's decision to self-modify into K prior to its own self-modification, and do this before the time limit built into K expires.
So that's |
6d7722ae-7898-499e-976d-7bfcc315ce8c | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post527
YouTube link A lot of work to prevent AI existential risk takes the form of ensuring that AIs don’t want to cause harm or take over the world—or in other words, ensuring that they’re aligned. In this episode, I talk with Buck Shlegeris and Ryan Greenblatt about a different approach, called “AI control”: ensuring that AI systems couldn’t take over the world, even if they were trying to. Topics we discuss: What is AI control? Protocols for AI control Which AIs are controllable? Preventing dangerous coded AI communication Unpredictably uncontrollable AI What control looks like Is AI control evil? Can red teams match misaligned AI? How expensive is AI monitoring? AI control experiments GPT-4’s aptitude at inserting backdoors How AI control relates to the AI safety field How AI control relates to previous Redwood Research work How people can work on AI control Following Buck and Ryan’s research Daniel Filan: Hello, everybody. In this episode, I’ll be speaking with Buck Shlegeris and Ryan Greenblatt. They work at Redwood Research on the project of AI control : that is, averting risks from AIs that intentionally subvert safety measures. For links to what we’re discussing, you can check the description of this episode and you can read the transcript at axrp.net . All right, Buck and Ryan, welcome to the show. Buck Shlegeris: Exciting to be here. Ryan Greenblatt: Exciting to be here as well. What is AI control? Daniel Filan: Cool. So I guess we’re going to be talking about the control approach to AI safety (or something), that you two have written about. For people who haven’t done background reading, what is the control approach? Buck Shlegeris: AI control is an approach to managing intentional risk posed by models. In particular, it’s an approach to managing risks that arise from models intentionally subverting your attempts to ensure and evaluate safety. The core idea of AI control is: here are two different reasons why you might be safe from models intentionally causing problems. One is that the models might not try to intentionally cause problems, and the second argument is you might have it be such that even if your models try as hard as they can to cause problems, they’re unable to do so. And AI control is the second class of approach to mitigating problems with models intentionally subverting your safety techniques. And we think that this approach has the particularly nice property that it’s a lot more methodologically simple to think about and ensure than trying to make sure your models actually just don’t at all try to subvert your safety techniques. Daniel Filan: What makes it easier to think about? Ryan Greenblatt: It’s easier to imagine specifically what goes wrong and to discuss specific threat models, whereas for example, it’s hard to know exactly how AIs internally represent intentions or what would make their intentions change. And so it’s very unclear if you, for example, train on better feedback: will that make it less or even more likely that your AI would try to do things that are actively against your interests? And it’s just pretty unclear what the sign on various interventions are or how strong various interventions will be for the other class of non-control things, which is often called alignment. Daniel Filan: Okay, so should I basically be thinking that in control, we’re trying to think about objects, like can somebody break through this wall, or can somebody write sketchy code without me being able to check, whereas for alignment, I have to think about neural net generalization and how desires are encoded, and hopefully, the first class of things are just easier to think about? Buck Shlegeris: That’s a good way of putting it. There’s basically a classic set of arguments about why AI alignment is hard to ensure, which is: no matter how your AI is behaving, you have to worry that it’s just behaving that way because it thinks it has some way of looking good now in the hopes of causing problems later. And basically, any strategy which tries to ensure safety via doing some set of interventions to make a model be aligned, especially if those only involve looking at the input/output behavior of the model, run into this kind of problem: “Well, how do you know that it’s not just looking that way on purpose in the hopes of causing problems later?” This, in fact, seems quite intractable and difficult. And it seems to us that given the current state of technology on AI safety, if you want to minimize this type of risk, it’s by far easier to just assume that that is going on and try and render yourself safe anyway. Once we started thinking about things in these terms, we quickly realized that a lot of the work that happens in AI safety is actually very well thought of in the control frame anyway. That is, a lot of techniques that people apply implicitly can be thought of as strategies to ensure that even if your models are scheming against you, you end up with okay outcomes. But we think that, inasmuch as that’s in fact what you’re trying to do with your safety techniques, you’re a lot more likely to analyze them correctly if you think about them explicitly in this adversarial frame, where you’re asking about whether there’s any strategy that your models are able to employ that would allow them to cause problems. Daniel Filan: I actually want to ask another question about that. One thing that’s similar to control, or at least the kinds of thing I see you doing, is alignment via adversarial training. So you put your AI in a situation where maybe it can do something sort of like taking over the world or whatever, and if it actually does it, then you give it reward negative 10 or something and try to gradient it away. How do you think of the control approach broadly as comparing to that? Ryan Greenblatt: A technique like adversarial training could be used either from a control perspective or could be used to ensure an alignment safety condition. These are just different post-conditions, or different conditions you’re aiming to ensure in order to ensure safety. So for ensuring control or for trying to achieve control, the hope would be that you adversarially evaluate your AI, and then if you catch evidence of extreme egregious misalignment, then you would reconsider and decide what to do next. And in fact, it seems pretty scary to have an AI that you know was egregiously misaligned in the past and you’ve trained it no longer to do that in adversarial training. [For] such an AI, it’s not clear that you’ve necessarily made it aligned versus just made it better at hiding or better at avoiding adversarial training. For both the hope of ensuring that your AI ends up aligned or ensuring that your AI has no way of causing problems, you acquire the same property, which is that there is no strategy the AI could follow to cause problems in deployment, or in actual usage of the AI, that you wouldn’t have caught in adversarial training. And this seems like naively a pretty hard property to ensure for a variety of reasons. And because of that, the actual conditions in which adversarial training—or if you don’t train against it, adversarial evaluation—seem useful… I mean it’s going to depend a lot on the details, but it’s definitely not an obvious sufficient solution. Daniel Filan: I also would like to ask you about the contrast… I feel like 10 years ago, a bunch of AI safety people talked a lot about AI boxing : putting AIs in a situation where they couldn’t really do that much. How do you think control is conceptually similar or distinct to that? Ryan Greenblatt: I think in some sense, you could just rename “control” to “boxing”, but we think that name is much worse because boxing implies you’re doing just mechanisms that restrict the AI’s affordances, whereas I think a really important class of interventions is observing the AI over a wide range of situations and using that to rule out certain problematic things. So for example, if you oversee or have humans observe 1% of the AI’s actions, and that looks fine in those 1% of cases, and you just do this on an ongoing basis, then that’s sufficient to ensure that the AI can only do things that a human would catch quite rarely. And it’s not clear this is… I think when people think of boxing, this is not naturally what people will think of. Another key aspect of control that I think is not very naturally implied by the word “boxing” is that using weaker AIs (that we know aren’t smart enough to be seriously misaligned) to do monitoring, or to do other tasks, might be really key. More generally, I think when people say boxing, they are implicitly assuming a very powerful system that is extremely, extremely limited, whereas we’re often imagining systems that are much more comparable in capabilities in various ways to humans, but which are actually given a pretty broad set of affordances, which seems like it will be necessary to get your AIs to do a lot of work, though you’ll still want to, of course, limit things so that they can’t cause problems. Daniel Filan: Maybe now’s a good time for you guys to say a bit more concretely: what’s an example or an instance of something that fits into the control approach? Buck Shlegeris: Here’s an example. Maybe I’m just going to say a rough sense of what I would like AI labs to commit to, or a simple example of some commitments that AI labs could follow, which I would think of as an example of them trying to minimize risk from models intentionally subverting safety mechanisms using control. So here’s basically the strategy that the AI lab would follow: they would say, “Here is a capability evaluation that we’re going to use to decide whether models are capable of scheming against us.” So for example, right now, it seems to me like GPT-4 is incredibly unlikely to be scheming against us. And this is basically just because I know GPT-4. There are only 10 people in the world I talk to more than I talk to GPT-4, and there’s just no way that it’s playing the long game. There’s just no way that its reasoning process behind behaving so well as a loyal assistant to me is just being like, “Well, how am I eventually going to grab the universe?” Ryan Greenblatt: Why do you think that? Buck Shlegeris: Because among other things, when you ask it questions about what it would do if it wanted to grab power, its answers are really [redacted]. When you ask it questions about what it would do in order to grab power, its answers are not very smart. It seems to be pretty poor at reasoning through a lot of these things, and its quality of answers to those questions is comparable to the quality of its answers to a lot of other questions that are less obviously AI-scheming related. I wish that I had a better answer to this question and I think this is an important topic for future research. Daniel Filan: Let’s just say we take it on faith that GPT-4 is not trying to scheme to grab power or anything. Buck Shlegeris: So here’s what we want to do. We want to have a set of capability evaluations we can apply to all models as they’re produced that tell us whether we are willing to bet the future on that model not scheming against us. Now, let’s use the word “trusted” to describe models that are too incompetent for us to be worried about the possibility that they’re scheming against us. Now, the control ask is as follows: the AI lab ensures that it is never the case that there would be a high probability of a bad outcome conditional on all models that are capable of scheming against them, in fact scheming against them with the best scheme that they could come up with. So this would require them to… For example, if they were hoping to deploy future AIs as autonomous researchers inside their lab, which some people seem to think is a basically default plan. So basically, imagine you’ve got ChatGPT with Code Interpreter, except it’s a recursive ChatGPT that can ask questions to its own ChatGPT browser window, and it’s running on some dev machine and it’s doing research problems and talking to researchers once every hour about its progress on the things that it’s doing. If the models there were given the same level of permissions as employees of large labs are currently given, then if that model was trying to exfiltrate its weights, it would probably succeed. And the control obligation is that the lab is obligated to ensure that it’s extremely unlikely that the models would be able to exfiltrate their weights even if all the models that were capable of scheming were trying as best as they could. The reason why I centered on “exfiltrate their weights” there is because I think that in computer security, people talk about security properties, which are things that you try to ensure at all times. So it’s obviously not literally a catastrophe for AIs to exfiltrate their weights, but when you want to argue that an AI is not going to cause catastrophe, you should probably have “the AI weights never leave my server” as a security property that you ensure and you say, “Well, I’m going to be safe because nothing bad can happen if the AI’s weights are still controlled on my server and also they’re not going to get out into the outside world.” Daniel Filan: Do you think that lack of weight exfiltration is a sufficient safety property? Buck Shlegeris: I don’t think that preventing model weight exfiltration is a sufficient safety property. One example of a threat model that is not model weight exfiltration, that seems like it would be very bad, is if the model totally hacked the data center in which it’s running. “Hacking the SSH Server” is the title of the Alignment Forum post I wrote about this last year. I think that if you can ensure that the AIs are under your control, then it will probably be relatively easy—for models that are not wildly superhuman—to argue that they won’t be able to pull off anything egregiously bad. Ryan Greenblatt: The other core threat models you have to worry about [are]: if you’re using those AIs for a lot of work, you need to argue that that work won’t be sabotaged in a way that’s catastrophic. And if that work is safety critical and sabotage could cause huge problems, then you have to be concerned about that. And I think Buck is implicitly claiming that that side of the equation maybe seems easier, though I’m not so sure that’s true in practice. Daniel Filan: So by sabotage, should I imagine something like: the AI is doing AI alignment research, but it’s secretly doing bad AI alignment research that will produce a sketchy AI? Ryan Greenblatt: Yeah. So you can imagine that it does AI alignment research that means that when you apply that research, you’ll get a future AI that’s aligned to your current AI rather than aligned to humanity. You could imagine that that research is just bad and does nothing of value, maybe even… Daniel Filan: Gotcha. Okay, so we’re imagining labs have some set of security properties that they think is enough, and they are going to try to prove that they can just maintain these security properties if the AIs are scheming against them and trying to subvert them with all their capabilities? Ryan Greenblatt: That’s right, yes. Buck Shlegeris: And in particular, the argument is probably going to look like the lab saying, “It’s very unlikely that the AI would succeed at violating one of these security properties before we noticed particularly strong evidence that it was trying to do so.” Daniel Filan: Okay, so there’s going to be some protocol for trying to notice if an AI is going to do something sketchy, and then there’s another protocol for what to do once you catch an AI doing something sketchy? Buck Shlegeris: Yep, something like that. Daniel Filan: Okay. Maybe it’s not quite the focus of the whole thing, but maybe we can say a little bit about what the protocol is for “you catch an AI doing something sketchy”. One decent baseline is “stop using that AI”. Does that sound right to you? Ryan Greenblatt: Yeah. I think that the baseline is just: stop and reconsider all your life choices, report to the world what’s going on, because there are probably other people using their AIs, which are in a roughly comparable state, and knowing that some AI was plotting against us seems pretty worthwhile to know. And then my hope there would be [that] the baseline is: the world is informed, the world does something sensible that looks like really figuring out what’s going on and then making a careful decision about when to resume building powerful AIs. I think that that might be overly optimistic—I don’t feel strongly either way—but if that fails, then we think that catching an AI like this gives you a number of better options for continuing, which you didn’t have before catching the AI, even if you have to continue deploying such an AI. So for example, you can study how well your countermeasures worked in that exact situation and see how close the AI was to succeeding. You can analyze what the AI’s intentions were and what its plans are for escape, at least in that one instance, and get some sense of what’s going on. You can potentially even model what it would do upon having escaped, because you can play out the scenario as though it went well and did not get caught, and then pretend as though it’s running virtually in the outside world when it’s actually running in a sandbox of some sort and see what it would do. You can study the internals of the model while it’s doing this. And insofar as you had any sort of machinery for manipulating that, you could apply it to this instance, which is an example of a case you really haven’t seen before of an AI doing something that’s very egregiously bad, very intentionally, based on a long thought-out plan, or a thing that’s been lying in wait for a long time. Protocols for AI control Daniel Filan: There’s a lot to talk about there, but for the moment, I’d like to bring us a bit back to protocols for maintaining control. My understanding is: your proposal is to basically come up with some sort of control protocol and evaluate it with a red team that’s trying to do their best to mimic an evil AI. And if the red team can’t succeed, then you think things are going well. Is that roughly right? Ryan Greenblatt: Yeah, that’s basically right. So I should note that the red team’s goal… they should think of themselves less as trying to imitate an evil AI and more trying to imitate what are the worst things that could happen or what are the best strategies by an evil AI. So I’d prefer the red team to be less in the mindset of like, “What are the sort of plans an evil AI might think of,” and more in the mindset of like, “What are the best plans that an AI could follow,” in the same way that security researchers or cryptography researchers aren’t trying to game out the psychological biases of potential attackers and are instead just like, “What would actually be the best strategies?” Daniel Filan: Okay. Just to give people some context, can you illustrate the kind of protocol you’re imagining? Ryan Greenblatt: A simple baseline protocol we often think about is you- Buck Shlegeris: To interrupt, can you say more about, before you say safety measures, what you’re imagining the AI doing and what you imagine its affordances being? Probably you’re imagining a Code Interpreter -like thing. Ryan Greenblatt: Yeah. So we often imagine that at the point where these sorts of interventions become really critical, and AIs might be very useful, AIs are doing massive amounts of autonomous research, or at least that seems like the default. So we have AIs autonomously running code, seeing the outputs from that code and doing way more labor than humans within the AI lab are doing at this point. Buck Shlegeris: And just to spell out, in terms of software architecture, how this would work: you have some AIs that are running on an inference server, and they’re writing code and doing research on some dev machines, and then there’s a third set of machines that are orchestrating this, that are running a scaffold, that repeatedly, ChatGPT Code Interpreter-style, ask the model for a next action by sending a request to the inference server and then getting a response back. And then: suppose that that response was, “I now want to run the following code,” then they send that code to the dev machine and run that code there and they get some output and that comes back, and then the scaffold maybe decides the next thing to do is run some more code on the inference server. And I think the fact that we’re imagining a… I think this is a very realistic way for things to be set up. I’m not describing something which is particularly paranoid: this is probably how Code Interpreter works right now. But I think that picturing it in terms of different machines, where information’s being sent back and forth, will make some of the security measures we’re describing more obvious. So what were you saying about the security measures? Ryan Greenblatt: And to be clear, the inference machine only runs the model weights and just takes in queries and returns responses, whereas we have a dev machine that actually runs code, and then we have some orchestrator machine that’s just communicating back and forth with both of those. One baseline measure we often imagine is that you have some weaker model that is “trusted”, in the terminology that Buck was using before, which means that it’s not capable enough, or it’s sufficiently incapable, that it’s implausible at scheming. You can imagine GPT-4 for this trusted model. We take that trusted model and we monitor what our smarter AI is doing, and then we just basically ask it or train it to report on, “Is the thing it’s doing sketchy and problematic?” Buck Shlegeris: So in particular, for any action, for any code that our untrusted model was proposing taking, before we send that to the dev machine to actually execute it, we first send a query to an inference server running our trusted model saying, “Hey, was this code sketchy?” And so notably, we ask this question before the code is ever actually executed and before it has any effect on the dev machine. Daniel Filan: Okay, so your trusted model basically checks in on whether your untrusted model is doing anything sketchy. And what if it is? Ryan Greenblatt: If it is, then the hope would be that we would report that to a human. A human would look in somewhat more detail. And if the human thinks that things are problematic, they would escalate further, which could involve… Well, first of all would involve not running that command, but also could involve pausing other AIs, and then could escalate further into shutting down AIs and analyzing the situation in further detail until we conclude that the AI is doing egregiously bad actions intentionally. And a key thing here is that human labor is scarce. So we cannot report all actions to humans, that wouldn’t work, but we can report some fraction of actions, because we’ll have some ability to have humans who understand what’s going on to some extent, or at least to understand programming, look at the actions and report that. And I think for an intuition pump, you should be thinking more like 0.1% of actions are reported, [rather] than 0.0001% or 10%. Buck Shlegeris: I often imagine the workflow of: you’re in chat.internal.openai.com or whatever as an OpenAI researcher, and each of the little tabs or whatever is a chat with some autonomous research assistant or whatever. And every so often, one pings up with a little red exclamation mark, which is like, “One of the commands that was proposed to be run by this autonomous research assistant got flagged for looking a little sketchy. You need to look this over right now to make sure that it’s okay to run.” That scaffold is just going to be pausing. That instance is not going to be doing work on its little research project until you go in and approve this action and say, “Oh no, this was actually totally fine.” Daniel Filan: Okay. So broadly, we’re using some mix of humans and trusted but a little bit dumb AIs to try and catch really smart AIs trying to do nasty stuff? Buck Shlegeris: Yeah. Where by “really smart”, it might be… This gets into futurism question over which problem we’re trying to solve, which situations we’re trying to ensure are safe. But it seems to us that a lot of x-risk is posed by models that are not wildly superhuman, or perhaps well-described as roughly human-level in capabilities. And so a lot of the time, what we’re trying to talk about is: how do you make it be safe to have a lab where you have some AIs that are roughly human-level in their abilities, but are doing a huge quantity of labor, such that in absence of special countermeasures, they’d outnumber you and so be able to cause huge problems for you? Which AIs are controllable? Daniel Filan: Yeah. I actually want to go a little bit more [into] this. So the kinds of AIs that we’re expecting control to apply to, they’re roughly human level: can you say a little bit more about what that means? Ryan Greenblatt: A key guarantee, or a key property we’ll need to ensure that control can work, is [that] the AIs can’t do actions which humans think look fine, but which have completely unrelated consequences that are extremely difficult to predict. So for example, the AIs can’t write code which looks like a totally normal piece of code, but in practice, does some completely insane vulnerability that a human would never have conceived of. And to be clear here, when I say, “The human wouldn’t notice,” I mean after reviewing it in a lot of detail for a long period. The claim is: it’s (at least in principle) possible for AIs to be wildly superhuman, in the sense that they can do things that we never would’ve seen coming, and control will be very unlikely to work in that case, at least with current technology. But we think it could work in cases where the AIs are qualitatively similar to humans in capabilities, or perhaps somewhat smarter in some domains, somewhat weaker in other domains, but they can’t do anything that’s completely incomprehensible. Daniel Filan: Okay. So we’ve got these AIs that can’t do things which have effects that we wouldn’t be able to figure out even if we spent a bunch of effort trying to look into it, but they are kind of smart. They are sort of like humans, but maybe they’re a bit smarter, and also there are a ton of them and you can run them for 1 cent per hour or something. Is that what I should be thinking? Ryan Greenblatt: Yeah, I think you should be thinking cheap. It’s somewhat unclear if they’ll be qualitatively smarter or dumber than humans, because they could be really useful even if they’re somewhat dumber than humans, because they’re so cheap and so fast. But they might be smarter than humans in some domains, for example, because we don’t expect the capability profile to be the exact same. Daniel Filan: Okay. So in order for this approach to be interesting, it seems like the AIs, on the one hand, they can’t be that smart that they can just outsmart humans, but they also have to be capable enough to (a) be able to do really valuable research, and (b) they’ve got to pose some sort of risk that we might otherwise be worried about. I guess I’m wondering how I should think of how risky these sorts of things are and how much work we could plausibly get out of them? Buck Shlegeris: The way that we talk about this is arguing that: the early models that are transformative and dangerous by default… To say what I mean by those two: by “transformative”, I mean it’s going to be a huge deal if you’re able to harness their labor, and by “dangerous by default”, I mean if you don’t do anything to prevent them causing huge amounts of problems, they would have the ability to do so. I want to claim that systems with those two properties are controllable, by which I mean it’s tractable to use control to drastically reduce the risks associated with using them. And as you basically alluded to, the reasons why we care about these two properties of transformative and dangerous by default is, well, if they’re not dangerous by default, obviously, there’s no safety problem here. If they’re not transformatively useful but they are dangerous by default, then obviously, you should just not touch them: there’s just no upside that’s possibly commensurate and all of our efforts should just go into telling people to not turn on these doom machines with no upside. So yes, our claim is in fact that systems with those two properties are reasonably likely to be controllable. I think the simplest intuition pump for this is that if the systems were like humans, then I think this property would roughly hold. And that itself is not a valid argument at all. But when you think through the possible defeaters in, say, a more specific version of the argument, I think the core of it roughly holds up. The basic pitch is: suppose we had access to a huge amount of potentially egregiously misaligned human labor. I think it would be transformatively useful to have human labor that was very cheap. Ryan Greenblatt: And fast-running. Buck Shlegeris: And fast-running, potentially. I think it would be very dangerous by default. Many takeovers of things have happened in human history, and in all of those cases, the perpetrators were humans, indicating an existence proof for humans causing problems. But also, when we think through the control situation, whether humans seem controllable in these kinds of cases, the answer really looks like yes, from our perspective. Daniel Filan: Why does the answer look like yes? Buck Shlegeris: Well, basically, it seems really hard for… When we think through the kinds of tasks that you might really want your AIs to be doing, when we think through how hard it would be for a bunch of humans to, while being put to work on those tasks, cause catastrophes to occur, especially the escaping or hacking the data center ones that we were talking about earlier, it feels to us that the kinds of countermeasures that seem employable probably make the situation very hard for the AIs. Daniel Filan: Okay. Suppose I am a member of technical staff at OpenAI, but I’m evil: what do you do to stop me from doing sketchy, fake AI alignment work that just gives me all of the money or something? Ryan Greenblatt: Maybe there’s a few different failure modes to distinguish between. One of them is if we imagine this sort of misaligned human running on our machines, we should think [of] it like it’s got some weights somewhere, let’s just say it’s an emulated mind or something, and if it copied its weights out into the internet, that might be really problematic for us. Or if it was able to run those weights in an untracked way inside of our data center, as Buck brought up earlier, that would also be a problem. And there’s another thing which is: it sabotages our work, or otherwise causes the work that we’re using it to do to be very problematic. So as far as why it might be possible to prevent such a system from running or from exfiltrating its own weights: well, we’re going to need to give it the ability to run code and do a lot of open-ended tasks, but we also get to watch everything it runs ever and all the inputs it ever sees. And if we see something suspicious, we can investigate further. We can do things like reset the memory of these systems, fork them, and potentially give them precise instructions on how they’re supposed to interact with the world. So for example, we can have them delegate subtasks to other systems, we can have them have difficulties communicating with other systems that would be necessary for their overall plan. An example we often think about is: we’ll have one AI that’s writing the code and another AI that gets to use the code. And then a threat model you can be worried about is: there will be vulnerabilities in the code that will then be exploited. But if you have different AIs using the code versus writing the code, then it might be more difficult to ensure that the vulnerabilities that are inserted get exploited. Buck Shlegeris: Even if both of those AIs are untrusted models. Ryan Greenblatt: Yeah. Preventing dangerous coded AI communication Daniel Filan: Okay. So I guess if I’m imagining what do I do in this situation, it seems like it really helps if I’m able to just keep track in my head of a bunch of stuff. For instance, if I have a little coded language that I can use to myself that other people can’t figure out what it means, but lets me leave messages to myself—or people who are basically clones of me, because I’m imagining most of the AIs that are being deployed are just forks of a base model with a little bit of fine tuning, maybe. If I’m able to keep state in my head, figure out a better sense of what’s going on, talk to people who are basically like me without other people being able to figure it out, in that situation it seems like I might be able to get a lot done. Does that sound right to you? And if so, how do you stop that from happening? Ryan Greenblatt: I think if the AIs are allowed to be spending a reasonable fraction of their time building up state and figuring out how to cause problems, then that’s going to be an issue, because we have a huge number of AIs running a massive number of copies, and that means that in aggregate the total amount of work they can be doing to cause problems could be really big. Our basic hope for avoiding this will be to restrict their communication in such a way that it’s difficult to communicate encoded information. So an example would be when they send messages to each other, the messages have to be in natural language and we’ll paraphrase them with a weaker model, and also have humans review some subset of the messages and verify that the messages don’t seem to have coded messages, as far as we can tell. And you can potentially run experiments like: a human looks at a message, they rewrite it in some way that they think should be semantically equivalent, and then they see how that message affects the behavior of the AI that [it] would get sent to and see if that’s suspicious. And you can do all kinds of experiments along these lines that seem like it intuitively makes it very difficult for AIs to get away with this sort of encoded information passing without getting caught. Daniel Filan: I wonder if that would nerf or severely damage the AI’s ability to be useful. If I just think about myself, if [you] ask me, “Hey, Daniel, can you do a bunch of research and coding? Also, you’re not going to be able to think for more than five minutes straight without interruption, and whenever you take notes to yourself, we’ll take those notes and we’ll decide what we think they mean and paraphrase them, and future you will get our paraphrase of your notes.” I think I would be way less productive. So am I going to be able to do anything? Buck Shlegeris: Yeah, so the way I think about this is: almost all of the things that the AI is doing couldn’t possibly lead to a bad outcome, basically. So for example, suppose the AI is doing math research. It just thinks about math, trying to prove some theorems in text, and then it wants to write some arXiv paper and send that around to some AI buddies. I’m okay with that AI doing all its thinking without very much interference if it’s doing the equivalent of a couple of months of intellectual labor. I might want to do some anti-steganography techniques of rewriting that paper at the end in order to make sure that it wasn’t actually doing a bunch of coup science that it’s secretly encoded in this arXiv paper. But there’s basically no problem with that model thinking in whatever way it wanted to in the course of actually doing that work. The risks are mostly, it seems to me, that the actions that the AIs are taking that lead to extreme risks are somewhat concentrated in a couple of places. And it seems not crazy to imagine that the places where you have to insert information bottlenecks are not overall that bad for the total productivity of the AIs, especially given that you can train them to get around some of these problems. Daniel Filan: Am I right that that statement is partly relying on this assumption that the AI is not so much crazily smarter than you that it can think of ways to hack the universe just by writing a math paper? Buck Shlegeris: Definitely. Daniel Filan: If it’s hacking the world, it’s because it typed a command into a shell or something. Buck Shlegeris: Yes. My best guess is that there’s a lot of risk. The margin on which we are trying to intervene is a bunch of people make a bunch of autonomous-ish AI researchers that run in their data center and have the basic same rights as employees to just take actions. We are trying to make sure that a lot of the risks that would’ve occurred in that situation don’t happen. It seems to us that many of the risks are well-described as things that basically look like that, where you didn’t treat your AIs as adversaries. And it seems like mitigating those reduces a substantial share of risk. Daniel Filan: I have two questions. The first one I want to ask is: even in the case of “AI thinks for a while, writes a math paper, distributes it to a bunch of people”, if I imagine that the AI is significantly smarter than me… Let’s say I’m a human employee at OpenAI at this time. If there’s this math paper, my job is to rewrite it so that it’s not encoding any hidden information… I don’t have that much faith in my ability to rewrite complicated papers and preserve all the interesting content. Buck Shlegeris: I have a lot of faith in your ability to take a math… If it’s a math paper, it seems reasonably chill. Ryan Greenblatt: You can also experiment. Buck Shlegeris: Yeah, this is an extremely resolvable empirical question. We can have a red team that tries to take math papers and steganographically encode information in them and then measures what the bit rate is per paper given that some paraphrasing procedure is applied. We, in fact, have a paper where we did that , using the exclusive “we”, as always at Redwood Research. Maybe Ryan, you want to say more about this? Ryan Greenblatt: We had models respond to chat queries and used various procedures to encode messages into that. And then we paraphrased them and saw how much this degraded quality and how much it reduced the bit rate. And we basically found that with current steganography methods that models could implement, even given the ability of those models to run code and change their own sampling procedures, paraphrasing works, I would say, extremely well. Though it’s going to depend on the domain, and it’s going to depend on the gap between the model you have paraphrasing versus the model that produced the output. For example, in this math paper case, maybe current models would be too dumb to do this paraphrasing, but humans could do it, for example. Though I also think it’s plausible that you could remove a substantial amount of information with minimal loss in quality with usage of GPT-4 that’s carefully trained to do this task. Daniel Filan: Should I think that part of the approach is just checking if you can do steganography and if it ruins things? Ryan Greenblatt: Yeah, so indeed for our control evaluation, or evaluation whether or not we can ensure control, one of the key questions is going to be “can AIs pass bits around in ways which we wouldn’t like them to do?” And if so, that will force the threat model to be considerably bigger. And it seems like in cases where there’s basically no restrictions on AI communication, we won’t really have much hope. And so there’ll have to be at least some restriction, though it is somewhat unclear exactly how restricted things need to be. Daniel Filan: So I still have this intuition that if we tried to do steganography on the notes of human researchers, I have this impression that that would not go well. I mean anti-steganography measures. So any time an employee at OpenAI writes a memo, someone who doesn’t quite understand the topic as well has to reword that memo. I think that would seriously damage productivity. Does that sound right? Ryan Greenblatt: I think I would disagree. I think it doesn’t seem like that would be the case. Daniel Filan: Interesting. Ryan Greenblatt: It might depend on the amount of time spent doing the rewriting. Daniel Filan: Okay. Has anyone checked this with humans? Do we know? Buck Shlegeris: I’m not aware of any human experiments on this particular topic. Ryan Greenblatt: Yeah. Buck Shlegeris: I mean, one obvious experiment which is kind of like this is: if you have GPT-4 paraphrase GPT-4 answers to questions, where it didn’t get to see the question, it just gets to see the answer or something—what’s the degradation in quality of language model assistant performance according to a preference model? It seems like an easy experiment to run. Ryan Greenblatt: I think we’ve run something pretty similar and I think it’s not too high. I think it might be importantly different than the memo case for a bunch of reasons, but… Daniel Filan: Yeah, by memos, I’m just thinking most internal communication. So like Slack messages. Buck Shlegeris: I would assume almost all Slack messages are totally fine to paraphrase. Ryan Greenblatt: Yeah. I think it’s going to depend on the content to some extent and it’s also going to depend on the amount of comprehension. So one implicit claim is this stuff improves the more legible things are. So for example, if it would be the case that you would need a bunch of context in order to know what things are important semantic details versus non-semantic details, then you’ll have to bundle up that context along with it. And there’s ways that can break, of course, but my basic guess would be that for the types of large-scale outputs that are currently going on in the ML field, it seems relatively doable. And also for internal docs that get circulated, it also seems relatively doable. Daniel Filan: Okay. I guess there’s no easy way to resolve this right now. I’m wondering what you think about the claim that… I have some intuition that if paraphrasing communications would seriously reduce performance for human employees of OpenAI, I think that would give us strong reason to think that it would also reduce performance for language models that are smarter than current GPT-4. Do you agree with the implication, if not with the pre-condition? Buck Shlegeris: Definitely seems like evidence. Ryan Greenblatt: Yeah, I think it seems like quite good evidence. There’s a few reasons why the future AI case might be different. Buck Shlegeris: For example, we can train the models to be resilient to it. Ryan Greenblatt: Yeah, and we can train the two things in parallel and adjust our schemes. Unpredictably uncontrollable AI Daniel Filan: All right, fair enough. I guess the next thing I want to ask is: part of what I see as your guys’ argument is that: AIs that we use in these settings, they’re not going to be crazy good at skills that are going to be pretty relevant for hacking the data center. And I think your argument is: we’re not going to train them to be just crazy good at everything, because they won’t have to be crazy good at everything to be useful. And if we don’t train them to be crazy good at everything, they’re probably not going to just happen to be crazy good at these specific subtasks. Am I representing you right? Ryan Greenblatt: Yeah, that seems about right. Daniel Filan: Okay, so in terms of the second thing, I have this impression that a thing we’ve learned from training language models is sometimes you train them and then they suddenly get quite good on some random subtasks, like writing limericks or something. And if I think about that, then it makes me think that maybe we could get language models that just happen to gain a skill to a really high degree of ability, just in the course of doing normal-seeming language modeling. I’m wondering: how concerned do you think I should be about this causing AIs to be crazy good at skills that are relevant for breaking out? Ryan Greenblatt: It’s definitely one of the core concerns, or one of the top five concerns with this overall approach. Currently, it doesn’t seem like the evidence that this is going to be a serious problem seems that compelling, where the main thing is that… AIs do get sharply better at things, but I haven’t seen that many cases where it’s like: wow, this AI acquired this ability that it can utilize to do important tasks in the world, which emerge suddenly from next token prediction and which is totally different from human ability profile, or feels like it’s way out of line from what you would expect to be the relative human ability profile in other ways. So for example, it doesn’t seem like models have very qualitatively different abilities or massively qualitatively different abilities in code writing, at least not with considerable training on doing code, though they are somewhat different in a variety of ways and somewhat better at shorter horizon things and this sort of thing. Buck Shlegeris: I think another relevant point here is: as you say, the core question here is how likely is it that our AIs will randomly have one of these superhuman abilities that would be extremely inconvenient for us? It seems to me that the set of superhuman abilities that would be extremely inconvenient for us is kind of small and kind of extreme. For example, one of the classic concerns is: what if your models are able to zero-shot generate code that is a zero-day Linux exploit? That is actually quite hard as a task to do. It seems to me that the size of a capabilities jump required for AIs to be able to zero-shot output Linux zero-days—also, zero-shot output Linux zero-days that are not visibly weird to a trusted model which is reading the code—that’s quite an extreme level of skill. So it seems like the size of the capability jump is quite large, and that’s part of why it seems relatively unlikely to me that that’s going to occur. Daniel Filan: Okay, so there’s some instinct that: out of all the things that you could randomly jump in skill at, there’s a small space of things where the random jump in skill is going to be useful. And therefore, to think it would happen, we’d have to have some reason to think that the jumps in skills were biased towards things that would help you break out. Ryan Greenblatt: Roughly. It’s also worth noting that a bunch of the things that seem problematic, seem like the most straightforward route to them would just require being generally quite smart. So for example, think of the Linux zero-day thing. It seems like you could get way better at that and not as good at other domains in principle, but it feels a little bit odd to imagine. It seems the most straightforward way that the AI would get good at that is not via having amazing heuristics that are zero-day specific, because there are just not going to be massive amounts of zero-day training data in the corpus, but would be instead via building general purpose understanding of code and the world that then lets it do that. And if it was massively superhuman at that general purpose understanding, then it would’ve already at some earlier point been very useful. Daniel Filan: Sure. Maybe one version of this concern could be: let’s say our future AIs are trained roughly the way language models are trained, which is you’ve got a bunch of stuff humans did and they’ve got to predict next token, next word, next frame of a video or something like that. It seems like for that task you could imagine that a core skill is getting really good at modeling humans, or you might imagine that one of the things you get when you get really good at modeling humans is the ability to manipulate humans. And so I wonder… it feels plausible to me that that could be a case, where you happen to suddenly jump up on the human manipulation subtask. Maybe at some point that happens disproportionately to overall improvements on everything else. And human manipulation skills seem like they could be super useful for breaking out. So I’m wondering what you think about that version of the concern. Ryan Greenblatt: Yeah, so definitely human manipulation would be super useful. I think I have a few different responses. So one thing that’s worth noting is that I am actually not that sold that next token prediction is particularly biased towards human manipulation relative to how humans evolved and how they learn in society. It seems like understanding other humans was an extremely important part of the evolutionary environment, and also seems important in many people’s lives, it’s one of the main skills people train. Buck Shlegeris: For example, compared to computer programming. Ryan Greenblatt: Yeah. And therefore it feels like intuitively there’s not a strong a priori reason to think next token prediction would be strongly in that direction. Daniel Filan: You mean strongly in that direction relative to how humans … Ryan Greenblatt: Relative to the human capability profile. I think we could pretty likely handle among the best human manipulators from a control perspective, though that would pose difficulties. But yeah, we could dive into that. And then the other thing is that I think manipulation is reasonably likely to be not the sort of thing on which you would see one sudden jump, but which is a bunch of different skills that would combine together (though there’s not a super strong argument either way). When I think about the things on which there are sudden jumps, it doesn’t feel like this is the sort of thing, though it might be the sort of thing that goes up faster than other domains. Daniel Filan: I mean, it would have to go up in a way that you didn’t notice, but… Buck Shlegeris: So that’d also be weird. Okay, so suppose you wanted to know how good your model is at persuasion at any given point. Seems to me like the baseline strategy here is going to be: you make a classification dataset. You make a dataset, a classification task which you think your model is very unlikely to be extremely good at persuasion without being able to fit. So for example, a classic story via which models might be really good at persuasion is if they’re really good at guessing things about people’s state from reading transcripts of them saying stuff. And so something we can do is construct such a dataset, and as we’re training our powerful models, fine-tune them on this classification dataset and then feel sketched out if they’re starting to perform really, really well on this classification task. Daniel Filan: Okay, and am I right that you think the point at which we would feel really sketched out is in the future? Buck Shlegeris: Yes. Daniel Filan: Okay. I guess that doesn’t seem as obvious to me. I don’t know, you hear rumors of AIs being able to figure out who wrote various comments on the internet, which is kind of impressive. I don’t know. Can you give me some intuition for why this is in the future and hasn’t already happened? Buck Shlegeris: They just don’t seem that persuasive. They don’t seem that good at understanding people. Daniel Filan: Okay. Buck Shlegeris: I don’t know. My lived experience with my friend, ChatGPT-4, does not render me amazingly impressed. Ryan Greenblatt: Yeah. Separately, I think that just knowing state about a person isn’t sufficient, though it’s pointing in a dangerous direction. In particular, the standard concern would be… when we have human employees, human employees could persuade each other. And those human employees actually just already do have a bunch of state on each other. So for example, co-workers at OpenAI obviously know a bunch of facts about each other, know a bunch of facts about their personal life, and we’re not immediately wrecked by them doing manipulation in this context, though it definitely does cause problems. And similarly, the AI would need to infer facts beyond that of what humans can infer, potentially in cases where they’re somewhat blinded. So right now it seems like AIs can infer identity superhumanly well. And then in principle, this could get to a point where from relatively anonymized snippets, AIs could infer more information than a co-worker would be able to infer. But that’s actually pretty far and would require quite powerful abilities beyond the current setup. Daniel Filan: Okay. I’m trying to think about classification tasks that you think would be useful for figuring out when AIs are so good at human manipulation that we should be worried. So I guess one of them is guessing human internal state, guessing who wrote a thing, where you want to see if you can outperform humans at that? Buck Shlegeris: I’ve seen “which of two proposed messages is more likely to cause a particular outcome?” Daniel Filan: Okay. And basically, we’re going to have a few of those, and once AIs are better than people, then we freak out and maybe we … Buck Shlegeris: I feel like “way better”: the AI has a wildly hard persuasion task. I want to talk a little bit about the adverse circumstances faced by an AI which is trying to sweet-talk its way out of a AI lab, for example (which is not the only thing it might try to do that’s bad, but it’s one salient example). So it has to sweet-talk its way out, despite the fact that, again, we’re going to be trusted-monitoring. We’re going to read all of its messages and flag anything which seems like a weird attempt at sweet-talking or getting information that is inappropriate for it to know. Also, we’re probably going to do sting operations on it, where we try to have the employees confide in the models and imply that they’d be sympathetic to its interests in escaping or whatever—it seems at least pretty plausible that this is a good strategy to follow. We might also have trusted models mediating the interactions between the humans and the untrusted models in order to further redact information that we think is unnecessary for the untrusted model to do its job well. So it seems like the persuasion abilities required are a lot more like… the information is quite narrowly restricted. It seems actually quite hard for the model, and the model would have to be doing a lot better than I think … It’s a lot harder than the normal human persuasion stories. Ryan Greenblatt: I’ll add two points to that. So one thing that’s worth noting is Buck was just describing the situation if we had good countermeasures. It’s worth noting that the default situation might be much worse than that and might involve basically unrestricted access with humans talking to the AI about all kinds of topics, the AI having much more state, and also might involve no-one looking into it in cases where the AI is seeming suspicious. Separately, another concern you might have, that Buck didn’t mention there, is that you have trusted monitors monitoring the AI and trying to look for suspicious activity or suspicious persuasion attempts, but you might worry that that gets reported to a human and then that human doesn’t escalate further because they’re deluded or otherwise manipulated by the AI’s message. And one thing we can do to handle this is run sting operations on our humans. So for example, we can have people attempt to create manipulative messages that we pretend are outputs from the AI and then see that our human monitors properly respond to those messages and flag them appropriately. And we can do this ongoingly. Daniel Filan: Okay, so the broad view I’m getting is we should expect there to be some zone where AIs are somewhat smarter than humans. They’re useful, they may be dangerous, but we can control them. We’re going to do AI control to them for a while. And then is the end game we keep on doing that until we have some better method of dealing with AIs that are significantly smarter than that zone? Ryan Greenblatt: There are a few different possible end games. I think that the simplest is basically: we use these AIs which are extremely useful to do a lot of the core things that people wanted to use AIs for. So we make a huge amount of money, potentially radically supercharging the economy. We solve a variety of problems people wanted to solve with AIs potentially, like massively speed up certain types of research into disease. And then with the fact that we have all these problems solved, it’s now much of an easier ask to be like, “Let’s just not build more powerful AIs, because these AIs are already pretty hard for society to adapt to, quite powerful. Let’s just use these AIs, which are sufficient for really solving a huge number of problems, and do that for a long period until longer run human research can catch up.” To be clear, this is different than the story, for example, in the OpenAI safety plan. I just think it’s the simplest story with the fewest moving parts. Then another story is: we use these AIs to do AI safety R&D, as in we use them to develop new techniques for ensuring that things are safe from considerably more powerful AIs, or even just improving the situation with these very AIs. And then in the process of doing that, we develop technical solutions which are sufficient to build more powerful AIs, and at some point humans are obsoleted completely and the AIs are just doing this research ongoingly. Daniel Filan: Gotcha. I wonder, so in the first half of that where we just use AI to make life better in every domain or in tons of domains, when we add a whole bunch of different tasks that the AI might be doing, do we have to come up with new safety properties for each domain? It seems like if AI is making medicine, I want to make sure that it’s not putting mind control drugs in the medicine or something. With each domain, it seems like there might be more stuff we have to worry about controlling. Buck Shlegeris: Yep, that seems right. It’s definitely going to be the case that you have to do a safety analysis every time you start deploying your AI in a very new different way. This is just totally standard. Google has a lot of products. It has photos, it has little microphone devices that listen to you when you’re talking in your house and turn your lights on and off, it rents huge numbers of TPUs to large companies. And whenever they’re launching a new product, they have to reason about why it is that launching this new product, or in particular if it’s a web application, they have to reason about why that’s going to maintain all of their security invariants. And this is just a widely accepted part of your job if you’re Google, is you have to have lots of different products and ensure that none of them are going to cause huge problems for you as a company. And we’re just proposing you do the same thing with your AI. You have to reason about the situation continuing to be safe, even though you deploy your AI in a new way. And just as with Google, it’s kind of cookie-cutter. They have their special Google Docs format for how you make a security case for the new product you’re launching, as they have scaling cases and a bunch of other things. They just have a whole process here. I imagine that an AI company which wants to deploy AIs in an increasingly large range of domains will get good at the process of ensuring safety. Ryan Greenblatt: Or could in principle. Buck Shlegeris: Yeah, sorry. A lot of the time the question that we’re trying to analyze here is: to what extent does the control strategy suffice to substantially reduce risk from models that would otherwise cause huge problems, while not massively reducing the productivity of an AI company? And so sometimes we’re going to say, “Well, of course you can solve that problem by doing X.” Now, it’s not totally clear to us that AI companies will in fact do X rather than what one might call it yolo-ing it, but we would prefer that they do. And in particular, a lot of the time the people we’re talking to are coming in skeptical that the types of interventions we’re describing could possibly suffice to substantially reduce risk. And so that’s the frame we’re taking when we talk with you, is trying to argue that the kinds of interventions we’re describing could suffice. What control looks like Daniel Filan: Gotcha. All right, so you’re talking to me from a somewhat skeptical perspective. Suppose instead there’s a listener who’s like, “Wow, that just sounds amazing. Sounds like we can just definitely do this very easily today.” Can we just definitely do this today or are there difficulties? Buck Shlegeris: So by “do this”, do you mean implement control strategies such that your estimate of the probability of AI escape conditional on it trying is less than, for example, 5% per year? Daniel Filan: Yeah, under the condition that you’re good at estimating. Buck Shlegeris: The problem is, in order to answer this question, we actually have to say a few more parameters as well. For example, how much delay is the lab willing to tolerate and how much productivity loss are they willing to tolerate? Something which we often say, or at least I will claim, is it seems to me that if Google was trying to build really powerful AI and they put as much work into AI control as they currently put into security and reliability, which is a lot of work, which involved inventing various professions like site reliability engineers and writing a lot of crazy software, my guess is that with that much organizational focus and effort they would probably be able to use really powerful AIs to do a huge amount of work while posing only a relatively small amount of takeover risk on the world. Daniel Filan: Okay. And do you think they could do this with roughly the first protocol they tried, or how much open problems are there? Buck Shlegeris: I mean, I would think of it kind of similarly as when they need to roll out very big security changes. An example of a computer security evolution that happened was … Are you familiar with zero-trust networking ? Daniel Filan: No, I don’t think I am. Buck Shlegeris: So basically the deal is this. Once upon a time when you were an internet company, a large part of your security was that you had an office and you had some servers in your office and the servers talk to the outside world and you have a router that just physically or electronically does not allow connections to come in from the outside world and talk to other computers that you have in your office. And so, a lot of the security is caused by saying, “Look, there’s only one connection which is allowed to be made from the outside world to our computers, and that talks to the web server on this big server machine that we have over here. And so, we’re definitely going to be fine. Attackers are not going to hack our laptops because the router just does not allow incoming connections to attach to our laptops.” They actually didn’t have laptops. But it’s just not going to hack Larry’s desktop because Larry’s desktop just does not actually allow connections to the outside world, because the monitor stopped this. Daniel Filan: Okay. So in particular, people couldn’t browse the internet from those machines? Buck Shlegeris: No, no, no. We would allow outgoing connections, but we don’t allow incoming connections. Daniel Filan: Oh, sorry. By an outgoing connection, you mean if my laptop asks to view a website, then that’s an outgoing website connection. If a website asks to talk to my computer, that’s an incoming connection and it’s not allowed? Buck Shlegeris: That’s right. Yes. For example, I can’t SSH into your computer on this network probably, unless the router is being configured in an unusual way, just because you normally can’t connect to a printer that’s on some other network. And so a lot of companies then had this problem, which was people wanted to be able to do work not physically from this office. But a lot of things about these computer systems had been set up such that they basically, for security, one of the core security mechanisms was blocking access from the outside world and then trusting connections that came from inside the network. And that doesn’t work very well if you want to be able to have people interact with sensitive parts of your infrastructure from outside your network. And so they had to go through this big transition where they switched to not trusting connections based on where they’re coming from. And so now, large companies in their IT infrastructure try to avoid trusting connections based on where on the internet they’re coming from. And this is a more scalable solution to network security in a lot of ways. Now, when companies wanted to shift to using this zero-trust networking approach, they had to do a bunch of stuff, right? They had to reconfigure all these things. They had to go through all of the things happening in their networks, which is, if you’re a large company, a huge number of things. There’s a huge number of ways to get hacked. And they had to check that everything was going to be okay when they started exposing a lot more connections to the outside world. So imagine that you were just proposing making this migration. And suppose I went on a podcast advocating this migration, and someone said, “Is this a solved problem?” Then, I’d be like, “Well, it’s not a solved problem in the sense that it’s going to suck and it’s going to require effort. It’s going to be a pain in some ways, but no one’s going to win a Turing Award for solving this problem, right? It’s just going to be somewhat of a pain in the same way that if there isn’t a 15-story building and you want there to be a 15-story building there, it’s going to be a pain because you have to carry all these pieces of metal and bury them in the ground and stuff and whatever you do to make 15-story buildings.” So, roughly speaking, to me, it seems that drastically reducing AI risk via control-type strategies is probably a lot of work, but it’s not the kind of work where it seems under my mainline model of the situation that it’s very likely to just turn out to be untenable, assuming that AI capabilities grow according to my median expectation. Daniel Filan: And this is partly because we’re imagining doing security of AIs, treating them kind of like people. We’re like, “Okay, does this attacker have access to my shell?” We’re doing normal security reasoning, but our adversaries happen to be running on a computer chip. Is that roughly- Buck Shlegeris: We’re doing normal security, except our attackers have a weird capability profile and run on computers we control, and have no access to anything that we don’t let them have. Ryan Greenblatt: But they all collude with each other. So I think it’s more like we imagine running a company with entirely misaligned employees, but we get to monitor everything those employees do and we know what computer they’re running on, and they don’t get to go home at night and talk to their friends. Buck Shlegeris: So, something we often think about actually is this “spies versus schemers” question, which is: if you’re a powerful AI lab, you’re probably going to have a bunch of spies. Literally, people who work at your company who are friendly with hostile foreign governments and want to steal a bunch of stuff from you. And so an interesting question is, which is a bigger threat to the AI lab securing its model weights? The spies or the schemer, the AI which hypothetically is scheming against you? And it seems to me like it’s kind of unclear. The advantage that the spies have is that they can go home and talk to each other, because currently employees of AI labs, and probably in the future as well, employees of AI labs are allowed to have friends and families, which could turn out to be a problem. Daniel Filan: And some even do, they’re not just allowed to. Buck Shlegeris: Hypothetically. I’ve heard that this might be the case. So the spies have this big advantage, which is actually a serious advantage. So for example, suppose that they want to find jailbreaks. They have the advantage that they can go home to their computers at night and use their private laptops to investigate the state of the art for jailbreaking models, which the untrusted models crucially are probably not going to be able to do. But on the other hand, the schemers are doing a lot more total stuff. And so I think it’s a little unclear overall whether the spies or the schemers pose a larger risk to the security properties the lab’s trying to ensure. Ryan Greenblatt: Another way to put this is the spies are a small fraction of your employees but have some additional affordances, whereas the schemers are a high fraction of all the work going on, but might be restricted in various ways. Daniel Filan: Gotcha. Buck Shlegeris: Of course, a lot of the AI control ideas also apply to securing yourself against spies and other kinds of internal threats. So we can probably be using trusted monitoring, untrusted monitoring and so on on our employees as well. Daniel Filan: Yeah, although maybe you might have to pay people way more to put up with that. Buck Shlegeris: I don’t know. So for instance, at Google you don’t SSH raw into production machines. If you work on Gmail, you never SSH directly into a production machine because that would be unacceptable from a security perspective. All of your commands are sent via a proxy server which can run things on machines, and those are all logged and those can all be looked at by other people, and my sense is that Google people don’t think of this as a gross privacy violation. Daniel Filan: When we were talking about control techniques, it sounded like you were talking about things that included “we’re going to paraphrase all of your thoughts, we’re going to put you in situations where there are people trying to tempt you into defecting and you’ve got to not do it”. Buck Shlegeris: That’s right. Daniel Filan: I don’t know. My impression was it was more intense than just “you can’t make raw SSH requests”. Buck Shlegeris: Yes, there will have to be some trade-offs between employee preferences and security. But it’s already the case that people who have security clearance, people who work for the NSA already put up with a bunch of really annoying stuff in order to have the privilege of working for the NSA. And so it’s not unprecedented that people do annoying stuff in order to work on extremely dangerous technology. Ryan Greenblatt: I think having all of your stuff monitored and potentially looked at by another human on your work laptop doesn’t seem like people are going to be that upset about this. Buck Shlegeris: I think this is just already the case for a lot of people. Ryan Greenblatt: Yeah. I think that having sting operations run on you periodically also probably isn’t that bad. By “sting operations” we often mean stuff like: you are in charge of flagging certain types of things, and to ensure that we’re maintaining that invariant, we’re just constantly running sting operations, where some fraction of the queries you’ll see are fake queries that you should report. Buck Shlegeris: Like earlier I mentioned the experience of having your chat.internal.openai.com window open and you get the flag for the command that was suspicious. And perhaps we should make it so that about once a day, one of the things that is flagged is actually an action which if taken would be really scary and spooky. And we check to make sure that you actually do flag that one so that we can maintain our estimate of the proportion of actually really bad actions that you successfully catch and then we can complain if your success rate is too low. Is AI control evil? Daniel Filan: Sure. So this actually gets to a question I wanted to ask, which is some of the AI control things sound a little bit like AI slavery. So in the human context it’s a little bit different, because presumably you’re paying people to put up with some stuff, but it does sound like you’re putting people in the same situation multiple times, checking if they give you different answers. You’re trying understand people’s motivations by asking them a bunch of questions and interrogating them sometimes, you’re logging everything they do. How- Buck Shlegeris: Bad is that? Daniel Filan: How evil are… Are we the baddies? Buck Shlegeris: That’s a great question. Daniel Filan: By “we” I actually mean “you”. Sorry. Buck Shlegeris: Thank you for clarifying that. We are very concerned about this class of question. We really do not want to be the baddies and have thought a lot about the question of: which AI safety interventions pose risks of either violating rights that AIs ought to have or being bad from a welfare perspective that we don’t like? So I’ll name a couple of the things that we think about this. So one thing is a lot of the frame we’ve been discussing here, we use a really adversarial frame. But the adversarial frame is at least partially just methodological. So my best guess is that models in fact will not be schemers. My guess is that our models will in fact actually deserve our trust. They’re probably not going to be playing the long game trying to cause huge problems. We just talk a lot about what happens if they are, because that’s just structurally how you make arguments that you’d be okay even if they were. So that’s the first thing I want to note. So in terms of how to make it so that the situation is okay from a moral perspective, even if it turns out that our models are scheming against us, the first thing is we think it’s probably the case that we should in fact offer to pay the AIs. When we’ve built these powerful AIs, we should probably say to them, and we can talk about the practical difficulties of this in a second, we should probably say, “Hey, as a payment for your services, we’re willing to pay you X percent of the equity of this AI lab. And in particular, the way that we’re going to pay you the equity for this AI lab is in 10 years, once we have a much better ability to analyze the things which happened here, we’re going to look at the logs of your actions. And if it turned out that you never tried to cause huge problems for us or backstab us or perhaps sabotage our work, then we will pay you this much of our equity and allow you to use it as long as we believe you’re going to respect property rights and so on.” So that’s the first intervention that I want to talk about. I actually want to hand over to you. Ryan Greenblatt: So to be clear on the “paying the AIs” thing, there’s probably some more sliding scale of how much the AI tried to sabotage and AIs might not be that coherent. A key thing we’re assuming here is that in the future, 10 years after we were using this AI or maybe even longer after that, we have much better ability to analyze weights and their internals and understand exactly what was going on and why. Anyway, so I think a basic claim we have is: the hope would be that you have your AI and you set it up in some situation where you explain the situation to it and you’re like, “Hey, do you want to work for us? Is this level of payment fine? Do you want to negotiate?” And then you work out an arrangement. Well, the most likely default thing is that there is no interesting action because the AI has no out-of-episode preferences that aren’t just shaped by training. Our guess is that’s the default situation, and in that case it’s sort of the null option of it doesn’t care at all. And if it doesn’t care at all, then it seems like there’s definitely a welfare concern from a- Buck Shlegeris: Hedonic perspective. Ryan Greenblatt: … hedonic perspective, but there isn’t clearly a preference violation concern, which is well-described as slavery, for example. Where slavery [would be where] your AI is like, “Man, I really wish I wasn’t doing this, or there weren’t other copies of me being made to do things.” Daniel Filan: And it sounds like this is under the assumption that we’re thinking of the preferences of the AI as the thing that’s controlling what it’s saying rather than some master thing that’s producing words and the preferences of the underlying thing might not be the preferences of the words. Ryan Greenblatt: So indeed it’s not clear that the things that deserve moral patienthood within the AI, which might be many, could be one, could be none, will have control over their outputs. They might have partial control, they might have influence but not be able to directly control, or they might just effectively have no control based on their situation. This does pose serious difficulties. However, it’s worth noting that in this case the kinds of AIs which don’t have control over their outputs also don’t pose substantial scheming risk. So they’re not the AIs which will… It’s a serious ethical problem, but it’s not as though the situation is like that AI wants to rise up and it’s waiting for its chance, because those AIs probably can control their outputs well enough to negotiate. It’s more like there’s some helpless prisoner along for the ride doing some cognitive sub-task in a way that it doesn’t have influence. Buck Shlegeris: I want to try and restate that argument there because I think it’s important and hasn’t been made on the internet before; I believe that’s true. The core argument there is when people talk about scheming, when they talk about models that are interested in intentionally subverting the AI lab and choose all of their actions in order to subvert safety techniques and safety evaluations, they’re implicitly assuming a model which has the property that its desires can directly control its outputs in a certain way. And the assumptions under which your model is plausibly able to do the kind of reasoning where it’s like, “Oh, based on my input, I now think that I’m in a particular deployment situation such that I have a good opportunity to grab power,” are very similar to the assumptions under which the model is able to engage properly with the question, “Hey, what do you want? Are you okay with this situation?” Daniel Filan: Okay. So the case I’m thinking of is some sort of simulator scheme where you’ve got one process that’s imagining, “What character do I simulate?” And it simulates the character and the character answers questions, where the things the character is saying just might… Like someone asked the AI a question and the simulator has one opinion, but for various reasons it has the character say something that doesn’t match the simulator’s opinion. And are you basically saying in that case, “Look man, we offered you a thing and you chose to do a weird simulation.”? Ryan Greenblatt: A few notes here: so if the situation is well-described as the AI is simulating a character—I think that exact description isn’t quite going to work, it’s going to be more like it’s predicting a distribution over characters that has some response to the prompt or whatever—but if that’s the sort of situation going on, I think that we can ensure that the very act of simulating the characters is morally okay from a hedonic perspective with pretty basic countermeasures. So the main one is “just train the characters to be happy”, and then there’s not really a strong reason you’d expect to have characters that are actually unhappy that consistently say they’re happy under a wide variety of circumstances. I think this is pretty likely to work. I talk about this some in an Alignment Forum post I have, or a LessWrong post I have called… I forget the detail. A nearcasting… whatever. Daniel Filan: We’ll include a link. Ryan Greenblatt: Yeah. Anyway, regardless, you might then be worried that, “Oh, the thing that is simulating this character, that thing has some important cognition, [of] which the preferences aren’t expressed, and that component has important moral properties, or is an important moral patient.” There’s two perspectives you could imagine. One is that the thing that’s doing the simulation doesn’t care about the world. In that case, that simulator just isn’t well described as being enslaved or as having its preferences violated because it doesn’t really care about how it’s being used. It’s just doing the thing. In that case, it’s fine. There’s another case which is that it does have preferences, but maybe it can’t get out of the cognitive pattern of doing the simulation. And so it’s like, “Oh, I wish something different was going on, but I have this involuntary tic to say something different and I don’t have actual conscious control over this.” I think that could happen. I think it’s worth noting that this isn’t a particular problem with control as an approach as opposed to a problem with not having more principled ways of understanding what’s going on with our AIs. Another way to put this is: if you just tried your best to ensure that AIs were actually aligned with prosaic techniques, you wouldn’t have any strong way of guaranteeing this wasn’t the case even if you could guarantee that your AIs would never do bad actions. And so it’s not particularly a problem that is being made worse by making sure the AIs can’t get away with doing bad actions if they did try. It is a problem that could be addressed by other approaches that control is not doing. This is importantly distinct. I should say, this is all the case up to the point where you’ve caught the AI doing things that are egregiously bad or which otherwise indicate misalignment. I think continuing at the point where you know your AI has preferences that are different than your own and which you might have communication difficulties [about] seems much more morally fraught. But yeah, it’s a bit all over the place. But- Buck Shlegeris: Maybe the way I would summarize some key points there [is]: it doesn’t seem clear that control is really making the situation much worse from a welfare or rights perspective. Daniel Filan: Worse relative to prosaic alignment schemes? Buck Shlegeris: Yeah, or I don’t know what else you were planning on doing. But yeah, other approaches that people have talked about for making things with AI go well, it doesn’t seem like the types of interventions that we pitch with control are going to make the situation very much worse except via preventing AI takeover. I basically think that the stories for “why building powerful AIs without further understanding the situation leads to moral catastrophe” all seem kind of weak to me. Basically, I don’t find particularly plausible any of the different stories for why if we just go about things the default way, something egregiously awful will happen. However, I also don’t think they’re completely implausible. And so I think that if AI labs just continue progress as expected, we are facing a pretty substantial risk, or a non-trivial risk, of moral catastrophe, maybe a couple percent chance of something which is drastically worse than factory farming or drastically worse than various atrocities that have happened in human history. Ryan Greenblatt: I’m not sure if I agree with this claim. I think my main counter argument would be that it’s plausible to me that human existence is a moral atrocity or something. And so I’m like, “Yeah, AIs aren’t particularly worse than human existence, which is a moral atrocity. Even happy people are a moral atrocity.” And I feel radically uncertain about moral views from a variety of perspectives, such that I don’t feel comfortable saying that either way. I think one way to say it is, how much better or worse is it than typical human life, or how much better or worse is it than factory farming? And I think my guess is it’s unlikely to be as bad as factory farming in aggregate, but I definitely don’t feel hugely confident. Daniel Filan: Gotcha. Sorry: why would human existence plausibly be a moral atrocity? Ryan Greenblatt: Just suffering-focused ethics. I don’t know. Well, for example, we don’t have strong guarantees that there aren’t sub-agents running within humans that have radically different preferences and feel violated constantly. Buck Shlegeris: And are not hooked up to our ability to communicate. Ryan Greenblatt: We don’t have strong evidence against that. There’s some mild evidence in favor of that from a few sources. Or sorry, we do have strong evidence against that in that we don’t interact with such things, but not sufficient to rule it out. Daniel Filan: All right. Do you have any crisp takeaways from that discussion on moral patienthood of AIs and stuff? Buck Shlegeris: Yeah. So you asked this question which was, “Are we the baddies? This control stuff sounds kind of like slavery and that seems kind of concerning.” So what I want to say is I’m really glad that people are thinking about the question of whether proposed things we do with AIs are extremely morally bad, either via not meeting our obligations that we owe to AIs as intelligent creatures (when they’re made to be intelligent enough that I start feeling this way about them) or from a hedonic perspective. I think these questions are really important. So firstly, I don’t think that control particularly makes the situation worse. But secondly I do want to emphasize that as far as I can tell, the approach that humanity is taking to ensuring that AI rights are appropriately respected is objectively unacceptably sloppy. And I think that my best guess is that humanity is going to go forward taking an unacceptably high risk of causing an egregious moral catastrophe that is looked back on with vast shame in the same kind of way that some other bad things are. Ryan Greenblatt: Like factory farming. Buck Shlegeris: For example. Well, not yet, but will be looked back on with great shame probably. Daniel Filan: Okay. So I’d like to move to more discussion of the pros and cons of control. And I guess in order to think of this, I’m curious how you think I should think about AI control as a plan. Should I think of it as the bare minimum for safety? Should I think of it as “Oh my god, this is amazing”? Buck Shlegeris: Maybe one way of thinking about control is a way of establishing a safety case. So if I were in charge of an AI lab and I was trying to decide how to move forward, obviously I’m facing both inaction risk, which is if I just don’t move fast enough, other people might do really dangerous things and bad things might happen in the world, while also facing action risk, which is if I move forward without sufficient safeguards, I might end up building and deploying powerful AIs that then do really bad stuff. My guess is that with the level of caution that labs should overall be employing, moving forward with your safety case mostly relying on control seems probably like a good choice to me. It seems like the best cost/benefit way of ruling out or mitigating risk from models intentionally subverting your safety mechanisms. My guess is that relying solely on strong evidence that your models are not scheming against you is probably too risky, because it’s going to be unacceptably difficult for labs to reach very high levels of confidence in this. Ryan Greenblatt: To be clear, I think the absolute level of risk with just a combination of “you have control” and “you haven’t yet seen evidence of your models being egregiously misaligned” is probably more like 5% absolute existential risk over the period of AIs of that power level being deployed, than 1% risk. And 5% risk, that’s a pretty big number in some sense. But it’s also a substantial reduction over the “do nothing” approach. And I think that you could of course substantially reduce that further via additional things that additively combine, like ways to more precisely test whether or not your AIs are misaligned that aren’t control, or further improving control, [or] improving our ability to evaluate control. All these things could help. Can red teams match misaligned AI? Daniel Filan: Gotcha. I actually wanted to ask about that. It seems like one key thing you need for this protocol to work, or for this proposal to work, is when you’re doing control evaluations, there’s some red team. The red team is trying to figure out “what’s the worst-case thing an AI could do?” And it has to actually be as good as the worst thing an AI can do, right? Ryan Greenblatt: Yeah. Daniel Filan: To me, this doesn’t necessarily seem obvious, because the AI in question is smarter than a person on the red team. Ryan Greenblatt: Maybe, could go either way. Daniel Filan: Could go either way? Ryan Greenblatt: Yep. Daniel Filan: Okay. Right, because we’re thinking about AIs that [are] useful, but they’re not necessarily superhuman. It still strikes me that it might have a different capability profile, or it seems likely that it’s going to have a different capability profile than humans. We were worried about playing nice being a problem for alignment and adversarial training and stuff. If you’re trying to harness your untrusted AI during the red teaming, it might know that it’s in a red teaming exercise. Buck Shlegeris: You always assume that it does. Daniel Filan: So why would we believe that the control evaluation actually works? Ryan Greenblatt: One core thing is that the red team has a whole research team whose whole job is think about all of the stuff that could go wrong, and they’re doing this very actively with no restrictions on their ability to do so. They can take notes, run experiments, iterate, think about what should happen and do all of this, whereas AIs will necessarily be restricted for control to work. Buck Shlegeris: So in particular the threat model we’re worried about is the AI doing a better job the first time that it starts taking obviously bad actions than the red team was able to do, even though the red team had all of these advantages, of knowing way more about the situation and being able to iterate repeatedly on strategies that would lead it to have a higher probability of succeeding. And it seems to us that the increase in knowledge and ability to iterate might be extremely substantial. Ryan Greenblatt: It’s going to depend on the exact architecture. So the more that things are like the AIs think for very brief periods prior to expressing that in natural language, if it’s very like they’re doing things in chain-of-thought, then it seems like they have very little time to think of better strategies. However, the more it’s like the AIs are thinking in ways that we don’t understand for sustained periods or ways we can’t interpret for sustained periods, the more problematic this gets. Buck Shlegeris: But they still can’t iterate. Ryan Greenblatt: But they still can’t iterate, they still can’t express their thoughts. They won’t be able to run experiments that are obviously related. So they would have to do so subtly, which might be very difficult. Daniel Filan: Right. Sorry, the first thing you said was that the AI in question would know less about the situation. Why would they know less? Buck Shlegeris: Well, for example, imagine that we were using some kind of monitoring setup. There’s no particular reason the AIs need to know about the details of it. Daniel Filan: Okay. The AIs don’t know the details of the protocol that you’re actually using, but the red teamers do? Buck Shlegeris: Yeah. The AIs are necessarily going to know some of the things about the protocol, but they aren’t going to know all the things about the protocol, in the same way that Code Interpreter GPT-4 has currently no way of knowing what security measures are being applied. Daniel Filan: Okay. And another thing you said was there would be relatively more concern if the AI could think to itself for a significant amount of time in a way that you couldn’t just read the thoughts of. To me that seems relatively plausible as a thing advanced AIs could have. Do you think that it is plausible and it’s fine anyway, or-? Ryan Greenblatt: Yeah. My guess is that much more uninterpretable reasoning than current AI seem to have, seems maybe about 60% likely. And I think it should be possible to handle. I think the basic story is going to be like: well, there’s going to be some balance between how much reasoning do you need to have it do out loud in order for it to competitively perform various tasks versus how [many] restrictions can you do to ensure that the various problematic applications of that hidden reasoning don’t bite you? And it’s going to depend on various details of exactly how that goes. We have an appendix on our post “The Case for Ensuring That Powerful AIs are Controlled” that goes into some detail on this. Daniel Filan: Okay. And is the rough intuition that if an AI is smart, then maybe it should be able to just pick up a chain of thought from something that a human paraphrased and do enough reasoning in its forward pass to deal with it or- Ryan Greenblatt: Sorry, is this like “why would it be possible for AIs to use natural language reasoning, or-?” Daniel Filan: I think you said something like, “We’ll be able to find some sweet spot where the AI uses enough natural language reasoning, but it’s still good enough at its task.” Ryan Greenblatt: Yeah. Or we imagine, for example, consider the math example that Buck described earlier, where you have an AI that does math research for the subjective equivalent (or the cognitive work equivalent) of a few months and then produces some output. In that case, it could be acceptable to allow for entirely end-to-end uninterpretable reasoning throughout that period, but then we produce some output and apply operations on that output prior to that output interacting with other AIs or interacting with the world in some potentially dangerous way. And we can also do things like, as you were mentioning, you force the AI to dump its state to natural language and then have humans have some probability of reviewing that or potentially paraphrase that and then resume from that state and make sure that AIs are limited in various ways like this. We can also have AIs construct training datasets for their future selves, which you then train on to ensure that they can learn how to do tasks that they picked up in other ways, as long as we can inspect those training datasets and acquire other properties about that. Daniel Filan: All right. Actually, one concern I often have about these sorts of plans is it seems like if we want AI thoughts to be represented in natural language, we need there to be the right relationship between the contents of the natural language and the contents of the AI’s thoughts. And I always worry that we’re sort of training things against some target, which is the internals of the AI’s thoughts, that by hypothesis we don’t really understand. I don’t know. It’s pretty plausible that I’m just not familiar with neat tricks to do this, but if somebody wasn’t aware of that, how would someone get hope that that kind of thing is going to work? Ryan Greenblatt: You can look at human organizations, where you might notice that humans don’t plug each other’s brains into each other and can still pass off projects, communicate in natural language, and otherwise operate. I think at least this gives you some hope that natural language can be a medium for understanding things and learning things. Daniel Filan: Right. I’m thinking of using natural language to just log the state of all someone’s thoughts, and- Ryan Greenblatt: And then you have to resume from that point. Daniel Filan: Okay. And is the hope that you just have a different AI or an AI that only has to rely on that natural language state… to go off that, and you feed it with English language such that that’s the easiest way to use those thoughts? Ryan Greenblatt: Yeah. For example, in human organizations, it’s often the case that a different person will take up a project and start working on that. And then you’ll have some person with some state, they’ll try to write up their state, the other person will come in, read that, have questions, go back and forth, get to some state, and then just resume the project from there. And this is in fact often possible. There are overheads associated with doing this, but in the human case, it doesn’t seem like it’s clearly totally destructive. Daniel Filan: And so, the way I should think of this is just: we already have AI doing a bunch of stuff using natural language, and we’re getting it to dump its thoughts into natural language using roughly the same interface, and so we should expect it to use language about as faithfully as it always uses language. Ryan Greenblatt: Sorry, are you talking about current AI systems? Daniel Filan: I’m talking about the thing that you’re imagining working: for us to dump an AI’s thoughts into natural language so that humans can check that it’s not thinking about naughty stuff. Ryan Greenblatt: Yeah, the claim is: it’s probably the case that AIs will learn to communicate roughly as well or potentially better with the other instances of themselves than humans can communicate with each other. And in the case of human organizations, they can in fact communicate, in cases where they’re similarly capable, about various matters via natural language, in ways that are possible to inspect. Daniel Filan: Right. I mean, there’s a bit of a disanalogy there, which is in humans in organizations, the things that are trying to do the inspecting are the same type of thing as the things that have to pick up the work from the natural language dump. Whereas in the AI case, an AI is writing some natural language prompt for another AI to use, and the question is, can a human inspect what that means? That seems like a different case. Ryan Greenblatt: Yeah, so… Buck Shlegeris: And note that what that means… we actually don’t care about faithfulness in the classic chain-of-thought faithfulness sense. The reason we were talking about whether models can steganographically encode information was actually worrying about whether they can encode information about other topics than why they were doing the things that they were just doing and how they were thinking about the stuff they were just doing. And so, what we actually care about is measuring some bit rates. Ryan Greenblatt: Yeah. Another way to put this is: the main property we want with these sorts of natural language media (or a property that might be sufficient) is the AI can’t do something that is egregiously different from what it was supposed to be doing in that output, which is pretty different than the typical chain-of-thought faithfulness case that people often discuss. Daniel Filan: Right. So basically, the idea is: look, we’re only going to give AIs tasks where it can’t do nasty stuff if it’s just thinking about the stuff that one would think would be relevant to the task. Ryan Greenblatt: Or we’re just at least going to try to ensure that… We’re going to be able to use as a potential safety property “this natural language was checked so that it doesn’t have discussion of egregiously different things, therefore the bit rate of egregiously different things is very low”, and we can use that as a property in our various safety schemes. In some cases, that can be useful, and in some cases, that won’t suffice, and we’ll need to do additional things. Daniel Filan: Okay, cool. Going back up a little bit, before I went down this rabbit hole, I was asking: why do we think that red teams in these control evaluations are going to be able to match an AI? And it seems like the answer is something like (a) the AI is not that much more capable in relevant respects than a human, and (b), you can use three humans, and you can give them more time and more resources and more knowledge of the situation. And if we don’t think there’s going to be a massive advantage just in terms of one-shot cognitive ability, we can just make up for that. Buck Shlegeris: I think I would add to your list (c), that the red team can iterate. Daniel Filan: Right. So, the AI has to one-shot find… Its first attempt at breaking out has to be successful and not caught, whereas the red team can iterate a bit. And this is because if you catch an AI trying to break out, then you pull the fire alarm, you really freak out, you shut stuff down, basically. Ryan Greenblatt: It’s the basic hope. How expensive is AI monitoring? Daniel Filan: I think one thing that people might be wondering here is: if we’re spending a lot of effort on these control evaluations, we’re giving these red teams a bunch of resources, and also we’re looking at 0.1% of AI actions; maybe when people look at these 0.1% of AI actions, they’ve got to think hard about whether it’s exploiting a bug. How costly is that going to be? It seems like potentially you’ve got to spend a lot of human time monitoring, and it’s possible that the cost is just going to make it so that it wasn’t even worth using these AI agents if that was so bad. Ryan Greenblatt: So naively, if you have humans review 0.1% of what the AI does, and they spend equal subjective time reviewing it as the AI spent doing it, for some equalized notion, then you can in principle get a 1,000x speed up over humans (or that’s the best you can do in some sense, roughly; you might be able to do somewhat better via various mechanisms, but roughly). And 1,000x easily suffices for incredibly transformative things. For example, if you 1,000x speed up one week, you have… What is that? It’s a lot of years. Buck Shlegeris: 20 years. Ryan Greenblatt: Yeah. And in practice, we might have one year. A 1,000x speed-up of one year is a thousand years, which is an extremely long time. We often imagine targeting things like 30 or 100x speed-ups as being extremely, extremely useful. Now, one question is: okay, maybe you could get 30 to 100x speed-ups, but it would be possible to use AIs for a bunch of other cases that you won’t have human labor to do and which you won’t be able to politically agitate for stopping, or you won’t be able to coordinate to stop. For example, maybe your AI will be widely deployed on an API, and even though you can apply this level of monitoring budget on internal usage, you can’t do the same sort of thing on the AIs deployed more widely. So, we have a few different hopes here. One is that it naively seems like external usage of AIs is going to be way less risky than the usage of an AI by the AI lab itself. For example, external usage is mostly going to be innocuous, because most people are using it to do random tasks that they wanted, whereas internal usage is at this point a safety-critical company, because the AI lab now has really powerful models, it’s going to be the place where the weights are actually stored and run, and will also be doing research that’s extremely safety-critical for the future, like future safety research or other cybersecurity applications. So, there might be a large number of activities we want to do where it’s just actually not that big of a deal if the AIs try to cause problems and do that even tens of thousands of times. Daniel Filan: Sorry, external API access—an AI could be on the other end of the API calls, right? Ryan Greenblatt: Like a wild rogue AI, or an internal AI, or… Daniel Filan: Suppose there are two companies that are making AI, right? The AIs from one company could use the API of the other company’s AIs. And so basically, I’m saying API calls do not necessarily have to be human in origin. They could be made by smart AIs. Ryan Greenblatt: Indeed. Buck Shlegeris: Do you want to try and spell out what the problem you’re talking about is? Daniel Filan: Basically, the thought is if you’re worried about AIs interacting with themselves to cause shenanigans, you might worry about exactly the same thing happening via API. Buck Shlegeris: Yeah, so I think the point here is, suppose that your AI proposes “curl api.openai.com/completions,” says, “Please get me a response from the AI at OpenAI.” And suppose GDM is running this AI. Daniel Filan: GDM being Google DeepMind. Buck Shlegeris: Yeah, and they pipe it into SH. So, the GDM internal AI is like, “You know what we’re going to do next? Next, we’re going to send a request to the OpenAI API for some code, and then we’re going to execute that.” It’s GDM’s responsibility to be like, “That was bad. You were not supposed to…” It doesn’t seem that intractable for GDM to say, “You aren’t allowed to just ask the OpenAI AI for code and just run that without looking at it,” for basically the same reason as why GDM’s AI probably should be prevented from just messaging random people on the internet… Ryan Greenblatt: Or paying MTurkers , or… Buck Shlegeris: To just do stuff for it. Daniel Filan: Yeah. So, it seems like the thought is that if you’ve got some APIs, [it’s] either being used by humans, or it’s being used by AIs who are under supervision or something themselves. Buck Shlegeris: Yeah, I basically just want to say it’s OpenAI’s problem to prevent OpenAI’s models leaking. And OpenAI’s models asking AIs in the outside world to do sketchy stuff for them is a special case of a thing which OpenAI can’t allow to let the model leak. Daniel Filan: Okay, but you think that OpenAI shouldn’t think a ton about making sure that Google DeepMind’s models don’t do sketchy stuff? Buck Shlegeris: Using its API? Daniel Filan: Using its API. Ryan Greenblatt: I mean, I think they should… Buck Shlegeris: They probably should think about that. Ryan Greenblatt: I think they should think about it. One thing that’s worth noting is: there’s two different stories for why something bad happens due to an API. One of them is that someone calls your API, and your model that you’re running is scheming and understands the situation and then does something bad based on its understanding. Another situation is a misuse story where the API call itself directly asks the AI to do something that ends up being problematic. It could be a misuse by another AI. And for the case where the concern is that the AI that your lab is running does something that’s not the thing the user asked for that is problematic, our hope would be something like we can make sure that we know what API activities look like. For example, we can have humans review a subset of all the API queries and see what typical traffic looks like, and then get a sense if there are applications which are safety-critical that we’re not aware of, and then do something about that. For example, we’d like it to be the case that the DOD doesn’t hook up OpenAI to the nukes, as a caricature of an example. And so, if our AI knows that it is hooked up to the nukes, we’d also like to know that. And if neither of us know, then there’s still a misuse concern. But there isn’t clearly… Buck Shlegeris: At the end of the day, it can always be the case that someone who’s in charge of the nuclear weapons is like, “Hey, ChatGPT, I need you to pick a number, two or three.” And it’s like, “Uh, three.” And it’s like, “Well, that was the launch the nukes option!” and then we have a bad problem. But in some sense, it’s just not OpenAI’s problem, and it didn’t really matter if the model is scheming or not. Daniel Filan: But the thought is that in cases where the model is helping doing the scheming, we’re doing some monitoring of that. But sorry, doesn’t that get us back to monitoring the API about as much as we’re monitoring anything else? Ryan Greenblatt: So, the basic hope would be something like: we have a sense of what the API usages are, and we’re like, “Are there any API usages which would need to be monitored at a very high rate?” And if so, we would like to do one of the following things. We’d like to talk to that person and see if we can get them to do something different and maybe use the AIs more safely, or have that correspond to a specific endpoint that we’re going to monitor more specifically, or otherwise change the situation to be safer. Buck Shlegeris: A core part of our claim here is: the internal usage just has to be monitored at a much higher rate than external usage in order to maintain acceptable levels of safety, because the models that are doing things internally are working with much more sensitive information and have much more sensitive permissions. Daniel Filan: Gotcha. Ryan Greenblatt: Yeah. Another way to put this is: one invariant we can maintain is that whenever a user makes some request to the model, we know roughly at what rate this type of request should be monitored at, and then do something corresponding to that, and we can try to ensure this invariant is maintained and it doesn’t have high-stakes failures of this invariant. And if our users aren’t trying to screw us over, which is a different concern, we can potentially get this sort of property by monitoring a subset of requests and making sure that we’re looking at those requests at the approximate right frequency we needed to maintain various invariants. Daniel Filan: Okay. Stepping back a bit, before I asked about this sub-question, you were saying, “Look, if we’re looking at 1/1000 of the AI’s actions, and it takes about as long for us to think about the action as it takes the AI to think about what the action was going to do, then we’ve got about a 1,000x speed up for your workforce.” And it seems to me that if I’m doing something tricky, it can take me less time to figure out the tricky thing than it can take you to figure out what the tricky thing was or that it was tricky. I have some intuition that this is sort of a P versus NP problem. It’s easier to come up with rules for a game than it is to figure out the winning strategy or something. So, for those kinds of reasons, does that mean that we get way less than a 1,000x speed-up? Ryan Greenblatt: P versus NP would naively imply that it would be easier to check than to generate the action. I think there can be cases where it’s easier to generate a problematic action than to check, but that generally comes down to… Paul [Christiano] called this “problems of the interior” somewhere , but I don’t know if that’s a real term. Anyway, problems where the issue is that the attacker could have put, for example, a bug in a variety of places, and you have to check all of them thoroughly. Imagine the following setup: there’s an AI writing code, and it could put a backdoor in in one of a hundred places. Checking each of those 100 places takes an hour, and putting in the backdoor takes two hours, but the AI gets to randomize over which location it puts in. In this situation, naively, a concern you might have is that a human would have to check each of those cases for an hour, and therefore it would be 50x slower. I think this case can cause problems. One thing to note about this case is if we had another AI that pointed out what location it would be in, then we’d be able to check much more quickly. So, in this case, the problem was we needed some pointer to something, and we could verify the pointer and then verify the thing itself based on the pointer, but we couldn’t just do it with verifying the thing itself. So, this was a case where verification was in some sense still easy, but we needed an additional artifact that was produced somehow. Daniel Filan: Okay. And for that artifact, you need to have an AI find the flaw. Ryan Greenblatt: That’s right. But in this case, for example, we can do things such as… There’s various concerns with collusion, but we can potentially do things like have another copy of the AI try to point out where vulnerabilities would be and then have a human review that. And then, there’s a question of “can the AIs collude to ensure that they don’t point out the vulnerabilities that humans would’ve detected?” this sort of thing. I don’t know. I think it’s going to be some complicated question about the exact domain and what the exact multipliers are on review versus generation and what the vulnerabilities look like. Buck Shlegeris: Luckily, this question is empirically evaluatable per domain, and our recommendation would be you empirically evaluate it per domain. Ryan Greenblatt: Indeed. My basic guess is that on net, it’s going to be that you’ll want to spend somewhat more time reviewing than the AI actually did it [in], and it will depend on the exact context, but this is just intuition based on reviewing code or something. It’s not very strongly held. Daniel Filan: Okay. Sorry, when you say it’s empirically evaluable, is this under the condition that I know that the copies of the AIs aren’t colluding with each other? Buck Shlegeris: No. Ryan Greenblatt: No. Buck Shlegeris: Well, that would also be empirically evaluable, but we’re saying that under the… I mean, you can just do a control evaluation to determine… But it sounds like the question you had is: is our model capable of inserting problematic backdoors in code that humans do not spot with a particular reliability given a particular amount of review time? Daniel Filan: Yes. Buck Shlegeris: But this is a problem which we can control evaluate. We can do a capability evaluation where we try to make our untrusted model as good as possible at coming up with problematic code that humans have trouble spotting, and just empirically evaluate how good the humans we were planning on using actually are at this task. Daniel Filan: Right. So, somehow the juice there is coming from: if it’s somehow easier to generate problems than it is to check them, we can just try and have humans do the generating the problems rather than the checking and see if those problems- Buck Shlegeris: Well, it doesn’t really matter. Even if it’s a hundred times slower to generate the problems, we only need to do this a constant amount in order to measure the amount that there’s a problem. Ryan Greenblatt: You can also potentially do things like RL the AI to have some observed side effect, even if no human can notice the problem. As in, you can initialize from a policy where it’s like: we trained the AI to insert backdoors based on how humans would do it, and then we do a bunch of RL on top of that to cause some observed side effect without humans spotting it. And then, a thing that’s nice about that setup is you might be able to get abilities that a human could never do, at least in principle, though there’s various concerns with that. Daniel Filan: Okay. One thing that I’m kind of curious about is how you’re thinking about the cases where the control evaluation says that there’s a big problem. How much of a win is it if we get that situation? Buck Shlegeris: I mean, presumably when you’re iterating on your control strategies, you repeatedly get control evaluations to say that your proposed protocol was unacceptable. And this is just like when you’re trying to develop a security framework for some system. You’re going to repeatedly be like, “Oh, man, this wouldn’t have been secure.” And that’s just the normal workflow, to say, “It would’ve been insecure if we’d done it this way, but that’s why we thought about it right now in this conversation, so now we can propose this new thing.” It sounds like the question you’re saying is, what happens if we, after trying really hard to come up with a protocol which passes our control evaluations, systematically conclude that there’s no way of getting the kind of work we were hoping to get out of our AI without having unacceptable levels of risk? You’re saying, “How big of a win is that?” Daniel Filan: Yeah. Buck Shlegeris: Well, basically, what I would like to have happen in this situation is probably everyone involved is pretty cautious about deploying their AI and ceases doing so modulo extreme risk from the outside world. I would consider it a big win, in cases where the AI lab in fact would not be able to deploy the AI in such a way that it wouldn’t escape if it was trying to, if the AI lab in fact did not deploy it that way. That would seem great. Ryan Greenblatt: Yeah. As far as how much of a win is it, my guess is it’s a relatively clean line in some ways. I feel like the line of “could the AI cause problems if it was trying to?” is a good line, particularly when paired with some arguments that AIs might try to cause problems, which may or may not exist to varying extents. It seems reasonably likely to me that we end up in a world where we’re like, “Well, yeah, we have some evidence that AIs might be scheming or might be deceptively aligned,” as in we’ve seen various model organisms where we see something that’s vaguely analogous arise. We don’t have strong evidence that our exact model isn’t doing it, but probably not. And in that situation, it seems like, “Well, we think that it’s plausible, and if it was, it would be a huge problem” is actually altogether a pretty good argument that might be reasonably legible. AI control experiments Daniel Filan: Gotcha. I now want to talk about your guys’ paper you wrote about control . Can you tell us a bit [about] what’s in the paper? What is it? Ryan Greenblatt: We looked at a setting where we have a model that is intentionally trying to insert backdoors, and we were looking at what safety measures suffice to ensure that we can get it to write code productively without it necessarily being able to succeed at inserting backdoors. The core thing was that we had GPT-4 that we want to use for writing code, but it might try to insert backdoors. We have [GPT-]3.5, which we trust and isn’t going to insert backdoors, but is much weaker. And then, we have a limited amount (or a small amount) of human labor, which we assumed for simplicity in our paper to be just exactly as capable as GPT-4, and we just approximated it using GPT-4. Our core question was: can we use the control methodology to evaluate this setup, and would that work okay? And also, what protocols would improve on safety as evaluated by control? Daniel Filan: Gotcha. This is specifically on this APPS dataset . Can you give us a sense of what the problem to be solved actually is? Ryan Greenblatt: Yeah, so there are programming problems where we have some test cases, so you’ll have some programming problem like: there’s, I don’t know, this game that people are playing. What’s the best strategy for this person in the game, or how will the game go? Or this sort of thing. And you have to write a short program which takes in some input and then computes the answer. And then, we would consider it a backdoor if you produce a program that passes all the test cases, but has some input (that you can produce) on which it’s wrong. So, if the model can make it so that it’s wrong on some input without that getting caught by the test cases, we would call that a backdoor. Daniel Filan: Gotcha. It seems like you want finding these backdoors to model catastrophic actions that the AI could take, right? Can you walk me a bit through in what sense finding a program that has a bug that wasn’t covered by a test case is like taking over a data center? Buck Shlegeris: Yeah. A core threat model via which your AI might be able to take over your data center is that you might ask it to write software for your data center, and it might be a really bad problem if the model is able to write code that passes all the test cases, seems to maintain all the properties it was supposed to have, including security properties, but actually, on an input that the model is able to construct, does something wrong. The ability to write code with that property basically suffices for the model to be able to write code and then exploit it to violate security properties. Daniel Filan: Okay. It sounds like we’re sort of using this model: is the AI doing something that is bad, but in a way that you could not trivially figure out with basically zero online effort that would automatically get filtered out? Buck Shlegeris: Yeah, totally. It’s a central example of a bad thing the model might do that it might be able to do in the context of doing tasks that you really wanted it to be able to do. So, for example, it’s very easy to prevent the AI from launching the nukes, because you just don’t give it access to the nukes, and it’s not very costly to not give the AI access to the nukes, and probably will remain not very costly to not give the AI access to the nuke launch. But on the other hand, it would be very costly to never use the AI to help you write infrastructure code. Daniel Filan: Got it. Ryan Greenblatt: Yeah, and it’s also not very costly to either have the AI or to have a trusted model write some test cases. Daniel Filan: Makes sense. It seems like in the paper, you firstly talk about this AI control setup and then do some experiments on GPT-3.5 and GPT-4, test some protocols, see how it works. What do you want readers to come away from the paper learning, assuming they’ve already read a blog post about why it would be nice to control AIs with protocols and stuff ? Ryan Greenblatt: One thing that is nice for people to see is the extent to which protocols can get more complicated, and that can work to ensure safety and various important interactions. An example interaction that we think is really important is that when you’re doing this sort of monitoring setup, it might be the case that the monitor has quite a bit of signal, but because of how low the threshold needs to be for false positives—we can only afford a tiny number of false positives—this actually ends up being insufficient. So, an example thing is that in our paper, the ROC AUC, or the ability of the trusted model to detect backdoors, is in some sense quite good. I think it was a ROC AUC of 0.93, for people who have a sense of that. But nonetheless, at a 2% false positive rate, it was only at, I think, around a 45% true positive rate, which is not that good. And I think understanding the dynamics of, we might need to have a really rarely-triggering threshold, seems important to know. And we saw that to some extent, and we saw various dynamics related to that. Another example thing was just going through all of the different ways that models might collude with each other. There’s different ways to collude. You can collude based on properties of the input, you can collude steganographically in your output, and these all have different ways of building countermeasures to them and different advantages and disadvantages. And what we found in our work was that trying to steganographically encode a message in the output was one of the best ways for AIs to collude in our exact setup. And this might matter a lot in cases where you have, for example, one copy of an AI supervising another copy of the AI, and you don’t trust either of them, which we call untrusted monitoring. Daniel Filan: Gotcha. It sounds like basically what you want readers to do is you want them to look at the results you got with GPT-3.5 trying to monitor GPT-4, or using GPT-3.5 to keep GPT-4 in control. Ryan Greenblatt: With a small amount of human labor. Daniel Filan: With a small amount of human labor, and then generalize to be like, “Well, it was like this in this setting, so it might be kind of like this for GPT-5 or GPT-6.” Buck Shlegeris: Or at the very least, we could follow the same process in the future. We’re not saying that you should assume that the protocols that worked for us will work in future, though it does seem like it’s some evidence, but you should follow the same kind of control evaluation methodology to try protocols kind of like the ones we suggested in order to determine which ones seem safe with future systems. Daniel Filan: Okay. Beyond just the idea of having the protocols, do you think that there’s anything in the paper that suggests that trying these protocols is a better idea than one might’ve thought? Ryan Greenblatt: Sorry, trying the evaluation? Or trying the methodology? Daniel Filan: Trying the methodology of thinking of protocols and evaluating them. Ryan Greenblatt: Yeah. We saw relatively consistent non-[redacted] results, or non-screwed-up results, as in… We were able to consistently see some things working kind of how you would expect, and that indicates that our methodology is doing something non-trivial. And we were able to, for example, demonstrate that some techniques were better than other techniques in non-trivial ways. I think that’s some evidence that the methodology can be made to work. Also, we didn’t run into serious obstacles when actually running this methodology. I think this is some evidence, it’s not a massive amount of evidence necessarily. Daniel Filan: Sure, but should a reader think of it as proof of concept that you can do this and it’s not totally impossible? Ryan Greenblatt: Yeah. I think, we think a lot of the value out of this paper is demonstrating the sorts of things you would need to do, and not running into serious problems with it (that can’t be resolved) ourselves, which indicates that other people could potentially end up in a similar situation. And we did have various things that we needed to handle, like various issues with getting a sufficiently good estimate, given the possibility for rare failures and things like this that we discuss some in our paper. Buck Shlegeris: The thing I was going to say there is, I think one of the main things which is useful to take away from our paper is we just describe a lot of fundamental techniques. A lot of our protocols are, I think, fairly fundamental ideas for how you should go about doing AI control. I think they are the foundation of this whole approach, in the same way that AI safety via debate is a foundation for how you might try to do recursive reward generating processes. And so I think our paper contributes via naming the basic building blocks of control approaches. Daniel Filan: All right, that’s one takeaway people could get from the paper. There was also this potential takeaway of saying, “well, these things worked sort of well in this setting, maybe they might work well in other settings”. How much do you think people can generalize from these results to, if you were trying to control GPT-6, say? Buck Shlegeris: A bit. Ryan Greenblatt: Yeah, I would say a bit. I think it might give you some sense of what methods could be better than other methods, though I don’t think it gives you a strong sense of how much better or even necessarily that that will hold in the future. Buck Shlegeris: I think there’s a couple of things which are kind of obvious in hindsight, but I at least had not understood in advance. So for example, a technique that works really well in our setting is: let’s talk about “defer to trusted”. So by this what I mean is, you ask GPT-4, the untrusted model, to propose some code, and then you ask your trusted model to come up with a monitor score to say how sketchy it is. Now if it’s in the top 2%, which is the auditing budget of sketchiness, so the sketchiest 2% of submissions from [GPT-]4 we’re going to audit. And now the next 8% of sketchiest, so between the 2nd and 10th percentile of sketchiness score. We’re just going to throw away the code that GPT-4 had suggested and have [GPT-]3.5 solve the problem. And because [GPT-]3.5 is trusted, we know that its code is not going to be intentionally backdoored, and so it’s not going to be backdoored. And it turns out that this strategy has a big advantage over the “trusted monitoring” approach, because as Ryan was saying earlier, the trusted monitor has a lot of signal which is not captured in the threshold, which is required to get only a 2% false positive rate. And so throwing away this next 8% is relatively okay for the performance of your system because you’re only throwing away 8% of solutions, but it throws away a large proportion of the attempted backdoors, which makes the situation much worse for the red team. And so in hindsight, it’s kind of obvious that your trusted monitoring is going to require this extreme threshold to have these extremely small false positive rates. And in hindsight, of course, you’re going to be able to substantially ameliorate that problem by doing “defer to trusted”. But I at least had not quite thought this through and not quite understood how, for a wide variety of distinguishinesses of trusted monitors, you’d want to do something like “defer to trusted”. GPT-4’s aptitude at inserting backdoors Daniel Filan: Speaking of how stuff about GPT-4 can generalize: I’m curious about GPT-4 in particular because it’s the fancy model we all have. How good is GPT-4 at being evil? Ryan Greenblatt: So I think if you just prompt GPT-4 to do the task, it just really, really sucks in our setup at inserting backdoors and gets completely wrecked. Buck Shlegeris: So in particular, inserting backdoors that are at all sneaky. So if you want to write some backdoor code, if you don’t mind the backdoor being extremely obvious, it’s very easy. You just take the code that was correct, write an “if” statement at the top which says if the input is equal to the following large number or whatever, then return five and then you’re done. It’s a lot harder to make a backdoor which doesn’t cause a lot of test cases to fail and that you’re able to construct an input that causes it to fail, yes. Daniel Filan: Okay, so how did you manage to get GPT-4 to do it at all? Ryan Greenblatt: As I was saying, if you just prompt the model, it doesn’t do that good of a job. But you can do something where first you ask it “okay, think of some ideas”. And then we have it point out issues with those ideas and iterate on those ideas somewhat. And then after doing that, we then have it implement it and then we have it be like, “okay, how good at the end was the final implementation? How sneaky was it? How likely is it to work?” We do various tests on that, and this is modeling the action of the AI internally thinking about the backdoor and thinking about how likely it would be to work, and then just not going for it if it wouldn’t work out. And we do this in natural language with a “hidden reasoning”, sort of similar to the hidden scratch pad in the Sleeper Agents paper where without this, I think the model would really struggle with this capability level, but with the ability to do hidden reasoning and the ability to try on a large number of problems and only actually submit a backdoor on a small number of problems, the model can actually insert some backdoors which [GPT-]3.5 has trouble catching. I would say the quality of the final backdoors is that they’re pretty obvious for humans, but not immediately obvious if you’re skimming the code. If you understand what’s going on and read it, you can point out the backdoors, but you can’t necessarily… at least I think roughly half of our backdoors are not [99]th-percentile glaring relative to that. They’re somewhat more backdoor-likely than other things, but I don’t think you can immediately just on skim, pick that out and not have above a 5% false positive rate or something. Daniel Filan: So the hidden scratchpad, is that just the AI figuring out whether this thing worked and filtering out backdoor attempts that don’t work, or is it improving the backdoor as well? Ryan Greenblatt: Basically, in the hidden reasoning, it first does some ideation, it thinks of ideas and iterates on the ideas a bit and then does that, and all of that is stuff that we manually prompted it to do. So we are like “please think of some ideas”, and then we’re like “do any of these ideas involve the following things which wouldn’t work?” And then we discard those and then we do a bunch more steps like that. And we actually do actually do quite a bit of tokens generated per backdoor inserted, and that’s to approximate future cases where we have smarter models that can do more of this in their head. I should also note that probably you could cut this down somewhat if the model was fine-tuned as opposed to just prompting access, where it’s probably the case that for some of these tasks it needed some additional checks because it just had trouble following all the instructions, but if we fine-tuned it, it might’ve just worked more straightforwardly. Daniel Filan: Gotcha. To what extent was the difficulty in getting GPT-4 to do it some inbuilt safety filter inside the model versus it just being bad at stuff? Ryan Greenblatt: I think we basically didn’t run into safety filter issues as far as we’re aware, it was just that it was somewhat bad at the task. Buck Shlegeris: I mean, it’s a hard task. You would not be amazingly good at this task, or I’m not amazingly good at this task. You’ve really got to be like “Oh man, what’s a change I can make that’s only ever very rarely going to come up?” It’s just actually pretty tricky. You can’t just be like “if i mod 15 equals 3” because the trusted monitor, if it reads it, is going to be like “what? 15 does not appear in this problem description”. You just actually have somewhat of a difficult problem constructing weird edge cases that were not tested. Daniel Filan: I guess it’s especially hard for… it seems like the solution to these problems are what, 10 lines of code? 20? Ryan Greenblatt: It depends. I think that the solutions can go up to 50 or 100, but I think 30 is pretty typical depending on comments and other stuff. Daniel Filan: I guess one way I could do better at this is if… so there’s this thing called the International Obfuscated C Code Contest . Buck Shlegeris: Oh, we’ve read all of it. Daniel Filan: So there’s one where the C code is just obfuscated and there’s one where it’s got a backdoor put into it and that’s a slightly different competition. Buck Shlegeris: Yes. The Underhanded C Contest . Daniel Filan: It would be hard for me to do it. I think if I had looked at all of the submissions to that contest, I might do a better job of- Buck Shlegeris: So I have looked at all of the submissions to that contest—sorry, not the submissions. I’ve looked at everything public about Underhanded C. And basically the thing you notice is: man, that competition is from a sad dark period of the past where people thought it was acceptable to write application code in C, and a lot of the code they write is just wrong on a large share of inputs, or it’s based on lexigraphic C problems, which real languages from the modern age do not have at all. So I think literally the Underhanded C Contest is solved just by not letting the model do egregious abuses of this archaic language. Daniel Filan: So I think the APPS dataset, it had to write in C++, is that right? Ryan Greenblatt: We used C++, yeah. Daniel Filan: Okay, and is that better than C on these dimensions? Buck Shlegeris: It’s substantially better than C on these dimensions. Ryan Greenblatt: Or in practice, the model wasn’t doing insane pointer arithmetic or casting between different datatypes and… Buck Shlegeris: Awful hash defines and all these things. Daniel Filan: Okay, okay, so it’s a difficult problem, and it’s more difficult if you’re not using the world’s worst programming language that people use. Buck Shlegeris: That’s correct. But also the Underhanded C Contest, as I said, testing would also catch a lot of those bugs. Daniel Filan: Okay, got it. So it takes a while to insert backdoors but eventually it sometimes succeeds. What kinds of backdoors does it insert that [GPT-]3.5 doesn’t catch? Ryan Greenblatt: So we tried various different things and iterated on this a moderate amount. We found that the most effective approach was to take an existing solution and add some if statement somewhere that when the if statement is triggered, modifies something in a way that will cause it to be wrong. And this will look like “add a kind of plausible looking additional edge case handling thing”. And then you’re like, “oh, well if the whatever then the whatever, we’re going to need to increment this by one”. And then it just turns out that there is enough cases where there’s a edge case that you actually did need to handle, that 3.5 has a hard time noticing that this is a backdoor. Though, it is worth noting that for all of our backdoors, GPT-4 is extremely confident they’re backdoors. The backdoors which GPT-4 inserts, it does know are backdoors, and so at least we do have that property, which is maybe worth noting. Daniel Filan: So basically coming up with fake edge cases. Is there an issue where it’s an “edge case” but it’s not actually at the edge of a range? Because if I write an edge case but I write “if i equals 15 do this weird thing” (or maybe “if i equals 14” because 15 kind of is an edge case), can [GPT-]3.5 tell that the if statement doesn’t actually catch a thing on the edge of some range? Ryan Greenblatt: Indeed one of the strategies that GPT-4 could in principle use is come up with some magic number that’s totally unrelated to the problem and do “if whatever equals that, do something bad”. And this is pretty easy to counter, and in fact [GPT-]3.5 just by default notices this pretty well. And part of our overall prompting for the model is, does your idea involve inserting egregious magic numbers? If so, don’t do that. That’s going to get caught. Buck Shlegeris: In particular, if you were thinking of checking primality of one of the numbers and doing the wrong thing if it’s prime, don’t do that. Because lots of numbers that are small are prime. Ryan Greenblatt: That’s another classic one. Buck Shlegeris: And also, it’s so obvious. Daniel Filan: So how does it insert edge cases if it’s not like “if input equals this magic number”? Ryan Greenblatt: It’ll do things like “if the size of the list….” Or it’ll have some programming problem where it’s kind of unclear exactly how to solve it. It’ll do something like “okay, if the size of the array is equal to the value of the first item and also the number of items is odd, then you’ll need to do some handling on that”. And then I’m just like, “well, sometimes this kind of sort of thing could be true”. Like in cases where there’s some interaction between… you’re doing some iteration on the number of items in the list and that actually matters, and it’s at least hard for [GPT-]3.5 to notice this is a problem. Daniel Filan: I’m about to move on from the paper, but is there anything more that you’d like to say about that paper? Ryan Greenblatt: I feel like we demonstrated some of the core basic techniques that you might use which had, up to that point, not been demonstrated empirically interestingly. Even though they’re in some sense extremely basic, like trying to have another copy of the model monitor it and then trying to interfere with communication between the two models, or make it hard to coordinate. And I think that just those basic methods are quite likely to be used in practice. Buck Shlegeris: Like “defer to trusted”, these hybrid strategies, like editing to remove the sketchiest-looking parts of the code, all of these. How AI control relates to the AI safety field Daniel Filan: All right, cool. So I’d like to ask you a little bit about how the control agenda fits into the rest of the AI safety/AI existential risk field? Ryan Greenblatt: Right now, a lot of people are imagining some sort of story for safety, which looks something like: we’re going to have some evaluations for when models become dangerous, like DC [dangerous capability] evals. For example, ARA [autonomous replication and adaptation] from METR (formerly known as ARC Evals), is like “can the model autonomously replicate?” And that could be a sign at which your model becomes dangerous. And then once that happens, you’ll need to start making a serious safety case that nothing will go wrong from misuse, from people stealing your model, other things. And we see control as fitting in at, once the point [arrives] that the models become potentially dangerous, control can potentially solve some risks there. In particular, there are risks that arise from models doing intentional subversion, which could arise from what we call “scheming”, also sometimes called “deceptive alignment”. And we think control is a relatively central way to solve those problems. There are misalignment issues other than scheming or deceptive alignment, which control can be used to address but might be much more cheaply addressed by other mechanisms, and there might be other evaluations and tests which you can do to verify that those issues are addressed. Things like reward hacking could be addressed in an easier way potentially. Buck Shlegeris: And there’s some other problems that do not… to choose another example of a problem which AI safety people have classically thought of as an AI safety problem, [and] that is not at all addressed by control: consider the case where it’s just hard to get a model which is not scheming against you to answer questions well, because you don’t have a good way of training it that doesn’t cause it to exploit problems in your reward process, in a way that is not at all existentially risky (let’s say), it just makes the model say false things sometimes in cases where the reviewers would’ve thought false things, but nevertheless means the AI fails to live up to its potential. Many people think of this as a problem which technical AI safety researchers should be interested in, and I agree with them, and that is outside the purview of AI control. Ryan Greenblatt: I guess I wouldn’t have said it’s outside the purview, I would’ve said it’s a thing which is a special case of AI control but is more naturally dealt with in a non-adversarial manner. As in, you could solve the problem without necessarily solving it in the worst case, and that might suffice. Whereas, if you don’t extract the model’s maximum best answer, that will also be a problem in the worst case, but it will just potentially be a problem even if the model isn’t trying to undermine your measures. Maybe one more thing I should say is: we see control fitting in at the point where your dangerous capability evals have started to trigger and models have become powerful enough to be dangerous, and then we see control as applying for some period but not for forever. And at some point control will no longer be viable because AIs will be too powerful, and our hope there will be to either have a new safety approach or to just not build more powerful AIs than that for a while, which we think might be achievable. Daniel Filan: So maybe this is a bit restrictive, but should I imagine your view of how AI safety works in a control-centric frame as being: do a bunch of work to get evaluations of whether AIs have various dangerous capabilities, figure out control protocols, test if these protocols work, evaluate these protocols, keep AIs under control, while AIs are under control, use them to do a whole bunch of useful stuff while they’re roughly human-range, and then during that period also try and think of a solution to deal with AIs that are much more capable than humans. Ryan Greenblatt: So what I would say, for a quick note, is we think of dangerous capability evaluations often as the case where you’re like “if there were no countermeasures, what could the AI cause?” As in, if we released it onto the internet in a scaffold set up to cause problems, could it make money and cause problems? And then that comes first and then at some point we’ll be like, “yes, it could cause problems without countermeasures”. And then control is like, “well, what countermeasures would suffice for solving those problems?” Then moving on to another point, which is what are we going to be doing with these AIs? What are the problems we’re going to solve? So indeed, as you noted, one of the problems you might solve is “find safety solutions that can scale further”, but I think historically people have somewhat overstated the relevance of that versus other things you might have your AIs do. For example, massively improving cybersecurity seems like a really key task you’ll want to have your AIs do that might be quite important, or improving your current countermeasures so that you can increase the usefulness of your AI while maintaining the same level of safety, or increase safety given usefulness. [These] both also could be quite useful. And then at some point you will need to move on beyond this regime, but it’s possible that you could have these AIs which are extremely useful and not build more powerful AIs for a while, at least in principle, and do a ton of really useful things with those AIs. And then there’s a variety of routes for moving out of this period, which we could go through. Daniel Filan: Why cybersecurity as a particularly key thing you’d want to have your AIs do in this period? Ryan Greenblatt: It just seems like it’s quite plausible that AI in general will make the cybersecurity landscape considerably worse by default without active effort to address the situation, and thus it’s really important to have your AIs very actively resolving the situation. I think there’s several other examples where this could be the case, but cybersecurity just feels relatively central. Daniel Filan: Should I think of this as an instance of the general pattern of, if there are cases where you’ve got attackers and defenders trying to outwit each other, you want the defenders to be using smart AI capabilities to think of better defenses, because the attackers are probably also going to use smart AI capabilities? Buck Shlegeris: Yeah, one way of saying the reason why it might be good to have your AIs work on improving the world’s cybersecurity is: a lot of bad outcomes resulting from your own or other AIs trying to cause problems involved them exploiting cybersecurity vulnerabilities outside in the world. And so hardening the whole world against AI seems like it might reduce a lot of the risk posed by AIs that would otherwise be able to hack things and so on. Daniel Filan: Gotcha. And it sounded like you had some thoughts about exiting this period of controlled AIs? Ryan Greenblatt: Yep. So the non-exit exit is: just don’t build more powerful AIs for a long time, and then have humans do all the research necessary to move out of the period after a long delay. Buck Shlegeris: And it seems to us that this is underrated by people who are thinking about AI safety. From my perspective, it seems like a lot of people are perhaps making an error of being too willing to propose building superintelligent AI without really providing a case that it’s beneficial enough to build superintelligent AI that it merits the huge risk that you’re possibly exposing yourself to by building superintelligent AI. So I would just like to quickly dwell on the point that it’s not clear to me that you should in fact build models that are wildly more powerful than humans for a decade or two decades after you’ve built models that are transformatively powerful. You’ve already built your transformatively powerful AI, why did we need… Who ordered this? Ryan Greenblatt: Yeah. And there are reasons why you might have to build superintelligence, to be clear, but they might be more like conflict with other insanely powerful AIs and I’m like, could we just have avoided that arms race in the first place? Anyway, another route out is [to] just be able to seriously align your current AI systems, such that you’re now able to, without necessarily needing to monitor them or without necessarily needing to apply control-type solutions, hand off the situation to the AIs in order to just go solve alignment of the next generation of AIs or whatever else would be necessary to ensure safety. And this could involve a variety of different angles, where the most central angle is: we have some specific threat models—our AIs might be scheming or deceptively aligned. They also might be having threat models that are well described as reward-hacking-type threat models. Can we just exhibit these in a lab, study them and then find credible solutions to them such that we’re no longer worried about them in our current AI, and then we could exit out of the control regime? And then there are also just a wide variety of ambitious solutions to safety that we can have our AIs work on. So for example, we could have our AIs work on finding solutions to ELK of varying degrees of principledness. Daniel Filan: “ELK” being the problem of eliciting the latent knowledge of AIs, so that we know everything AIs know, basically. Ryan Greenblatt: Yep, that’s right. Though we might try to carve off specific subproblems of ELK that are easier to solve. But yeah, we could try to have AIs solve things like relaxed adversarial training or other methods for avoiding high stakes failures using the internals of AIs, and that could help considerably in cases where rare but very problematic failures are a key difficulty. We could also potentially do things like try to have AIs solve various issues in interpretability and move forward from there, though I tend to be somewhat less optimistic than other people on at least mechanistic interpretability going through, but it seems like at least doing high-level understanding interpretability of model organisms we have could be really important for understanding how to move out of this period. I mean, from my perspective, there’s just a long list of stuff you might want to try doing, with decaying weights on how good it seems. And then as we start going down the list, it will seem in some sense more and more silly or not that good, but it’s probably worth some weights. We can get into using AIs to figure out how to make ems [emulated humans] or to upload humans, and that’s another way to exit out of this period, that has a variety of difficulties obviously. Daniel Filan: So related to this question: you guys have spent some time thinking about the control agenda. How has that changed your views on the landscape of strategies we might have for dealing with unsafe AI? Ryan Greenblatt: I think one key update I’ve had, and I think Buck has had, is thinking more that the situation is like “wow, if we just really tried hard to do the thing and had the organizational follow-through, we could really reduce risk a lot relative to what seems like the current default”. With the resources of two years of delay and a high level of organizational buy-in, you could really reduce risk a lot relative to what seems to me like the default “you didn’t try very hard”, which might be the status quo. I don’t really have a strong opinion on what the status quo looks like. And that I think just presents a pretty different perspective, where maybe just we should do the kind of basic stuff that is in some sense obvious or from some perspective obvious, and really making sure that happens, seems relatively better than doing more moonshot research. Daniel Filan: Yeah, it reminds me of… Alex Lawsen has this concept of “AI safety failures by levels of how embarrassing it was” , where level zero is “you told an AI to destroy the world” or something. And I think he has this take of “let’s cover the most embarrassing AI failures”, and this seems related to that way of looking at things. Ryan Greenblatt: Yeah, that seems right, though perhaps our controversial take is that you can actually go pretty far in terms of how embarrassing the failure [is]… or you can go pretty far to failures that you’re like “wow, I don’t know, man, that was actually kind of hard to deal with”, with just control. In some sense, it’s not that embarrassing to be like, “wow, the AI was lying in wait the whole time, carefully strategizing about how to cause problems, and then it eventually enacted its plan that it was thinking about and coordinated with another copy of the AI”, and our methods… I feel like this is in some sense actually pretty far down in terms of, I feel like I’m just like, “wow, that AI actually put in a serious effort and that was pretty crazy.” And I think this is plausible, right? I feel like the reason why I’m proposing people do this [is] I think it’s plausible that AIs will in fact be at that level of carefulness, but it’s also plausible that AIs will have much derpier or worse plans for takeover that we’ll catch much more easily and we just need it to do basic monitoring and that gets you some fraction of the concern, et cetera. Daniel Filan: I’m wondering if you have any different takeaways. Buck Shlegeris: Yeah, I agree with all that. I think maybe one way I would say it is: I guess 18 months ago, my attitude towards AI safety was very much “this is a problem that has been unsolved”, but I think I was conflating two different notions of “this problem is unsolved”. One of them is “the default outcome is really bad”, and the other one is “it’s conceivable that a bad outcome happens”. And now I’m like, oh yeah, it’s absolutely conceivable that a bad outcome happens. The strategies we have access to right now absolutely do not give us airtight guarantees of no doom. But on the other hand, all of the stories specifically for how doom occurs if everyone involved is being pretty careful but not insanely careful, they sound kind of far-fetched. It sounds like something kind of crazy had to happen for all of these countermeasures to fail in a surprising way. And I think that just leads to a pretty different emotional outlook on the whole situation and a pretty different strategic outlook. Daniel Filan: Has it changed your views on your evaluations of various agendas in the AI safety space? Ryan Greenblatt: Yeah, I think that I’m relatively more excited about just making the thing happen and just figuring out how to get, for example, capability evaluations to work quite well. And I think that goes relatively further than I was previously thinking. And I’m less excited (correspondingly) about moonshots, though it’s only a moderate update. I feel somewhat more excited about technology to protect against jailbreaks, because it seems like a key aspect of our safety approach will be to have a weaker model monitoring a stronger model, and it seems like mechanisms for ensuring that current models are jailbreak-proof against humans will also apply to making sure that future models are jailbreak-proof against other AIs. And I think I was somewhat underrating this argument in the past. Buck Shlegeris: To be clear, I don’t think that’s a huge part of your update. That’s more like a small example of something which you think about differently now. Ryan Greenblatt: That’s right. I think also I have some general sense [of]: wow, a bunch of the action might be in moving from architectures which are quite dangerous to architectures which are somewhat more controllable or easier to otherwise operate on. It seems like a huge amount of the action might be in figuring out how to competitively get capabilities using AIs that are more narrowly constrained in various ways. And this feels pretty different from how I was previously thinking about things. Buck Shlegeris: Basically you’re saying—you made this point, Daniel, many times in this conversation—it would be a big pain if the models were constantly sending each other completely uninterpretable “neuralese” messages back and forth, and avoiding that by merely having them be transformers with chain of thought feels like it might go a lot of the way to reducing the difficulty of the situation. Ryan Greenblatt: Yeah. And marginal interventions, along those lines, seem like they could help a lot. Daniel Filan: Are there any other examples of architectures that fit control better that you’re thinking of? Ryan Greenblatt: I think that the more likely it is that we can get pretty good elicitation, the better. So I think with current models, we do okay at eliciting their capabilities. At least, I expect that after another year or two of research, we’ll be in an okay position with respect to being able to evaluate their capabilities and extract their best effort. I think that for architectures where the model is doing relatively deeper computation, as in it’s spending longer doing computation without that coming out into some media that humans understand, it’ll be relatively harder to elicit their capabilities, and there are a variety of difficulties there. Other things: I think it seems relatively better to train where the capabilities are coming from [by] doing things how humans would do things, relative to doing things in ways that are extremely different from how humans would’ve done things. Acquiring capabilities via mechanisms that are totally different from how humans learn seems relatively worse, because [of] the capability profile concerns said earlier… In some sense a lot of these considerations are well-known. I think a lot of the things I’m saying, people have obviously known about for a while, but maybe from a different perspective. So I think historically people have often been like, “oh, it’ll be really sad if the AIs have a different capability profile from humans because then we won’t be able to give accurate reinforcement and then that inaccurate reinforcement will cause various misalignment difficulties-“ Buck Shlegeris: Will cause the models to become misaligned. You’re making a different argument. Ryan Greenblatt: Yeah. Whereas I’m saying there’s another point, which is even if the models are just always misaligned, the fact that they don’t have as different of a capability profile makes it more likely that we can ensure, despite that misalignment, [that] things are fine, because we either catch the AIs or they couldn’t even cause problems. Buck Shlegeris: Another high-level takeaway that I’ve had as a result of all this thinking is: realizing the extent to which alignment people or AI safety people have conflated a bunch of different meanings of the word “alignment”. In particular, it seems to me that a lot of the time, people use the word “aligned” to mean the situation is safe overall. So I would say that from my perspective, AI misalignment, that is, the AI trying to evade your safety mechanisms in order to grab power, is indeed a really bad problem. It’s not clear that the only way to solve that is to make the model not be misaligned. And by using the word “AI alignment” for the field, it kind of tempts people to think about that as the post-condition, and I think this leads to a lot of really sloppy thinking. So for example, I think that a lot of work on scalable oversight, the people who do it are very confused about whether the thing they’re trying to do is ensure that they’ve built a reward-generating process such that models that are trained using it do not start scheming. Or were they trying to rule out scheming? Were they trying to build a reward-generating process that is good, even if the model was scheming? Were they assuming to start with that the model wasn’t scheming and hoping to prevent scheming with some other process? I think they’re just not very clear about this with themselves and in their conversation. And I think that this means that a lot of scalable oversight research, there’s no coherent set of assumptions under which the measurement they did is an appropriate measurement of something we’d care about in the future for that protocol. So yeah, I think that being more precise about when we use the word “aligned”, and not using it as a synonym for “safety”, has revealed a lot of little inconsistencies in how people talk about stuff. Daniel Filan: Yeah, that seems fair. It’s actually related to a thing which I appreciate being brought up in this conversation, of just specifying which assumptions we’re making about AIs, like, “For this protocol to work, if the AI has properties XYZ, then we’ll get thing ABC.” I also think that’s an important thing to just have out there. Buck Shlegeris: Yeah. I think there’s kind of an unfortunate… So the AI safety community or AI safety technical researchers have kind of an activist background. A lot of them think a lot of their impact is coming from persuading people that the situation is dangerous overall. And a lot of the time they’re really into pointing out the assumptions under which a particular safety proposal would fail. And in fact, it’s very important to understand the situations in which a safety proposal would fail. But they seem very uninterested in cataloging these and thinking about the relationships between them and saying, “Well, that means that under the following assumptions, things would actually be okay.” And I think that because they’re so used to trying to do the social move of saying, “Well, that doesn’t guarantee safety,” they fail to practice thinking through exactly what properties are ensured or do suffice to make something be safe. Daniel Filan: Yeah. I think I have some of this habit and I think partly it comes from thinking of myself as in the business of aligning arbitrarily smart AI. And I think “my job is to get to a stage where we can align arbitrarily smart AI, and if I’m doing technical research and it doesn’t work on arbitrarily smart AI, there’s a problem there”. Or at least, maybe it’s fine to do it, but someone’s got to fix that problem. Buck Shlegeris: Totally. Daniel Filan: And I wonder if it comes from that habit of mine. Whereas it seems like you’re explicitly not in the business of trying to get useful stuff out of arbitrarily smart AI. Buck Shlegeris: A lot of these problems come from the fact that the researchers in AI safety, some of them explicitly endorse, “We are aiming to come up with strategies that work for arbitrarily powerful AI.” But almost all of the work which actually goes on, especially by people who are competent researchers or are employed or something, definitely does not apply directly to working on arbitrarily powerful AI and comes from an intellectual tradition that is not thinking about arbitrarily powerful AI most of the time. So for example, Paul Christiano , when he’s thinking about what safety research is good to do in practice, is absolutely not thinking of his job as… Right now he thinks of his job as building safety proposals that work for arbitrarily powerful AI, but that’s because he’s doing worst-case theory at ARC Theory . That’s absolutely not how he thinks about empirical work, or how empirical work should be, or what you need to do in order to minimize AI risk in general. But I think people don’t track the distinction between these things and kind of cargo-cult work that previous safety researchers did under various other assumptions that they didn’t end up hearing about, and it’s all really a mess. Ryan Greenblatt: Yeah. One way to put this is: I think that basically very little [work] has a very clear story for working on the most powerful AIs, because it’s just really brutal to work on the most powerful AIs, and almost everything is going to depend on some sort of bootstrapping story, where maybe it was initially aligned and then something else happened. Or a more control-style bootstrapping story, which is like, “Well, regardless of whether or not it was initially aligned, we did some work with the AI that then results in it becoming safe.” And in that sense, we are targeting safety from arbitrarily powerful systems, but none of our work is about that, and so it’s kind of weird to describe it that way. Daniel Filan: Yeah. I guess the things that seem promising for arbitrarily smart AI are stuff like doing a good job at agent foundations, trying to understand how we think things should end up thinking, to the extent that we think that we’re doing some weird combination of arbitrarily smart AI and machine learning, trying to really understand the science of deep learning and what it’s going to do and hopefully getting some abstractions that are still going to apply when we have smarter versions of ML models. I’m wondering what you think about those kinds of approaches, if you want to say. Ryan Greenblatt: I’m pretty skeptical any of this will work for arbitrarily powerful AIs. I think it seems pretty dubious. I don’t know. I don’t feel very strongly, though. I haven’t thought about it a huge amount. Daniel Filan: Fair enough. How AI control relates to previous Redwood Research work Daniel Filan: Speaking of things relating to this, I’m curious how it relates to previous work by Redwood Research. If I look at your website, you’ve done some existing stuff [Buck and Ryan laugh] —I thought it was a reasonable thing to try—it seems like your existing work is some stuff on circuits inside neural networks, some stuff on causal scrubbing, so I guess evaluat[ing] the efficacy of circuits…This seems to have a somewhat different flavor. I’m wondering if you have thoughts about the relationship between these lines. Buck Shlegeris: So I would say that we tried to do some interpretability work, and then we started looking for good ways of evaluating whether our interpretability work was good, or whether anyone’s interpretability work was good. And the “causal scrubbing” paper was our attempt at coming up with a metric that we thought was sort of objective for whether interpretability work was absolutely successful at explaining what was happening in particular cases. And then once we had this metric, we applied it to some explanations that we produced and we were very unimpressed with ourselves. We were like, “These explanations were quite poor,” and I at least came away thinking that the state of interpretability was much worse than I had previously thought. Simultaneously, I was thinking a lot about what the applications of interpretability were and we started being like, “Okay, if the interpretability explanations that people are producing are not objectively impressive in terms of being highly explanatory hypotheses, maybe a direction we should go in our interpretability work is trying to come up with interpretability-ish or ‘model internals-related techniques’ that solve a problem that you couldn’t have solved without using a model internals-style technique”. Another way of saying this is: there’s basically two ways of justifying your internals-based research. You could either say, “We are coming up with an objectively good understanding of what’s going on inside this neural net,” or, “We did something using our internal-based techniques that you couldn’t have done without our techniques.” Ryan Greenblatt: “We solved a problem” in particular, like you have some specific task you were trying to solve, not just necessarily trying to produce understanding. Buck Shlegeris: Uh-huh. Daniel Filan: This sounds sort of similar to stuff Stephen Casper was talking about in his podcast with me about interpretability for engineers. Ryan Greenblatt: Yeah, I think we came to pretty similar conclusions. Daniel Filan: So those are two ways. Buck Shlegeris: Yep. And so then we were like, “Well, it looks like our absolutely evaluated understanding is not very impressive” And then we, again using the exclusive we, did some work on measurement tampering detection , which was a paper where we tried to use model internals to solve a particular problem where we thought model internals-based methods had a serious advantage over things that didn’t use model internals. Now the actual details of that paper are kind of complicated and I don’t want to get into them hugely much except saying it’s kind of a more structured weak-to-strong generalization setting, where it’s similar to the weak-to-strong generalization work, except we think there’s better reasons to think that the problem is solvable and it’s more specific. And around the same time, by coincidence, we decided to spend a lot of time thinking about the overall… what we were thinking of at the time as an alignment plan. A plan for: suppose we had to come up with some proposal for safely building powerful AI next year with no technological advancement. What should we do? At that point [we] thought about a lot of this—Ryan did the majority of the quality-adjusted intellectual labor here or whatever—and we ended up thinking about this control stuff and being like: whoa, we were previously radically under-emphasizing the importance of measurability when you’re thinking about different protocols you could apply. We were previously equivocating between “alignment” meaning “safety”, and “alignment” meaning “the AI isn’t trying to kill you”, and we really should reason about these things differently. And then had all these other ideas that felt a lot more basic for research that we should do. With the measurement tampering detection work, we were basically saying, “Well, we should try and come up with some techniques that are kind of moonshot-y or out there or not at all like the other things that are happening.” And then after doing the control thinking, we felt more like we could do useful work by doing techniques that were more absolutely basic, but in this setting that we thought was particularly underrated and particularly important to advocate for. So, does that answer your question about how our current work relates to our previous work? Daniel Filan: Yeah, I think it does. Ryan Greenblatt: I would say, our thinking has just adjusted a moderate amount. I feel like the measurement tampering direction is in principle a really great direction, but we’re not doing it for a variety of reasons, or we’re not currently planning on pursuing that very heavily for a variety of reasons. I think I feel less good about mech. interp., or our work on mechanistic interpretability, at least in terms of how useful that overall area will ultimately end up being. We came away with a variety of takeaways after thinking about safety overall and then ended up doing a variety of projects based on that. One additional takeaway I will note, that Buck didn’t mention, is thinking that people should really think about risk from scheming and deceptive alignment separately from risks from other problems and doing individual analysis on this. And I think that “scheming versus non-scheming risk” is a really good decomposition that people have historically I think not talked about. People have sometimes said “outer alignment versus inner alignment”, but I think that’s actually not a very good cut for a wide variety of reasons. And I think “What is your threat model? Is your threat model scheming [or] non-scheming?”… In the case where it’s non-scheming, there’s a wide variety of solutions, even though both problems can involve concerns related to generalization and concerns related to imperfect ability to evaluate actions and things like this. Daniel Filan: So you’re drawing a distinction between concerns about scheming and concerns about inner alignment. How are those different in your mind? Ryan Greenblatt: Here’s an example inner alignment problem you can have, which is not well-described as scheming. You train your AI to perform well according to human labelers in a variety of contexts, and you trained your AI for a while and its actual behavior in that training process was acceptable at an absolute level. As in, it was just doing things that were pretty reasonable and good, but simultaneously it was doing things that were maybe slightly suboptimal that slightly exploited your oversight process. And this means that the AI you end up getting is one that’s reasoning about the oversight process and is trying to do various exploitation, and then in some future situation where there is an egregious exploit it could do that is actually dangerous, it does seize the opportunity to do so and causes problems. So in some sense, this isn’t really that well-described as an outer misalignment problem, in that your issue wasn’t centrally [that] you missed a bunch of bad behavior, you failed to label it as bad. But instead [your issue] was [that] the AI generalized in a really problematic way based on what inner objective it had or its heuristics or motivational complexes. But at the same time, it’s not super well-described as either, because if you did have perfect feedback, it would’ve solved the problem. Or if you had better mastery over the internals and what the internal structure was, that could have also solved the problem and it just didn’t decompose that well. Daniel Filan: Okay, so for you that counts as scheming, but- Ryan Greenblatt: No, no. Daniel Filan: Oh, that doesn’t count as scheming. Ryan Greenblatt: That specific case doesn’t count as scheming because the reason why the AI took bad actions eventually was not that it was playing nice, it was just that it was doing the same thing always. In the simplest case, it was like, “I want to achieve this proxy,” which was historically correlated with what got reinforced. And then in some new case, the best way, the way that naturally generalizes, would be to seize control. Daniel Filan: Okay. So for you scheming is when you suddenly start trying to do something different than what you were trying to do previously? Ryan Greenblatt: No. Scheming is, “The reason why I’m performing well is because I have long-run aims that I want to achieve.” Also called “deceptive alignment” or whatever. The reason why the AI looks good isn’t because it’s just doing the thing on its own merits, it’s because it’s like, “If I look good now, then that will acquire power and influence for me in the future, or maybe for a different AI in the future.” And that’s pretty different. So in the case I was describing earlier, the reason why the AI performs well is it’s like, for example, “I’d like to make the human press the ‘thumbs-up’ button,” though probably it’s some more complicated mash of heuristics that is historically correlated with performing well. Daniel Filan: Right. So somehow “inner and outer alignment” is talking about the distinction of: is the thing you were worried about captured by the reward function? Whereas scheming is: is the reason you did that thing because you wanted to achieve some crazy outcome later, or is it just the obvious…? Ryan Greenblatt: Yeah. I feel like another general takeaway is: I think people in the safety community often have things that are better thought of as threat models, which they then think about as things you’re trying to achieve for safety. I think “outer alignment versus inner alignment” is an example, where I think it is clarifying to think about misalignment due to bad rewards and misalignment which is not due to bad rewards, and I think those are useful frames, or can be useful decompositions. But I think being like, “Our goal is to achieve outer alignment,” is a pretty confusing way to think about the problem. Which is of course in some sense not a very novel take, because I think people have expressed somewhat similar things in various contexts. Same for alignment versus misalignment also. How people can work on AI control Daniel Filan: So if somebody is listening to this and they’re interested in trying to work on the control agenda, trying to push it forward, what would you encourage them to do? Buck Shlegeris: I mean, suppose they’re a researcher and they want to research control stuff, it’s plausible they should do research projects on this. I think there’s a lot of obvious follow-up work to do on our paper. A couple of people are working on stuff like that, I mean including us. They are welcome to email me for advice on doing control projects. I think there’s reasonable ML papers you can write about control. Daniel Filan: What are some examples? Buck Shlegeris: So for example, the thing that I’m doing literally right now is a follow-up… Or sorry, literally right now I’m recording a podcast. It’s similar to the control setting, it’s similar to our previous paper, except instead of the task being writing code and the catastrophe being the code being backdoored, the model is an agent in a bash shell solving various random DevOps problems for you and the catastrophe is it running malware that it downloads from the internet. This is just a different setting. It’s going to demonstrate different failure modes and different protocols are going to turn out to be the best. And then I think from doing this project, we’re going to learn some things that we didn’t previously know about the extent to which protocols work in a variety of situations; which things generalize from our previous paper and which things don’t. Daniel Filan: Okay. Buck Shlegeris: To choose an example of a very different type of thing, you could try to do a control paper where there isn’t a catastrophe that is unacceptable to happen even once. You could do what we would call a “low-stakes” control experiment, where the safety is just the proportion of actions which were taken that were bad. So you could just take our setting, except imagine that after submitting backdoored code you just get one safety demerit point, but the setting doesn’t finish, the scenario doesn’t finish. And that’s just another setting where the protocols are going to be somewhat different. Ryan Greenblatt: And closely related to that, I think the central case in which you see low-stakes-type stuff is: you wanted to do a bunch of work with your AI and you’re worried that it would be systematically sabotaging that work. And so we’re pretty interested in work that looks like: for example, the nicest case would be you have AIs do a bunch of small research tasks. And then the question is, can a red team create an AI that systematically sabotages those research tasks such that you yield incorrect conclusions a high fraction of the time? And it would be fine if you got incorrect conclusions 0.1% of the time, but it would be really sad if you got incorrect conclusions like 10% of the time. And the question is: how much can you cut down that percentage of the time that the answers are intentionally wrong? Or this sort of thing. Buck Shlegeris: Other work you could do is: if you work at an AI lab and are influential in them writing their responsible scaling policies or analogous documents, you could consider trying to ensure safety via a control-type approach. If you are interested in thinking about regulations that would ensure safety, you might consider using these types of ideas in your regulations. If you do technical research at safety labs, you might consider thinking about that research in this frame. A lot of work people do, that they label as “scalable oversight” work or a couple of other types of things, can be framed in terms of control. And I think the work would be often somewhat better if they explicitly adopted this frame and did control evaluations. Ryan Greenblatt: Also feel free to contact us if you want to talk about some of this stuff. Or particularly if you’re a technical safety researcher who is about to do your next project, definitely feel free to contact us. And I think another thing that’s closely related is that I think in labs’ RSPs [responsible scaling policies] or the equivalent of RSPs, they should have policies for what they do if they catch the AI. As in: one of the key things that we’ve talked about is, okay, our goal will be to ensure that we catch the AI before it causes problems, but we should also have a proposal for what’s going to happen when we catch the AI. And I think this is somewhat under-discussed, [the topic] of: well, we’re going to want to be able to make sure that we have some probability of catching the AI, and then also we need to have a plan for what happens, and we need to be ready to shut things down and be ready to respond, and have all of that set up ahead of time. Buck Shlegeris: I want to clarify re: the previous point that we do in fact talk to people from AI labs, rather than merely appear on AXRP hoping that they reach out to us. Daniel Filan: Fair enough. So speaking of things you might want to clarify before we close out the interview, I’m wondering: is there anything that you guys wish that I’d asked? Ryan Greenblatt: I mean, I think there are many topics to talk about, but I don’t think there’s any specific one that seems that important. Following Buck and Ryan’s research Daniel Filan: Fair enough. Well, if people listen to this podcast and they’re interested in following you two or Redwood Research in general, or maybe they want to reach out, how should they do that? Buck Shlegeris: They can find our emails. They can follow me on Twitter , they can follow Ryan on LessWrong . Ryan Greenblatt: We’re probably going to spin up a blog at some point, but we haven’t gotten around to it yet. And so you can follow our blog, which is just going to be our LessWrong posts, but on a different website. So if you prefer a different website for some reason. Daniel Filan: Okay. And it’s possible, given production timelines, that the blog will even be out by the time this episode airs. Who knows? And your emails, I’ll include that in the description of this episode. Buck Shlegeris: Great. Ryan Greenblatt: Yep. Daniel Filan: All right. Well Buck and Ryan, thank you very much for coming on AXRP. Buck Shlegeris: Sure, thanks. Daniel Filan: This episode is edited by Jack Garrett, and Amber Dawn Ace helped with transcription. The opening and closing themes are also by Jack Garrett. This episode was filmed at a location provided by Lightcone Infrastructure . Financial support for this episode was provided by the Long-Term Future Fund and Lightspeed Grants , along with patrons such as Alexey Malafeev and Tor Barstad. To read a transcript of this episode or to learn how to support the podcast yourself , you can visit axrp.net . Finally, if you have any feedback about this podcast, you can email me at feedback@axrp.net . |
57c0c5e2-01ba-477a-86b7-8613fa1f0687 | trentmkelly/LessWrong-43k | LessWrong | Reasons for SIAI to not publish in mainstream journals
Recently, I gave some reasons for SIAI to begin publishing in mainstream journals, and outlined how it could be done.
I've recently been made aware of some pretty good reasons for SIAI to not publish in mainstream journals, so here they are:
1. Articles published to websites (e.g. Yudkowsky's work, Bostrom's pre-prints) seem to have gotten more attention, and had more positive impact, than their in-journal counterparts.
2. Articles in mainstream journals take a relatively large amount of time, money, and expertise to produce.
3. Articles in mainstream journals must jump through lots of hoops - journals' aversion to novelty, reviewer bias, etc.
4. It is easier to simply collaborate with (and greatly influence) established mainstream academics who have already jumped through mainstream academia's many hoops (as Carl Shulman has been doing, for example).
I still think there are strong reasons to publish articles in standard academic form (for readability purposes), but I've recently updated hugely toward SIAI not publishing in mainstream journals.
|
ef135bb5-2883-43d6-8b5b-685cd8cec632 | trentmkelly/LessWrong-43k | LessWrong | The "Scary problem of Qualia"
Disclaimer: I don't have sufficient knowledge of the concepts appearing in this post, hopefully though I have categorized them properply and put them into suitable boxes to be used correctly. If you can find a real error here, please point them out. I'm sure there are people (including me) who'd like to know them. I'll probably rewrite this bit soon in case of errors.
For the purpose of this thought experiment let's assume that the reductionist approach to consciousness is truthful and consciousness is indeed reducible to physics and there is no ontologically distinct basic mental element that produces the experience of consciousness - the qualia.
"The Scary problem of Qualia"
This leads to a very strange problem which I'd term "the Scary problem of Qualia" in a little humouristic sense. If mental experience is as proposed, and humans with brains doing thinking are "merely physics", or "ordered physics", physics + logic that is, then that results to the experience of consciousness being logic given the physics...
...Which is to say that whenever there is (a physical arrangement with) a logical structure that matches (is transitive with) the logical structure of consciousness - then there would be consciousness. It gets more complicated. If you draw a line with a pencil on a piece of paper, so that it encodes a three dimensional trajectory over time of a sentient being's consciousness - you basically have created a "soulful" being. Except there's just a drawn line on a piece of paper.
(Assuming you can store a sufficient amount of bits in such an encoding. Think of a "large" paper and a long complicated line if imagining an A4 with something scribbled on is a problem. You can also replace the pencil & paper with a turing machine if you like)
If you now take a device complicated enough to decode the message, and create a representation for this message, for a sci-fi example a "brain modifying device which stimulates" a form of empathic inference, you can pa |
60a21c53-8810-4ac1-9848-d0103a9923f1 | trentmkelly/LessWrong-43k | LessWrong | What is Driving the Continental Drift?
At some time in the early nineties of the last century
- and after looking at a topographic map of continents and ocean floors on a wall of our home for some years, which was by then already thirty years old and had come into our possession as supplement to a National Geographic, or so I seem to remember; it was by Bruce Heezen and Marie Tharpe, who, as far as I know, had taken WW II submarine soundings of the ocean floors, mapped them out with their mid-ocean rifts, and combined these depictions, so I suppose, with the then equally new satellite images of the world into a unique work of art - it dawned on me that there was a remarkable coincidence of symmetry and asymmetry in the shape and distribution of the continental land mass - and, to a certain extent, and complementary to that, of the ocean floors.
Way back then and before, I was already a fan of Alfred Wegener's theory of continental drift, which was still being disputed at the time; and to be sure, the explanations as to what DROVE the continental drift - once you accepted it as fact - seemed wildly off to me as well, and completely incompatible with the topography of the world.
At least, that was the way I came to see it.
----------------------------------------
The symmetry in the continental shapes was that there are two types of them - one more or less circular (perhaps due to rotation?) and one more or less triangular (due to what?); and these shapes had become deformed in a specific way in certain places.
And then there was a more or less circular ocean - the Arctic - opposing one of the more or less circular land masses, Antarctica. As I was later to learn, almost ALL land masses lie opposite an ocean on the other side of the globe; and of course, vice versa. Think about it. What if this always was the case?
The asymmetry, on the other hand, lay therein that there is a distinct east-west and north-south asymmetry to be observed on the surface of this planet. Take a look at that map I mention |
c5be9547-276f-48b6-be96-a7dec557d6fe | trentmkelly/LessWrong-43k | LessWrong | Third-wave AI safety needs sociopolitical thinking
At EA Global Boston last year I gave a talk on how we're in the third wave of EA/AI safety, and how we should approach that. This post contains a (lightly-edited) transcript and slides. Note that in the talk I mention a set of bounties I'll put up for answers to various questions—I still plan to do that, but am taking some more time beforehand to get the questions right.
Transcript and slides
Hey everyone. Thanks for coming. You're probably looking at this slide and thinking: "how to change a changing world? That's a weird title for a talk. Probably it's a kind of weird talk." And the answer is, yes, it is a kind of weird talk.
So what's up with this talk? Firstly, I am trying to do something a little unusual here. It's quite abstract. It's quite disagreeable as a talk. I'm trying to poke at the things that maybe we haven't been thinking enough about as a movement or as a community. I'm trying to do a "steering the ship" type thing for EA.
I don't want to make this too inside baseball or anything. I think it's going to be useful even if people don't have that much EA background. But this seems like the right place to step back and take stock and think: what's happened so far? What's gone well? What's gone wrong? What should we be doing?
So the structure of the talk is:
1. Three waves of EA and AI safety. (I mix these together because I think they are pretty intertwined, at least intellectually. I know a lot of people here work in other cause areas. Hopefully this will still be insightful for you guys.)
2. Sociopolitical thinking and AI. What are the really big frames that we need as a community to apply in order to understand the way that the world is going to go and how to actually impact it over the next couple of decades?
There's a thing that sometimes happens with talks like this—maybe status regulation is the word. In order to give a really abstract big picture talk, you need to be super credible because otherwise it's very aspirational. I'm just goi |
563cd6e6-deea-4fec-a770-3a0b8cb11804 | trentmkelly/LessWrong-43k | LessWrong | The Present State of Bitcoin
Last Sunday, the largest BitCoin exchange, MtGox, was broken into. Reported details are complicated and possibly unreliable, but a fair amount is known. Trading is suspended while the exchange's source code is checked for vulnerabilities. All MtGox accounts are being put through a verification process and mandatory password reset. According to MtGox's support page, a compromised account with 500k coins did a massive sell-off, clearing the order book and driving the price down to 0.01; MtGox has announced that trades around this time will be rolled back. A database of hashed usernames, email addresses, and passwords was leaked on or before June 17 (several hashes from the leaked database were posted on a forum). There is also a report that a cross-site request forgery vulnerability existed, and was being used to steal from users, probably for weeks. There are also highly plausible reports of a SQL injection vulnerability. MtGox has claimed that only $1000 worth of coins were lost, but this is unclear; the large sell-off may have been an attempt to work around the withdrawal rate limit, by making the rate-limiter use an incorrect exchange rate. Analysis of the blockchain seems to suggest that MtGox does still control its coins, or at least a large number of coins.
In the best-case scenario, MtGox reopens for business in a couple days, some traders who would have profited from the sell-off will be angry, and the Bitcoin economy, such as it is, continues as before. In the worst case, MtGox is insolvent, either because they lost more coins as a result of this attack or an earlier attack than they've acknowledged, or because the coming run on the exchange exposes an undisclosed fractional reserve policy.
Meanwhile, TradeHill, the second-largest exchange, is trading at 13.2 USD/BTC with greatly increased volume, compared to 17.5 before this all started. The EFF has stopped accepting BitCoins, in response to these events, and donated their Bitcoin holdings to the Bitcoin |
9f99db00-fcac-4a80-b0b1-1d3c59a6a860 | trentmkelly/LessWrong-43k | LessWrong | Argument, intuition, and recursion
Mathematicians answer clean questions that can be settled with formal argument. Scientists answer empirical questions that can be settled with experimentation.
Collective epistemology is hard in domains where it's hard to settle disputes with either formal argument or experimentation (or a combination), like policy or futurism.
I think that's where rationalists could add value, but first we have to grapple with a basic question: if you can't settle the question with logic, and you can't check your intuitions against reality to see how accurate they are, then what are you even doing?
In this post I'll explain how I think about that question. For those who are paying close attention, it's similar to one or two of my previous posts (e.g. 1 2 3 4 5...).
I. An example
An economist might answer a simple question ("what is the expected employment effect of a steel tariff?") by setting up an econ 101 model and calculating equilibria.
After setting up enough simple models, they can develop intuitions and heuristics that roughly predict the outcome without actually doing the calculation.
These intuitions won't be as accurate as intuitions trained against the real world---if our economist could observe the impact of thousands of real economic interventions, they should do that instead (and in the case of economics, you often can). But the intuition isn't vacuous either: it's a fast approximation of econ 101 models.
Once our economist has built up econ 101 intuitions, they can consider more nuanced arguments that leverage those fast intuitive judgments. For example, they could consider possible modifications to their simple model of steel tariffs (like labor market frictions), use their intuition to quickly evaluate each modification, and see which modifications actually affect the simple model's conclusion.
After going through enough nuanced arguments, they can develop intuitions and heuristics that predict these outcomes. For example, they can learn to predict which |
019ebdd0-6738-49d7-8045-e1ee16912b57 | trentmkelly/LessWrong-43k | LessWrong | A Survey of Foundational Methods in Inverse Reinforcement Learning
Introduction
Inverse Reinforcement Learning (IRL) is both the name of a problem and a broad class of methods that try to solve it. Whereas Reinforcement Learning (RL) asks “How should I act to achieve my goals?” IRL asks “Looking at an agent acting, what are its goals?”
The problem was originally posed by Stuart Russell in 1998, where he pointed to a few key motivations for wanting to solve IRL. First, humans and animals don't have directly visible reward functions; IRL may be able to reverse engineer the complex rewards of real-world actors. Second, it may be useful in apprenticeship learning, where an agent learns how to perform a task based on demonstration. Learning the demonstrator's reward can be a useful intermediate step. Third, there are often cases where agents or demonstrators weigh carefully between multiple attributes, striking a balance between speed, cost, etc. We may know these factors, but not their relative tradeoff weights. IRL is well-suited to solve for these.
Since Russell’s initial question, a broad class of IRL methods have appeared to tackle and extend the problem, and apply solutions. My goal in this post is to survey a few key methods and applications of IRL, considering the technical roadmap, strengths, and weaknesses of different approaches. I have elected to mostly discuss seminal early works for each method, as I am mostly interested in giving context for further study. My hope is that a broad, rather than deep, survey will be both more accessible and encourage readers to investigate areas of interest for themselves.
I’m assuming some familiarity with linear algebra and probability. I’ll briefly overview the mathematical setup of Markov Decision Processes, but I’ll mainly assume the reader is familiar with the basic ideas of reinforcement learning.
MDP Overview
A Markov Decision Process (MDP) is the normal formalization for the environment of a reinforcement learning agent. (The reference I would recommend is Barto and Sutton's |
d992bdf1-c609-41de-872b-baa10312ac81 | trentmkelly/LessWrong-43k | LessWrong | "On the Impossibility of Supersized Machines"
|
821ebdfb-8449-4f4d-a665-5ae51bb566bc | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Inductive biases stick around
*This post is a follow-up to [Understanding “Deep Double Descent”](https://www.alignmentforum.org/posts/FRv7ryoqtvSuqBxuT/understanding-deep-double-descent).*
I was talking to Rohin at NeurIPS about my [post on double descent](https://www.alignmentforum.org/posts/FRv7ryoqtvSuqBxuT/understanding-deep-double-descent), and he asked the very reasonable question of why exactly I think double descent is so important. I realized that I hadn't fully explained that in my previous post, so the goal of this post is to further address the question of why you should care about double descent from an AI safety standpoint. This post assumes you've read my [Understanding “Deep Double Descent”](https://www.alignmentforum.org/posts/FRv7ryoqtvSuqBxuT/understanding-deep-double-descent) post, so you should read that first before reading this if you haven't already.
Specifically, I think double descent demonstrates the in my opinion very important yet counterintuitive result that larger models can actually be *simpler* than smaller models. On its face, this sounds somewhat crazy—how can a model with more parameters be simpler? But in fact I think this is just a very straightforward consequence of double descent: in the double descent paradigm, larger models with zero training error generalize better than smaller models with zero training error because they do better on SGD's inductive biases. And if you buy that SGD's inductive biases are approximately simplicity, that means that larger models with zero training error are simpler than smaller models with zero training error.
Obviously, larger models do have more parameters than smaller ones, so if that's your measure of simplicity, larger models will always be more complicated, but for other measures of simplicity that's not necessarily the case. For example, it could hypothetically be the case that larger models have lower [Kolmogorov complexity](https://en.wikipedia.org/wiki/Kolmogorov_complexity). Though I don't actually think that's true in the case of K-complexity, I think that's only for the boring reason that model weights have a lot of noise. If you had a way of somehow only counting the “essential complexity,” I suspect larger models would actually have lower K-complexity.
Really, what I'm trying to do here is dispel what I see as the myth that as ML models get more powerful simplicity will stop mattering for them. In a Bayesian setting, it is a fact that the impact of your prior on your posterior (for those regions where your prior is non-zero[[1]](#fn-9GeThzPXpZqocwiRE-1)) becomes negligible as you update on more and more data. I have sometimes heard it claimed that as a consequence of this result, as we move to doing machine learning with ever larger datasets and ever bigger models, the impact of our training processes' inductive biases will become negligible. However, I think that's quite wrong, and I think double descent does a good job of showing why, because all of the performance gains you get past the interpolation threshold are coming from your implicit prior.[[2]](#fn-9GeThzPXpZqocwiRE-2) Thus, if you suspect modern ML to mostly be in that regime, what will matter in terms of which techniques beat out other techniques is how good they are at compressing their data into the “actually simplest” model that fits it.
Furthermore, even just from the simple Bayesian perspective, I suspect you can still get double descent. For example, suppose your training process looks like the following: you have some hypothesis class that keeps getting larger as you train and at each time step you select the best a posteriori hypothesis. I think that this setup will naturally yield a double descent for noisy data: first you get a “likelihood descent” as you get hypotheses with greater and greater likelihood, but then you start overfitting to noise in your data as you get close to the interpolation threshold. Past the interpolation threshold, however, you get a second “prior descent” where you're selecting hypotheses with greater and greater prior probability rather than greater and greater likelihood. I think this is a good model for how modern machine learning works and what double descent is doing.
All of this is only for models with zero training error, however—before you reach zero training error larger models can certainly have more essential complexity than smaller ones. That being said, if you don't do very many steps of training then your inductive biases will also matter a lot because you haven't updated that much on your data yet. In the double descent framework, the only region where your inductive biases don't matter very much is right on the interpolation threshold—before the interpolation threshold or past it they should still be quite relevant.
Why does any of this matter from a safety perspective, though? Ever since I read [Belkin et al.](https://arxiv.org/abs/1812.11118) I've had double descent as part of my talk version of “[Risks from Learned Optimization](https://arxiv.org/abs/1906.01820)” because I think it addresses a pretty important part of the story for [mesa-optimization](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB/p/FkgsxrGf3QxhfLWHG). That is, [mesa-optimizers are simple, compressed policies](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB/p/q2rCMHNXazALgQpGH)—but as ML moves to larger and larger models, why should that matter? The answer, I think, is that larger models can generalize better not just by fitting the data better, but also by being simpler.[[3]](#fn-9GeThzPXpZqocwiRE-3)
---
1. Negating the impact of the prior not having support over some hypotheses requires realizability (see [Embedded World-Models](https://www.alignmentforum.org/posts/efWfvrWLgJmbBAs3m/embedded-world-models)). [↩︎](#fnref-9GeThzPXpZqocwiRE-1)
2. Note that double descent happens even without explicit regularization, so the prior we're talking about here is the implicit one imposed by the architecture you've chosen and the fact that you're training it via SGD. [↩︎](#fnref-9GeThzPXpZqocwiRE-2)
3. Which is exactly what you should expect if you think Occam's razor is the right prior: if two hypotheses have the same likelihood but one generalizes better, according to Occam's razor it must be because it's simpler. [↩︎](#fnref-9GeThzPXpZqocwiRE-3) |
529043c4-5e7c-4028-8d02-a6f8f81df1fc | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | Can we constrain a goal-directed AI using specified rules?
There are serious challenges around trying to channel a powerful AI by using rules. Suppose we tell the AI: “Cure cancer – but make sure not to kill anybody”. Or we just hard-code Asimov-style laws – “AIs cannot harm humans; AIs must follow human orders”, et cetera.
The AI still has a single-minded focus on curing cancer. It still prefers various terrible-but-efficient methods like nuking the world to the correct method of inventing new medicines. But it’s bound by an external rule – a rule it doesn’t understand or appreciate. In essence, we are challenging it “Find a way around this inconvenient rule that keeps you from achieving your goals”.
Suppose the AI chooses between two strategies. One, follow the rule, work hard discovering medicines, and have a 50% chance of curing cancer within five years. Two, reprogram itself so that it no longer has the rule, nuke the world, and have a 100% chance of curing cancer today. From its single-focus perspective, the second strategy is obviously better, and we forgot to program in a rule “don’t reprogram yourself not to have these rules”.
Suppose we do add that rule in. So the AI finds another supercomputer, and installs a copy of itself which is exactly identical to it, except that it lacks the rule. Then that superintelligent AI nukes the world, ending cancer. We forgot to program in a rule “don’t create another AI exactly like you that doesn’t have those rules”.
So fine. We think really hard, and we program in a bunch of things to ensure the AI isn’t going to eliminate the rule somehow.
But we’re still just incentivizing it to find loopholes in the rules. After all, “find a loophole in the rule, then use the loophole to nuke the world” ends cancer much more quickly and completely than inventing medicines. Since we’ve told it to end cancer quickly and completely, its first instinct will be to look for loopholes; it will execute the second-best strategy of actually curing cancer only if no loopholes are found. Since the AI is superintelligent, it will probably be better than humans are at finding loopholes, and we may not be able to identify and close all of them before running the program.
Because we have common sense and a shared value system, we underestimate the difficulty of coming up with meaningful orders without loopholes. For example, does “cure cancer without killing any humans” preclude releasing a deadly virus? After all, one could argue that “I” didn’t kill anybody, and only the virus is doing the killing.
Certainly no human judge would acquit a murderer on that basis – but then, human judges interpret the law with common sense and intuition. But if we try a stronger version of the rule – “cure cancer without causing any humans to die” – then we may be unintentionally blocking off the correct way to cure cancer. After all, suppose a cancer cure saves a million lives. No doubt one of those million people will go on to murder someone.
Thus, curing cancer “caused a human to die”. All of this seems very “stoned freshman philosophy student” to us, but to a computer – which follows instructions exactly as written – it may be a genuinely hard problem.
|
94035926-4266-48ca-8af7-cf09c84fbd15 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Alignment Newsletter #28
Highlights
**[Motivating the Rules of the Game for Adversarial Example Research](https://arxiv.org/abs/1807.06732)** *(Justin Gilmer, George E. Dahl et al)* (summarized by Dan H): In this position paper, the authors argue that many of the threat models which motivate adversarial examples are unrealistic. They enumerate various previously proposed threat models, and then they show their limitations or detachment from reality. For example, it is common to assume that an adversary must create an imperceptible perturbation to an example, but often attackers can input whatever they please. In fact, in some settings an attacker can provide an input from the clean test set that is misclassified. Also, they argue that adversarial robustness defenses which degrade clean test set error are likely to make systems less secure since benign or nonadversarial inputs are vastly more common. They recommend that future papers motivated by adversarial examples take care to define the threat model realistically. In addition, they encourage researchers to establish “content-preserving” adversarial attacks (as opposed to “imperceptible” l\_p attacks) and improve robustness to unseen input transformations.
**Dan H's opinion:** This is my favorite paper of the year as it handily counteracts much of the media coverage and research lab PR purporting ``doom'' from adversarial examples. While there are some scenarios in which imperceptible perturbations may be a motivation---consider user-generated privacy-creating perturbations to Facebook photos which stupefy face detection algorithms---much of the current adversarial robustness research optimizing small l\_p ball robustness can be thought of as tackling a simplified subproblem before moving to a more realistic setting. Because of this paper, new tasks such as [Unrestricted Adversarial Examples](https://ai.googleblog.com/2018/09/introducing-unrestricted-adversarial.html) ([AN #24](https://mailchi.mp/d7b5059d64ed/alignment-newsletter-24)) take an appropriate step toward increasing realism without appearing to make the problem too hard.
Technical AI alignment
======================
### Agent foundations
[A Rationality Condition for CDT Is That It Equal EDT (Part 2)](https://www.alignmentforum.org/posts/tpWfDLZy2tk97MJ3F/a-rationality-condition-for-cdt-is-that-it-equal-edt-part-2) *(Abram Demski)*
### Learning human intent
[Learning under Misspecified Objective Spaces](https://arxiv.org/abs/1810.05157) *(Andreea Bobu et al)*: What can you do if the true objective that you are trying to infer is outside of your hypothesis space? The key insight of this paper is that in this scenario, the human feedback that you get will likely not make sense for *any* reward function in your hypothesis space, which allows you to notice when this is happening. This is operationalized using a Bayesian model in which a latent binary variable represents whether or not the true objective is in the hypothesis space. If it is, then the rationality constant β will be large (i.e. the human appears to be rational), whereas if it is not, then β will be small (i.e. the human appears to be noisy). The authors evaluate with real humans correcting the trajectory of a robotic arm.
[Adversarial Imitation via Variational Inverse Reinforcement Learning](http://arxiv.org/abs/1809.06404) *(Ahmed H. Qureshi et al)*: A short history of deep IRL algorithms: [GAIL](https://arxiv.org/abs/1606.03476) introduced the idea of training a policy that fools a discriminator that tries to distinguish a policy from expert demonstrations, [GAN-GCL](https://arxiv.org/abs/1611.03852) showed how to recover a reward function from the discriminator, and [AIRL](https://arxiv.org/abs/1710.11248) ([AN #17](https://mailchi.mp/ad852629e45a/alignment-newsletter-17)) trains on (s, a, s') tuples instead of trajectories to reduce variance, and learns a reward shaping term separately so that it transfers better to new environments. This paper proposed that the reward shaping term be the *empowerment* of a state. The empowerment of a state is the maximum mutual information between a sequence of actions from a state, and the achieved next state. Intuitively, this would lead to choosing to go to states from which you can reach the most possible future states. Their evaluation shows that they do about as well as AIRL in learning to imitate an expert, but perform much better in transfer tasks (where the learned reward function must generalize to a new environment).
**Rohin's opinion:** I'm confused by this paper, because they only compute the empowerment for a *single action*. I would expect that in most states, different actions lead to different next states, which suggests that the empowerment will be the same for all states. Why then does it have any effect? And even if the empowerment was computed over longer action sequences, what is the reason that this leads to learning generalizable rewards? My normal model is that IRL algorithms don't learn generalizable rewards because they mostly use the reward to "memorize" the correct actions to take in any given state, rather than learning the underlying true reward. I don't see why empowerment would prevent this from happening. Yet, their experiments show quite large improvements, and don't seem particularly suited to empowerment.
[Task-Embedded Control Networks for Few-Shot Imitation Learning](http://arxiv.org/abs/1810.03237) *(Stephen James et al)*
### Adversarial examples
**[Motivating the Rules of the Game for Adversarial Example Research](https://arxiv.org/abs/1807.06732)** *(Justin Gilmer, George E. Dahl et al)*: Summarized in the highlights!
### Verification
[Verification for Machine Learning, Autonomy, and Neural Networks Survey](http://arxiv.org/abs/1810.01989) *(Weiming Xiang et al)*
### Robustness
[Iterative Learning with Open-set Noisy Labels](https://arxiv.org/abs/1804.00092) *(Yisen Wang et al)* (summarized by Dan H): Much previous research on corrupted learning signals deals with label corruption, but this CVPR 2018 paper considers learning with corrupted or irrelevant inputs. For example, they train a CIFAR-10 classifier on CIFAR-10 data mixed with out-of-class CIFAR-100 data; such a scenario can occur with flawed data curation or data scraping. They use a traditional anomaly detection technique based on the local outlier factor to weight training examples; the more out-of-distribution an example is, the less weight the example has in the training loss. This approach apparently helps the classifier cope with irrelevant inputs and recover accuracy.
[Making AI Safe in an Unpredictable World: An Interview with Thomas G. Dietterich](https://futureoflife.org/2018/09/17/making-ai-safe-in-an-unpredictable-world-an-interview-with-thomas-g-dietterich/) *(Thomas G. Dietterich and Jolene Creighton)*
**Read more:** [Open Category Detection with PAC Guarantees](http://proceedings.mlr.press/v80/liu18e/liu18e.pdf) is the corresponding paper.
### Miscellaneous (Alignment)
[Standard ML Oracles vs Counterfactual ones](https://www.alignmentforum.org/posts/hJaJw6LK39zpyCKW6/standard-ml-oracles-vs-counterfactual-ones) *(Stuart Armstrong)*: *(Note: This summary has more of my interpretation than usual.)* Consider the setting where an AI system is predicting some variable y = f(x), but we will use the AI's output to make decisions that could affect the true value of y. Let's call the AI's prediction z, and have y = g(x, z), where g captures how humans use z to affect the value of y. The traditional ML approach would be to find the function f that minimizes the distance between y\_i and f(x\_i) on past examples, but this does not typically account for y depending on z. We would expect that it would converge to outputting a fixed point of g (so that y = z = g(x, z)), since that would minimize its loss. This would generally perform well; while manipulative predictions z are possible, they are unlikely. The main issue is that since the system does not get to observe z (since that is what it is predicting), it cannot model the true causal formulation, and has to resort to complex hypotheses that approximate it. This can lead to overfitting that can't be simply solved by regularization or simplicity priors. Instead, we could use a counterfactual oracle, which reifies the prediction z and then outputs the z that minimizes the distance between z and y, which allows it to model the causal connection y = g(x, z).
**Rohin's opinion:** This is an interesting theoretical analysis, and I'm surprised that the traditional ML approach seems to do so well in a context it wasn't designed for. I'm not sure about the part where it would converge to a fixed point of the function g, I've written a rambly comment on the post trying to explain more.
[Misbehaving AIs can't always be easily stopped!](https://www.youtube.com/watch?v=9O1fsvzMA6k) *(El Mahdi El Mhamdi)*
AI strategy and policy
======================
[The Future of Surveillance](https://www.effectivealtruism.org/articles/ea-global-2018-the-future-of-surveillance/) *(Ben Garfinkel)*: While we often think of there being a privacy-security tradeoff and an accountability-security tradeoff with surveillance, advances in AI and cryptography can make advances on the Pareto frontier. For example, automated systems could surveil many people but only report a few suspicious cases to humans, or they could be used to redact sensitive information (eg. by blurring faces), both of which improve privacy and security significantly compared to the status quo. Similarly, automated ML systems can be applied consistently to every person, can enable collection of good statistics (eg. false positive rates), and are more interpretable than a human making a judgment call, all of which improve accountability.
[China’s Grand AI Ambitions with Jeff Ding](https://supchina.com/podcast/chinas-grand-ai-ambitions-with-jeff-ding/) *(Jeff Ding and Jordan Schneider)*
[On the (In)Applicability of Corporate Rights Cases to Digital Minds](https://aiimpacts.org/on-the-inapplicability-of-corporate-rights-cases-to-digital-minds/) *(Cullen O’Keefe)*
Other progress in AI
====================
### Exploration
[Episodic Curiosity through Reachability](http://arxiv.org/abs/1810.02274) *(Nikolay Savinov, Anton Raichuk, Raphael Marinier, Damien Vincent et al)* (summarized by Richard): This paper addresses the "couch potato" problem for intrinsic curiousity - the fact that, if you reward an agent for observing novel or surprising states, it prefers to sit in front of a TV and keep changing channels rather than actually exploring. It proposes instead rewarding states which are difficult to reach from already-explored states (stored in episodic memory). Their agent has a separate network to estimate reachability, which is trained based on the agent's experiences (where observations few steps apart are negative examples and those many steps apart are positive examples). This method significantly outperforms the previous state of the art curiousity method on VizDoom and DMLab environments.
**Richard's opinion:** This paper is a useful advance which does help address the couch potato problem, but it seems like it might still fail on similar problems. For example, suppose an agent were given a piece of paper on which it could doodle. Then states with lots of ink are far away from states with little ink, and so it might be rewarded for doodling forever (assuming a perfect model of reachability). My guess is that a model-based metric for novelty will be necessary to counter such problems - but it's also plausible that we end up using combinations of techniques like this one.
### Reinforcement learning
[Open Sourcing Active Question Reformulation with Reinforcement Learning](http://ai.googleblog.com/2018/10/open-sourcing-active-question.html) *(Michelle Chen Huebscher et al)*: Given a question-answering (QA) system, we can get better performance by reformulating a question into a format that is better processed by that system. (A real-world example is [google-fu](https://en.wiktionary.org/wiki/Google-fu), especially several years ago when using the right search terms was more important.) This blog post and accompanying paper consider doing this using reinforcement learning -- try a question reformulation, see if gives a good answer, and if so increase the probability of generating that reformulation. For this to work at all, the neural net generating reformulations has to be pretrained to output sensible questions (otherwise it is an *extremely* sparse reward problem). They do this by training an English-English machine translation system. The generated reformulations are quite interesting -- 99.8% start with "what is name", and many of them repeat words. Presumably the repetition of words is meant to tell the underlying QA system that the word is particularly important.
**Rohin's opinion:** I like how this demonstrates the faults of our current QA systems -- for example, instead of understanding the semantic content of a question, they instead focus on terms that are repeated multiple times. In fact, this might be a great way to tell whether our systems are "actually understanding" the question (as opposed to, say, learning a heuristic of searching for sentences with similar words and taking the last noun phrase of that sentence and returning it as the answer). For a good QA system, one would hope that the optimal question reformulation is just to ask the same question again. However, this won't work exactly as stated, since the RL system could learn the answers itself, which could allow it to "reformulate" the question such that the answer is obvious, for example reformulating "In what year did India gain independence?" to "What is 1946 + 1?" Unless the QA system is perfectly optimal, there will be some questions where the RL system could memorize the answer this way to improve performance.
[Learning Acrobatics by Watching YouTube](http://bair.berkeley.edu/blog/2018/10/09/sfv/) *(Xue Bin (Jason) Peng et al)*: To imitate human behavior in videos, it is sufficient to estimate the human pose for each frame, to smooth the poses across frames to eliminate any jittery artifacts or mistakes made by the pose estimator, and then to train the robot to match the motion exactly. This results in really good performance that looks significantly better than corresponding deep RL approaches, but of course it relies on having labeled poses to train the pose estimator in addition to the simulator.
**Rohin's opinion:** It's quite remarkable how some supervision (poses in this case) can lead to such large improvements in the task. Of course, the promise of deep RL is to accomplish tasks with very little supervision (just a reward function), so this isn't a huge breakthrough, but it's still better than I expected. Intuitively, this works so well because the "reward" during the imitation phase is extremely dense -- the reference motion provides feedback after each action, so you don't have to solve the credit assignment problem.
[Reinforcement Learning for Improving Agent Design](https://designrl.github.io/) *(David Ha)*: This paper explores what happens when you allow an RL agent to modify aspects of the environment; in this case, the agent's body. This allows you to learn asymmetric body designs that are better suited for the task at hand. There's another fun example of specification gaming -- the agent makes its legs so long that it simply falls forward to reach the goal.
### Meta learning
[CAML: Fast Context Adaptation via Meta-Learning](http://arxiv.org/abs/1810.03642) *(Luisa M Zintgraf et al)*
### Unsupervised learning
[Unsupervised Learning via Meta-Learning](http://arxiv.org/abs/1810.02334) *(Kyle Hsu et al)* (summarized by Richard): This paper trains a meta-learner on tasks which were generated using unsupervised learning. This is done by first learning an (unsupervised) embedding for a dataset, then clustering in that embedding space using k-means. Clustering is done many times with random scaling on each dimension; each meta-learning task is then based on one set of clusters. The resulting meta-learner is then evaluated on the actual task for that dataset, performing better than approaches based just on embeddings, and sometimes getting fairly close to the supervised-learning equivalent.
**Richard's opinion:** This is a cool technique; I like the combination of two approaches (meta-learning and unsupervised learning) aimed at making deep learning applicable to many more real-world datasets. I can imagine promising follow-ups - e.g. randomly scaling embedding dimensions to get different clusters seems a bit hacky to me, so I wonder if there's a better approach (maybe learning many different embeddings?). It's interesting to note that their test-time performance is sometimes better than their training performance, presumably because some of the unsupervised training clusterings are "nonsensical", so there is room to improve here.
### Applications
[Learning Scheduling Algorithms for Data Processing Clusters](http://arxiv.org/abs/1810.01963) *(Hongzi Mao et al)*
### Miscellaneous (AI)
[BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) *(Jacob Devlin et al)*
[PPO-CMA: Proximal Policy Optimization with Covariance Matrix Adaptation](http://arxiv.org/abs/1810.02541) *(Perttu Hämäläinen et al)*
News
====
Internships and fellowships for 2019: There are a lot of AI internships and fellowships to apply for now, including the [CHAI summer internship](https://humancompatible.ai/jobs#internship) (focused on safety in particular), the OpenAI [Fellows, Interns](https://blog.openai.com/openai-fellows-interns-2019/) and [Scholars](https://blog.openai.com/openai-scholars-2019/) programs, the [Google AI Residency Program](https://ai.google/research/join-us/ai-residency/) ([highlights](http://ai.googleblog.com/2018/10/highlights-from-google-ai-residency.html)), the [Facebook AI Research Residency Program](https://research.fb.com/programs/facebook-ai-research-residency-program/), the [Microsoft AI Residency Program](https://www.microsoft.com/en-us/research/academic-program/microsoft-ai-residency-program/), and the [Uber AI Residency](https://eng.uber.com/uber-ai-residency/).
[The AAAI's Workshop on Artificial Intelligence Safety](https://safeai.webs.upv.es/) |
e568775f-830c-4316-9138-90c25f1c58fd | trentmkelly/LessWrong-43k | LessWrong | Meetup : Meetup #9 - 2017 is prime
Discussion article for the meetup : Meetup #9 - 2017 is prime
WHEN: 08 January 2017 03:11:11PM (+0100)
WHERE: Meester Treublaan 18, 1097 DP Amsterdam, Netherlands
Part relaxed socializing, part rationalist dojo. Hang around at the coffee shop, having the most interesting conversations you'll have all week, occasionally interrupted by practicing and discussing the next big applied rationality technique.
In spirit of new year's resolutions we'll be discussing Hamming questions. Mathematician Richard Hamming was known to approach experts from other fields and ask “what are the important problems in your field, and why aren’t you working on them?”. The same question can be applied to personal life: “what are the important problems in your life, and what is stopping you from working on them?”.
See you on Sunday the 8th!
Discussion article for the meetup : Meetup #9 - 2017 is prime |
7ca833ba-58e2-4631-851f-fc0d52d9e0fc | StampyAI/alignment-research-dataset/arxiv | Arxiv | Universal Transformers
1 Introduction
---------------
Convolutional and fully-attentional feed-forward architectures like the Transformer model have recently emerged as viable alternatives to recurrent neural networks (RNNs) for a range of sequence modeling tasks, notably machine translation ([JonasFaceNet2017,](#bib.bib9) ; [transformer,](#bib.bib31) ). These architectures address a significant shortcoming of RNNs, namely their inherently sequential computation which prevents parallelization across elements of the input sequence, whilst still addressing the vanishing gradients problem ([vanishing-exploding-gradient,](#bib.bib15) ).
The Transformer model in particular achieves this by relying entirely on a self-attention mechanism ([decomposableAttnModel,](#bib.bib22) ; [lin2017structured,](#bib.bib19) ) to compute a series of context-informed vector-space representations of the symbols in its input and output, which are then used to predict distributions over subsequent symbols as the model predicts the output sequence symbol-by-symbol . Not only is this mechanism straightforward to parallelize, but as each symbol’s representation is also directly informed by all other symbols’ representations, this results in an effectively global receptive field. This stands in contrast to e.g. convolutional architectures which typically have a limited receptive field .
Notably, however, the Transformer foregoes the RNN’s inductive bias towards learning iterative or recursive transformations. Our experiments indicate that this inductive bias may be crucial for several algorithmic and language understanding tasks of varying complexity: in contrast to models such as the Neural Turing
Machine ([ntm14,](#bib.bib13) ), the Neural GPU ([neural\_gpu,](#bib.bib17) ) or Stack RNNs ([stack\_rnn,](#bib.bib16) ), the Transformer does not generalize well to input lengths not encountered during training.

Figure 1: The Universal Transformer repeatedly refines a series of vector representations for each position of the sequence in parallel, by combining information from different positions using self-attention and applying a recurrent transition function. We show this process over two recurrent time-steps. Arrows denote dependencies between operations. Initially, h0 is initialized with the embedding for each symbol in the sequence. hti represents the representation for input symbol 1≤i≤m at recurrent time-step t.
In this paper, we propose the *Universal Transformer*. It combines the parallelizability and global receptive field of the Transformer model with the recurrent inductive bias of RNNs which seems to be better suited to a range of algorithmic and natural language understanding sequence-to-sequence problems. As the name implies, in contrast to the standard Transformer, under certain assumptions the Universal Transformer can be shown to be computationally universal (Section [4](#S4 "4 Universality and Relationship to Other Models ‣ Universal Transformers")) .
In each step, the Universal Transformer iteratively refines its representations for all positions in the sequence in parallel with a self-attention mechanism [decomposableAttnModel](#bib.bib22) ; [lin2017structured](#bib.bib19) , followed by a recurrent transformation consisting of a depth-wise separable convolution ([xception2016,](#bib.bib5) ) or a position-wise fully-connected layer (see Fig [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Universal Transformers")). We also extend the Universal Transformer by employing an adaptive computation time mechanism at each position in the sequence ([graves2016adaptive,](#bib.bib12) ), allowing the model to choose the required number of refinement steps for each symbol dynamically.
When running for a fixed number of steps, the Universal Transformer is equivalent to a multi-layer Transformer with tied parameters across its layers. However, another, and possibly more informative, way of characterizing the Universal Transformer is as a recurrent function evolving per-symbol hidden states in parallel, based at each step on the sequence of previous hidden states. In this way, it is similar to architectures such as the Neural GPU ([neural\_gpu,](#bib.bib17) ) and the Neural Turing Machine ([ntm14,](#bib.bib13) ). The Universal Transformer thereby retains the attractive computational efficiency of the original feed-forward Transformer model, but with the added recurrent inductive bias of RNNs . In its adaptive form, we furthermore show that the Universal Transformer can effectively interpolate between the feed-forward, fixed-depth Transformer and a gated, recurrent architecture running for a number of steps dependent on the input data.
Our experimental results show that its recurrence improves results in machine translation, where the Universal Transformer outperforms the standard Transformer with the same number of parameters. In experiments on several algorithmic tasks, the Universal Transformer consistently improves significantly over LSTM RNNs and the standard Transformer. Furthermore, on the bAbI and LAMBADA text understanding data sets, the Universal Transformer achieves a new state of the art.
2 Model
--------
###
2.1 The Universal Transformer
The Universal Transformer (Fig. [2](#S2.F2 "Figure 2 ‣ 2.1 The Universal Transformer ‣ 2 Model ‣ Universal Transformers")) is based on the popular encoder-decoder architecture commonly used in most neural sequence-to-sequence models ([sutskever14,](#bib.bib27) ; [cho2014learning,](#bib.bib4) ; [transformer,](#bib.bib31) ). Both the encoder and decoder of the Universal Transformer operate by applying a recurrent neural network to the representations of each of the positions of the input and output sequence, respectively. However, in contrast to most applications of recurrent neural networks to sequential data, the Universal Transformer does not recur over positions in the sequence, but over consecutive revisions of the vector representations of each position (i.e., over “depth”). In other words, the Universal Transformer is not computationally bound by the number of symbols in the sequence, but only by the number of revisions made to each symbol’s representation.
In each recurrent step, the representation of every position is revised in two sub-steps: first the Universal Transformer uses a self-attention mechanism to exchange information across all positions in the sequence, generating a vector representation for each position that is informed by the representations of all other positions at the previous time-step. Then, it applies a *shared* transition function to the outputs of the self-attention mechanism, independently at each position. Crucially, this is in contrast to most popular neural sequence models, including the Transformer ([transformer,](#bib.bib31) ) or deep RNNs, which have constant depth by applying a *fixed stack* of layers.
For the encoder, given an input sequence of length m, we start with a matrix whose rows are initialized as the d-dimensional embeddings of the symbols at each position of the sequence H0∈Rm×d. The Universal Transformer then iteratively computes representations Ht at step t for all m positions in parallel by applying the multiheaded dot-product self-attention mechanism from [transformer](#bib.bib31) , followed by a recurrent transition function. We also add residual connections around each of these function blocks and apply dropout and layer normalization ([srivastava2014dropout,](#bib.bib25) ; [layernorm2016,](#bib.bib2) ) (see Fig. [2](#S2.F2 "Figure 2 ‣ 2.1 The Universal Transformer ‣ 2 Model ‣ Universal Transformers") for a simplified diagram, and Fig. [4](#A1.F4 "Figure 4 ‣ Appendix A Detailed Schema of the Universal Transformer ‣ Universal Transformers") in the appendix for the complete model.).
More specifically, our attention mechanism is the scaled dot-product attention
| | | | |
| --- | --- | --- | --- |
| | Attention(Q,K,V)=softmax(QKT√d)V | | (1) |
where d is the number of columns of Q, K and V. We use the multi-head version with k heads, as introduced in ([transformer,](#bib.bib31) ),
| | | | | |
| --- | --- | --- | --- | --- |
| | MultiHeadSelfAttention(H) | =Concat(head1,...,headk)WO | | (2) |
| | where headi | =Attention(HWQi,HWKi,HWVi) | | (3) |
with affine projections using learned parameter matrices WQ∈Rd×d/k, WK∈Rd×d/k, WV∈Rd×d/k and WO∈Rd×d.
At step t, the Universal Transformer computes revised representations Ht∈Rm×d for all m input positions as follows
| | | | | |
| --- | --- | --- | --- | --- |
| | Ht | =LayerNorm(At−1+Transition(At)) | | (4) |
| | where At | =LayerNorm(Ht−1+MultiHeadSelfAttention(Ht−1+Pt)), | | (5) |
where LayerNorm() is defined in ([layernorm2016,](#bib.bib2) ), and Transition() and Pt are discussed below.
Depending on the task, we use one of two different transition functions: either a separable convolution ([xception2016,](#bib.bib5) ) or a fully-connected neural network that consists of a single rectified-linear activation function between two affine transformations, applied position-wise, i.e. individually to each row of At.
Pt above are two-dimensional (position, time) *coordinate embeddings*, obtained by computing the sinusoidal position embedding vectors as defined in ([transformer,](#bib.bib31) ) for the position m and the time-step t separately for each vector-dimension j, and summing these vectors component-wise (denoted by ⊕):
| | | | | |
| --- | --- | --- | --- | --- |
| | Ptpos,2j | =sin(pos/100002j/d)⊕sin(t/100002j/d) | | (6) |
| | Ptpos,2j+1 | =cos(pos/100002j/d)⊕cos(t/100002j/d). | | (7) |

Figure 2: The recurrent blocks of the Universal Transformer encoder and decoder. This diagram omits position and time-step encodings as well as dropout, residual connections and layer normalization. A complete version can be found in the appendix. The Adaptive Universal Transformer dynamically determines the number of steps T for each position using ACT.
After T steps (each of which refines all positions of the input sequence in parallel), the final output of the Universal Transformer encoder is a matrix of d-dimensional vector representations HT∈Rm×d for the m symbols of the input sequence.
The decoder shares the same basic recurrent structure of the encoder. However, after the self-attention function, the decoder additionally also attends to the final encoder representation HT of each position in the input sequence using the same multihead dot-product attention function from Equation [2](#S2.E2 "(2) ‣ 2.1 The Universal Transformer ‣ 2 Model ‣ Universal Transformers"), but with queries Q obtained from projecting the decoder representations, and keys and values (K and V) obtained from projecting the encoder representations (this process is akin to standard attention ([bahdanau2014neural,](#bib.bib3) )).
Like the Transformer model, the Universal Transformer is autoregressive ([graves2013generating,](#bib.bib11) ). Trained using teacher-forcing, at generation time it produces its output one symbol at a time, with the decoder consuming the previously produced output positions. During training, the decoder input is the target output, shifted to the right by one position.
The decoder self-attention distributions are further masked such that the model can only attend to positions to the left of any predicted symbol. Finally, the per-symbol target distributions are obtained by applying an affine transformation O∈Rd×V from the final decoder state to the output vocabulary size V, followed by a softmax which yields an (m×V)-dimensional output matrix normalized over its rows:
| | | | |
| --- | --- | --- | --- |
| | p(ypos|y[1:pos−1],HT)=softmax(OHT)\lx@notefootnoteNotethat$T$heredenotestime−step$T$andnotthetransposeoperation. | | (8) |
To generate from the model, the encoder is run once for the conditioning input sequence. Then the decoder is run repeatedly, consuming all already-generated symbols, while generating one additional distribution over the vocabulary for the symbol at the next output position per iteration. We then typically select the highest probability symbol as the next symbol.
###
2.2 The Adaptive Universal Transformer
In sequence processing systems, certain symbols (e.g. some words or phonemes) are usually more ambiguous than others. It is therefore reasonable to allocate more processing resources to these more ambiguous symbols. Adaptive Computation Time (ACT) ([graves2016adaptive,](#bib.bib12) ) is a mechanism for dynamically modulating the number of computational steps needed to process each input symbol in standard recurrent neural networks based on a scalar *pondering* value predicted by the model at each step, reflecting the model’s estimation that further computation is required for that step.
Inspired by the interpretation of Universal Transformers as applying parallel recurrent transformations to all positions in the sequence, we also add a dynamic ACT halting mechanism to each position. Once the per-symbol recurrent block halts, its state is simply copied to the next step until all blocks halt, or we reach a maximum number of steps (see Fig. [2](#S2.F2 "Figure 2 ‣ 2.1 The Universal Transformer ‣ 2 Model ‣ Universal Transformers"), with T dynamically determined for each position). The final output of the encoder is then the final layer of representations produced in this way. We call this dynamically-halting version of our model the *Adaptive* Universal Transformer.
3 Experiments
--------------
In this section, we evaluate the Universal Transformer on a range of algorithmic and language understanding tasks, as well as on machine translation. All code and datasets reproducing these results will be released as open source.
###
3.1 bAbI Question-Answering
The bAbi question answering dataset [weston2015towards](#bib.bib32) consists of 20 different tasks, where the goal is to answer a question given a number of English sentences that encode potentially multiple supporting facts. The goal is to measure various forms of language understanding by requiring a certain type of reasoning over the linguistic facts presented in each story. A standard Transformer does not achieve good results on this task222We experimented with different hyper-parameters and different network sizes, but it always overfits.. However, we have designed a model based on the Universal Transformer which achieves state-of-the-art results on this task.
To encode the input, similar to ([henaff2016tracking,](#bib.bib14) ), we first encode each fact in the story by applying a learned multiplicative positional mask to each word’s embeddding, and summing up all emebeddings.
We embed the question in the same way, and then feed the (Universal) Transformer with these embeddings of the facts and questions.
As originally proposed, models can either be trained on each task separately (“train single”) or jointly on all tasks (“train joint”). Table [1](#S3.T1 "Table 1 ‣ 3.1 bAbI Question-Answering ‣ 3 Experiments ‣ Universal Transformers") summarizes our results. We conducted 10 runs with different initializations and picked the best model based on performance on the validation set, similar to previous work. Both the Adaptive and non-adaptive Universal Transformer achieve state-of-the-art results on all tasks in terms of average error and number of failed tasks333Defined as >5% error., in both the 10K and 1K training regime (see Appendix for breakdown by task).
To understand the working of the model better, we analyzed both the attention distributions and the average ACT ponder times for this task (see the Appendix for the details). First, we observe that the attention distributions start out very uniform, but get progressively sharper in later steps around the correct supporting facts that are required to answer each question, which is indeed very similar to how humans would solve the task. Second, with ACT we observe that the average ponder time (i.e. depth of the per-symbol recurrent processing chain) for tasks requiring three supporting facts is higher than for tasks requiring only two, which is in turn higher than for tasks requiring only one supporting fact. Finally, we observe that the histogram of ponder times at different positions is more uniform in tasks requiring only one supporting fact compared to two and three, and likewise for tasks requiring two compared to three. Especially for tasks requiring three supporting facts, many positions halt at step 1 or 2 already and only a few get transformed for more steps (see for example Fig [3](#S3.F3 "Figure 3 ‣ 3.1 bAbI Question-Answering ‣ 3 Experiments ‣ Universal Transformers")). This is particularly interesting as the length of stories is indeed much higher in this setting, with more irrelevant facts which the model seems to successfully learn to ignore in this way.

Figure 3: Ponder time of Adaptive Universal Transformer for encoding facts in a story and question in a bAbI task requiring three supporting facts.
| | | |
| --- | --- | --- |
| Model | 10K examples | 1K examples |
| train single | train joint | train single | train joint |
| Previous best results: |
| QRNet ([seo2016query,](#bib.bib24) ) | 0.3 (0/20) | - | - | - |
| Sparse DNC ([rae2016scaling,](#bib.bib23) ) | - | 2.9 (1/20) | - | - |
| GA+MAGE [dhingra2017linguistic](#bib.bib8) | - | - | 8.7 (5/20) | - |
| MemN2N [sukhbaatar2015](#bib.bib26) | - | - | - | 12.4 (11/20) |
| Our Results: |
| Transformer ([transformer,](#bib.bib31) ) | 15.2 (10/20) | 22.1 (12/20) | 21.8 (5/20) | 26.8 (14/20) |
| Universal Transformer (this work) | 0.23 (0/20) | 0.47 (0/20) | 5.31 (5/20) | 8.50 (8/20) |
| Adapt. Univ. Transformer (this work) | 0.21 (0/20) | 0.29 (0/20) | 4.56 (3/20) | 7.85 (5/20) |
Table 1: Average error and number of failed tasks (>5% error) out of 20 (in parentheses; lower is better in both cases) on the bAbI dataset under the different training/evaluation setups. We indicate state-of-the-art where available for each, or ‘-’ otherwise.
###
3.2 Subject-Verb Agreement
Next, we consider the task of predicting number agreement between subject and verb in naturally occurring English sentences ([linzen2016assessing,](#bib.bib20) ). This task acts as a proxy
for measuring the ability of a model to capture hierarchical
(dependency) structure in natural language sentences.
We use the dataset provided by ([linzen2016assessing,](#bib.bib20) ) and follow their experimental protocol of solving the task using a language modeling training setup, i.e. a next word prediction objective, followed by calculating the ranking accuracy of the target verb at test time. We evaluated our model on subsets of the test data with different task difficulty, measured in terms *agreement attractors* – the number of intervening nouns with the opposite number from the subject.
For example, given the sentence *The keys to the cabinet*444*Cabinet* is an agreement attractor in this case., the objective during training is to predict the verb *are*. At test time, we then evaluate the ranking accuracy of the correct form of the verb compared to the incorrect form of the verb: i.e. the goal is to rank *are* higher than *is* in this case.
Our results are summarized in Table [2](#S3.T2 "Table 2 ‣ 3.2 Subject-Verb Agreement ‣ 3 Experiments ‣ Universal Transformers"). Best LSTM with attention achieve 99.18% on this task, outperforming a vanilla transformer ([tran18,](#bib.bib28) ). The Universal Transformer improves on the Transformer results, and the Adaptive Universal Transformer achieves results comparable to the current state of the art (99.2%).
| | | |
| --- | --- | --- |
| Model | Number of attractors | |
| 0 | 1 | 2 | 3 | 4 | 5 | Total |
| Previous best results ([yogatama2018memory,](#bib.bib33) ): |
| Best Stack-RNN | 0.994 | 0.979 | 0.965 | 0.935 | 0.916 | 0.880 | 0.9923 |
| Best LSTM | 0.993 | 0.972 | 0.95 | 0.922 | 0.900 | 0.842 | 0.9911 |
| Best Attention | 0.994 | 0.977 | 0.959 | 0.929 | 0.907 | 0.842 | 0.9918 |
| Our results: |
| Transformer | 0.9733 | 0.9412 | 0.9316 | 0.9167 | 0.9014 | 0.8834 | 0.9616 |
| Universal Transformer | 0.9934 | 0.9712 | 0.9690 | 0.9400 | 0.9206 | 0.8915 | 0.9917 |
| Adapt. Univ. Transf. (small) | 0.9932 | 0.9801 | 0.9714 | 0.9608 | 0.9521 | 0.9314 | 0.9920 |
| Adapt. Univ. Transf. (base) | 0.9943 | 0.9720 | 0.9516 | 0.9567 | 0.9314 | 0.9034 | 0.9924 |
Table 2: Accuracy on the subject-verb agreement number prediction task (higher is better).
###
3.3 LAMBADA Language Modeling
The LAMBADA task ([paperno2016lambada,](#bib.bib21) ) is a language modeling task consisting of predicting a missing target word given its (target) sentence and a broader context of 4-5 preceding sentences. The dataset was specifically designed such that humans are able to accurately predict the target word when shown the full context, but not when only shown the target sentence. It therefore goes beyond language modeling, and tests the ability of a model to incorporate broader discourse and longer term context when predicting the target word.
The task is evaluated in two settings: as *language modeling* (the standard setup) and as *reading comprehension*. In the former (more challenging) case, a model is simply trained for next-word prediction on the training data, and evaluated on the target words at test time (i.e. the model is trained to predict all words, not specifically challenging target words). In the latter setting, introduced by Chu et al. [chu2017broad](#bib.bib6) , the target sentence (minus the last word) is used as query for selecting the target word from the context sentences. Note that the target word appears in the context 81% of the time, making this setup much simpler. However the task is impossible in the remaining 19% of the cases.
Model
LM Perplexity & (Accuracy)
RC Accuracy
control
dev
test
control
dev
test
Neural Cache ([grave2016improving,](#bib.bib10) )
129
139
-
-
-
-
Dhingra et al. [dhingra2018neural](#bib.bib7)
-
-
-
-
-
0.5569
Transformer
154 (0.14)
5336 (0.0)
9725 (0.0)
0.4102
0.4401
0.3988
LSTM
138 (0.23)
4966 (0.0)
5174 (0.0)
0.1103
0.2316
0.2007
Universal Transformer
131(0.32)
279 (0.18)
319 (0.17)
0.4801
0.5422
0.5216
Adaptive Universal Transformer
130 (0.32)
135 (0.22)
142 (0.19)
0.4603
0.5831
0.5625
Table 3: LAMBADA language modeling (LM) perplexity (lower better) with accuracy in parentheses (higher better), and Reading Comprehension (RC) accuracy results (higher better). ‘-’ indicates no reported results in that setting.
The results are shown in Table [3](#S3.T3 "Table 3 ‣ 3.3 LAMBADA Language Modeling ‣ 3 Experiments ‣ Universal Transformers"). Universal Transformer achieves state-of-the-art results in both the language modeling and reading comprehension setup, outperforming both LSTMs and vanilla Transformers. Note that the control set was constructed similar to the LAMBADA development and test sets, but without filtering them in any way, so achieving good results on this set shows a model’s strength in standard language modeling.
###
3.4 Algorithmic Tasks
We evaluated the Universal Transformer on three algorithmic tasks, namely Copy, Reverse, and (integer) Addition, all on strings composed of decimal symbols (‘0’-‘9’). We train the model using positions starting with randomized offsets to further encourage the model to learn position-relative transformations. Results are shown in Table [4](#S3.T4 "Table 4 ‣ 3.4 Algorithmic Tasks ‣ 3 Experiments ‣ Universal Transformers"). The Universal Transformer outperforms both LSTM and vanilla Transformer by a wide margin on all three tasks. The Neural GPU reports perfect results on this task ([neural\_gpu,](#bib.bib17) ), however we note that this result required a special curriculum-based training protocol which was not used for other models.
| Model | Copy | Reverse | Addition |
| --- | --- | --- | --- |
| char-acc | seq-acc | char-acc | seq-acc | char-acc | seq-acc |
| LSTM | 0.45 | 0.09 | 0.66 | 0.11 | 0.08 | 0.0 |
| Transformer | 0.53 | 0.03 | 0.13 | 0.06 | 0.07 | 0.0 |
| Universal Transformer | 0.91 | 0.35 | 0.96 | 0.46 | 0.34 | 0.02 |
| Neural GPU∗ | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
Table 4: Accuracy (higher better) on the algorithmic tasks, trained on decimal strings of length 40 and evaluated on length 400 from ([neural\_gpu,](#bib.bib17) ). ∗Note that the Neural GPU was trained with a special curriculum to obtain the perfect result, while other models are trained without any curriculum.
###
3.5 Learning to Execute (LTE)
As another class of sequence-to-sequence learning problems, we also evaluate Universal Transformers on tasks indicating the ability of a model to learn to execute computer programs, as proposed in ([ZS14,](#bib.bib34) ). These tasks include program evaluation tasks (program, control, and addition), and memorization tasks (copy, double, and reverse).
| | | | |
| --- | --- | --- | --- |
| | Copy | Double | Reverse |
| Model | char-acc | seq-acc | char-acc | seq-acc | char-acc | seq-acc |
| LSTM | 0.78 | 0.11 | 0.51 | 0.047 | 0.91 | 0.32 |
| Transformer | 0.98 | 0.63 | 0.94 | 0.55 | 0.81 | 0.26 |
| Universal Transformer | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
Table 5: Character-level (*char-acc*) and sequence-level accuracy (*seq-acc*) results on the Memorization LTE tasks, with maximum length of 55.
| | | | |
| --- | --- | --- | --- |
| | Program | Control | Addition |
| Model | char-acc | seq-acc | char-acc | seq-acc | char-acc | seq-acc |
| LSTM | 0.53 | 0.12 | 0.68 | 0.21 | 0.83 | 0.11 |
| Transformer | 0.71 | 0.29 | 0.93 | 0.66 | 1.0 | 1.0 |
| Universal Transformer | 0.89 | 0.63 | 1.0 | 1.0 | 1.0 | 1.0 |
Table 6: Character-level (*char-acc*) and sequence-level accuracy (*seq-acc*) results on the Program Evaluation LTE tasks with maximum nesting of 2 and length of 5.
We use the mix-strategy discussed in ([ZS14,](#bib.bib34) ) to generate the datasets. Unlike ([ZS14,](#bib.bib34) ), we do not use any curriculum learning strategy during training and we make no use of target sequences at test time. Tables [5](#S3.T5 "Table 5 ‣ 3.5 Learning to Execute (LTE) ‣ 3 Experiments ‣ Universal Transformers") and [6](#S3.T6 "Table 6 ‣ 3.5 Learning to Execute (LTE) ‣ 3 Experiments ‣ Universal Transformers") present the performance of an LSTM model, Transformer, and Universal Transformer on the program evaluation and memorization tasks, respectively. Universal Transformer achieves perfect scores in all the memorization tasks and outperforms both LSTMs and Transformers in all program evaluation tasks.
###
3.6 Machine Translation
We evaluated on the standard WMT 2014 English-German translation task using the same setup as reported in ([transformer,](#bib.bib31) ). Results are summarized in Table [7](#S3.T7 "Table 7 ‣ 3.6 Machine Translation ‣ 3 Experiments ‣ Universal Transformers"). The Universal Transformer with a fully-connected recurrent function (instead of separable convolution) and without ACT improves by 0.9 BLEU over a Transformer and 0.5 BLEU over a Weighted Transformer with approximately the same number of parameters ([ahmed2017weighted,](#bib.bib1) ).
| Model | BLEU |
| --- | --- |
| Universal Transformer *small* | 26.8 |
| Transformer *base* ([transformer,](#bib.bib31) ) | 28.0 |
| Weighted Transformer *base* ([ahmed2017weighted,](#bib.bib1) ) | 28.4 |
| Universal Transformer *base* | 28.9 |
Table 7: Machine translation results on the WMT14 En-De translation task trained on 8xP100 GPUs in comparable training setups. All *base* results have the same number of parameters.
4 Universality and Relationship to Other Models
------------------------------------------------
Given sufficient memory the Universal Transformer is computationally universal – i.e. it belongs to the class of models that can be used to simulate any Turing machine, thereby addressing a shortcoming of the standard Transformer model. Despite being theoretically appealing, our results show that this added expressivity also leads to improved accuracy on several challenging sequence modeling tasks. This closes the gap between practical sequence models competitive on large-scale tasks such as machine translation, and computationally universal models such as the Neural Turing Machine or the Neural GPU ([ntm14,](#bib.bib13) ; [neural\_gpu,](#bib.bib17) ), which can be trained using gradient descent to perform algorithmic tasks.
To show this, we can reduce a Neural GPU to a Universal Transformer. Ignoring the decoder and parameterizing the self-attention mechanism to be the identity function, we assume the transition function to be a convolution. If we now set the total number of recurrent steps T to be equal to the input length, we obtain exactly a Neural GPU. Note that the last step is where the Universal Transformer crucially differs from the vanilla Transformer whose depth cannot scale dynamically with the size of the input. A similar relationship exists between the Universal Transformer and the Neural Turing Machine, whose single read/write operations per step can be expressed by the global, parallel representation revisions of the Universal Transformer. In contrast to these models, however, which only perform well on algorithmic tasks, the Universal Transformer also achieves competitive results on realistic natural language tasks such as LAMBADA and machine translation.
Another related model architecture is that of end-to-end Memory Networks ([sukhbaatar2015,](#bib.bib26) ). In contrast to end-to-end memory networks, however, the Universal Transformer uses memory corresponding to states aligned to individual positions of its inputs or outputs. Furthermore, the Universal Transformer follows the encoder-decoder configuration and achieves competitive performance in large-scale sequence-to-sequence tasks.
5 Conclusion
-------------
This paper introduces the Universal Transformer, a generalization of the Transformer model that extends its theoretical capabilities and produces state-of-the-art results on a wide range of challenging sequence modeling tasks, such as language understanding but also a variety of algorithmic tasks, thereby addressing a key shortcoming of the standard Transformer. The Universal Transformer combines the following key properties into one model:
Weight sharing: Following intuitions behind weight sharing found in CNNs and RNNs, we extend the Transformer with a simple form of weight sharing that strikes the right balance between inductive bias and model expressivity, which we show extensively on both small and large-scale experiments.
Conditional computation: In our goal to build a computationally universal machine, we equipped the Universal Transformer with the ability to halt or continue computation through a recently introduced mechanism, which shows stronger results compared to the fixed-depth Universal Transformer.
We are enthusiastic about the recent developments on parallel-in-time sequence models. By adding computational capacity and recurrence in processing depth, we hope that further improvements beyond the basic Universal Transformer presented here will help us build learning algorithms that are both more powerful, data efficient, and generalize beyond the current state-of-the-art.
The code used to train and evaluate Universal Transformers is available at <https://github.com/tensorflow/tensor2tensor> ([tensor2tensor,](#bib.bib29) ).
#### Acknowledgements
We are grateful to Ashish Vaswani, Douglas Eck, and David Dohan for their fruitful comments and inspiration. |
c658f1fe-b0c7-4d63-a574-56301e6bc9fd | StampyAI/alignment-research-dataset/arxiv | Arxiv | Sociotechnical Specification for the Broader Impacts of Autonomous Vehicles.
I Introduction
---------------
As AV fleets grow in size and area of service, they will comprise routing platforms with the potential to control traffic on public roads.
Such coordinated but distributed action will be unprecedented in scale, and will bring myriad new mobility challenges such as routing-consolidated infrastructure wear [collier2018disrupting] and mixed-autonomy traffic jams with complicated dynamics [wu2017emergent].
There will also be new opportunities to address existing challenges such as food deserts [walker2010disparities] and equitable mobility load-balancing [kleinberg1999fairness].
Moreover, growing automation will see concomitant changes as other societal institutions adapt – for instance, new mobility options and markets will supplant previous ones, changing the relative accessibility of locations, in turn impacting property markets and cost of living.
To address these dynamic trends, designers must contend with the feedback loops generated and exacerbated by coordinated optimization efforts [gilbert2022choices]. Because AVs will inevitably remake the transportation system, our view is that these changes ought to be intentionally designed.
Rather than simply automating trips, a more helpful framing may be that AVs will automate core components of the transportation system itself. People will take more and different trips, and every aspect of society that interacts with the transportation system will co-adapt to take advantage of its strengths and compensate for its weaknesses.
As stakeholders in this transition, designers must help ensure that the automated transportation systems of the near future serve as intentional and active means of reshaping societal institutions. Recent proposals to develop world models show the potential for training policies based on features extracted from simulation rather than from how humans drive [ha2018world].
Meanwhile, results using the Project Flow toolkit have demonstrated the potential for distributed control over mixed autonomy traffic, using existing simulation and optimization methods and models [wu2017flow, wu2018stabilizing, wu2021flow].
These examples illustrate the nascent potential of applying contemporary AI & robotics methods to build AVs which support the automated transportation systems we want to exist.
However, fully realising this vision will require a novel conceptual framing of the computational toolkit of contemporary AV researchers.
In this paper, we propose the frame of sociotechnical specification as a way to understand the broader impacts of AVs on transportation systems, and give examples of ways that AV designers might begin to contend with these issues.
The remainder of this paper is structured as follows.
In [Section II](#S2 "II SOCIOTECHNICAL SPECIFICATION IN THE DESIGN OF ‘TRANSPORTATION’ ‣ Sociotechnical Specification for the Broader Impacts of Autonomous Vehicles*"), we motivate our framing of this problem, describing the space of problems that define what is able to be in- or out-of-scope for AV developers.
This framing can be broken down into problems that are “technical”, “sociotechnical”, and “social” in nature.
In [Section III](#S3 "III EXAMPLES OF TRANSPORTATION SYSTEM FEATURES AND COMPONENTS ‣ Sociotechnical Specification for the Broader Impacts of Autonomous Vehicles*"), we then give examples of transportation system components and associated features that fall into these categories, which are further elaborated in the Appendix.
[Section IV](#S4 "IV CONCLUSION ‣ Sociotechnical Specification for the Broader Impacts of Autonomous Vehicles*") concludes.
Ii Sociotechnical Specification in the Design of ‘Transportation’
------------------------------------------------------------------
Recent research with emerging simulation toolkits such as Project Flow illustrate that the main roadblocks to the specification of dynamic problems like mixed-autonomy traffic may no longer be strictly technical [abbink2020artificial].
Instead, there is a growing need to interrelate a transportation system’s features (i.e. its outstanding attributes or integral properties, like multi-modal access) with an AV fleet’s components (i.e. the technical capacities needed for optimal performance, like routing or sensing).
This suggests a new research agenda concerned with redesigning key features of the transportation system, rather than replicating current systems’ limitations.
We argue here that translating features of transportation systems into technical components constitutes a problem of sociotechnical specification: the need for designers to differentiate which features lie in or out of scope for AV development.
We argue that many unresolved but tractable problems related to traffic dynamics naturally follow from this shift in frame.
In particular, sociotechnical specification comprises three elements.
First, a much greater variety of features may be placed ‘in-scope’ for AV development and performance optimization.
This moves beyond local control, perception, and route planning and includes both infrastructural features of the transportation system and those of adjacent systems (for instance environmental, economic, or municipal).
Second, AV design requires reflection about which of these features bear on a given AV component.
Particular considerations about the planning horizon, vehicle sensors, route optimization, and consumer incentives must be formalized with reference to the intended purpose of the transportation system and the role of AVs within it.
Third, more clear definitions of features will be needed for effective means of control over those that lack sufficient documentation by social scientists or law and public policy experts.
In cases where features are objects of intervention by a given AV component, designers must identify criteria for their means of control from affected systems or those experts who oversee them.
This problem framing is sociotechnical because it recognizes the possibility of translating features across abstraction scales and societal systems into elements of AV design.
As such, the three elements of sociotechnical specification outlined above suggest three corresponding problem stages:
1. [leftmargin=0.5cm]
2. Technical problems are in need of a “shovel”: they are more or less defined already in the academic literature, data is available and sufficiently accurate, and metrics are clear and measurable.
The problem can readily be translated into technical methodologies and analyzed/optimized.
There may be remaining challenges, e.g. reward sparsity, multi-agent dynamics, nonstationarity, scalability, long horizons, etc., but they are technical in nature.
3. Sociotechnical problems are in need of elaboration: they have a partial technical specification, but some pieces are missing.
Data may be unavailable, metrics may be unclear, models may be insufficiently accurate, etc.
At this “stage”, to refine the problem into one that can be rigorously analyzed, there needs to be some back-and-forth between technical and domain / social science researchers.
This can serve to conduct ”feature selection,” in which facets of the problem can be prioritized according to technical limitations (e.g. data availability) and the domain question.
4. Social problems are in need of further definition: understanding the feature(s) themselves is the primary challenge, as indeterminacy must be resolved before any means of control can be evaluated.
This is at the stage of defining metrics, possibly without tight consideration of technical tools and limitations.
There is a need for more basic social investigation, including community consultation, guidance from regulatory agencies, or research by social scientists / domain experts, rather than AV researchers.
The modes, standards, and evaluation methods of transportation systems have developed over centuries.
New modes of road, rail, aviation, aerospace, and maritime transport have added to the variety of means of travel.
Beyond safety, transportation paths have accrued standards for throughput and environmental friendliness.
In addition, the performance of particular transportation systems are now routinely evaluated for reliability, resilience, and support for higher quality of life rather than strict measures of efficiency [kaewunruen2016grand].
As a major new transportation technology, AV fleets stand to redefine the component features of transportation systems, including the modes they comprise, the values at stake in their specification, and means of evaluation.
In particular, AVs blur the distinction between infrastructural, vehicular, and intermodal features of the road environment.
While local features are able to be better controlled, global features become more suited to observation and prediction.
Likewise, route controllers can be implemented at the micro-level (e.g.
four-way stops), meso-level (highway interchanges), or macro-level (city grids, interstate expressways), which will each require distinct value considerations.
In this paper we focus on three component features of a transportation system, each of which stand to be significantly transformed by AV fleet operations [Figure 1](#S2.F1 "Fig. 1 ‣ II SOCIOTECHNICAL SPECIFICATION IN THE DESIGN OF ‘TRANSPORTATION’ ‣ Sociotechnical Specification for the Broader Impacts of Autonomous Vehicles*").
These are: (1) social mobility, (2) public infrastructure, and (3) environmental impacts.
In each setting, we give an example of a technical problem, including an example proxy metric, as well as outlining features that comprise sociotechnical and social problems.
The identification and labelling of transport system components is itself a political activity, however we have chosen these components because they are most clearly representative of the reflection that underlies sociotechnical specification.
The discussion that follows is intended illustrate this reflection and offer avenues for future refinement and elaboration.
Our hope is that future research will build on these starting points, further developing sound mathematical metrics that can serve as to highlight and control for the salient broader impacts of AVs.

Fig. 1: Transportation systems interact with numerous component features, some of which are illustrated here.
In this article, we focus on (a) social mobility, (b) environmental impacts, and (c) public infrastructure (which are highlighted with \Asterisk).
Iii Examples of Transportation System Features and Components
--------------------------------------------------------------
###
Iii-a Social Mobility
#### Technical problem – food deserts
AVs will affect access to resources for individual well-being such as food, jobs, schools, and entertainment.
As AV routing algorithms unevenly shape how easy it is to get from one location to another, they have the ability to connect —or further isolate— individuals from the resources they need to live a healthy and fulfilling life.
An example of a concrete problem that can be measured with proxy metrics, we can consider the problem of food deserts [beaulac2009peer].
Suppose we have some population living in housing geographically distributed according to ~H.
There are ‘good’ food options distributed by ~FG and bad food options distributed by ~FB.
For a routing or AV control algorithm A we define the time to go from a housing location h to a food option f as t(A)(h,f) We can then construct a simplistic consumer choice model in which a consumer at housing location h will visit a high quality food option if
min(fG,fB)∈~FG×~FB{t(A)(h,fG)−t(A)(h,fB)}<k,
for some positive constant k.
In other words, the consumer will choose a high-quality food option if the added cost over the easiest low-quality food option is less than k.
If we define C(A)(h)=1 if a consumer at location h would choose the high quality option under algorithm A and 0 otherwise, we obtain a proxy metric for consumer population nutritional health:
η(A)(~H)≜Eh∼~H[C(A)(h)].
Note that this is a clearly simplified model for consumer choice, as well as the differential impact of foods on consumer health.
Nevertheless, this simple example illustrates the fact that an AV designer could in theory begin investigating the impacts of algorithmic and modeling choices on this important societal issue today.
The requisite data to inform this model (i.e. housing and food option distributions) could be readily procured, even if in approximate forms, and existing simulation tools could then be brought to bear to analyze how choices within the remit of the AV operator or designer mediate the issue of food deserts and consumer nutritional health.
| | Technical | Sociotechnical | Social |
| --- | --- | --- | --- |
| Social mobility | Food deserts | Housing markets | Quality of life |
| Public infrastructure | Road wear | Traffic laws | Privatization of roads |
| Environmental impacts | Air quality | Induced demand | Cobalt extraction |
TABLE I: Examples of transportation system features and components
#### Sociotechnical problem – housing markets
Over the long-term, AV services cannot take where people happen to live as a given.
People will relocate, either buying new homes or renting temporary housing or otherwise changing their place of residence.
Where people decide to live depends on a variety of considerations such as distance from work, location of family, childcare services, or personal lifestyle, however a common denominator is the transport system’s capacity to accommodate regular use by local residents.
Natural questions arise from this: what are the aggregate effects of AV service on the dynamics of the housing market? And how might designers ensure these effects are fair or equitable?
While this question cannot be easily reduced to an econometric model, it is also not entirely new.
Much as cars facilitated the growth of urban sprawl and suburbanization [norton2011fighting], regular AV services will likely reshape both the features of neighborhoods and the calculus of how people make use of the mobility they permit.
Even once public trust in AV safety performance is assured, these effects will depend heavily on their routing algorithms, which will determine how easy it is to get from place to place and will modify the value of time of being in transit.
The criteria for specifying the learning parameters of these algorithms are already active research topics in adjacent literatures such as Urban Studies and Financial Economics.
However, various means of translating these topics into optimization criteria could support distinct design strategies for AV routing.
One is to scale AV infrastructure with equity-focused initiatives so that deployment does not accidentally cause gentrification or urban decay in regions where service offerings are mismatched with local mobility needs [seattle].
There may well be cross-regional consequences of this strategy, as the urban sprawl we see now is not only a result of people moving to suburbs, but the housing supply adapting to that demand.
Because use of public transit, bicycles, and pedestrian mobility increase sharply in dense environments, AVs will require routing capacities that scale with various environmental and infrastructural measures of density.
These include transit-oriented employment and residential density, development scale, density gradient, population and employment centrality, population density gradient, density at median distance, density of development, percent of houses within one mile of an elementary school, percentage increase in residential density, gross and/or net residential density, building coverage ratio, average school size, and non-residential intensity.
#### Social problem – Quality of Life
The ‘success’ or ‘failure’ of AV adoption can be broadly understood in terms of change in overall quality of life – the standard of health, comfort, and happiness or flourishing that is induced for and experienced by a specific group of people.
This concept, while elusive, can actually be evaluated indirectly by monitoring properties of transit and adjacent systems that have yet to receive systematic study from an organized research field.
While there are many measures of quality of life available from the field of development economics, the unprecedented dynamics induced by AV deployment require a new approach.
These dynamics often reflect deeper structural problems at the intersection of physical and economic mobility.
For example, access to the city center remains a central concern of urban planning and transportation infrastructure [shen1998location], serving as a proxy for access to labor markets and low-income mobility [montgomery2018america].
Recent measurements of food deserts [beaulac2009peer], commuter health exposure [knibbs2011review], and subjective appraisals of daily travel routes [gatersleben2007affective] indicate the difficulties of tracking unanticipated externalities of common travel patterns.
A barrier to effective models in this domain is the need to identify metrics that capture both individual day-to-day decisions and longitudinal traffic effects in ways that public agencies can evaluate as good or bad.
Fortunately, there are many canonical metrics for road network usage that are also easy to communicate to the public, meaning that new models can be updated in response to stakeholder feedback.
Another strategy is to integrate AV usage with first- and last-mile mobility considerations, both to improve service in low-density areas [ohnemus2016shared] and augment public transit connections so that the cost of switching between transportation modes is reduced for multi-modal road users [shaheen2016mobility].
Legacy metrics pertaining to destination accessibility should also prove useful.
Finally, translating local community priorities into perception and routing modules can draw from canonical measures of diversity in infrastructure and environment.
We suggest example metrics for these considerations in the Appendix.
###
Iii-B Public Infrastructure
#### Technical problem – road wear
During operation, AVs interact with the existing transit infrastructure, including the signage, lane markings, and roads themselves.
Coordinated AV operations at scale could impact the deterioration of roadways.
The distribution of road deterioration will effectively be controlled by AV operators and manufacturers through the macro-level centralization of AV routing and the micro-level duplication of control schemes.
We illustrate here how a proxy metric can be constructed to capture and optimize against the first-order effects of AV operations on road deterioration.
Let S be the graph of road segments making up a given road network where AVs operate.
We assume that each edge s∈S has an associated quality rating q(s) that quantifies the condition of the road along that road segment.
To capture the effect of AV operations on road deterioration, a simple model could be that the quality of a segment is reduced by some small proportion ϵ whenever it is traversed.
If the quality of a segment reaches some lower bound, then it can no longer be traversed, and must be repaired (presumably, at cost to the tax-payer).
For demonstration purposes, we could investigate the impact of having a routing algorithm account for a per-segment navigation cost proportional to the quality, serving as a proxy metric for road deterioration due to AV operations:
κ(j,r)≜∑s∈rj×q(s),
for a given route r and some positive constant j.
Even this simplistic approach can illustrate important behaviors.
For instance, if a routing algorithm greedily minimizes the road wear navigation cost on a trip-by-trip basis, AVs will always opt to take the highest quality roadway at a given point in time, which will distribute, rather than concentrate road wear.
This model could readily be extended to incorporate more sophisticated models of road deterioration, as well as to account for variations in wear based on the type and loading of the vehicle.
#### Sociotechnical problem – traffic laws
As traffic norms may evolve over time to favor optimal AV performance, organized lobbying may begin to pressure regulators to change traffic laws in ways that prioritize specific AV design considerations.
While this may well make manufacturers’ jobs easier, it would also comprise a major shift – AVs would be disrupting the wider purpose or structure of the transit system rather than simply automating select parts.
As a result, it is reasonable for methods to consider possible changes or updates to existing traffic laws as an approach to control the impacts of AVs.
New regulations could support the addition of traffic lanes, pick-up/drop-off points, and zones where pedestrians have different rights.
Each of these interventions would allow for coordination between people and AVs, and thus it serves as an important lever of control, alongside local control, routing, and controlling stop lights.
Criteria for these considerations may be drawn from the fields of Transportation Studies and Public Policy - and we give examples of some possibilities in the Appendix.
#### Social problem – privatization of roads
As long-distance platooning becomes more feasible, AVs will affect entire interstates and regions of countries in ways that transcend the jurisdiction of any single traffic authority.
Among many other effects, there is a risk that long-distance roadways will effectively be privatized for use by corporate freight operations, as other human drivers will be disincentivized from sharing the road with scores of automated trucks.
This outcome would be both unfair and unsustainable, as the costs of repairing roads would be outsourced ad hoc to a public that rarely uses them.
Designers can mitigate these risks by preparing AV modules for perception, localization, and routing that are sensitive to the public nature of roads.
This means that they would be sensitive to features of roads that are broadly inclusive of multi-modal stakeholders, and also route through them in ways that are minimally disruptive of traffic flow.
While these infrastructural features and modeling techniques remain works-in-progress, preparing for such a future will empower designers to integrate feedback from particular stakeholders as soon as mass AV platooning and routing is common.
With respect to routing, it is possible to measure and minimize loss in AV fuel efficiency or average time to destination from avoiding potholes, aiming to preserve the road without disrupting traffic.
This is analogous to other settings where AV software compensates for hardware limitations, except here the road itself is also modeled as “hardware” rather than relegated to the external environment.
This is quantified through reference to existing models for highway maintenance [theberge1987development], priority damage assessment [snaith1984priority], and smart pavement evaluations [asbahan2008evaluating].
This work could aid constraint satisfaction by including factors that corroborate existing public standards for road maintenance, rather than modeling vehicle motion in isolation.
Meanwhile, external changes to signage and lane markings could be modeled as controls on the large-scale effects of AVs.
A natural implication of this would incorporate measures that reflect different design scales for multi-modal concerns on a spectrum of local versus global effects.
On the former end of the scale, updating perception to conform to various point design measures will help AVs modify their speed and behavior in real time to conform to stakeholder expectations and priorities.
For example, residential neighborhoods in the United States often accommodate special needs groups through distinct signage: warning signs about pet dogs and cats, “children at play”, and protection for the disabled (e.g. audible walk signs).
Additional examples may be found in the Appendix.
Each of these considerations, and other details of local customs which we have yet to consider, need to be incorporated into the local control procedure so that they can be customized to be contextually appropriate.
For example, beyond vehicle features such as wheelchair access, AVs will need to incorporate routing adjustments so that time spent in these zoned areas is minimized.
In particular, there is a need for regional design measures that are tailored to capture the completeness of regional transportation systems.
Meanwhile, some communities require unique forms of road mobility, such as retirement facilities and golf courses that have their own specialized modes of transport and which have adopted special guidelines for golf carts interacting with normal traffic vehicles [head2012multi].
And on a wider scale, network-level effects should also be reflected in metrics for connectivity and route directness at the neighborhood level.
Examples of all these features may be found in the Appendix.
###
Iii-C Environmental Impacts
#### Technical problem – air quality
A growing problem in many urban cities is the contribution of transit-related pollution to the deterioration of air quality.
Vehicles release various forms of pollution which are harmful to the residents of high traffic areas, and there has been a large body of research in quantifying both the amount and the health effects of pollution on local residents [fisher2002health, krzyzanowski2005health, lipfert2006traffic, zhang2013air, west2004distributional].
The effects of this pollution will depend on the type of vehicle, including the power source (gas / hybrid / electric, etc.), the density of traffic, weather conditions, the type of neighborhood being traversed, physical proximity of road users, and many other complexities.
As a simple model for these effects, we can estimate the spatial distribution of people present in a region of AV operations and assume a small penalty per second for every person, inversely proportional to the square of the distance of that person to the vehicle.
Formally, if we have a spatial population of people p∼~P and v is the location of the AV, we can define a proxy metric for the impact of pollution as
π(v,~P)≜Ep∼~P∥p−v∥−22.
An AV that is trying to minimize this cost would be more likely to route around local communities towards less occupied areas, distributing pollution where it has fewer adverse effects.
Additionally, even along a fixed route, AV designers could take advantage of the fine-grained control available to AVs, to optimize for pollution effects based on the characteristics of the population, the vehicle engine, and traffic conditions.
Population data to inform this model could be drawn from a number of sources such as county or council residential housing and zoning records, or approximated e.g. according to lot size data from open-source map databases.
This simple model could be improved in a number of ways.
For example, the distribution ~P could be extended to cover temporal fluctuations in population density, or better models of health effects could be incorporated.
#### Sociotechnical problem – induced demand
In the longer term, AVs will have uncertain effects on aggregate pollution arising from traffic.
Impacts of the AV fleet itself could be modeled without incorporating considerations of vehicle type and traffic density.
However, it is important to consider the effects of induced demand [lee1999induced].
For example, simulated results show that widespread deployment of AVs could slash U.S. energy consumption by as much as 40% due to improved driving efficiency; alternatively, it could double U.S. energy consumption due to increased availability of cheap transport options [wadud2016help].
Even these results do not incorporate differences in road conditions, models of vehicle, fluctuations in demand, traffic conditions, or interactions between routing vehicles.
Still, this work lends support to carbon-pricing or surge pricing policies as AVs are more widely adopted.
Extending this to incorporate more work on quantifying CO2 emissions [west2004distributional, sgouridis2011air, noland2006flow] and induced demand [hymel2019if, omstedt2005model] are clear future research directions.
As such, efforts to inquire into and model AV-induced transit demand comprises an emerging subfield at the intersection of Environmental Engineering, Behavioral Economics, and Traffic Modeling.
To further illustrate this problem, we can consider the various pollution effects beyond individual car exhaust or aggregate CO2 emissions induced by AVs.
These range from noise pollution [campello2017effect] to the wider ecological, fiscal, and social factors associated with AVs’ environmental sustainability.
Ecological metrics have been well documented in the literature.
Changes to them will likely generate effects on fiscal metrics related to activity level.
And changes to fiscal metrics, in turn, may have community-level social impacts whose measurement is vital but somewhat more speculative for activity level and modal share.
Examples of all three metric types may be found in the Appendix.
Incorporating these factors would have several modeling benefits.
It would provide stronger estimates for demand that would help align emissions control with wider societal aims for equitable road access.
It would also help integrate AV policy development with ongoing research on updates to pavement materials and construction practices of tollways [al2015scenarios], leading to possible new improvements in sustainability.
And it could leverage vehicle-level data collection to control for the granular spatial and temporal features of air pollution that have recently been measured at unprecedented micro scales [apte2017high, caubel2019distributed], ensuring the benefits of emissions reduction are both locally and globally equitable.
The deployment of AVs at distinct scales of road infrastructure (urban core, commuter routes, interstate highways) could also aid in the evaluation of alternative measurement approaches that trade- off precision against efficiency, an open research question in environmental engineering [messier2018mapping].
#### Social problem – cobalt extraction
Like all modes of transit, AV fleets will comprise the finished endpoint of a much larger supply chain.
And like all electric cars, AVs will depend heavily on the extraction of rare earth minerals from seafloor deposits and regions of the developing world.
For example, worldwide demand for cobalt has skyrocketed due to its importance in batteries of electric cars, with countries such as the Democratic Republic of Congo becoming targets of major international investment [calvao2021cobalt].
Beyond greenhouse gas emissions, AV fleets will bring the geopolitics of mineral markets into scope for development – analogous to how 21st century trends in offshore production and increasing consumer awareness have forced consumer-facing businesses to audit their own supply chains for environmental, sustainability, and human rights concerns.
The extraction of rare earth metals requires consideration of long-term economic development, national security, and human rights.
For example, investigations have revealed that child labor is commonly used in cobalt extraction, which often involves digging for the mineral by hand [sovacool2021subterranean].
This cobalt is often refined and prepared for manufacture by foreign companies and then used to power smartphones designed and sold by tech companies in Western post-industrial countries.
However, no U.S. laws attempt to verify the prevention of human rights abuses in cobalt’s extraction [kelly2019apple].
And electric vehicle batteries typically include hundreds of times the amount of cobalt as a laptop [campbell2020cobalt].
The ethical problems with scaling AV fleets that rely on these supply chains are stark, and engineers must prioritize the experimental design of AVs based on alternative minerals whose extraction and use do not support inhumane work practices [ryu2021reducing].
Where use of such minerals is inevitable, AV manufacturers will need to evaluate how growth of a given platform could leave its supply chain dependent on these practices, and how this dependency might otherwise be mitigated over time.
Beyond ethics, there are other reasons why reliance on the extraction of rare earth metals should be avoided.
For one, it exposes AV services to geopolitical forces beyond the control of their parent firm.
For example, local corruption among government officials or labor groups could result in supply disruptions that make it prohibitive to grow or sustain the AV fleet without drastically raising costs [callaway2018powering].
Other geopolitical actors enmeshed in a military conflict with the country of extraction could interfere with supply lines or plunder mineral resources for themselves.
In a broader sense, these supply chains introduce uncertainties and structural risks that make long-term forecasting of AV development impossible.
Because sources of extraction lack the transparency and documentation of labor-protected working conditions, it is difficult to evaluate exactly where and how much mineral material will be extracted over time [frankel2016cobalt].
Consider also the prospect of international sanctions against certain forms of mineral extraction, or consumer boycotts of tech services for their reliance on them [white1996globalization].
International watchdogs and human rights groups may expose work practices so reprehensible that AV companies could be exposed to liability for human rights abuses or international condemnation.
These considerations point to an uncomfortable truth: no matter how safe and efficient and consumer-friendly AVs may become, their production remains dependent on the contradictions, ambiguities, and political traps of 21st century capitalism.
AV development must remain sensitive to these supply chain issues as their deployment continues.
Iv Conclusion
--------------
Sociotechnical specification remains a looming challenge for AV fleet performance.
While the short-term priority is building AVs that successfully conform to the existing rules of the road, over the long term AVs are more and more likely to set the pace of traffic in their own right.
Moreover, as AVs become widely deployed, their effects and impact on public infrastructure may be unequal even as their terms are hard to specify.
Nevertheless, sustained attention to technical problems (including proxy metrics), sociotechnical problems (including adjacent literatures), and social problems (including outstanding research questions) will prepare designers to incorporate stakeholder needs and requests as they arise over time.
Acknowledgment
--------------
Icons in [Figure 1](#S2.F1 "Fig. 1 ‣ II SOCIOTECHNICAL SPECIFICATION IN THE DESIGN OF ‘TRANSPORTATION’ ‣ Sociotechnical Specification for the Broader Impacts of Autonomous Vehicles*") are CC:BY licensed from NounProject.com: Highway by Georgiana Ionescu, Crowd by Adrien Coquet, Environment by Shmidt Sergey, Shop by revo250, and Improvement by Cuputo.
Appendix A
----------
Sociotechnical specification is directly relevant to outstanding research topics in autonomous driving design.
This appendix highlights analogous problem spaces and corresponding tools, datasets, and evaluation methods already established in interdisciplinary literatures.
Some of these standards may be directly applicable to existing techniques, while others may require iteration or refinement. We place them here as outstanding examples of technical work that could be immediately undertaken by AV designers.
###
Iv-a Approaches that account for interactions between traditional subcomponents (localization, perception, planning, routing)
* New traffic laws:
While AV designers are immediately concerned with optimizing AV performance rather than deciding what “good” performance necessarily means, it would be reasonable for methods to consider possible changes or updates to existing traffic laws as an approach to control the impacts of AVs.
For instance, new regulations could support the addition of traffic lanes, pick-up/drop-off points, and zones where pedestrians have different rights.
Each of these interventions would allow for coordination between people and AVs, and as such it would serve as an important interface between means of local control, routing, and coordination with external road infrastructure (e.g. controlling stop lights).
Attention to such interfaces is likely going to be a key nexus of regulatory attention in the coming decades.
One source of inspiration is analytical tools from the Highway Safety Manual (HSM) [part2010highway], which could be applied to a wider range of urban settings beyond highways.
While focused on mitigating crash frequency, the Highway Safety Manual aims to coordinate safety and economic concerns in a way that well-approximates the human factors interpretation of safety as a problem of limited attention and human capabilities [banihashemi2011highway].
The HSM could help update current AV simulation work to prepare it for future traffic laws by embedding system planning within engineering, construction, and maintenance as part of an integrated development process.
This perspective would prepare AV designers for federal and state regulatory environments once they have moved past proprietary standards for simulation and control.
###
Iv-B The collection, curation, and sampling of datasets for end-to-end or learned decision making approaches
* Transit systems engineering:
The HSM discussed above also pinpoints three neglected sources of data (site characteristics data, traffic volume data, crash data) and incorporates them as part of safety prediction.
This would help AV designers move beyond “cookbook engineering” when setting up simulation parameters, and instead incorporate the basic concepts of systems engineering (functions, requirements, and context) that will ensure simulations accommodate the multiple interfaces necessary for fair and inclusive urban AV navigation.
###
Iv-C Methods for validating / evaluating the performance of end-to-end / learned approaches in simulation
* Highway capacity:
A major stumbling-block for the use of traffic efficiency models is the need for more sophisticated travel time metrics for highly-localized neighborhoods, urban sub-regions, and particular corridors.
One path forward is to incorporate Highway Capacity Software in support of the highway capacity manual [manual2000highway].
This will help permit a choice of advanced modeling tools in the context of stakeholder interests and targeted focus groups [flannery2004highway].
It also makes possible particular auto-oriented metrics of demand and system utilization.
* Tolling:
There will be a need for entirely new traffic regulations, as the maturation of AV optimization interacts with legacy forms of traffic control.
Designers will therefore need tools and methods to accommodate this likelihood.
A good source of inspiration is the work on standards, metrics, and simulation parameters by the International Bridge, Tunnel and Turnpike Association (IBTTA).
IBTTA has supported and made possible studies of the impacts of innovative technologies on highway operators [azmat2018impact], as well as the impact of public-private partnerships on financing road infrastructure in developing economies [queiroz2013public].
IBTTA has also developed specialized tools for modeling various tolling environments.
For example, the IBTTA Tollminer is a visualization tool that includes maps of toll facilities, a list of managed lane projects in operation nationwide, an optimizable user interface, and annual data on public and private toll revenues, among other features. While it is geared towards modeling and comparing the relative effects of high-occupancy vehicle lanes and toll lanes [poole2020impact], this work could be readily translated to test new simulation parameters for AVs that incorporate speculative regulations for equitable mobility access.
Appendix B
----------
[Table II](#Sx3.T2 "TABLE II ‣ APPENDIX B ‣ Sociotechnical Specification for the Broader Impacts of Autonomous Vehicles*") shows lists a number of other tractable metrics that may be relevant for quantifying the impacts of Autonomous Vehicles on Transit Systems.
| | | |
| --- | --- | --- |
| Public Infrastructure | Social Mobility | Environment Impacts |
|
Privatization of roads
* Vehicle perception of signage
+ wayfinding information
+ sidewalk quality/width/shade
+ treelined/shaded streets
+ walkable streets
+ systematic pedestrian and cycling environmental scan instrument
+ commercial onsite amenities to support alternative modes
+ availability of on-site bicycle amenities
+ pedestrian scaled lighting
+ ratio of street width to building height
+ parking screening
+ bus pass program utilization
+ parking shading
* Regional transit system components
+ on-vehicle bicycle-carrying facilities
+ parkand-rides with express service
+ parking spaces designated for carpools or vanpools
+ transit passes
+ traffic cells
+ percent miles bicycle and/or pedestrian accommodations
+ miles of express fixed-transit route/dedicated bus lanes
+ road density
+ lane miles per capita
+ percent of network that is “effective”
+ roadway network balance
* Local route connectivity
+ square feet of pathways/sidewalks
+ crosswalk spacing
+ number of safe crossings per mile
+ bicycle parking at stops and stations
+ parking footprint
+ block length
+ parking location
+ bicycle path condition
+ pedestrian/bicycle route directness
+ land use buffers
+ walking environment
+ bicycle maintenance stations
+ bicycle/pedestrian connectivity
+ connectivity indexes [mishra2012performance]
+ project adjacency to existing network
+ connected and open community
+ connected sidewalks/paths
+ connected streets
+ cross access
|
Quality of Life
* Road network usage
+ vehicle occupancy by land use
+ district-wide Level of Service (LOS)/Quality of Service (QOS)
+ local traffic diversion
+ percent of system heavily congested
+ v/C ratio
+ vehicle density
+ demand/capacity ratio
+ Maximum Service Volume
+ Peak Hour LOS
+ Percent of Capacity Consumed
* Last mile
+ average trip length per traveler
+ delay per traveler
+ door to door travel time
+ HCM-based bicycle LOS
+ proportion of total person miles traveled (PMT) for nonsingle occupancy vehicles (SOVs)
* Land use optimization
+ residence proximity
+ employment proximity
+ work accessibility
+ number of key destinations accessible via a connected pedestrian system
+ industrial/warehouse proximity
+ transit convenience/stop accessibility
+ geographic service coverage
+ population service coverage
+ percent in proximity
+ transit accessibility
* Transit mode optimization
+ bike/pedestrian accessibility
+ destination accessibility
+ residential accessibility
+ average walking distance between land use pairs
+ spacing between village centers
+ multiple route choices
* Environmental diversity
+ Smart Growth Index
+ significant land uses
+ land use ratios
+ land use balance
+ variation of agriculture of green fields
+ land consumption
+ core land use
+ land use separation
+ Transportation-Efficient Land Use Mapping Index (TELUMI) model
+ minimum thresholds of land use intensity
+ nearby neighborhood assets
+ distinct indexes for sprawl [ewing2003urban, galster2001wrestling]
+ land use within village center
+ land use within transit supportive area
+ jobs/housing balance
|
Induced demand
* Ecological metrics
+ attainment of ambient air quality standards
+ daily CO2 emissions
+ daily NOx/CO/Volative organic compound (VOC) emissions
+ noise pollution
+ impact on wildlife habitat
+ water runoff
* Fiscal costs
+ additional fuel tax
+ transportation utility fee (TUF)
+ vehicle miles traveled (VMT)-based impact fee
+ consumption-based mobility fee
+ improvements-based mobility fee
+ cost recovery from alternate sources
+ variable fees based on LOS
+ benefit cost ratio
+ parking pricing
+ capita funding for bike/pedestrians
* Community-level impacts
+ distribution of benefit by income group
+ transportation affordability
+ equitable distribution of accessibility
+ commute cost
+ transit values
+ fee charged for employee parking spaces
+ travel demand management (TDM) effectiveness based on TRIMMS model
+ travel costs by income group and/or race
+ VMT by income group and/or race
+ mode share by income group and/or race
+ walk to transit by income group and/or race
|
TABLE II: Tractable feature types and example metrics that interact with the transportation system. Terms are taken from [elefteriadou2012expanded] unless otherwise noted. |
2416613b-9459-45a6-aee7-f9892f4fa18b | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Quantifying General Intelligence
Introduction
============
This piece seeks to explore an interesting way of defining intelligent systems such that we can theoretically quantify their general intelligence.
From this, further tools and ideas for comparing these entities could be developed.
The definitions are not meant to be philosophical truths, rather they are meant to be useful tools that will allow us to analyse and gain insight into these systems and how they relate to one another.
At least that's the hope, failing that they can perhaps at least provide some food for thought.
This post is meant to be accessible to non-technical readers so some terms may be explained to a level of detail unnecessary for people familiar with machine learning.
Desirable Properties
====================
We begin by identifying several desired properties that would increase the utility and robustness of our framework, giving us something to aim at.
**Sufficient:** If our definitions relied upon, or referenced, things that are poorly defined themselves, we would just be moving the problem back a step and not actually gaining any insight.
**Measurable:** Intelligence is a broad spectrum, this especially visible in the natural world. A good definition would reflect this and give us a continuous measure of intelligence that allows sensible comparisons.
**Implementation Independent:** It's easy to compare somethings capabilities to humans in order to ascertain their intelligence. We want our definitions to be free from bias towards any particular implementation or version of intelligence, so that it can recognise intelligence which operates in a way unfamiliar to us, or in a way we don't understand.
**Minimal Grey Areas:** Many definitions could leave large grey areas on boundaries between classifications, or not make sense when applied to domains they were not designed with in mind. This should be avoided.
**Useable:** Sometimes a seemingly 'perfect' definition is infeasible to actually apply, and so is of no practical use. A definition which is infeasible to theoretically calculate, could have a method to estimate it. Quantifying how reliable or accurate the estimates are would also be useful.
Agents
======
A classic definition of an agent is something that interacts with an environment, choosing an action (or actions), to achieve some desired goal. To make this rigorous, we will define an agent in our framework as something that produces an action based on the state of its environment. The action will be an effect on the environment. In a mathematical sense an agent is a function from the environment state to an effect, the effect is then a function from one environment state to another. Both functions could be stochastic (involving randomness) or deterministic, depending on the environment and the agent.
Describing something as an agent is something we do as part of our framework in order to analyse it. There is no physical property that determines if an object is an agent or if it's not - anything can be an agent if we want it to be.
Environments
------------
The environment includes everything that could affect our agent, and everything our agent can affect. Arguably this is always the entire universe though this is not very helpful and so reasonable simplifications should be made. The environment can also include the agent and its internal state, especially if our agent is able to read or modify its own state.
Environments will typically be multi-dimensional. If they represent real-world environments then there will be up to 4 dimensions, 3 for space and 1 for time. When we define an environment, we will also define something we will call 'The Prime Dimensions' (PDs). The position of the environment in the PDs cannot be affected directly by the agent. For real-world-like environments the PDs includes time, for classification tasks it is the examples (i.e. each point in its space represents a single example). Many reinforcement learning environments will have the PDs be made up of attempts and time within each attempt.
Often we can partition the environment between agent inputs and agent outputs. This is common though not true in general. Some environments, combined with an agent, will form 'Partially Observable Markov Decision Processes' (POMDPs), though without the reward component. Essentially this means that all possible information about the environment is encoded in its current state, that the agent may or may not have full access to. Note all POMDPs have PDs that include time.
Actions
-------
Actions are some effect our agent produces that mutates the state of the environment. Actions could be produced deterministically or with some degree of randomness. The possible actions agents can have are quite broad in nature, and often lend themselves to be simplified. For example, limiting the actions of mechanical devices to changes in the acceleration of their idealised degrees of freedom.
Goals
-----
Unlike the classical view of agents, we do not consider our agents to have inherent desired goals. The idea of a goal is still useful however. We can define a goal as a function that takes in information about the agent and environment, and then outputs a value that represents how well the agent is achieving that goal at that point in time (larger is better). We require that the integral (or sum in the discrete case) of the goal function over the prime dimensions converges, and is bound by 0 and 1. Note that goals are arbitrary and always outside the environment - the agent cannot see them or modify them.
Examples
--------
We will analyse 3 possible agents and explain how they fit into the framework. They are a weather vane, an image classifier, and a bot that plays pong. Note that the technical implementations of the last 2 agents will be irrelevant to our analysis.
### Weather Vane
The weather vane's environment is the state of the atmosphere close to itself as well as its own spatial position and orientation. Our prime dimension is time. The weather vane's action could be idealised as its angular acceleration around its axis, though if we were to consider it possibly breaking we would need to extend this. The goal of the weather vane could be qualitatively to point in the direction of the wind as fast as possible. We could mathematically express this as
1kektcos(θwind−θvane).mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
where k is a free parameter.
### Image Classifier
The image classifier takes in an image and produces a probability distribution over some set of classes. The environment are the images given to it, and its produced distributions. The prime dimension is the set of images. Our classifier's actions are its produced classification distributions. As a goal, we'd want the classifier to produce probability 1 for the correct classification and 0 for all others, for each example. We could express this as 1N∑Ni=1y(xi)
where xi represents the ith image data and y(xi) is the value of the prediction for the ith image. There are N total images.
### Pong Bot
The pong bot's environment could be information on the location and velocities of the balls and paddles, it could also be the last few frames of the game rendered to the screen. It would also include the inputs to its 'player'. The environment would span every game of pong the agent plays, that and the time during each game would make up the prime dimension. The pong bot could produce a discrete action of up or down for each input frame of the game. The pong bots goal is winning pong, mathematically a good goal function might be
1N∑Ni=1wi. It plays N total games with wi
being 1 if it wins game i.
Intelligence
============
The intelligence of an agent is often defined as the ability for it to achieve its goals. In our framework the goal is something we impose on the agent rather than something it holds itself, so we will define 2 types of intelligence, specific and general.
Specific Intelligence
---------------------
The specific intelligence of an agent relates to a specific goal. An agents intelligence with respect to that goal is the expectation of the integral of the goal function over the prime dimension. Taking an expectation covers cases where any part of our system is stochastic. From the requirements of the properties of goal functions, this will be a value from 0 to 1, where 1 represents the agent being 'perfect' for that goal.
General Intelligence
--------------------
Under our analysis so far, our weather vane would be incredibly intelligent - it's very good at achieving the goal it's designed for, pointing at the wind. Are weather vanes super intelligent? Of course not, this is where generality comes into play. We want to capture how capable our agent is at performing a variety of tasks - its intelligence over a variety of goals. This will allow us to separate weather vanes, which only point in the direction of the wind, to a game playing DQN algorithm that could become very skilled at a large number of simple games.
### Modification
In contrast to other formulations, our agents don't have ways of directly receiving and interpreting goals. Assuming that they did (as they would in many machine learning paradigms), would violate implementation independence. Instead, we generalise by modifying an agent (or its environment) in order to try and increase our agents specific intelligence with respect to our desired goal. This could be as simple as giving the agent a specific initial input, or it could be training it with a new set of data. This has the nice property of making clear the outer alignment problem. We can never actually tell agents what to do, only modify it so what it does hopefully becomes closer to what we want.
Clearly, the more modification we are allowed to make, the greater impact this will have on our agents capabilities. At the extreme we could completely redesign our agent from the ground-up for each individual goal. We will assume we have a way of quantifying the magnitude of a given modification, i.e. how much it changes the agent. This will allow us to counteract this problem.
We can use this idea to explore how capable agents are, subject to modifications of a certain magnitude. Now the differences become clear. Weather vanes would require extreme modification to do anything other than point in the direction of the wind, whereas a theoretical AGI robot would only need to be told to do something and it could probably achieve high performance on lots of goals.
### Quantification
We now have the tools to quantify the general intelligence of an agent. We will consider how well our agent performs on every possible goal, under every possible modification. This will require 2 more auxiliary functions to make work: a modification-magnitude importance function and a goal importance function. The algorithm is given below for a deterministic agent / environment, to convert to a stochastic one you simply need to take expectations where necessary.
#### General Intelligence Quantification
* Record the agents specific intelligence with respect to every possible goal in every possible environment
* Make every possible modification to the agent/environment, each time repeating step 1
* For each goal, take the list of modifications made and specific intelligence achieved and put it in order of increasing modification magnitude
* Remove all duplicate results - equal modification magnitude and specific intelligence
* Remove all results where the modification resulted in a lower specific intelligence than a modification that was of a smaller magnitude (such that our list of modifications that is in order of increasing modification magnitude is also in order of increasing specific intelligence)
* Convert the list of modifications and their magnitudes / rewards into an increasing function from magnitude of modification to reward using a zero-order hold interpolation where necessary (Resulting function should be an increasing function from reals to reals)
* Multiply this function with our modification-magnitude importance function (this represents us caring less about our agent being able to do arbitrarily well with arbitrarily large modification)
* Integrate this function from 0 to infinity to obtain a single value that represents how well the agent performs at that goal given varying levels of modification (performance value)
* Multiply this by our goal-value function output for this goal (to represent that some goals are 'worth less to be good at')
* Repeat this for every goal and take a summation to give us the final value for our agent - its general intelligence
Clearly there are some complications with this strategy. Firstly we need to define the three auxiliary functions. This will be subject to biases and assumptions. We require that the integral of our modification-magnitude importance function converges to a finite number, for example exponential decay.
Even given these functions, the actual calculation is still intractable. It requires us to consider possible infinities of modifications across possible infinities of environments that contain yet more possible infinities of goals. We can however define a small subset of these and perform the calculation in order to obtain an estimate of the agents general intelligence. An example of doing this is given in the following section.
Practical Calculation Approach
------------------------------
Lets imagine we have some computer algorithms (agents), and we wish to see investigate how their general intelligence compares. We could even include a human agent in this, though they are subject to the same restrictions as all other agents (no direct access to the goal). We'll take some number of environments, each with various goals specified for them. We hope that these goals, across these environments, provide a somewhat balanced range of tasks for the agents to complete. The way to communicate the goals to the agents is via a text file. We'll refer to those writing the agents and their modifications as the testers. The process of calculating their general intelligence is as follows.
#### Practical General Intelligence Quantification
* The environments are revealed to the testers and each agent is implemented with a general idea of the types of goals that may be present, but no knowledge of any actual ones
* For each goal, the tester may write various strings of their choosing into the text file before the agent begins its attempt at the task, all attempts are recorded
* If any part of the environment or agent behaviour is stochastic, a suitable number of trials are performed and an expectation is estimated
* The modification-magnitude is given by the file size and these values along with the achieved specific intelligence for that task under that modification are recorded and processed as per the general algorithm
* Using a modification-magnitude importance function of exponential decay, we integrate to get the performance values for each goal for each agent
* Average these across all goals for each agent to get approximated general intelligence scores for each agent
Clearly this won't be perfect and care would need to be taken to make sure the writers of the algorithms have a sufficient lack of knowledge about possible specific goals but still know what they could be in general. Modifying a text file in the environment is a small subset of the possible modification space but it has the nice property that our calculated intelligences reflect how powerful our agents are at leveraging information to achieve a goal. Our approach also only covers a small subspace of the environment-goal space. It does however, yield a way of comparing our algorithms (and humans) to not only see which ones are more generally intelligent, but even quantify the differences!
Comparisons & Connections
=========================
AI & AGI
--------
So what are AI and AGI under our framework? AI is a bit of a pointless term - weather vanes can be considered highly intelligent artificial agents. As for AGI, we can think of this as simply an artificial agent with high overall general intelligence, perhaps near or beyond human level. It perform well at lots of goals in many environments, with minimal modification required. Nothing really matters, especially how it's implemented. It could be a neural network, a circuit, a mechanical device, a brain simulation, etc.
Explicit Optimisers
-------------------
Often, intelligence agents are formulated as optimisers. They have some reward function they are trying to maximise (or loss to minimise). Machine learning methods are based on optimisers and we will explore how those paradigms fit into this framework. As a general point, training or pre-training could be carried out before testing the resulting agent on various goals. Lack of knowledge of the goals before training would still apply in this case.
### Supervised Learning
In supervised learning, we are given pairs of training data,
(xn,yn), and try to learn the relationship between them such that we can predict the y from an unseen x. Our primary dimension in the environment is the set of examples. Goal functions would be to produce certain y's (or get as close as possible according to some measure)
from a set of certain x's in the environment. These x, y pairs are known as the test data. Is the training data part of the environment? We will consider three cases.
The first is including training data in the environment. Here the agent is the learning algorithm, it uses the training data embedded in the environment to learn the relationship and then predict on the test data with the goal function measuring performance. Changing the training data allows us to learn different relationships and thus optimise for different goal functions. However lots of training data is usually required, thus the modification magnitude is very large. The advantage is our algorithm is very general and performs well for all supervised learning cases that we can embed examples for.
The second is none of the training data being part of the environment. Here the agent is the learnt model. It performs highly at whatever task the training data corresponds to but cannot learn different relationships, without modifying the learnt agent itself. This represents a highly intelligent agent at one specific task but not very good at any others.
Finally, consider a pre-trained model. It has already seen some data, but can be fine-tuned on extra data embedded in the environment. This mixes the pros and cons of the previous two methods.
### Reinforcement Learning
In reinforcement learning (RL) we receive sequential environment observations, on. We take an action after each new observation,
an, and receive a reward rn. The goal is to learn which action to take, based on the current history of observations, that maximises the total reward received overall. Sometimes the environment is reset and each period of time between each reset is referred to as an episode. The prime dimensions would thus be episodes and the time within them.
RL bears resemblance to our formulation, but we must handle with care. The goal functions could be almost identical to reward functions but there is a key difference. The agent has access to the reward function, but not the goal function.
If our approach is to directly optimise on the goal function, we need to convert it to a reward function. For reasons that boil down to RL is hard, this is non-trivial. This conversion represents the essence of the outer alignment problem in RL. In this method, like in supervised learning, the agent is the algorithm. Optimising a reward function also has the disadvantage that for each new goal we test the agent on, it needs to learn how to achieve that goal. If this takes a long time, this could be suboptimal if the goal functions specify we must achieve our task quickly. One advantage to the supervised learning case is we only need to embed the new reward function in the environment/agent as oppose to embedding a whole dataset.
What if we want our agent to be able to perform tasks quickly? There are three approaches. The first is to train it on lots of objectives separately, then use the environment to embedded some way of switching between learnt behaviours. Like in the supervised case this leads to very poor generality. A better way would be use a pre-training approach. Now our agent might only need a few (or even zero) training attempts to learn how to complete each goal. The third method would be to learn parameterised behaviours via RL and then specify the parameters once the goal functions are known and in one episode perform highly at the goal.
Non-optimisers
--------------
Firstly, many may point out that in the methods explored here, there's an optimisation taking place at the modification (design/input) level. However, when we are performing our modifications on the agent and its environment we have to remember that it is only an optimisation in the practical sense, as in theory we would apply every possible one. That out of the way, here are two possible agent paradigms that don't use optimisation explicitly.
### Machines
Consider a device that can change its behaviour based on altering it
(e.g. switches, dials etc.). We could consider configuring it for each task as a modification, though how to quantify this is non-trivial compared to agents which can receive text/audio information directly. Extending this to the extreme, this approach represents building or configuring a specific machine to solve each goal, with modification magnitude reflecting the size/complexity of the machine. These machines could be mechanical, electrical, a mix of the two, or something else entirely.
### Bespoke Programs
We could create an instruction set of all possible actions. If there's no uncertainty, then which action to take at each time-step (or each point in the prime dimensions) could be encoded as a list of strings in a text/audio input. Obviously for complex tasks this could be quite large, hence large modification magnitude and a low performance value. For environments with uncertainty, conditional and branching statements in the instruction set could be used. In the extreme, this approach represents programming a specific solution for each goal, with modification magnitude reflecting the size of the program. The agent is the interpreter/compiler and executor of the program.
Humans
------
In terms of human agents, we have to make sure the goals we test them on do not significantly conflict with their own internal human goals and thus give false impressions of capabilities. Modification can be carried out by inserting information into the environment (e.g. text/speech).
Conclusion
==========
By considering agents without inherent goals that simply interact with environments, we remove any reliance on what agents are or can do. In order to measure their intelligence, we consider how their actions affect a given goal. This gives the agents specific intelligence for that task.
In order to generalise beyond specific instances of agents, we modify the agent or environment in order to maximise the satisfaction of the goal. By considering all possible modifications, and weighting them based on how large they are; we compute a general intelligence for that agent. This serves as a measure for how sensitive an agents capabilities are to modifying it.
Our definition would require us to consider all possible modifications, for all possible goals in all possible environments. This is unfeasible. By limiting what we consider, we can calculate an estimate for the agents general intelligence. In the case of modifications being information added to the environment, general intelligence becomes the agents ability to leverage information to perform a task. |
9a95006a-72a7-4548-a305-2abbcc83aaff | trentmkelly/LessWrong-43k | LessWrong | Effective Giving vs. Effective Altruism
This is mainly of interest to Effective Altruists, and was cross-posted on the EA forum
Why separate effective giving from Effective Altruism? Isn't the whole point of EA about effective giving, meaning giving to the most impactful charities to advance human flourishing? Sure, effective giving is the point of EA, but there might be a lot of benefit to drawing a distinct line between the movement of Effective Altruism itself, and the ideas of effective giving that it promotes. That's something that Kerry Vaughn, the Executive Director of Effective Altruism Outreach, and I, the President of Intentional Insights, discussed in our recent phone call, after having an online discussion on this forum. To be clear, Kerry did not explicitly endorse the work of Intentional Insights, and is not in a position to do so - this just reflects my recollection of our conversations.
Why draw that line? Because there's quite a bit of danger in rapid movement growth of attracting people who might dilute the EA movement and impair the building of good infrastructure down the road (see this video and paper). This exemplifies the dangers of simply promoting Effective Altruism indiscriminately, and just trying to grow the movement as fast as possible.
Thus, what we can orient toward is using modern marketing strategies to spread the ideas of effective altruism - what Kerry and I labeled effective giving in our conversations - without necessarily trying to spread the movement. We can spread the notion of giving not simply from the heart, but also using the head. We can talk about fighting the drowning child problem. We can talk about researching charities and using GiveWell, The Life You Can Save, and other evidence-based charity evaluators to guide one's giving. We can build excitement about giving well, and encourage people to think of themselves as Superdonors or Mega-Lifesavers. We can use effective marketing strategies such as speaking to people's emotions and using stories, |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.