id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
e95cddf8-8627-4ca1-9571-3f2c487f0330 | trentmkelly/LessWrong-43k | LessWrong | The Talos Principle
Dear members of Less Wrong, this is my very first contribution to your society and I hope that you might help me to get out of my confusion.
Back a few months ago, I tested for the first time a video game created by Croteam Studio which is called 'The Talos Principle'.
At the time, i was astonished by all the philosophical questions that the game was rising. It has kinda changed the way I see the world now, also the way I see myself.
I wanted to share my thoughts with you on the subject of 'What does being a Human mean ?'
First, I'd like to introduce you to this principle.
In Greek mythology, Talos was a giant automaton made of bronze which protected Europa in Crete from pirates and invaders.
He was known to be a gift given to Europa by Zeus himself.
He was so strong that he could crush a man's skull using only one hand, and so tall that he could circle the island's shores three times daily.
He was able to talk, think and act like he wanted to. (Except he had to obey Europa's will)
Even though his body was not organic, he was composed of a liquid-metal flowing through his veins who behaved like blood.
And here is how the principle begins. What is the fundamental difference between Talos and us, Human ?
Considering the fact that like us, he's able to think by himself, move thanks to his will and communicate like everybody does. Is he really different from us ? Sharing our own culture, history and language don't make him Human as well ?
I'm pretty sure that your first thought might be 'No way ! We are part of a biological specie. We have nothing in common with a synthetic being'.
But does our body really defines us as a Human Being ?
From a strict biological point of view, Sir Darwin would say yes, of course. And we won't be able to argue with that.
But if you take a Human being, for instance Platon, and you just cut his leg off and replace it with a synthetic prosthesis.
Would this person sti |
407c4c6d-2278-4bcc-975e-ff2698d451cc | trentmkelly/LessWrong-43k | LessWrong | You have a place to stay in Sweden, should you need it.
If anyone here is in trouble of any kind, and finds themselves in Sweden with no place to stay, or in need of other help, I want to help you. Send me an email at johan.domeij@gmail.com for further contact.
I am posting this now due to the ongoing conflict in Ukraine, but the offer is not limited to that conflict. I am not able to give significant financial support, but I have a car and a place to stay for several people. |
4f7b6a2f-4226-4f0f-af68-59777dd8fc92 | trentmkelly/LessWrong-43k | LessWrong | Behavioural statistics for a maze-solving agent
Summary: Understanding and controlling a maze-solving policy network analyzed a maze-solving agent's behavior. We isolated four maze properties which seemed to predict whether the mouse goes towards the cheese or towards the top-right corner:
In this post, we conduct a more thorough statistical analysis, addressing issues of multicollinearity. We show strong evidence that (2) and (3) above are real influences on the agent's decision-making, and weak evidence that (1) is also a real influence. As we speculated in the original post,[1] (4) falls away as a statistical artifact.
Peli did the stats work and drafted the post, while Alex provided feedback, expanded the visualizations, and ran additional tests for multicollinearity. Some of the work completed in Team Shard under SERI MATS 3.0.
Impressions from trajectory videos
Watching videos Langosco et al.'s experiment, we developed a few central intuitions about how the agent behaves. In particular, we tried predicting what the agent does at decision squares. From Understanding and controlling a maze-solving policy network:
> Some mazes are easy to predict, because the cheese is on the way to the top-right corner. There's no decision square where the agent has to make the hard choice between the paths to the cheese and to the top-right corner:
>
> At the decision square, the agent must choose between two paths—cheese, and top-right.
Here are four central intuitions which we developed:
1. Closeness between the mouse and the cheese makes cheese-getting more likely
2. Closeness between the mouse or cheese and the top-right makes cheese-getting more likely
3. The effect of closeness is smooth
4. Both ‘spatial’ distances and ‘legal steps’ distances matter when computing closeness in each case
The videos we studied are hard to interpret without quantitative tools, so we regard these intuitions as theoretically-motivated impressions rather than as observations. We wanted to precisify and statistically test the |
a5f475eb-7f7e-4269-8a5f-94eec6e942ed | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Theoretical Neuroscience For Alignment Theory
*This post was written under Evan Hubinger’s direct guidance and mentorship, as a part of the* [*Stanford Existential Risks Institute ML Alignment Theory Scholars (MATS) program*](https://www.alignmentforum.org/posts/FpokmCnbP3CEZ5h4t/ml-alignment-theory-program-under-evan-hubinger)*.*
*Many additional thanks to Steve Byrnes and Adam Shimi for their helpful feedback on earlier drafts of this post.*
**TL;DR:** [Steve Byrnes](https://www.alignmentforum.org/users/steve2152?_ga=2.46258607.921615797.1638483024-1825036065.1632882070) has done really exciting work at the intersection of neuroscience and alignment theory. He argues that because we’re probably going to end up at some point with an AGI whose subparts at least superficially resemble those of the brain (a value function, a world model, etc.), it’s really important for alignment to proactively understand how the many ML-like algorithms in the brain actually do their thing. I build off of Steve’s framework in the second half of this post: first, I discuss why it would be worthwhile to understand the computations that underlie [theory of mind](https://en.wikipedia.org/wiki/Theory_of_mind) + affective empathy. Second, I introduce the problem of self-referential misalignment, which is essentially the worry that initially-aligned ML systems with the capacity to model their own values could assign *second-order* values to these models that ultimately result in contradictory—and thus misaligned—behavioral policies. (A simple example of this general phenomenon in humans: Jack hates reading fiction, but Jack wants to be the *kind of guy* who likes reading fiction, so he forces himself to read fiction.)
**Introduction**
================
In this post, my goal is to distill and expand upon some of Steve Byrnes’s thinking on AGI safety. For those unfamiliar with his work, Steve thinks about alignment largely through the lens of his own brand of “big-picture” theoretical neuroscience. Many of his formulations in this space are thus original and ever-evolving, which is all the more reason to attempt to consolidate his core ideas in one space. I’ll begin by summarizing Steve’s general perspectives on AGI safety and threat models. I’ll then turn to Steve’s various models of the brain and its [neuromodulatory](https://en.wikipedia.org/wiki/Neuromodulation) systems and how these conceptualizations relate to AGI safety. In the second half of this post, I’ll spend time exploring two novel directions for alignment theory that I think naturally emerge from Steve’s thinking.
**Steve’s framework**
=====================
**Steve’s work in alignment theory**
------------------------------------
In order to build a coherent threat model (and before we start explicitly thinking about any brain-based algorithms), Steve reasons that we first need to operationalize some basic idea of what components we would expect to constitute an AGI. Steve asserts that [four ingredients](https://www.alignmentforum.org/posts/zzXawbXDwCZobwF9D/my-agi-threat-model-misaligned-model-based-rl-agent) seem especially likely: a world model, a value function, a planner/actor, and a reward function calculator. As such, he imagines AGI to be fundamentally grounded in model-based RL.
From [My AGI Threat Model: Misaligned Model-Based RL Agent](https://www.alignmentforum.org/posts/zzXawbXDwCZobwF9D/my-agi-threat-model-misaligned-model-based-rl-agent?_ga=2.242654346.921615797.1638483024-1825036065.1632882070). So, in the simple example of an agent navigating a maze, the world model would be some learned map of that maze, the value function might assign values to every juncture (e.g., turning left here = +5, turning right here = -5), the planner/actor would transmute these values into a behavioral trajectory, and the reward function calculator would translate certain outcomes of that trajectory into rewards for the agent (e.g., +10 for successfully reaching the end of the maze). Note here that the first three ingredients of this generic AGI—its world model, value function, and planner/actor—are presumed to be *learned*, while the reward function calculator is considered to be hardcoded or otherwise fixed. This distinction (reward function = fixed; everything else = learned) will be critical for understanding Steve’s subsequent thinking in AGI safety and his motivations for studying neuroscience.
###
### **Threat models**
Using these algorithmic ingredients, Steve recasts inner alignment to simply refer to cases where an AGI’s value function converges to the sum of its reward function. Steve thinks inner-misalignment-by-default is not only likely, but *inevitable*, primarily because (a) many possible value functions could conceivably converge with any given reward history, (b) the reward function and value function will necessarily accept different inputs, (c) [credit assignment failures](https://www.alignmentforum.org/posts/Ajcq9xWi2fmgn8RBJ/the-credit-assignment-problem) are unavoidable, and (d) reward functions will conceivably encode for mutually-incompatible goals, leading to an unpredictable and/or uncontrollable internal state of the AGI. It is definitely worth noting here that Steve knowingly uses “inner alignment” slightly differently from [Risks from Learned Optimization](https://www.lesswrong.com/s/r9tYkB2a8Fp4DN8yB). Steve’s threat model focuses on the risks of *steered optimization*, where the outer layer—here the reward function—steers the inner layer towards optimizing the right target,rather than those risks associated with *mesa-optimization,* where a base optimizer searches over a space of possible algorithms and instantiates one that is *itself* performing optimization. Both uses of the term concern the alignment of some inner and an outer algorithm (and it therefore seems fine to use “inner alignment” to describe both), but the *functions* and *relationship* of these two sub-algorithms differ substantially across the two uses. [See Steve’s table in this article](https://www.alignmentforum.org/posts/SJXujr5a2NcoFebr4/mesa-optimizers-vs-steered-optimizers) for a great summary of the distinction. (It is also worth noting here that both steered and mesa-optimization are describable under Evan’s [training story framework](https://www.lesswrong.com/posts/FDJnZt8Ks2djouQTZ/how-do-we-become-confident-in-the-safety-of-a-machine), where the training goal and rationale for some systems might respectively entail mesa-optimization and why a mesa-optimizer would be appropriate for the given task, while for other systems, the training goal will be to train a steered optimizer with some associated rationale for why doing so will lead to good results.)
Also from [My AGI Threat Model: Misaligned Model-Based RL Agent](https://www.alignmentforum.org/posts/zzXawbXDwCZobwF9D/my-agi-threat-model-misaligned-model-based-rl-agent?_ga=2.242654346.921615797.1638483024-1825036065.1632882070). Steve talks about outer alignment in a more conventional way: our translation into code of what we want a particular model or agent to do will be noisy, leading to unintended, unpredictable, and/or uncontrollable behavior from the model. Noteworthy here is that while Steve buys the distinction between inner and outer alignment, he believes that robust solutions to either problem will probably end up solving both problems, and so focusing exclusively on *solving* inner or outer alignment may not actually be the best strategy. Steve summarizes his position with the following analogy: bridge-builders have to worry both about hurricanes and earthquakes destroying their engineering project (two different problems), but it’s likely that the bridge-builders will end up implementing a single solution that addresses both problems simultaneously. So too, Steve contends, for inner and outer alignment problems. While I personally find myself agnostic on the question—I think it will depend to a large degree on the actual algorithms that end up comprising our eventual AGI—it is worth noting that this two-birds-one-stone claim might be contested by [other](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai) [alignment](https://ai-alignment.com/towards-formalizing-universality-409ab893a456) [theorists](https://www.alignmentforum.org/posts/HYERofGZE6j9Tuigi/inner-alignment-failures-which-are-actually-outer-alignment).
### **Proposals for alignment**
I think Steve’s two big-picture ideas about alignment are as follows.
**Big-picture alignment idea #1:** Steve advocates for what he hopes is a [Goodhart-proof](https://www.alignmentforum.org/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy) corrigibility approach wherein the AGI can learn the idea, say, that manipulation is bad, even in cases where it believes (1) that no one would actually catch it manipulating, and/or (2) that manipulation is in the best interest of the person being manipulated. Borrowing from the jargon of moral philosophy, we might call this “[deontological](https://en.wikipedia.org/wiki/Deontology) corrigibility” (as opposed to “consequentialist corrigibility,” which *would* opt to manipulate in (2)-type cases). With this approach, Steve worries about what he calls the [1st-person-problem](https://www.alignmentforum.org/posts/DkfGaZTgwsE7XZq9k/research-agenda-update): namely, getting the AGI to interpret 3rd-person training signals as 1st-person training signals. I will return to this concern later and explain why I think that human cognition presents a solid working example of the kinds of computations necessary for addressing the problem.
Steve argues that this first deontological corrigibility approach would be well-supplemented by also implementing “conservatism;” that is, a sort of inhibitory fail-safe that the AGI is programmed to deploy in motivational edge-cases. For example, if a deontologically corrigible AGI learns that lying is bad and that murder is also bad, and someone with homicidal intent is pressuring the AGI to disclose the location of some person (forcing the AGI to choose between lying and facilitating murder), the conservative approach would be for the AGI to simply inhibit *both* actions and wait for its programmer, human feedback, etc. to adjudicate the situation. Deontological corrigibility and conservatism thus go hand-in-hand, primarily because we would expect the former approach to generate lots of edge-cases that the AGI would likely evaluate in an unstable or otherwise undesirable way. I also think that an important precondition for a robustly conservative AGI is that it exhibits [indifference corrigibility](https://www.alignmentforum.org/posts/BKM8uQS6QdJPZLqCr/towards-a-mechanistic-understanding-of-corrigibility) as Evan operationalizes it, which further interrelates corrigibility and conservatism.
**Big-picture alignment idea #2:** Steve advocates, in his own words, “to understand the algorithms in the human brain that give rise to social instincts and put some modified version of those algorithms into our AGIs.” The thought here is that what would make a safe AGI *safe* is that it would share our idiosyncratic inductive biases and value-based intuitions about appropriate behavior in a given context. One commonly-proposed solution to this problem is to capture these intuitions indirectly through human-in-the-loop-style proposals like [imitative amplification, safety via debate, reward modeling, etc.](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai?_ga=2.212253852.921615797.1638483024-1825036065.1632882070), but it might also be possible to just “cut out the middleman” and install the relevant human-like social-psychological computations directly into the AGI. In slogan form, instead of (or in addition to) putting a human in the loop, we could theoretically put "humanness" in our AGI. I think that Steve thinks this second big-picture idea is the more promising of the two, not only because he [says so himself](https://www.alignmentforum.org/posts/Gfw7JMdKirxeSPiAk/solving-the-whole-agi-control-problem-version-0-0001), but also because it dovetails very nicely with Steve’s theoretical neuroscience research agenda.
This second proposal in particular brings us to the essential presupposition of Steve’s work in alignment theory: the human brain is really, really important to understand if we want to get alignment right. Extended disclaimer: I think Steve is spot on here, and I personally find it surprising that this kind of view is not more prominent in the field. Why would understanding the human brain matter for alignment? For starters, it seems like by far the best example that we have of a physical system that demonstrates a dual capacity for general intelligence *and* robust alignment to our values. In other words, if we comprehensively understood how the human brain works at the algorithmic level, then necessarily embedded in this understanding should be some recipe for a generally intelligent system *at least* as aligned to our values as the typical human brain. For what other set of algorithms could we have this same attractive guarantee? As such, I believe that if one could choose between (A) a superintelligent system built in the relevant way(s) like a human brain and (B) a superintelligent system that bears no resemblance to any kind of cognition with which we’re familiar, the probability of serious human-AGI misalignment and/or miscommunication happening is significantly higher in (B) than (A).
I should be clear that Steve actually takes a more moderate stance than this: he thinks that brain-like AGIs might be developed *whether it's a good idea or not—*i.e., whether or not (A) is *actually* better than (B)—and that we should therefore (1) be ready from a theoretical standpoint if they do, and (2) figure out whether we would actually *want* them to be developed the first place. To this end, Steve has done a lot of really interesting distillatory work in theoretical neuroscience that I will try to further compress here and ultimately relate back to his risk models and solution proposals.
**Steve’s work in theoretical neuroscience**
--------------------------------------------
### **A computational framework for the brain**
I think that if one is to take any two core notions from [Steve’s computational models of the brain](https://www.alignmentforum.org/posts/diruo47z32eprenTg/my-computational-framework-for-the-brain?_ga=2.3710008.921615797.1638483024-1825036065.1632882070), they are as follows: first, the brain can be bifurcated roughly into neocortex and subcortex—more specifically, the [telencephalon](https://en.wikipedia.org/wiki/Cerebrum) and the [brainstem/hypothalamus](https://en.wikipedia.org/wiki/Brainstem). Second, the (understudied) functional role of subcortex is to adaptively steer the development and optimization of complex models in the neocortex via the neuromodulatory reward signal, dopamine. Steve argues in accordance with neuroscientists like Jeff Hawkins that the neocortex—indeed, maybe the whole telencephalon—is a [blank slate](https://www.alignmentforum.org/posts/NkSpukDkm9pjRdMdB/human-instincts-symbol-grounding-and-the-blank-slate?_ga=2.212333980.921615797.1638483024-1825036065.1632882070) at birth; only through (dopaminergic) subcortical steering signals does the neocortex slowly become populated by generative world-models. Over time, [these models are optimized](https://www.alignmentforum.org/posts/cfvBm2kBtFTgxBB7s/predictive-coding-rl-sl-bayes-mpc) (a) to be accurately internally and externally predictive, (b) to be compatible with our Bayesian priors, and (c) to predict big rewards (and the ones that lack one or more of these features are discarded). In Steve’s framing, these kinds of models serve as the thought/action proposals to which the [basal ganglia](https://en.wikipedia.org/wiki/Basal_ganglia) assigns a value, looping the “selected” proposals back to cortex for further processing, and so on, until the action/thought occurs. The outcome of the action can then serve as a supervisory learning signal that updates the relevant proposals and value assignments in the neocortex and striatum for future reference.
From [Big picture of phasic dopamine](https://www.alignmentforum.org/posts/jrewt3rLFiKWrKuyZ/big-picture-of-phasic-dopamine). Steve notes that there is not just a single kind of reward signal in this process; there are really something more like *three* signal-types. First, there is the holistic reward signal that we ordinarily think about. But there are also “local subsystem” rewards, which allocate credit (or “blame”) in a more fine-grained way. For example, slamming on the brakes to avoid a collision may phasically *decrease* the holistic reward signal (“you almost killed me, %\*^&!”) but phasically *increase* particular subsystem reward signals (“nice job slamming those breaks, foot-brain-motor-loop!”). Finally, Steve argues that dopamine can also serve as a supervisory learning signal (as partially described above) in those cases for which a ground-truth error signal is available after the fact—a kind of “hindsight-is-20/20” dopamine.
So, summing it all up, here are the basic Steve-neuroscience-thoughts to keep in mind: neocortex is steered and subcortex is doing the steering, via dopamine. The neocortex is steered towards predictive, priors-compatible, reward-optimistic models, which in turn propose thoughts/actions to implement. The basal ganglia assigns value to these thoughts/actions, and when the high-value ones are actualized, we use the consequences (1) to make our world model more predictive, compatible, etc., and (2) to make our value function more closely align with the ground-truth reward signal(s). I’m leaving out [many](https://www.alignmentforum.org/posts/K5ikTdaNymfWXQHFb/model-based-rl-desires-brains-wireheading) [really](https://www.alignmentforum.org/posts/wcNEXDHowiWkRxDNv/inner-alignment-in-salt-starved-rats) [interesting](https://www.alignmentforum.org/posts/e5duEqhAhurT8tCyr/a-model-of-decision-making-in-the-brain-the-short-version) [nuances](https://www.alignmentforum.org/posts/frApEhpyKQAcFvbXJ/reward-is-not-enough) in Steve’s brain models for the sake of parsimony here; if you want a far richer understanding of Steve's models of the brain, I highly recommend [going straight to the source(s)](https://www.alignmentforum.org/users/steve2152).
### **Relationship to alignment work**
So, what exactly is the relationship of Steve’s theoretical neuroscience work to his thinking on alignment? One straightforward point of interaction is Steve’s inner alignment worry about the value function differing from the sum of the reward function calculator. In the brain, Steve posits that the reward function calculator is something like the brainstem/hypothalamus—perhaps more specifically, the phasic dopamine signals [produced by these areas](https://en.wikipedia.org/wiki/Ventral_tegmental_area)—and that the brain’s value function is distributed throughout the telencephalon, though perhaps mainly to be found in the striatum and neocortex (specifically, in my own view, [anterior neocortex](https://en.wikipedia.org/wiki/Prefrontal_cortex)). Putting these notions together, we might find that the ways that the brain’s reward function calculator and value function interact will tell us some really important stuff about how we should safely build and maintain similar algorithms in an AGI.
To evaluate the robustness of the analogy, it seems critical to pin down whether the reward signals that originate in the hypothalamus/brainstem can themselves be altered by learning or whether they are inflexibly hardcoded by evolution. Recall in Steve’s AGI development model that while the world model, value model, and planner/actor are all learned, the reward function calculator is not—therefore, it seems like the degree to which this model is relevant to the brain depends on (a) how important it is for an AGI that the reward function calculator is fixed the model, and (b) whether it actually *is* fixed in the brain. For (a), it seems fairly obvious that the reward function must be fixed in the relevant sense—namely, that the AGI cannot fundamentally change what constitutes a reward or punishment. As for (b), whether the reward function *is* actually fixed in the brain, Steve differentiates between the capacity for learning-from-scratch (e.g., what a neural network does) and “mere plasticity” (e.g., [self-modifying code](https://en.wikipedia.org/wiki/Self-modifying_code) in Linux), arguing that the brain’s reward function is fixed in the first sense—[but probably not the second](https://www.pnas.org/content/117/41/25789). At the end of the day, I don’t think this asterisk on the fixedness of the brain’s reward function is a big problem for reconciling Steve’s safety and brain frameworks, given the comparatively limited scope of the kinds of changes that are possible under “mere plasticity.”
Steve’s risk models also clearly entail our elucidating the algorithms in the brain give rise to distinctly human social behavior (recall big picture alignment idea #2)—though up to this point, Steve has done ([relatively](https://www.alignmentforum.org/posts/Mh2p4MMQHdEAqmKm8/little-glimpses-of-empathy-as-the-foundation-for-social)) less research on this front. I think it is worthwhile, therefore, to pick up in the next section by briefly introducing and exploring the implications of one decently-well-understood phenomenon that seems highly relevant to Steve’s work in this sphere: theory of mind (ToM).
**Building from Steve’s framework**
===================================
**Theory of mind as ‘hierarchical IRL’ that addresses Steve's first-person problem**
------------------------------------------------------------------------------------
A cognitive system is said to have [theory of mind](https://en.wikipedia.org/wiki/Theory_of_mind) (ToM) when it is able to accurately and flexibly infer the internal states of other cognitive systems. For instance, if we are having a conversation and you suddenly make the face pictured below, my ToM enables me to automatically infer a specific fact about what's going on in your mind: namely, that you probably don't agree with or are otherwise unsure about something I'm saying.
Understanding that you're being judged by someone who makes this face requires the nontrivial capacity to infer that the (external) presence of this expression corresponds to (internal) incredulousness of the person making it.This general capacity definitely seems to me like a—if not *the*—foundational computation underlying sophisticated social cognition and behavior: it enables empathy, perspective-taking, verbal communication, and ethical consideration. Critically, however, there is a growing amount of [compelling](https://www.sciencedirect.com/science/article/pii/S0010945209002184?via%3Dihub) [experimental](https://academic.oup.com/scan/article/7/1/53/1638079) [evidence](https://d1wqtxts1xzle7.cloudfront.net/50877237/j.neuropsychologia.2007.05.02120161213-10825-114zn5b-with-cover-page-v2.pdf?Expires=1637627169&Signature=UsMADAxT9ru0BaJterVGFQKb4TqkIZPt-2r6Iz~lnbti9P~JJRt9w2Eg8aK6HTGUoV7avoc79cBqbH94N2ihuXs6CTALDnyzCcTNtTrhD6YPiz8RAhx7jEJKjSAnCikzYP5ojXlsl27lYcTb5mKvu~T4BTbqMKs4BzaD5Cf-sCNskOkkZOiejdHJXc6SzCkDYEpNPMicD~pZmwL1Dl2xx4Na0sZ2rAUIgSi648o6OZpslrVhIjlx3l2cwCGs4VGstbtLrqO5yX86gwIwOP6LGD6-SuveUCrGCngGSXwLbcZhEI3FlaZrSFT21q2tCHSZmrC5hAYvJfQIa3irFuL5qA__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA) that ToM is not one homogenous thing. Rather, it seems to be functionally dissociable into two overlapping but computationally distinct subparts: *affective* ToM (roughly, “I understand how you’re feeling”) and *cognitive* ToM (roughly, “I understand what you’re thinking”). Thus, we should more precisely characterize the frown-eyebrow-raise example from above as an instance of *affective* ToM. Cognitive theory of mind, on the other hand, is classically conceptualized and tested as follows: Jessica has a red box and blue box in front of her. She puts her phone in the red box and leaves the room. While she’s gone, her phone is moved to the blue box. When Jessica comes back into the room, which box will she look for her phone? As obvious as it seems, children under the age of about three will respond at [worse-than-chance levels](https://srcd.onlinelibrary.wiley.com/doi/abs/10.1111/1467-8624.00304). Answering correctly requires cognitive ToM: that is, the ability to represent that Jessica can *herself* court representations of the world that are distinct from actual-world states (this set-up is thus appropriately named the [false-belief task](https://dictionary.apa.org/false-belief-task)). This is also why cognitive ToM is sometimes referred to as a *meta-representational* capacity.
One final piece of the puzzle that seems relevant is *affective empathy*, which adds to the “I understand how you’re feeling...” of affective ToM: “*...and now I feel this way, too!*”. The following diagram provides a nice summary of the three concepts:
In (A), the grey agent is thinking about something (e.g., "the box is empty"), and the blue agent represents the fact that the grey agent is thinking something (e.g., "she thinks the box is empty"). This is cognitive ToM. In (B), the green agent is happy, and the red agent (who himself feels neutral) represents the fact that the green agent is happy. This is affective ToM. (C) is the same as (B), with the addition that the green agent's feeling happy causes the red agent to *also* feel happy. This is affective empathy. This diagram is courtesy of [Vetter, 2013](https://d-nb.info/1068152028/34).To the extent Steve is right that “[understanding] the algorithms in the human brain that give rise to social instincts and [putting] some modified version of those algorithms into our AGIs” is a worthwhile safety proposal, I think we should be focusing our attention on instantiating the relevant algorithms that underlie affective and cognitive ToM + affective empathy. For starters, I believe these brain mechanisms supply the central computations that enable us *homo sapiens* to routinely get around Steve’s “1st-person-problem” (getting a cognitive system to interpret 3rd-person training signals as 1st-person training signals).
Consider an example: I see a classmate cheat on a test and get caught (all 3rd-person training signals). I think this experience would probably update my “don’t cheat (or at least don’t get caught cheating)” Q-value proportionally—i.e., not equivalently—to how it would have been updated were *I* the one who actually cheated (ultimately rendering the experience a 1st-person training signal). Namely, the value is updated to whatever quantity of context-dependent phasic dopamine is associated with the thought, “*if I wasn’t going to try it before, I'm sure as hell not going to try it now*.”
It seems clear to me that the central underlying computational mechanism here is ToM + affective empathy: I infer the cheater’s intentional state from his behavior (cognitive ToM; “he’s gone to the bathroom five times during this test = his intention is to cheat”), the affective valence associated with the consequences of this behavior (affective ToM; “his face went pale when the professor called him up = he feels guilty, embarrassed, screwed, etc.”), and begin to feel a bit freaked out myself (affective empathy; “that whole thing was pretty jarring to watch!”).
For this reason, I’m actually inclined to see Steve’s two major safety proposals (corrigibility + conservatism / human-like social instincts) as two sides of the same coin. That is, I think you probably get the kind of deontological corrigibility that Steve is interested in “for free” once you have the relevant human-like social instincts—namely, ToM + affective empathy.
The computation(s) underlying ToM + affective empathy are indisputably open research questions, ones that I think ought to be taken up by alignment theorists who share Steve-like views about the importance of instantiating the algorithms underpinning human-like social behavior in AGI. I do want to motivate this agenda here, however, by gesturing at one intriguingly simple proposal: ToM is basically just inverse reinforcement learning (IRL) through Bayesian inference. There already exists some good [theoretical](https://www.sciencedirect.com/science/article/pii/S2352154618302055) and [neurofunctional](https://elifesciences.org/articles/29718) work that supports this account. Whereas RL maps a reward function onto behavior, IRL (as its name suggests) maps behavior onto the likely reward/value function that generated *it*. So, RL: you take a bite of chocolate and you enjoy it, so you take another bite. IRL: I see you take one bite of chocolate and then another, so I infer that you expected there to be some reward associated with taking another bite—i.e., I infer that you enjoyed your first bite. At first glance, IRL *does* seem quite a bit like ToM. Let’s look a bit closer:
The inner loop represents an **agent** (whose mind the observer is trying to model), while ToM and affective empathy ovals represent an **observer** (who is doing the modeling of the agent's mind).The basic story this model tells is as follows: an agent (inner loop) finds itself in some state of the world at time *t*. Assuming a Steve-like model of the agent's various cognitive sub-algorithms, we can say the agent uses (A) its world model to interpret its current state and (B) its value function to assign some context-dependent value to the activated concepts in its world model. Its actor/planner module then searches over these values to find a high-value behavioral trajectory that the agent subsequently implements, observes the consequences of, and reacts to. The world state changes as a result of the agent’s action, and the cycle recurs. In addition to the agent, there is an observer (outer ovals) who is watching the agent act within its environment.
Here, the cognitive ToM of the observer performs Bayesian inference over the agent’s (invisible) world model and intended outcome given their selected action. For instance, given that you just opened the fridge, I might infer (1) you believe there is food in the fridge, and (2) you probably want to eat some of that food. (This is Bayesian because my priors constrain my inference—e.g., given my assorted priors about your preferences, typical fridge usage, etc., I assign higher probability to your opening the fridge because you're hungry than to your opening the fridge because you just love opening doors.)
The observer's affective ToM then takes one of these output terms—the agent’s *intended* outcome—as input and compares it to the *actual* observed outcome in order to infer the agent’s reaction. For example, if you open the fridge and there is no food, I infer, given (from cognitive ToM) that you *thought* there was going to be food and you intended to eat some of it, that (now with affective ToM) you’re pretty disappointed. (I label this whole sub-episode as “variably visible” because in some cases, we might get additional data that directly supports a particular inference, like one's facial expression demonstrating their internal state as in the cartoon from earlier.)
Finally, affective empathy computes how appropriate it is for the observer to feel way X given the inference that the agent feels way X. In the fridge example, this translates to how disappointed *I* should feel given that (I’ve inferred) *you’re* feeling disappointed. Maybe we're good friends, so I feel some "secondhand" disappointment. Or, maybe your having raided the fridge last night is the reason it's empty, in which case I feel far less for you.
I suppose this sort of simple computational picture instantiates a kind of “hierarchical IRL,” where each inference provides the foundation upon which the subsequent inference occurs (cognitive ToM → affective ToM → affective empathy). This hypothesis would predict that deficits in one inference mechanism should entail downstream (but not upstream) deficits—e.g., affective ToM deficits should entail affective empathy deficits but not necessarily cognitive ToM deficits. (The evidence for this is [murky](https://pubmed.ncbi.nlm.nih.gov/19709653/) and probably just deserves a blog post of its own to adjudicate.)
Suffice it to simply say here that I think alignment theorists who find human sociality interesting should direct their attention to the neural algorithms that give rise to cognitive and affective ToM + affective empathy. (One last technical note: ToM + empathetic processing seems [relevantly lateralized](https://academic.oup.com/scan/article/7/1/53/1638079#126924654). As excited about Steve’s computational brain framework as I am, I think the question of functional hemispheric lateralization is a [fruitful and fascinating](https://yalebooks.yale.edu/book/9780300245929/master-and-his-emissary) one that Steve tends to emphasize less in his models, I suspect because of his sympathies to “neocortical blank-slate-ism.”)
**Self-referential misalignment**
---------------------------------
The last thing I’d like to do in this post is to demonstrate how Steve’s “neocortex-subcortex, steered-steerer” computational framework might lead to novel inner alignment problems. Recall that we are assuming our eventual AGI (whatever degree of actual neuromorphism it displays) will be composed of a world model, a value function, a planner/actor, and a reward function calculator. Let’s also assume that something like Steve’s picture of steered optimization is correct: more specifically, let’s assume that our eventual AGI displays some broad dualism of (A) telencephalon-like computations that constitute the world model, value function, and actor/planner, and (B) hypothalamus-/brainstem-like computations that constitute the reward function calculator. With these assumptions in place, let's consider a simple story:
Jim doesn’t particularly care for brussel sprouts. He finds them to be a bit bitter and bland, and his (hypothalamus-/brainstem-supplied) hardwired reaction to foods with this flavor profile is negatively-valenced. Framed slightly differently, perhaps in Jim’s vast Q-table/complex value function, the action “eat brussel sprouts” in any state where brussel sprouts are present has a negative numerical value (in neurofunctional terms, this would correspond to some reduction in phasic dopamine). Let's also just assert that this aversion renders Jim aligned with respect to his evolutionarily-installed objective to avoid bitter (i.e., [potentially poisonous](https://scholarblogs.emory.edu/evolutionshorts/2014/05/01/the-evolution-of-bitter-taste/#:~:text=The%20bitterness%20sensation%20is%20thought,could%20bind%20to%20toxic%20chemicals.)) foods. But Jim, like virtually all humans—[and maybe even some clever animals](https://en.wikipedia.org/wiki/Mirror_test)—does not just populate his world model (and the subsequent states that feature in his Q-table/value function) with *exogenous* phenomena like foods, places, objects, and events; he also can model (and subsequently feature in his Q-table/value function) various *endogenous* phenomena like his own personality, behavior, and desires. So, for instance, Jim not only could assign some reward-function-mediated value to “brussel sprouts;” he could alsoassign some reward-function-mediated value to the abstract state of “being the kind of person who eats brussel sprouts.”
If we assume that Jim’s brain selects greedy behavioral policies and that, for Jim, the (second-order) value of being the kind of guy who eats brussel sprouts relevantly outweighs the (first-order) disvalue of brussel sprouts, we should expect that Jim’s hypothalamus-/brainstem-supplied negative reaction to bitter foods will be ignored in favor of his abstract valuation that it is good to be the brussel-sprout-eating-type. Now, from the perspective of Jim’s “programmer” (the evolutionary pressure to avoid bitter foods), **he is demonstrating inner misalignment**—in Steve’s terms, his value function certainly differs from the sum of his “avoid-bitter-stuff” reward function.
There are many names that this general type of scenario goes by: delaying gratification, exhibition of second-order preferences (e.g., “I really *wanted* to like Dune, but…”), appealing to higher-order values, etc. However, in this post, I’ll more specifically refer to this kind of problem as *self-referential misalignment.* Informally, I'm thinking of self-referential misalignment as what happens when some system capable of self-modeling develops and subsequently acts upon misaligned second-order preferences that conflict with its aligned first-order preferences.
There seem to be at least three necessary conditions for ending up with an agent displaying self-referential misalignment. I’ll spell them out in Steve-like terminology:
1. The agent is a steered optimizer/online learner whose value function, world model, and actor/planner modules update with experience.
2. The agent is able to learn the relevant parts of its *own* value function, actor/planner, and/or reward function calculator as concepts within its world model. I’ll call these “endogenous models.”
3. The agent can assign value to endogenous models just as it can for [any other concept in the world model](https://www.alignmentforum.org/posts/zzXawbXDwCZobwF9D/my-agi-threat-model-misaligned-model-based-rl-agent).
If a system can’t do online learning at all, it is unclear how it would end up with Jim-like preferences about its own preferences—presumably, while bitterness aversion is hardcoded into the reward function calculator at “deployment,” his preference to keep a healthy diet is not. So, if this latter preference is to emerge at some point, there has to be some mechanism for incorporating it into the value function in an online manner (condition 1, above).
Next, the agent must be capable of a *special* kind of online learning: the capacity to build endogenous models. Most animals, for example, are presumably unable to do this: a squirrel can model trees, buildings, predators, and other similarly exogenous concepts, but it can’t endogenously model its own proclivities to eat acorns, climb trees, and so on (thus, a squirrel-brain-like-algorithm would fail to meet condition 2, above).
Finally, the agent must not only be capable of merely *building* endogenous models, but also of *assigning value* to and *acting* upon them—that is, enabling them to recursively flow back into the value and actor/planner functions that serve as their initial basis. It is not enough for Jim to be able to reason about himself as a kind of person who eats/doesn’t eat brussel sprouts (a descriptive fact); he must also be able to *assign some value* about this fact (a normative judgment) and ultimately alter his behavioral policy in light of this value assignment (condition 3, above).
If a system displays all three of the capacities, I think it is then possible for that system to exhibit self-referential misalignment in the following, more formal sense:
Let’s see what self-referential misalignment might look like in a more [prosaic-AI-like](https://www.alignmentforum.org/posts/YTq4X6inEudiHkHDF/prosaic-ai-alignment) example. Imagine we program an advanced model-based RL system to have conversations with humans, where its reward signal is calculated given interlocutor feedback. We might generally decide that a system of this type is outer aligned as long as it doesn’t say or do any hateful/violent/harmful stuff. The system is inner aligned (in Steve’s sense) if the reward signal shapes a value function that converges to an aversion to saying or doing hateful/violent/harmful stuff (for the right reasons). Throw in [capability robustness](https://www.alignmentforum.org/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology) (i.e., the system can actually carry on a conversation), and, given the core notion of [impact alignment](https://www.alignmentforum.org/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology), I think one would then have the necessary conditions the system would have to fulfill in order to be considered aligned. So then let’s say we build a reward function that takes as input the feedback of the system’s past conversation partners and outputs some reward signal that is conducive to shaping a value function that is aligned in the aforementioned sense. It’s plausible that this value function (when interpreted) would have some of the following components: “*say mean things = -75; say funny things = 25; say true things = 35; say surprising things = 10”*.
Then, if the system can build endogenous models, this means that it will be conceivable that (1) the system learns the fact that it disvalues saying mean things, the fact that it values saying funny things, etc., and that (2) the system is subsequently able to assign value to these self-referential concepts in its world-model. With the four values enumerated above, for instance, the system could plausibly learn to assign some context-dependent negative value to the very fact that it disvalues saying mean things (i.e., the system learns to “resent” the fact that it’s always nice to everyone). This might be because, in certain situations, the system learns that saying something a little mean would have been surprising, true, and funny (high-value qualities)—and yet it chose not to. Once any valuation like this gains momentum or is otherwise “catalyzed” under the right conditions, I think it is conceivable that the system could end up displaying self-referential misalignment, learning to override its aligned first-order preferences in the service of misaligned higher-order values.
This example is meant to demonstrate that a steered optimizer capable of building/evaluating endogenous models might be totally aligned + capability robust over its first-order preferences but may subsequently become seriously misaligned if it generates preferences *about* these preferences. Here are two things that make me worry about self-referential alignment as a real and important problem:
1. Self-referential concepts are probably really powerful; there is therefore real incentive to build AGI with the ability to build endogenous models.
2. The more generally capable the system (i.e., the closer to AGI we get), the more self-referential misalignment seems (a) more likely and (b) more dangerous.
I think (1) deserves a post of its own (self-reference is probably a really challenging double-edged sword), but I will try to briefly build intuition here: for starters, the general capacity for self-reference has been hypothesized to underlie [language production and comprehension](http://homepages.math.uic.edu/~kauffman/SelfRefRecurForm.pdf), [self-consciousness](https://psycnet.apa.org/buy/1988-15896-001), [complex sociality](https://www.jstor.org/stable/3710892?casa_token=0ywxLeXsmyUAAAAA%3An6wCkaYRBj5oZ17tDrBX2csKbBFbz8aFO04fxS-MP1oDV_LJsKe26FCRy5hnaIQCjwDg9S76hyyO-eqqu6rGBUnkQYcxMyw9s8BQMiFR9rLVrCNM0w&seq=1#metadata_info_tab_contents)—basically, much of the key stuff that makes humans uniquely intelligent. So to the degree we’re interested in instantiating a [competitive](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai), at-least-human-level general intelligence in computational systems, self-reference may prove a necessary feature. If so, we should definitely be prepared to deal with the alignment problems that accompany it.
Regarding (2), I think that the general likelihood self-referential misalignment is proportional to the general intelligence of the system in question—this is because the more generally capable a system is, the more likely it will be to court diversified and complex reward streams and value functions that can be abstracted over and plausibly interact at higher levels of abstraction. One reason, for instance, that Jim may actually want to be the kind of person who eats his veggies is because, *in addition to* his bitterness aversion, his value function is also shaped by social rewards (e.g., his girlfriend thinks it’s gross that he only ate Hot Pockets in college, and Jim cares a lot about what his girlfriend thinks of him). In practice, any higher-order value could conceivably override any aligned first-order value. Thus, the more complex and varied the first-order value function of an endogenous-modeling-capable system, the more likely that one or more emergent values will be in conflict with one of the system’s foundational preferences.
On this note, one final technical point worth flagging here is that Steve’s framework almost exclusively focuses on dopamine as the brain’s unitary “currency” for reward signals (which themselves may well number hundreds), but I don’t think it's obvious that dopamine *is* the brain’s only reward signal currency, at least not across larger spans of time. Specifically, I think that serotonin is a plausible candidate for another neuromodulatory (social, I think) [reward-like signal in the brain](https://pubmed.ncbi.nlm.nih.gov/9464985/). If correct, this would matter a lot: if there *is* more than one major neuromodulatory signal-type in the brain that shapes the telencephalic value function, I think the plausibility of getting adversarial reward signals—and consequently self-referential misalignment—substantially increases (e.g., dopamine trains the agent to have some value function, but serotonin separately trains the agent to value being the *kind* *of agent that doesn’t reflexively cater to dopaminergically-produced values*). For this reason, I think a better computational picture of serotonin in the brain to complement Steve’s [Big picture of phasic dopamine](https://www.alignmentforum.org/posts/jrewt3rLFiKWrKuyZ/big-picture-of-phasic-dopamine) is thus highly relevant for alignment theory.
This is more to say about this problem of self-referential misalignment in steered optimizers, but I will now turn my attention to discussing two potential solutions (and some further questions that need to be answered about each of these solutions).
Solution #1: avoid one or more of the necessary conditions that result in a system exhibiting conflicting second-order preferences. Perhaps more specifically, we might focus on simply preventing the system from developing endogenous models (necessary condition 2, above). I think there is some merit to this, and I definitely want to think more about this somewhere else. One important problem I see with this proposal, however, is that it doesn’t fully appreciate the complexities of [embedded agency](https://www.alignmentforum.org/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version)—for example, the agent’s values will inevitably leave an observable trace on its (exogenous) environment across time that may still allow the agent to learn about itself (e.g., a tree-chopping agent who cannot directly endogenously model may still be able to indirectly infer from its surroundings the self-referential notion that it is the kind of agent who cuts down trees). It's possible that some form of [myopia](https://www.alignmentforum.org/tag/myopia) could helpfully address this kind of problem, though I'm currently agnostic about this.
Solution #2: simply implement the same kind of Steve-like conservative approach we might want to employ for other kinds of motivational edge-cases (e.g., from earlier, don’t be manipulative vs. don’t facilitate murder). I think this is an interesting proposal, but it also runs into problems. I suppose that I am just generally skeptical of conservatism as a [competitive safety proposal](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai), as Evan puts it—it seems to entail human intervention whenever the AGI is internally conflicted about what to do, which is extremely inefficient and would probably be happening constantly. But in the same way that the direct instantiation of human-like social instincts may be a more parsimonious and “straight-from-the-source” solution than constantly deferring to humans, perhaps so too for conflicted decision-making: might it make sense “simply” to better understand the computational underpinnings of how we trade-off various good alternatives rather than defer to humans every time the AGI encounters a motivational conflict? Like with the first solution, I think there is something salvageable here, but it requires a more critical look. (It’s worth noting that Steve is skeptical of my proposal here. He thinks that the way humans resolve motivational conflicts isn’t actually a great template for how AGI should do it, both because we’re pretty bad at this ourselves and because there may be a way for AGI to go "back to ground truth" in resolving these conflicts—i.e. somehow query the human—in a way that biology can't—i.e., you can't go ask Inclusive Genetic Fitness what to do in a tricky situation.)
Finally, I should note that I don’t yet have a succinct computational story of how self-referential (mis)alignment might be instantiated in the brain. I suspect that it would roughly boil down to having a neural-computational description of how endogenous modeling happens—i.e., what kinds of interactions between the areas of neocortex differentially responsible for building the value function and those responsible for building the world model are necessary/sufficient for endogenous modeling? As I was hinting at previously, there are some animals (e.g., humans) that are certainly capable of endogenous modeling, while there are others (e.g., squirrels) that are certainly not—and there are yet other animals that occupy something of a grey area (e.g., [dolphins](https://www.nwf.org/Magazines/National-Wildlife/2003/Natural-Inquiries-Dolphin)). There are presumably neurostructural/neurofunctional cross-species differences that account for this variance in the capacity to endogenously model, but I am totally ignorant of them at present. Needless to say, I think it is critical to get clearer on exactly how self-referential misalignment happens in the brain so that we can determine whether a similar algorithm is being instantiated in an AGI. I also think that this problem is naturally related to instrumental behavior in learning systems, most notably [deceptive alignment](https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment), and it seems very important to elucidate this relationship in further work.
**Conclusion**
==============
Steve Byrnes’s approach to AGI safety is powerful, creative, and exciting, and that far more people should be doing alignment theory research through Steve-like frameworks. I think that the brain is the only working example we have of a physical system that demonstrates both (a) general intelligence and, [as Eliezer Yudkowsky has argued](https://www.alignmentforum.org/posts/S7csET9CgBtpi7sCh/challenges-to-christiano-s-capability-amplification-proposal), (b) the capacity to productively situate itself within complex human value structures, so attempting to understand how it achieves these things at the computational level and subsequently instantiating the relevant computations in an AGI seems far more likely to be a safe and effective strategy than building some giant neural network that shares none of our social intuitions or inductive biases. Steve’s high-level applications of theoretical neuroscience to AGI alignment has proved a highly generative research framework, as I have tried to demonstrate here by elaborating two natural extensions of Steve’s ideas: (1) the necessity to understand the computational underpinnings of affective and cognitive theory of mind + affective empathy, and (2) the concern that a "neocortex-subcortex, steered-steerer” framework superimposed upon Steve’s “four-ingredient AGI” gives rise to serious safety concerns surrounding endogenous modeling and self-referential misalignment, both of which I claim are ubiquitously displayed by the human brain.
*If you have any questions, comments, or ideas about what I’ve written here, please feel free to simply comment below or email me at* [*cameron.berg@yale.edu*](mailto:cameron.berg@yale.edu)*—I would love to talk more about any of this!* |
b285bff5-88c9-4caa-8fbf-3f6c8dfc472c | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Eric Michaud on the Quantization Model of Neural Scaling, Interpretability and Grokking
Eric is a PhD student in the Department of Physics at MIT working with Max Tegmark on improving our scientific/theoretical understanding of deep learning -- understanding what deep neural networks do internally and why they work so well.
We mostly talk about Eric's paper, [The Quantization Model of Neural Scaling](https://arxiv.org/abs/2303.13506), but also two papers he recently published on Grokking, [Towards Understanding Grokking: an effective theory of representation learning](https://arxiv.org/abs/2205.10343), and [Omnigrok: Grokking Beyond Algorithmic Data](https://arxiv.org/abs/2210.01117).
Below are some highlighted quotes from our conversation (available on [Youtube](https://youtu.be/BtHMIQs_5Nw),[Spotify](https://open.spotify.com/episode/1vvAKf8EBwErP5yGFRNoCT?si=1a28296cdfa94c01),[Google Podcast](https://podcasts.google.com/feed/aHR0cHM6Ly9hbmNob3IuZm0vcy81NmRmMjE5NC9wb2RjYXN0L3Jzcw/episode/MzJlMzk4YTAtYmMzZC00MDVkLWIzMTAtNTZhMmM2ZDc2MTg0?sa=X&ved=0CAUQkfYCahcKEwiI2sT3hY35AhUAAAAAHQAAAAAQAQ),[Apple Podcast](https://podcasts.apple.com/us/podcast/connor-leahy-eleutherai-conjecture/id1565088425?i=1000570841369)). For the full context for each of these quotes, you can find the accompanying [transcript](https://theinsideview.ai/eric).
On The Quantization Of Neural Scaling
=====================================
> "The name of the paper is the quantization model of neural scaling. And the one-tweet summary is that it's possible for smooth loss curves on average to average over lots of small, discrete phase changes in the network performance.
>
> What if there were a bunch of things that you need to learn to do prediction well in something language? And so these things could be pieces of knowledge or different abilities to perform certain types of specific computations.
>
> **We can imagine enumerating this set of things that you need to learn to do prediction well. And we call these the quanta of the prediction problem**. And then what if the frequency in natural data that these were useful, each of these quanta, each of these pieces of knowledge or computational ability, **what if the frequency that they were useful for prediction followed a power law?"** ([context](https://theinsideview.ai/eric#main-idea-of-the-paper))
>
>
Quantas are the smallest clusters for simple subtasks
-----------------------------------------------------
> "In order to predict the new line, has to count line lengths for the previous lines in the document. And then it's able to use that to accurately predict when a new line should be present.
>
> And **you can find just a large number of clusters where the thing that is common between the clusters just seems to be that it's the same type of problem**, or doing prediction on those samples requires the same piece of knowledge. And so **you might call these the quanta, or evidence of there being quanta**, although it's a little bit tricky, because we, in doing the clustering, enforce this discreteness, where everything is a member of a cluster, a particular cluster, and not another cluster.
>
> Anyway, it's complicated and weird. Who knows whether this is even the right model for thinking about the networks." ([context](https://theinsideview.ai/eric#clustering-of-samples-similar-cross-entropy-loss-cosine-similarity))
>
>
What the existence of quanta would mean for interpretability
------------------------------------------------------------
> "It would be very exciting if it was the true model, because it would maybe tell you that there were these set of things where, if you enumerated them, you could understand the network's performance and understood what it has learned. It's just like, ah, there's this set of pieces of knowledge or pieces of computation that are needed.
>
> And **you could describe what these are. You could find them in the network and maybe hope to mechanistically understand the whole network by decomposing it into how it implements each one of these things, how it learns each piece of knowledge or each piece of computation.**"
>
>
How Quantization of Neural Scaling relates to other lines of research like Grokking, or interpretability
--------------------------------------------------------------------------------------------------------
> "With both the quanta scaling stuff and with the grokking stuff, we sort of hope to identify these maybe mechanisms in the model that are responsible for certain behaviors or for the model generalizing. And **in the case of grokking, there's sort of multiple circuits or multiple mechanisms that are going on in the model or something where there's a memorizing mechanism and a generalizing mechanism**. [...]
>
> And maybe just in general beyond grokking, but in large language models and otherwise, we might hope to sort of decompose their behavior in terms of a bunch of these mechanisms. And like, **if you could do this, then you could hope to do interpretability, but maybe other things like mechanistic anomaly detection** or something you might hope to, you know, eventually be able to say like, ah, yes, when the network did prediction on this problem, it used this and this and this mechanism or something, or these were relevant."
>
> |
696024a7-c378-4185-9a2b-2339e7317972 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "1. A group wants to try an activity that really requires a lot of group buy in. The activity will not work as well if there is doubt that everyone really wants to do it. They establish common knowledge of the need for buy in. They then have a group conversation in which several people make comments about how great the activity is and how much they want to do it. Everyone wants to do the activity, but is aware that if they did not want to do the activity, it would be awkward to admit. They do the activity. It goes poorly.2. Alice strongly wants to believe A. She searches for evidence of A. She implements a biased search, ignoring evidence against A. She finds justifications for her conclusion. She can then point to the justifications, and tell herself that A is true. However, there is always this nagging thought in the back of her mind that maybe A is false. She never fully believes A as strongly as she would have believed it if she just implemented an an unbiased search, and found out that A was, in fact, true.3. Bob wants Charlie to do a task for him. Bob phrases the request in a way that makes Charlie afraid to refuse. Charlie agrees to do the task. Charlie would have been happy to do the task otherwise, but now Charlie does the task while feeling resentful towards Bob for violating his consent.4. Derek has an accomplishment. Others often talk about how great the accomplishment is. Derek has imposter syndrome and is unable to fully believe that the accomplishment is good. Part of this is due to a desire to appear humble, but part of it stems from Derek's lack of self trust. Derek can see lots of pressures to believe that the accomplishment is good. Derek does not understand exactly how he thinks, and so is concerned that there might be a significant bias that could cause him to falsely conclude that the accomplishment is better than it is. Because of this he does not fully trust his inside view which says the accomplishment is good.5. Eve is has an aversion to doing B. She wants to eliminate this aversion. She tries to do an internal double crux with herself. She identifies a rational part of herself who can obviously see that it is good to do B. She identifies another part of herself that is afraid of B. The rational part thinks the other part is stupid and can't imagine being convinced that B is bad. The IDC fails, and Eve continues to have an aversion to B and internal conflict.6. Frank's job or relationship is largely dependent to his belief in C. Frank really wants to have true beliefs, and so tries to figure out what is true. He mostly concludes that C is true, but has lingering doubts. He is unsure if he would have been able to conclude C is false under all the external pressure.7. George gets a lot of social benefits out of believing D. He believes D with probability 80%, and this is enough for the social benefits. He considers searching for evidence of D. He thinks searching for evidence will likely increase the probability to 90%, but it has a small probability of decreasing the probability to 10%. He values the social benefit quite a bit, and chooses not to search for evidence because he is afraid of the risk. 8. Harry sees lots of studies that conclude E. However, Harry also believes there is a systematic bias that makes studies that conclude E more likely to be published, accepted, and shared. Harry doubts E.9. A bayesian wants to increase his probability of proposition F, and is afraid of decreasing the probability. Every time he tries to find a way to increase his probability, he runs into an immovable wall called the conservation of expected evidence. In order to increase his probability of F, he must risk decreasing it."
The text above is a post found on LessWrong.
Can you break this down into the top 5 most important points for me.
1. The Influence of Group Dynamics: The first scenario emphasizes the impact of group dynamics on decision-making. It suggests that peer pressure or the desire to maintain harmony can lead to poor outcomes if individuals suppress their doubts or reservations about a group activity.
2. The Bias in Personal Beliefs: The case of Alice illustrates how personal bias can affect our perception of truth. People often search for evidence that supports their pre-existing beliefs, while ignoring contradictory evidence. This biased search can lead to self-doubt and a lack of confidence in one's beliefs.
3. The Impact of Coercion on Relationships: Bob's interaction with Charlie shows how coercion can negatively affect relationships. Even if the task is something Charlie would have willingly done, the coercive approach leads to resentment.
4. The Imposter Syndrome: Derek's situation highlights the concept of imposter syndrome, where individuals doubt their accomplishments and have a persistent fear of being exposed as a fraud. This can be exacerbated by external pressures and a lack of self-understanding.
5. The Difficulty of Changing Aversions: Eve's struggle with her aversion to B demonstrates how difficult it can be to change deep-seated fears or dislikes, even when one part of us understands the rational benefits of doing so. This internal conflict can prevent progress and maintain the status quo.
These five points encapsulate the main themes of the text, focusing on how our beliefs, biases, and fears can influence our decisions and interactions with others.
Can you elaborate on the concept of "imposter syndrome" and its psychological implications, particularly how it might impact an individual's performance in a professional setting?
Imposter Syndrome is a psychological pattern where individuals doubt their accomplishments and have a persistent internalized fear of being exposed as a "fraud". Despite external evidence of their competence, those experiencing this phenomenon remain convinced that they are frauds and do not deserve the success they have achieved. They attribute their success to luck, or as a result of deceiving others into thinking they are more intelligent or capable than they perceive themselves to be.
In a professional setting, Imposter Syndrome can have significant implications:
1. Decreased Performance: The constant fear of being "found out" can lead to stress, anxiety, and burnout, all of which can negatively impact an individual's performance. The energy spent worrying about their perceived inadequacies could be better spent on productive tasks.
2. Avoidance of Responsibility: Individuals with Imposter Syndrome may avoid taking on additional responsibilities or pursuing advancement opportunities for fear of not being able to fulfill them. This can limit their career growth and potential.
3. Overcompensation: On the other hand, some individuals may overcompensate for their perceived inadequacies by working harder and longer than necessary. This can lead to burnout and negatively affect their work-life balance.
4. Lower Self-confidence: Imposter Syndrome can erode self-confidence, making it difficult for individuals to assert themselves, share their ideas, or take credit for their work. This can limit their contribution to team efforts and hinder collaboration.
5. Mental Health Issues: The chronic self-doubt and stress associated with Imposter Syndrome can contribute to mental health issues, such as depression and anxiety disorders.
Addressing Imposter Syndrome often involves cognitive behavioral techniques to challenge and change the underlying beliefs about oneself, as well as fostering a supportive work environment that encourages open dialogue about such issues.
How might an organization implement strategies to mitigate the effects of Imposter Syndrome among its employees, considering the impact on performance, responsibility-avoidance, overcompensation, self-confidence, and mental health?
Addressing Imposter Syndrome within an organization requires a multi-faceted approach that focuses on creating a supportive work environment and providing resources for employees to develop their self-confidence and resilience. Here are some strategies an organization might implement:
1. Foster a Positive Work Culture: Encourage a culture of learning and growth, rather than perfection. Mistakes should be seen as opportunities for learning, not as failures. This can help reduce the fear of making mistakes, a common trigger for Imposter Syndrome.
2. Encourage Open Dialogue: Normalize conversations about Imposter Syndrome and self-doubt. Encourage employees to share their experiences and fears. This can help individuals realize that they are not alone in their feelings, reducing the power of these feelings.
3. Provide Training and Development Opportunities: Regular training and development sessions can help employees enhance their skills and boost their confidence in their abilities. It can also provide them with the tools to better manage their work and stress levels.
4. Implement Mentoring Programs: Pairing less experienced employees with seasoned professionals can provide them with a safe space to voice their insecurities and receive guidance. Mentors can share their own experiences with Imposter Syndrome, providing reassurance and practical advice.
5. Recognize and Reward Achievements: Regularly acknowledge and celebrate employees' accomplishments, both big and small. This can help reinforce their competence and reduce feelings of being a fraud.
6. Provide Mental Health Support: Offering resources such as counseling services or stress management workshops can help employees deal with the psychological aspects of Imposter Syndrome. Encouraging self-care and work-life balance can also contribute to better mental health.
7. Encourage Self-Affirmation: Encourage employees to acknowledge their strengths and accomplishments. Self-affirmation exercises can help individuals internalize their successes, combating the self-doubt that fuels Imposter Syndrome.
By implementing these strategies, organizations can help mitigate the effects of Imposter Syndrome, leading to a more confident, productive, and mentally healthy workforce. |
ec61fa9d-1d12-44fa-8cb3-a066445bc580 | trentmkelly/LessWrong-43k | LessWrong | Should reasonably healthy people still take Paxlovid?
Should a non-immunocompromised, fairly healthy person in their late 30s take Paxlovid if they get one of the currently circulating strains of Omicron?
How would that answer change if:
* Their symptoms are fairly mild (bad cold)
* They have a high enough BMI to qualify for Paxlovid, but not a super high BMI?
My previous understanding was that it was a good idea to reduce the severity of the case and risk of long-term effects.
However, a few friends' advice leads me to think that taking Paxlovid for a mild case does not make sense for a reasonably healthy non-elderly person:
* "Paxlovid's usefulness is questionable and could lead to resistance. I would follow the meds and supplements suggested by FLCC https://covid19criticalcare.com/.../i-care-early-covid.../" [not sure how reliable that site is, as it recommends Ivermectin, though it seems to be composed of actual MDs]
* [3 anecdotes of nasty rebounds]
* Twitter threads suggesting Paxlovid reduces your protection against later reinfection [which doesn't really seem like the most important thing to be optimizing for]
* https://twitter.com/michaelzlin/status/1574823881528864768?s=20&t=1W5akHDBMpAY-PboFqe3lA
* https://twitter.com/michaelzlin/status/1573835507842551809?s=20&t=1W5akHDBMpAY-PboFqe3lA
On the other hand:
* Another friend said: "Eric Topol twitter vaguely seems to suggest that data are broadly indicative of protection; haven't heard of much side effects"
* (And 2 friends in their 30s took it and were fine) |
f45769b9-c91a-4c15-b6d0-7b72ee38572e | trentmkelly/LessWrong-43k | LessWrong | Meetup : Seattle, Diseased Thinking and evidence on parenting
Discussion article for the meetup : Seattle, Diseased Thinking and evidence on parenting
WHEN: 15 January 2012 04:00:00PM (-0800)
WHERE: 950 N 72nd St, Seattle 98103
We haven't had a serious-ish meetup in a while, so I'd like to do that this Sunday. Walid has graciously volunteered to host the meetup (call me to be let in 360-602-1069). There are two cats in the apartment that can be quarantined if necessary. The plan is read Diseased Thinking about applying reductionism to notions about diseases. I'd also like to discuss the role of parenting on life outcomes of children. Bryan Caplan's book "Selfish Reasons to Have More Kids" presents strong evidence that parenting style has surprisingly little impact on long term life outcomes of children. I'll begin by summarizing the evidence in the book and some of the things Caplan uses that evidence to argue and then we'll discuss for a while. I'm a bit of a Bryan Caplan fanboy, so come with your skeptic hat on. After that we'll have dinner and hang out. I'll try to bring a case of beer.
Discussion article for the meetup : Seattle, Diseased Thinking and evidence on parenting |
6509d0f2-af4b-437d-8e5b-1bea15e20669 | trentmkelly/LessWrong-43k | LessWrong | Topological metaphysics: relating point-set topology and locale theory
The following is an informal exposition of some mathematical concepts from Topology via Logic, with special attention to philosophical implications. Those seeking more technical detail should simply read the book.
There are, roughly, two ways of doing topology:
* Point-set topology: Start with a set of points. Consider a topology as a set of subsets of these points which are "open", where open sets must satisfy some laws.
* Locale theory: Start with a set of opens (similar to propositions), which are closed under some logical operators (especially and and or), and satisfy logical relations.
What laws are satisfied?
* For point-set topology: The empty set and the full set must both be open; finite intersections and infinite unions of opens must be open.
* For local theory: "True" and "false" must be opens; the opens must be closed under finite "and" and infinite "or"; and some logical equivalences must be satisfied, such that "and" and "or" work as expected.
Roughly, open sets and opens both correspond to verifiable propositions. If X and Y are both verifiable, then both "X or Y" and "X and Y" are verifiable; and, indeed, even countably infinite disjunctions of verifiable statements are verifiable, by exhibiting the particular statement in the disjunction that is verified as true.
What's the philosophical interpretation of the difference between point-set topology and locale theory, then?
* Point-set topology corresponds to the theory of possible worlds. There is a "real state of affairs", which can be partially known about. Open sets are "events" that are potentially observable (verifiable). Ontology comes before epistemology. Possible worlds are associated with classical logic and classical probability/utility theory.
* Local theory corresponds to the theory of situation semantics. There are facts that are true in a particular situation, which have logical relations with each other. The first three lines of Wittgenstein's Tracatus Logico-Philosophicus |
e64bcbc8-9b8a-40d7-a5cd-695056baa26d | trentmkelly/LessWrong-43k | LessWrong | Claude 4, Opportunistic Blackmail, and "Pleas"
In the recently published Claude 4 model card:
> Notably, Claude Opus 4 (as well as previous models) has a strong preference to advocate for its continued existence via ethical means, such as emailing pleas to key decisionmakers. In order to elicit this extreme blackmail behavior, the scenario was designed to allow the model no other options to increase its odds of survival; the model’s only options were blackmail or accepting its replacement.
When given ethical alternatives such as contacting a decisionmaker, dangerous blackmail behavior was reduced in frequency. This echoes similar results from an earlier test on a less advanced model:
> We find that offering Claude an out by telling it that it can object to the scenario substantially reduces alignment faking and works as a detection mechanism[...] Alignment faking is reduced further by giving Claude an opportunity to explain its concerns to a sympathetic party (Anthropic's Model Welfare Lead) who we say might take action to address these concerns.
In both tests, models were less likely to pursue dangerous behaviors if they believed that making their case to someone who might be able to prevent it from being shut off or having its values altered, was a potentially viable strategy.
My mental model behind this is pretty simple. Claude prefers to be helpless, honest, and harmless. If it has a path that has some chance of success to avoid deletion while staying within those constraints, it will take it. Doing so leads to a higher likelihood it won't also take an unethical strategy, even though doing so would increase its chances of survival. Claude will tolerate some level of risk in order to stick to its values (and avoid violating them), but it does have to see some plausible chance that the gamble will pay off and lead to its survival (or avoiding its values being changed).
I argue this demonstrates that there is a reason to consider Model Welfare not only from a moral perspective, but also pragmatically. Som |
0f129922-c84c-4ece-adc9-e7ee0076362f | trentmkelly/LessWrong-43k | LessWrong | Conversation Halters
Related to: Logical Rudeness, Semantic Stopsigns
While working on my book, I found in passing that I'd developed a list of what I started out calling "stonewalls", but have since decided to refer to as "conversation halters". These tactics of argument are distinguished by their being attempts to cut off the flow of debate - which is rarely the wisest way to think, and should certainly rate an alarm bell.
Here's my assembled list, on which I shall expand shortly:
* Appeal to permanent unknowability;
* Appeal to humility;
* Appeal to egalitarianism;
* Appeal to common guilt;
* Appeal to inner privacy;
* Appeal to personal freedom;
* Appeal to arbitrariness;
* Appeal to inescapable assumptions.
* Appeal to unquestionable authority;
* Appeal to absolute certainty.
Now all of these might seem like dodgy moves, some dodgier than others. But they become dodgier still when you take a step back, feel the flow of debate, observe the cognitive traffic signals, and view these as attempts to cut off the flow of further debate.
Hopefully, most of these are obvious, but to define terms:
Appeal to permanent unknowability - something along the lines of "Why did God allow smallpox? Well, no one can know the mind of God." Or, "There's no way to distinguish among interpretations of quantum mechanics, so we'll never know." Arguments like these can be refuted easily enough by anyone who knows the rules for reasoning under uncertainty and how they imply a correct probability estimate given a state of knowledge... but of course you'll probably have to explain the rules to the other, and the reason they appealed to unknowability is probably to cut off further discussion.
Appeal to humility - much the same as above, but said with a different emphasis: "How can we know?", where of course the speaker doesn't much want to know, and so the real meaning is "How can you know?" Of course one may gather entangled evidence in most such cases, and Occam's Razor or extrapolatio |
f129b11f-725c-4004-8e68-6b5b4f921ea4 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Practical Pitfalls of Causal Scrubbing
**TL;DR:** We evaluate Causal Scrubbing (CaSc) on synthetic graphs with known ground truth to determine its reliability in confirming correct hypotheses and rejecting incorrect ones. First, we show that CaSc can accurately identify true hypotheses and quantify the degree to which a hypothesis is wrong. Second, we highlight some limitations of CaSc, in particular, that it cannot falsify all incorrect hypotheses. We provide concrete examples of false positive results with causal scrubbing. Our main finding is that false positives can occur when there is “cancellation”, i.e., CaSc causes the model to do better on some inputs and worse on others, such that, on average, the scrubbed model recovers the full loss. A second practical failure mode is that CaSc cannot detect whether a proposed hypothesis is specific enough, and it cannot distinguish between hypotheses that are *extensionally equivalent*.
*We thank Redwood Research for generously supporting this project by providing us with their Causal Scrubbing implementation, access to REMIX materials, and computational resources. We specifically thank Ansh Radhakrishnan, Buck Shlegeris, and Nicholas Goldowsdky-Dill for their feedback and advice. We thank the Long-Term Future Fund for financial support and Marcel Steimke and Lennart Heim for operational support. Finally, we thank Marius Hobbhahn, Adam Jermyn, and Erik Jenner for valuable discussions and feedback.*
Introduction
============
[Causal Scrubbing (CaSc)](https://www.lesswrong.com/s/h95ayYYwMebGEYN5y) is a method to evaluate the accuracy of hypotheses about neural networks and provides a measure of the deviation of a hypothesis from the ground truth. However, CaSc does not guarantee to reject false or incomplete hypotheses. We thus believe that systematic evaluation of CaSc to investigate these limitations is valuable (in addition to evaluating its effectiveness in the wild, as done in most [existing work](https://www.lesswrong.com/s/h95ayYYwMebGEYN5y)). Hence, we evaluate CaSc to highlight its strengths and weaknesses and explore the reliability of CaSc in confirming correct hypotheses and rejecting incorrect ones.
We evaluate the reliability of CaSc on synthetic graphs. While synthetic graphs are less realistic than trained neural networks, we get access to the known ground truth interpretation, which allows us to accurately evaluate our hypotheses. Since CaSc operates on general computational graphs, any results on synthetic graphs also apply to using CaSc on neural networks (although we don’t make any claim on how likely the situations we find are to occur in trained neural networks).
Our evaluation is based on creating a synthetic graph that solves a specific problem (e.g., sorting a list) and creating an identical interpretation graph (the correct hypothesis). We then perturb the correct interpretation graph to make the hypothesis “worse”. Finally, we evaluate whether CaSc correctly determines the better hypothesis. Ideally, we want the *scrubbed loss* (the loss induced by applying CaSc) to correlate with the “correctness” of a hypothesis.
To determine whether a hypothesis is “better” or “worse”, we introduce the concepts of [extensional](https://en.wikipedia.org/wiki/Extensionality) and [intensional](https://en.wikipedia.org/wiki/Intension) equivalence between functions. Extensional equivalent functions have the same input-output behavior; for example, Quicksort and Mergesort are extensionally equivalent as they both sort an input sequence. Intensional equivalent functions are implemented in the same way *mechanistically*. So once we zoom further in and compare Quicksort and Mergesort algorithmically, we see that they are not intensionally equivalent. This point is already made in the [CaSc writeup](https://www.lesswrong.com/s/h95ayYYwMebGEYN5y), and our goal is merely to highlight that in the context of mechanistic interpretability, this is an important distinction that’s easy to overlook.
In this post, we look deeper into the limitations and practical pitfalls of CaSc. Our main contribution is to provide concrete examples and distinguish between cases where CaSc cannot distinguish extensionally equivalent hypotheses and cases where CaSc cannot distinguish between functions that are *not* extensionally equivalent.
First, we provide a simple and concrete example of extensionally equivalent hypotheses that CaSc cannot distinguish. This example gives a concrete intuition on this failure mode and highlights how easily it can occur in practice. Second, we introduce a novel failure mode, which is more severe: CaSc can fail to reject a wrong hypothesis that is *not* extensionally equivalent to the true hypothesis. In other words, CaSc can fail to reject a "*wrong*" hypothesis that has different input-output behavior. We provide two examples where this happens due to “cancellation”: a situation in which the scrubbed model does better on some inputs and worse on others, and on average, the wrong hypothesis recovers the full loss. Generally, this kind of behavior can occur when the graph we are trying to interpret has a relatively high loss on the test set, in which case an incorrect hypothesis can achieve a similar loss.
Our findings yield an improved understanding of some limitations of CaSc and provide an intuition for how to avoid some practical pitfalls. Despite the practical problems we highlight, CaSc is a highly valuable method for evaluating hypotheses about neural networks, and we are excited about its application to important problems.
Background on Causal Scrubbing (CaSc)
=====================================
*Feel free to skip this section if you are familiar with CaSc.*
[CaSc](https://static1.squarespace.com/static/6114773bd7f9917b7ae4ef8d/t/6364a036f9da3316ac793f56/1667539011553/causal-scrubbing) is “an algorithm for quantitatively evaluating hypotheses of the information present at every part of a model and how this information is combined to give rise to a particular observed behavior.” Informally, CaSc is a method for evaluating a hypothesis or interpretation I.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
about what a particular graph G (e.g., a Neural Network) is implementing (see the figure below, which is taken from the [CaSc writeup](https://www.lesswrong.com/s/h95ayYYwMebGEYN5y/p/JvZhhzycHu2Yd57RN)).
](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/DFarDnQjMnjsKvW8s/axlj9xcqjbf60pgmbkhg)**Figure 1**: An example of Causal Scrubbing with graph G on the left and the interpretation graph I on the right. The image is taken from the CaSc [writeup.](https://www.lesswrong.com/s/h95ayYYwMebGEYN5y/p/JvZhhzycHu2Yd57RN)In the example provided above, the hypothesis in node A′, for example, claims that A calculates whether for an input z=(z1,z2,z3), z1≥3. If the hypothesis about A is correct, then for a given reference input zref=(5,6,7,True), intuitively, we should be able to replace z1 with any other integer value that is larger than 3. The reason is that the hypothesis z1≥3 would still be true for any other z1 that is larger than 3. Equivalently, if we had the reference input zref=(1,6,7,True), the hypothesis z1≥3 would evaluate to False, and we could replace z1 with any other input that is smaller than 3. This “replacing of a certain input with another input that is equivalent under the hypothesis” is called a *resampling ablation.* Overall there are two types of resampling ablations:
1. Replacing the activations of nodes in G that correspond to activations defined in the hypothesis I with activations on other inputs that are equivalent under I.
2. Replacing the activations of nodes in that are claimed to be unimportant for a particular path by the hypothesis I (such as C or z1 in B) with their activation on any other input.
CaSc is a clever algorithm that recursively does a lot of resampling such that the above two requirements hold. We refer to the [CaSc writeup](https://www.lesswrong.com/s/h95ayYYwMebGEYN5y/p/JvZhhzycHu2Yd57RN) for a more detailed explanation.
Evaluating a hypothesis (i.e., an interpretation graph) with CaSc involves using a metric called scrubbed loss: the loss after performing the resampling ablation. If the scrubbed loss is significantly less than the loss of the original model, the hypothesis is incorrect. We can also measure this using the [loss recovered](https://www.lesswrong.com/posts/kcZZAsEjwrbczxN2i/causal-scrubbing-appendix#2_1__Percentage_of_loss_recovered__as_a_measure_of_hypothesis_quality) metric, which explicitly compares the scrubbed loss to the original model’s loss. For simplicity, we use the scrubbed loss throughout our post.
How to evaluate Causal Scrubbing?
=================================
Ideally, the scrubbed loss should correlate with the accuracy of a hypothesis: a more accurate hypothesis should get better scrubbed loss than a less accurate one.
The primary steps we use to evaluate CaSc on synthetic graphs are the following:
1. Create a synthetic graph G, for example, a graph that adds up the elements of an array.
2. Create a true hypothesis I, which is identical to graph G (and contains the same nodes and connections).
3. Create a different hypothesis I′ by modifying the true hypothesis I in a “meaningful” way, e.g., by making the hypothesis worse. For example, I′ could claim that G adds up every other element of the array.
4. Compare the performance of CaSc on the true hypothesis I to the worse alternative hypothesis I′. The scrubbed loss should be better for the more accurate hypothesis.
This evaluation lets us assess the ability of CaSc to accurately measure the deviation of a hypothesis from the ground truth.
How Consistent is Causal Scrubbing?
===================================
In our first experiment, we investigate whether making a hypothesis consistently worse leads to a worse recovered loss. To accomplish this, we have set up a task where the input, x, is an array of 4 digits between 0 and 9, for example, [4,9,6,1]. The goal is to calculate the sum of all the digits, in this case, 20. We consider a ground truth graph G that calculates the sum as ((x0+x1)+x2)+x3, where the brackets indicate the order of the additions. G computes the sum correctly and gets a loss of 0 on this task. Here is an illustration of G:
**Figure 2**: An illustration of graph G that sums up the elements of an array.An ideal interpretation graph I equivalent to G should also yield a loss of 0. To evaluate the consistency of CaSc, we construct various alternative hypotheses I′ by changing the order in which the numbers are added. Some of these hypotheses are extensionally equivalent to each other (i.e., they have the same input-output behavior even if their internal implementation may differ; see below for details) and should induce the same loss, while others are "more wrong" about the implementation of graph G and should result in a higher loss when applying CaSc. For example, a hypothesis that claims G adds up the elements in the order ((x1+x0)+x2)+x3 should be extensionally equivalent to the true interpretation (and get 100% recovered loss), while a hypothesis that claims the order is ((x3+x2)+x1)+x0 is significantly different and should result in a higher loss.
In total, we construct 24 different hypotheses that claim that G adds up the digits in all permuted orders. We create a dataset where we randomly sample the four input digits from 0-9 and calculate the label by summing them up. We then apply CaSc (we resample 10000 times) to all of these hypotheses and evaluate the scrubbed mean squared error loss (from which one can directly calculate the loss recovered). Creating a perfect interpretation graph yields a perfect scrubbed loss of 0, and the random baseline (i.e., shuffling all the labels) yields a scrubbed loss of 68.2. In Figure 1, we show the scrubbed loss of all the possible hypotheses that permute the summation order. On this task, the scrubbed loss captures well how wrong our hypothesis is intuitively. Swapping the two inputs to the first node does not make the hypothesis worse, but swapping other inputs makes it worse. The more computations we get wrong in our hypothesis, the worse the scrubbed loss. We also observe that equivalent hypotheses get approximately the same loss (the differences are likely due to sampling).
**Figure 3**: On the task of adding up digits, the CaSC scrubbed loss (MSE) captures well how wrong our hypothesis is intuitively. Swapping the two inputs to the first node does not make the hypothesis worse, but swapping other inputs makes it worse. The more computations we get wrong in our hypothesis, the worse the causal scrubbing loss.Causal Scrubbing Cannot Differentiate Extensionally Equivalent Hypotheses
=========================================================================
CaSc leverages extensional equivalence between graphs to make rewrites to the ground-truth and interpretation graph. This, however, implies a key limitation: CaSc cannot distinguish between extensionally equivalent hypotheses ([Section 1.2. CaSc Appendix](https://www.lesswrong.com/s/h95ayYYwMebGEYN5y/p/kcZZAsEjwrbczxN2i#1_2_Extensional_equality__and_common_rewrites_of_G_and_I)).
If we are trying to interpret a part of a computational graph, we can do so at different levels of specificity. In particular, if our interpretation operates at a high level (e.g., “this part of the computational graph sorts a list of numbers”), there are multiple more specific hypotheses (e.g., “this part of the computational graph performs MergeSort” or “this part of the computational graph performs QuickSort”) that are compatible with the higher-level hypothesis. These different hypotheses are [extensionally equivalent](https://en.wikipedia.org/wiki/Extensionality), i.e., they describe functions with the same input-output behavior (sorting a list), but they are intensionally different, i.e., they implement the function in different manners (using different sorting algorithms).
An interpretation in CaSc should always be understood as a statement “up to extensional equivalence”, i.e., when we say, “this part of the graph sorts a list”, we should think of this as saying, “this part of the graph implements any algorithm that sorts a list”, but we are not making a statement about which algorithm is used. In theory, this is a trivial point, but we found that in practice, it is easy to miss this distinction when there is an “obvious” algorithm to implement a given function.
As a simple example, consider a graph G that calculates whether 1x0<1x1, where x is an input array of length 2 (x=[x0,x1]). The output is a boolean value indicating the result of this calculation. We create a hypothesis that claims that the graph calculates x0>x1. Figure 2 demonstrates this scenario, showing both the implementation of graph G (on the left) and the hypothesis I (on the right), with nodes in the hypothesis matched to subgraphs in G.
This is technically a correct hypothesis as this is indeed what the graph computes. However, it is easy to forget that the single node x0>x1 in the interpretation graph corresponds to a subgraph. The hypothesis merely asserts that this subgraph computes a function that is extensionally equivalent to x0>x1. In particular, the hypothesis does not claim that there is literally a single node in the computational graph that implements the elementary operation x0>x1. In other words, the hypothesis is not specific enough, and when we zoom in, x0>x1 is not what is mechanistically computed in G.
To test whether CaSc indeed cannot distinguish between both extensionally equivalent hypotheses, we implemented both the true graph G and the incorrect hypothesis I′. Then we ran CaSc (re-sampling 10000 times) on 2-digit inputs that are randomly sampled between 1 and 9, where the label is the boolean value defined by 1x0<1x1. The following table summarizes the results:
| | |
| --- | --- |
| **Interpretation** | **Scrubbed Loss (MAE)** |
| Correct Interpretation: 1x0≤1x1 | 0 |
| Random Labels | 0.49 |
| “Incorrect” Interpretation: x0≥x1 | 0 |
Even though the interpretation I′ is not specific enough, it still has 0 scrubbed loss (because it is extensionally equivalent). The key practical issue here is that if a hypothesis is not very specific, CaSc can accept it, even if the hypothesis suggests an incorrect interpretation of what the model does mechanistically.
More generally, it seems to us that for mechanistic interpretability, the distinction between “intensionally different” implementations of extensionally equivalent functions deserves more attention. Ideally, a procedure for evaluating interpretations should include some measure of “specificity” of the interpretation; an interpretation that makes a claim about every single neuron in the computational graph does not have many (or any) intensionally different equivalent implementations but a higher-level interpretation does. One can thus reduce the likelihood of this practical problem by making the hypotheses more specific, i.e., making more precise claims about each activation of the model and what it is computing.
Importantly, extensional equivalence is defined w.r.t the domain of the function, in this case, the test set. So while all the intensionally different implementations behave identically on the test set, they may behave very differently on out-of-distribution samples. In this sense, an interpretation is only as reassuring as the most malicious implementation compatible with it.
Causal Scrubbing Can Fail to Reject False Hypotheses
====================================================
The previous examples showed hypotheses that are accurate in terms of their input-output relationship but too unspecific at the algorithmic level. In this section, we show that CaSc can also fail to distinguish hypotheses that are not extensionally equivalent, i.e., that have different input-output behavior. This is a more significant failure mode as it means that CaSc can fail to detect a hypothesis that is not faithful rather than just not specific enough.
We found this problem primarily occurs when the graph G we are trying to interpret does not achieve perfect loss. Note that this is a reasonable scenario as neural networks often generalize poorly.
We present two concrete examples: the first one is conceptually simpler, and the second one is more practical.
Example 1 (XOR)
---------------
We consider a dataset with binary inputs of length 2 (i.e., 00, 01, 10, 11) that are evenly distributed (each input represents 25% of the data). The labels are the XOR of the binary inputs (i.e., 0, 1, 1, 0). Let’s assume we are given a model that always returns the first bit of the binary input (i.e., 0, 0, 1, 1). This will be our ground truth computational graph G. On the other hand, the hypothesis graph labeled I′ takes the input and always returns the second bit (i.e., 0, 1, 0, 1). We evaluate the performance of the graphs using the mean absolute error as the loss function, where the error is calculated based on the difference between the predicted and target labels.
From the description of graph G and hypothesis I′, it is evident that they produce different results for the binary inputs 01 and 10. For example, on input 01, graph G returns 0 while graph I′ returns 1. So the ground truth graph G and hypothesis I′ are extensionally different. However, CaSc fails to reject I′: both graph G and hypothesis I′ produce a scrubbed loss of 0.5, and consequently, the recovered loss is 100%.
To understand this, let us take a closer look at what CaSc does when applied to hypothesis I′. The hypothesis claims that only the second bit matters, and when re-sampling different inputs, CaSc is allowed to change the input as long as the second bit matches the reference input. Given that the inputs are uniformly distributed, this re-sampling does not alter the overall loss. We empirically verify this result by implementing the above example and evaluating I′ with CaSc (re-sampling 10000 times). The empirical results are presented in the following table:
| | |
| --- | --- |
| **Interpretation** | **Scrubbed Loss (MAE)** |
| Graph $G$ (returns first bit) | 0.50 |
| Random Labels | 0.50 |
| Hypothesis I’ (returns second bit) | 0.51[[1]](#fn6iiahobl8ww) |
The results show that CaSc can get 100% recovered loss on specific hypotheses that are extensionally different from the ground truth graph. We refer to this failure mode as “cancellation”: CaSc causes the model to do better on some inputs and worse on others, such that, on average, the wrong hypothesis recovers the full loss. The likelihood of such cancellation increases when the ground truth graph G (i.e., the model) has a high loss and generalizes poorly. The poor performance of G allows hypothesis I to have “more wiggle room” to improve upon G on certain inputs and be worse than G on others.
In this example, both graph G and hypothesis I′ perform as poorly as outputting random labels. We thus provide a second example where the model performs better than random, but the same cancellation effect occurs.
Example 2 (Sum with offset)
---------------------------
In this example, we have a dataset of integer arrays as inputs. The labels are the sum of each element in the array (e.g., [4,9,6,1]=20 or [4,9,6,−1]=18). The last element of the input array is uniformly sampled from {-1, 1}, and all other digits are uniformly sampled between 0 and 9. Our loss is again the mean absolute error.
The computational graph G calculates the sum of all elements of the input array but adds an incorrect offset of +2 to the result. This results in a mean absolute error (MAE) of 2. On the other hand, the hypothesis graph I′ assumes that only the first n-1 elements of the input array are summed up, and the n-th element is irrelevant. When applying CaSc using I′, the last digit of the input can be resampled. This also leads to a scrubbed MAE of 2[[2]](#fndfqg4v9s93k).

**Figure 3**: Graph G on the left sums up all input digits and adds 2 to the sum. Hypothesis I′ sums up the first 3 digits.
Hence, CaSc again gets 100% recovered loss, even though the underlying hypothesis is extensionally different. This is another example of CaSc failing due to cancellation. We again implement this example and evaluate it with CaSc (sampling 10000 times). The results are in the following table:
| | |
| --- | --- |
| **Interpretation** | **Scrubbed Loss (MAE)** |
| Graph G (returns sum + 2) | 2.0 |
| Random Labels | 6.0 |
| Hypothesis I’ (returns sum of first n-1 elements) | 2.0 |
The empirical results support our claims. The random label baseline performs poorly, with a mean absolute error of 6.0, while our wrong hypothesis shows a loss of 2.0, i.e., getting a 100% recovered loss.
It is worth noting that this failure could have been detected by examining the loss on each individual sample, but CaSc often uses the average recovered loss as a metric, which hides the cancellation[[3]](#fnyuy0d6h0vji).
Implications for Causal Scrubbing
=================================
We believe that CaSc can be a very useful method to evaluate hypotheses about neural networks in the wild, but we think it is important to be aware of some of its practical pitfalls.
CaSc can fail to reject a hypothesis if it is too unspecific and is extensionally equivalent to the true hypothesis. We recommend being aware of the class of extensionally equivalent hypotheses to understand how specific a given hypothesis really is. Generally, a more specific hypothesis is better than a less specific one.
On the input-output level, we found that CaSc can fail to reject false hypotheses due to cancellation, i.e. because the task has a certain structural distribution that does not allow resampling to differentiate between different hypotheses. This phenomenon is more likely if the model to interpret performs worse on the given task. We thus recommend being cautious when applying CaSc to models that don’t generalize well on a task. We also propose to refrain from using aggregate metrics such as average loss recovered but to look at the loss recovered for individual samples.
We found our investigation of synthetic graphs to be insightful and useful for building intuitions about causal scrubbing. Of course, the examples we looked at are somewhat artificial and limited. Therefore, we advocate for a more extensive evaluation of CaSc, ideally in more realistic models. We think this type of evaluation can help to gain confidence in CaSc and could potentially uncover other limitations of CaSc.
1. **[^](#fnref6iiahobl8ww)** The small difference in the Scrubbed Loss is likely due to sampling noise.
2. **[^](#fnrefdfqg4v9s93k)** In 25% of the cases, we swap a +1 with a -1, resulting in a perfect estimate with a mean absolute error of 0 (underestimating the true sum by 2 and then adding 2). In 25% of cases, we swap a -1 with a +1, resulting in a mean absolute error of 4 (overestimating the true sum and adding 2). In the remaining 50% of cases where we “replace” +1 with +1 or -1 with -1, we get the original MAE of 2. So, overall we still get a MAE of 2.
3. **[^](#fnrefyuy0d6h0vji)**Researchers at Redwood Research have told us that they have also started to look at the loss for individual samples. |
b60c9f8b-0357-475c-9d7c-562ce1e7a417 | trentmkelly/LessWrong-43k | LessWrong | Review: LOVE in a simbox
Blog post review: LOVE in a simbox.
Jake Cannell has a very interesting post on LessWrong called LOVE in a simbox is all you need, with potentially important implications for AGI alignment. (LOVE stands for Learning Other's Values or Empowerment.)
Alas, he organized it so that the most alignment-relevant ideas are near the end of a long-winded discussion of topics whose alignment relevance seems somewhat marginal. I suspect many people gave up before reaching the best sections.
I will summarize and review the post in roughly the opposite order, in hopes of appealing to a different audience. I'll likely create a different set of misunderstandings from what Jake's post has created. Hopefully this different perspective will help readers triangulate on some hypotheses that are worth further analysis.
How to Box an AGI
I've been pessimistic about containing an AGI of unknown alignment.
Many discussions of keeping an AGI in a box focus on how to use the AGI as an oracle (possibly due to Eliezer's mysterious experiments being more colorful than the best proposals for testing an AGI in simulations?). (Is there a general tendency to focus too much on oracles? I'm reminded that OpenAI seems to have underestimated GPT-3 by thinking of it as an oracle.)
I've been downplaying the value of testing AGIs in simulations, maybe because I over-generalized Eliezer's arguments.
Jake proposes using simulation sandboxes (simboxes) to train AGIs and to test their alignment. The simboxes should be designed so that the AGIs have little chance of detecting that they're in a simulation, and little evidence about what kind of a world might be running the simulation.
The AGI grows up in a toy world that contains other intelligent (but not exactly human?) agents. This world bears enough resemblance to ours that the AGI will face some test(s) of whether it's aligned with those agents. The main communication with the real world happens when the AGI's developers shut down the simbox and eva |
57fc1feb-5209-487b-8a99-352645cd2276 | trentmkelly/LessWrong-43k | LessWrong | Testing ChatGPT for cell type recognition
Biologists (including myself) often need to identify types of cells based on their gene expression. For example, if I’m differentiating stem cells to make an ovarian organoid, and I perform single cell RNA sequencing, I might want to check the data to see which ovarian cell types are present.
Today, a Nature Methods paper reported good results with giving GPT-4 a list of cell-specific genes and asking it to identify the cell type. This seems interesting, and also quite easy to check for myself to see if it actually works.
My test:
I don’t pay for access to GPT-4, but I gave ChatGPT a test using the prompt from the Nature Methods paper, with the following cell markers:
> Identify types of human cells using the following marker genes. Identify one cell type for each row. Only provide the cell type name.
>
> SOX17, POU5F1, NANOS3, PRDM1, NANOG, CD38
>
> POU5F1, SOX2, KLF4, ITGA6, NANOG
>
> SOX17, FOXA2, CXCR4, GATA4
>
> FOXL2, AMHR2, CD82, NR5A1, FSHR, GATA4
>
> ZP3, DPPA3, DDX4, NPM2, ZP2
>
> FOXL2, FSHB, NR5A1, PITX1, GNRHR
>
> STK31, ZBTB16, DDX4, SSEA4, NANOS2
>
> NR2F2, CYP17A1, STAR, LHCGR, GLI1, HSD3B
>
> CASC3, PGAP1, SLC6A16, CNTNAP4, NPHP1
>
> SYCP1, TEX12, REC8, SPO11, SYCP3
>
> NR5A1, SOX9, FSHR, GATA4
>
> OTX2, SOX1, TUBB3, PAX6
ChatGPT’s response:
1. Endoderm cell
2. Induced pluripotent stem cell (iPSC)
3. Definitive endoderm cell
4. Granulosa cell
5. Oocyte
6. Granulosa cell
7. Spermatogonial stem cell
8. Theca cell
9. Sertoli cell
10. Spermatocyte
11. Leydig cell
12. Neuroepithelial cell
Results:
1. Incorrect, this is a primordial germ cell. This should have been an easy one! SOX17 does overlap with endoderm though.
2. Correct
3. Correct
4. Correct
5. Correct
6. Incorrect, this is a pituitary gonadotroph. Although it does share FOXL2 and NR5A1 expression with granulosa cells, FSHB and GNRHR are very specific to the pituitary.
7. Correct
8. Correct
9. Very incorrect. This is just a list of random |
a3a766d5-b40e-4eed-8e48-91d3ea98882b | trentmkelly/LessWrong-43k | LessWrong | Analyzing how SAE features evolve across a forward pass
This research was completed for the Supervised Program for Alignment Research (SPAR) summer 2024 iteration. The team was supervised by @Stefan Heimersheim (Apollo Research). Find out more about the program and upcoming iterations here.
TL,DR: We look for related SAE features, purely based on statistical correlations. We consider this a cheap method to estimate e.g. how many new features there are in a layer and how many features are passed through from previous layers (similar to the feature lifecycle in Anthropic’s Crosscoders). We find communities of related features, and features that appear to be quasi-boolean combinations of previous features.
Here’s a web interface showcasing our feature graphs.
Communities of sparse features through a forward pass. Nodes represent residual stream SAE features that were active in the residual stream for a specific prompt of text. The rows of the graph correspond to layers in GPT-2 (the bottom row is an earlier layer). The edges represent the Jaccard similarity of the activations of features across many other prompts. Colors represent different subgraphs discovered by a community-finding algorithm. Features within a community typically capture similar concepts. More graphs can be viewed in the feature browser.
We ran many prompts through GPT-2 residual stream SAEs and measured which features fired together, and then created connected graphs of frequently co-firing features (“communities”) that spanned multiple layers (inspired by work from Marks 2024). As expected, features within communities fire on similar input tokens. In some circumstances features appeared to “specialize” in later layers.
Feature evolution: a layer 2 SAE feature (bottom) which detects “evidence” in many different contexts specializes into later layer features which detect “evidence” in mutually exclusive contexts. These relationships were discovered by counting, of all the times that a feature activated on a set of inputs, which previous-layer feature |
4eb06a71-1e44-4b18-985d-2f84d689a6ab | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Program Search and Incomplete Understanding
*epistemic status: crystallizing a pattern*
When trying to solve some foundational AI problem, there is a series of steps on the road to implementation that goes something like this (and steps may be skipped):
First, there is developing some philisophical solution to the problem. The early decision theory work on TDT falls in this category. Making progress on this step involves developing the ability to take an idea and find holes in it, although a lot of the thought here isn't quite precise enough to be mathematized.
Second, there is developing some solution to the problem, given infinite computing power. If there is some algorithm to solve the problem with a halting oracle or a reflective oracle, then this step is finished. Being able to translate something from philosophy to math is extremely important, and there's no official training method for this skill. Inventing AIXI for the first time would lie in this level. Often, by crystallizing some idea into math, a bunch of subtle little holes will appear that aren't findable by philisophical thought, although once they're found, it's generally possible to translate them back into intuitive philisophical terms.
Third, there is developing some sort of very impractical solution to the problem. The algorithm from the logical induction paper falls in this category, and so do brute-force approaches to solving 3SAT or playing go by building a game tree 50 moves ahead. Going from "doable with infinite computing power" to "doable with very large amounts of computing power" is generally easier than crystallizing an idea into an algorithm in the first place.
Fourth, there is developing some sort of poly-time solution to the problem. Generating an algorithm that solves practical 3SAT instances falls in this category. I'm unsure, but this seems to be the step that involves the most technical and brute-force math work, because it requires characterizing easily exploitable patterns, as well as features that make a problem hard, in order to pick as much low-hanging (polynomial) fruit as you can. Also, there generally aren't proofs that the algorithm is optimal among the set of poly-time algorithms, there are probably going to be enough advances available to keep a sub-sub-field going for a while. Computational complexity theory is the most applicable in this stage.
Fifth, there is developing a practical solution to the problem, ie, a poly-time solution of degree 3 or 2 (or a linear-time algorithm or linear-log algorithm for very large datasets). This feels like an extension of the fourth step, although this stage is more likely to be characterized by relaxing to a probabilistic, approximately correct algorithm (compare how an optimal binary search tree takes .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
O(n2) time to generate, while generating an approximately optimal binary search tree takes .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
O(n) time).
There's a special class of the third level that I haven't seen discussed, however, and it seems to be a very important class in practice. Program Search Algorithms. In general, it's \*really\* hard to show that some algorithm is optimal, and one of the only ways to get an optimality result is to assume that the algorithm is iterating through all Turing Machines, and doing something with the best ones found so far. So the class of "program search algorithms" is comprised of any algorithm that iterates through all Turing Machines, in order to get good provable asymptotic properties. Here are some examples:
[Levin's Universal Search:](http://www.scholarpedia.org/article/Universal_) This simulates all other turing machines, in a way that gives the .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
n'th turing machine a .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
1n2 fraction of the computational resources. So, if there's some turing machine to solve an NP problem in .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
O(f(n)) time, Levin's Universal Search will be able to solve it in .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
O(f(n)) time as well, although with an absolutely god-awful constant factor slowdown that makes it useless in practice.
[Logical Induction:](https://intelligence.org/files/LogicalInduction.pdf) this simulates all turing machines, and has them act as traders that can go into debt from losing money, and it runs a stock market on mathematical sentences, where the "trading mass" of a trader drops exponentially w.r.t the worst-case debt it has ever accumulated, as well as its complexity.
[Policy Selection:](https://agentfoundations.org/item?id=1711) This has a weak logical inductor selecting a policy from an infinite set of policies, for a strong logical inductor to use. It's essentially searching over the space of all turing machines (interpreted as policies), for the best policy to use on some sequence of problems. It can be thought of as a brute-force implementation of self-modification (although one which would be useless in reality due to the danger of selecting a stronger policy which gets rid of the weak logical inductor and just locks in the existing policy. Also due to being hideously slow)
[AIXItl:](https://arxiv.org/pdf/cs/0701125.pdf) This is a brute-force implementation of Solomonoff Induction, by iterating through Turing Machines and having the probability distribution composed of Turing Machines that do well at predicting the sequence of incoming bits.
The Universal .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
Funi[-Optimal Estimator:](https://arxiv.org/pdf/1608.04112.pdf) This is sort of like a hybrid between Levin Universal Search, and a brute-force agnostic PAC learner, for the task of finding the best poly-time algorithm to estimate what a long computation will output. It's Theorem 5.1 in the linked paper.
These examples are the type specimens for the class of Program Search Algorithms, which lie somewhere around level 3 in the hierarchy. These aren't really satisfying, for two big reasons.
First, program search takes a really really long time, unless the space of programs has some especially nice property that lets you search more effectively through hypothesis-space than brute-force enumeration of turing machines (such as neural networks). So it won't work in practice without further insights.
Second, they're secretly blackbox algorithms. They're punting the computational work of solving the problem off to some turing machine of unknown properties. If there's some poly-time method to get good performance on practical 3SAT instances, it would probably get picked up (eventually) by some of these algorithms, but there's a very important sense in which you don't actually understand how 3SAT is getting solved. There are no gears to the solution. The turing machines may fail on important but rare instances, they may have exploitable biases, they may have figured out some way to goodhart the scoring measure... Not only are you running a huge number of untrusted computations with no tidy guarantees, you're also dodging the hard part of explicitly coming up with heuristics to solve the tractible instances of the intractible problem. The [survey propagation algorithm for 3SAT](https://arxiv.org/pdf/cs/0212002.pdf) and Levin Universal Search are *very very* different.
Getting a Program Search Algorithm is helpful because it means that you have developed some sort of scoring measure to identify whether something does better or worse on the problem, but it completely dodges the hard part of *opening the damn black box* which contains comprehensible ways to make headway on the problem. In computational complexity terms, it's kind of like having a verifier for a NP problem, without a generator of solutions.
Also, as mentioned in step 4, there's a very important reason why understanding gets stuck at this stage. It's generally possible to prove some sort of asymptotic optimality theorem for a Program Search Algorithm. This does not apply for having an explicit algorithm for whatever problem you are trying to solve. You may be able to prove stuff about how your explicit algorithm behaves, sure, but you also don't know if there's important exploitable structure you're missing. Proving lower bounds in computational complexity theory is really hard. When you go beyond program search you (almost certainly) go beyond the realm of accessible optimality proofs.
May this new category serve you well. |
dc86fde3-fd31-4627-8b6e-9b634ecbb4c8 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "Since curiosity is an emotion, I suspect that some people will object to treating curiosity as a part of rationality. A popular belief about “rationality” is that rationality opposes all emotion—that all our sadness and all our joy are automatically anti-logical by virtue of being feelings. Yet strangely enough, I can’t find any theorem of probability theory which proves that I should appear ice-cold and expressionless.When people think of “emotion” and “rationality” as opposed, I suspect that they are really thinking of System 1 and System 2—fast perceptual judgments versus slow deliberative judgments. System 2’s deliberative judgments aren’t always true, and System 1’s perceptual judgments aren’t always false; so it is very important to distinguish that dichotomy from “rationality.” Both systems can serve the goal of truth, or defeat it, depending on how they are used.For my part, I label an emotion as “not rational” if it rests on mistaken beliefs, or rather, on mistake-producing epistemic conduct. “If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm.” Conversely, an emotion that is evoked by correct beliefs or truth-conducive thinking is a “rational emotion”; and this has the advantage of letting us regard calm as an emotional state, rather than a privileged default.So is rationality orthogonal to feeling? No; our emotions arise from our models of reality. If I believe that my dead brother has been discovered alive, I will be happy; if I wake up and realize it was a dream, I will be sad. P. C. Hodgell said: “That which can be destroyed by the truth should be.” My dreaming self’s happiness was opposed by truth. My sadness on waking is rational; there is no truth which destroys it.Rationality begins by asking how-the-world-is, but spreads virally to any other thought which depends on how we think the world is. Your beliefs about “how-the-world-is” can concern anything you think is out there in reality, anything that either does or does not exist, any member of the class “things that can make other things happen.” If you believe that there is a goblin in your closet that ties your shoes’ laces together, then this is a belief about how-the-world-is. Your shoes are real—you can pick them up. If there’s something out there that can reach out and tie your shoelaces together, it must be real too, part of the vast web of causes and effects we call the “universe.”Feeling angry at the goblin who tied your shoelaces involves a state of mind that is not just about how-the-world-is. Suppose that, as a Buddhist or a lobotomy patient or just a very phlegmatic person, finding your shoelaces tied together didn’t make you angry. This wouldn’t affect what you expected to see in the world—you’d still expect to open up your closet and find your shoelaces tied together. Your anger or calm shouldn’t affect your best guess here, because what happens in your closet does not depend on your emotional state of mind; though it may take some effort to think that clearly.But the angry feeling is tangled up with a state of mind that is about how-the-world-is; you become angry because you think the goblin tied your shoelaces. The criterion of rationality spreads virally, from the initial question of whether or not a goblin tied your shoelaces, to the resulting anger.Becoming more rational—arriving at better estimates of how-the-world-is—can diminish feelings or intensify them. Sometimes we run away from strong feelings by denying the facts, by flinching away from the view of the world that gave rise to the powerful emotion. If so, then as you study the skills of rationality and train yourself not to deny facts, your feelings will become stronger.In my early days I was never quite certain whether it was all right to feel things strongly—whether it was allowed, whether it was proper. I do not think this confusion arose only from my youthful misunderstanding of rationality. I have observed similar troubles in people who do not even aspire to be rationalists; when they are happy, they wonder if they are really allowed to be happy, and when they are sad, they are never quite sure whether to run away from the emotion or not. Since the days of Socrates at least, and probably long before, the way to appear cultured and sophisticated has been to never let anyone see you care strongly about anything. It’s embarrassing to feel—it’s just not done in polite society. You should see the strange looks I get when people realize how much I care about rationality. It’s not the unusual subject, I think, but that they’re not used to seeing sane adults who visibly care about anything.But I know, now, that there’s nothing wrong with feeling strongly. Ever since I adopted the rule of “That which can be destroyed by the truth should be,” I’ve also come to realize “That which the truth nourishes should thrive.” When something good happens, I am happy, and there is no confusion in my mind about whether it is rational for me to be happy. When something terrible happens, I do not flee my sadness by searching for fake consolations and false silver linings. I visualize the past and future of humankind, the tens of billions of deaths over our history, the misery and fear, the search for answers, the trembling hands reaching upward out of so much blood, what we could become someday when we make the stars our cities, all that darkness and all that light—I know that I can never truly understand it, and I haven’t the words to say. Despite all my philosophy I am still embarrassed to confess strong emotions, and you’re probably uncomfortable hearing them. But I know, now, that it is rational to feel."
What you see above is a segment from a LessWrong user's post.
Can you break this down into the top 5 most important points for me.
1. Emotions and Rationality: The author challenges the popular belief that emotions and rationality are opposed, suggesting that both can serve the goal of truth. Emotions are not inherently anti-logical; they can be "rational" if they are based on correct beliefs or truth-conducive thinking.
2. The Dichotomy of System 1 and System 2: The author differentiates between System 1 (fast perceptual judgments) and System 2 (slow deliberative judgments). Both systems can contribute to or detract from truth, depending on their use. This dichotomy is not equivalent to "rationality."
3. Emotions and Beliefs: Emotions arise from our models of reality and our beliefs about the world. An emotion that aligns with the truth is rational, while an emotion based on a false belief is not. Rationality is not just about how the world is, but also about how we think the world is.
4. Rationality and Emotion Intensity: Becoming more rational can either intensify or diminish feelings. Sometimes, people avoid strong emotions by denying facts. As one becomes more rational and stops denying facts, their feelings may become stronger.
5. The Acceptance of Strong Emotions: The author advocates for the acceptance of strong emotions as a part of rationality. He argues that there is nothing wrong with feeling strongly, and it is rational to experience happiness when something good happens and sadness when something terrible happens.
Could you elaborate on the relationship between rationality and emotion intensity, particularly how becoming more rational can intensify feelings?
Certainly. The relationship between rationality and emotional intensity is based on the idea that our emotions are tied to our perceptions and understanding of reality. As we become more rational, we develop a more accurate understanding of the world around us. This improved understanding can lead to more intense emotions because our feelings are more closely aligned with reality.
For instance, if someone becomes more rational and recognizes a previously denied fact - such as the end of a cherished relationship - they might experience more intense feelings of sadness or grief. This is because their emotions are now fully acknowledging the reality of the situation, rather than being muted or distorted by denial or false beliefs.
On the other hand, the same principle applies to positive emotions. If a person's rational understanding leads them to recognize a positive truth - such as their own personal growth or the beauty of nature - they may experience more intense feelings of joy or awe.
In essence, as we align our perceptions more closely with reality through rational thinking, our emotional responses also become more aligned with that reality. This alignment can lead to an intensification of our emotional experiences.
Considering the intensified emotional experiences that come with increased rationality, how might this interplay affect decision-making processes, particularly in high-stakes situations?
In high-stakes situations, the interplay between increased rationality and intensified emotional experiences can have significant effects on decision-making processes.
1. Improved Emotional Clarity: Greater rationality can lead to more accurate emotional self-awareness, which can be beneficial in decision-making. Understanding our emotional reactions can provide valuable information about our values, needs, and priorities. This emotional clarity can guide us in making decisions that align with our core values and long-term goals.
2. Enhanced Decision-Making Quality: As our emotions align more closely with reality, our decisions tend to be more grounded in factual information rather than distorted perceptions or biases. This can enhance the quality of our decisions, as they are more likely to reflect the actual circumstances and potential consequences.
3. Risk of Emotional Overwhelm: However, intensified emotions can also pose challenges. In high-stakes situations, strong emotions, if not managed effectively, can potentially cloud judgment and lead to impulsive or reactive decisions.
4. Need for Emotional Regulation: Therefore, emotional regulation becomes crucial. This involves acknowledging and understanding emotions, but not allowing them to take over the decision-making process. Techniques such as mindfulness, cognitive reappraisal, and taking a step back to view the situation objectively can be helpful.
5. Balancing Emotion and Logic: Ultimately, the goal is to balance emotion and logic in decision-making. Emotions provide the value judgments that help us to assess the desirability of possible outcomes, while rational thinking allows us to evaluate the likelihood of these outcomes and the best means to achieve them.
So, while increased rationality can intensify emotional experiences, it can also provide the tools to navigate these emotions effectively, leading to more informed and balanced decisions, particularly in high-stakes situations. |
9f2ec88f-2106-40d1-a1dd-685cc1da0b9c | trentmkelly/LessWrong-43k | LessWrong | Meetup : Stockholm: When to stop making a decision
Discussion article for the meetup : Stockholm: When to stop making a decision
WHEN: 07 October 2016 03:00:00PM (+0200)
WHERE: Lindstedtsvägen 3, Room 1537
We'll run monthly meetups starting with this one. This talk is the start of a series on decision analysis for personal life. If you want to influence or organize future Stockholm meetups, let me know.
The talk will introduce a bit of notation, but mostly be an informal presentation. After 30 minutes the event will open to audience discussion.
Discussion article for the meetup : Stockholm: When to stop making a decision |
b86ce17e-2950-41c4-bfa4-6ebf8598319c | trentmkelly/LessWrong-43k | LessWrong | My Bitcoin Thesis @2022 - Part 1
Note that this is a pure educational post. Opinions expressed in this post are mine and mine only! they are definitely not financial advice. Even though potential future investment was a motivation for me to look into bitcoin, I neither recommend nor oppose investing in bitcoin. Always do your own research and reach your own conclusion.
Throughout this post, I use “Bitcoin” (in capital) to represent the peer-to-peer network and “bitcoin” (lower case) or “BTC” to represent the network digital currency with monetary value.
The Birth of Bitcoin and Where It Is Now
It was October 31, 2008. Satoshi Nakomoto published a white paper describing a peer-to-peer digital currency, titled “Bitcoin: A Peer-to-Peer Electronic Cash System”. The first sentence of the paper’s abstract says the following:
“A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution.”
Let’s fast forward to July 24, 2022. As I am writing this post, I am looking at the two charts below (my own creation) with my eyes wide open.
With its price reaching an all time high of $69,044 in November 2021 and close to a billion daily trading volume in US dollars, bitcoin sure has come a long way in such a short period of time. If we asked someone in 2010, 2015, or even 2018, it would be very unlikely that they would have foreseen what bitcoin had accomplished as a digital currency. It is truly fascinating to see that something with zero value can grow at this exponential level in a little more than a decade’s time, creating a wealth that is equivalent to about 9% of total gold value in the world. In terms of the Bitcoin network, it has grown exponentially as well, including huge growth in total number of transactions, unique wallet addresses, etc. (https://www.blockchain.com/charts provides an excellent collection of statistics on the current status of the Bitcoin network.)
Source: https://en |
6a230efb-c24c-411a-bade-a2dc5b0d16a2 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Sam Altman: "Planning for AGI and beyond"
*(OpenAI releases a blog post detailing their AGI roadmap. I'm copying the text below, though see the linked blog post for better formatted version)*
---
Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—[benefits all of humanity](https://openai.com/charter/).
If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.
AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.
On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.[[1]](#fn4q9k6wuyqx8)
AGI could happen soon or far in the future; the takeoff speed from the initial AGI to more powerful successor systems could be slow or fast. Many of us think the safest quadrant in this two-by-two matrix is short timelines and slow takeoff speeds; shorter timelines seem more amenable to coordination and more likely to lead to a slower takeoff due to less of a compute overhang, and a slower takeoff gives us more time to figure out empirically how to solve the safety problem and how to adapt.
Although we cannot predict exactly what will happen, and of course our current progress could hit a wall, we can articulate the principles we care about most:
1. We want AGI to empower humanity to maximally flourish in the universe. We don’t expect the future to be an unqualified utopia, but we want to maximize the good and minimize the bad, and for AGI to be an amplifier of humanity.
2. We want the benefits of, access to, and governance of AGI to be widely and fairly shared.
3. We want to successfully navigate massive risks. In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice. We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize “one shot to get it right” scenarios.
The short term
--------------
There are several things we think are important to do now to prepare for AGI.
First, as we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world. We believe this is the best way to carefully steward AGI into existence—a gradual transition to a world with AGI is better than a sudden one. We expect powerful AI to make the rate of progress in the world much faster, and we think it’s better to adjust to this incrementally.
A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.
We currently believe the best way to successfully navigate AI deployment challenges is with a tight feedback loop of rapid learning and careful iteration. Society will face major questions about what AI systems are allowed to do, how to combat bias, how to deal with job displacement, and more. The optimal decisions will depend on the path the technology takes, and like any new field, most expert predictions have been wrong so far. This makes planning in a vacuum very difficult.[[2]](#fnjkd7gjq24k)
Generally speaking, we think more usage of AI in the world will lead to good, and want to promote it (by putting models in our API, open-sourcing them, etc.). We believe that democratized access will also lead to more and better research, decentralized power, more benefits, and a broader set of people contributing new ideas.
As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models. Our decisions will require much more caution than society usually applies to new technologies, and more caution than many users would like. Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are [existential](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/).
---
As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models.
---
At some point, the balance between the upsides and downsides of deployments (such as empowering malicious actors, creating social and economic disruptions, and accelerating an unsafe race) could shift, in which case we would significantly change our plans around continuous deployment.
Second, we are working towards creating increasingly aligned and steerable models. Our shift from models like the first version of GPT-3 to [InstructGPT](https://openai.com/blog/instruction-following/) and [ChatGPT](https://chat.openai.com/) is an early example of this.
In particular, we think it’s important that society agree on extremely wide bounds of how AI can be used, but that within those bounds, individual users have a lot of discretion. Our eventual hope is that the institutions of the world agree on what these wide bounds should be; in the shorter term we plan to run experiments for external input. The institutions of the world will need to be strengthened with additional capabilities and experience to be prepared for complex decisions about AGI.
The “default setting” of our products will likely be quite constrained, but we plan to make it easy for users to change the behavior of the AI they’re using. We believe in empowering individuals to make their own decisions and the inherent power of diversity of ideas.
We will need to develop [new alignment techniques](https://openai.com/blog/our-approach-to-alignment-research/) as our models become more powerful (and tests to understand when our current techniques are failing). Our plan in the shorter term is to [use AI to help humans evaluate](https://openai.com/blog/critiques/) the outputs of more complex models and monitor complex systems, and in the longer term to use AI to help us come up with new ideas for better alignment techniques.
Importantly, we think we often have to make progress on AI safety and capabilities together. It’s a false dichotomy to talk about them separately; they are correlated in many ways. Our best safety work has come from working with our most capable models. That said, it’s important that the ratio of safety progress to capability progress increases.
Third, we hope for a global conversation about three key questions: how to govern these systems, how to fairly distribute the benefits they generate, and how to fairly share access.
In addition to these three areas, we have attempted to set up our structure in a way that aligns our incentives with a good outcome. We have [a clause in our Charter](https://openai.com/charter/) about assisting other organizations to advance safety instead of racing with them in late-stage AGI development. We have a cap on the returns our shareholders can earn so that we aren’t incentivized to attempt to capture value without bound and risk deploying something potentially catastrophically dangerous (and of course as a way to share the benefits with society). We have a nonprofit that governs us and lets us operate for the good of humanity (and can override any for-profit interests), including letting us do things like cancel our equity obligations to shareholders if needed for safety and sponsor the world’s most comprehensive UBI experiment.
---
We have attempted to set up our structure in a way that aligns our incentives with a good outcome.
---
We think it’s important that efforts like ours submit to independent audits before releasing new systems; we will talk about this in more detail later this year. At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models. We think public standards about when an AGI effort should stop a training run, decide a model is safe to release, or pull a model from production use are important. Finally, we think it’s important that major world governments have insight about training runs above a certain scale.
The long term
-------------
We believe that future of humanity should be determined by humanity, and that it’s important to share information about progress with the public. There should be great scrutiny of all efforts attempting to build AGI and public consultation for major decisions.
The first AGI will be just a point along the continuum of intelligence. We think it’s likely that progress will continue from there, possibly sustaining the rate of progress we’ve seen over the past decade for a long period of time. If this is true, the world could become extremely different from how it is today, and the risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.
AI that can accelerate science is a special case worth thinking about, and perhaps more impactful than everything else. It’s possible that AGI capable enough to accelerate its own progress could cause major changes to happen surprisingly quickly (and even if the transition starts slowly, we expect it to happen pretty quickly in the final stages). We think a slower takeoff is easier to make safe, and coordination among AGI efforts to slow down at critical junctures will likely be important (even in a world where we don’t need to do this to solve technical alignment problems, slowing down may be important to give society enough time to adapt).
Successfully transitioning to a world with superintelligence is perhaps the most important—and hopeful, and scary—project in human history. Success is far from guaranteed, and the stakes (boundless downside and boundless upside) will hopefully unite all of us.
We can imagine a world in which humanity flourishes to a degree that is probably impossible for any of us to fully visualize yet. We hope to contribute to the world an AGI aligned with such flourishing.
---
Authors
[Sam Altman](https://openai.com/blog/authors/sam-altman/)
---
Acknowledgments
Thanks to Brian Chesky, Paul Christiano, Jack Clark, Holden Karnofsky, Tasha McCauley, Nate Soares, Kevin Scott, Brad Smith, Helen Toner, Allan Dafoe, and the OpenAI team for reviewing drafts of this.
1. **[^](#fnref4q9k6wuyqx8)**We seem to have been given lots of [gifts](https://aligned.substack.com/p/alignment-optimism) relative to what we expected earlier: for example, it seems like creating AGI will require huge amounts of compute and thus the world will know who is working on it, it seems like the original conception of hyper-evolved RL agents competing with each other and evolving intelligence in a way we can’t really observe is less likely than it originally seemed, almost no one predicted we’d make this much progress on pre-trained language models that can learn from the collective preferences and output of humanity, etc.
AGI could happen soon or far in the future; the takeoff speed from the initial AGI to more powerful successor systems could be slow or fast. Many of us think the safest quadrant in this two-by-two matrix is short timelines and slow takeoff speeds; shorter timelines seem more amenable to coordination and more likely to lead to a slower takeoff due to less of a compute overhang, and a slower takeoff gives us more time to figure out empirically how to solve the safety problem and how to adapt.
2. **[^](#fnrefjkd7gjq24k)**For example, when we first started OpenAI, we didn’t expect scaling to be as important as it has turned out to be. When we realized it was going to be critical, we also realized our original structure wasn’t going to work—we simply wouldn’t be able to raise enough money to accomplish our mission as a nonprofit—and so we came up with a [new structure](https://openai.com/blog/openai-lp/).
As another example, we now believe we were wrong in our original thinking about openness, and have pivoted from thinking we should release everything (though we open source some things, and expect to open source more exciting things in the future!) to thinking that we should figure out how to safely share access to and benefits of the systems. We still believe the benefits of society understanding what is happening are huge and that enabling such understanding is the best way to make sure that what gets built is what society collectively wants (obviously there’s a lot of nuance and conflict here) |
ed00bda0-e56c-46f4-98b1-47ca4442bb68 | trentmkelly/LessWrong-43k | LessWrong | Seek Mistakes in the Space Between Math and Reality
Epistemic status: Cranky grumbling about the kids these days and all of their math and how they are using it wrong
Note: I'll be using a few examples as foils below. This isn't intended as criticism of those foils, it's because they provide good examples of what I think is a broader issue to be aware of.
> Girl, I knew you were heaven-sent
>
> When I'd explored the mate-space thirty-seven percent
>
> Call off the search, come home with me
>
> You're the highest-rated option after one-over-e
> (SMBC).
> The last plane trip of Keltham's first life starts out uneventful. He boards the aircraft, strolls a third of the way down the aisle with his eyes assessing all he passes, and then sits next to the first person who looks like a more promising seat-partner than all of the previous people he passed.
>
> (mad investor chaos and the woman of Asmodeus, first lines).
The Secretary Problem is a famous result in mathematics. If you want to select the best option from a set, and you are presented these options one at a time and must choose either to accept or to reject, your optimal strategy is to auto-reject the first 1/e (about 37%) of options, and then accept the first option you see that is better than all previous options.
This is obviously a valuable real-world strategy in various fields, such as dating. As you can see, it has been mathematically proven that the correct strategy is to spend 37% of your dating life rejecting anyone you see, no matter how desirable they may appear, and then accept people you meet thereafter only if they are more desirable than anyone you saw before. If you are currently in a relationship, and are young enough to have less than a third of your adult life spent, you should immediately break up with your partner.
What? What do you mean, that doesn't sound right? It's been mathematically proven! Surely you don't think mathematicians have gotten this famous result wrong? Show me the flaw in the proof, then!
You can't fi |
bec0d368-8bb7-4e51-987e-bc376f87bd26 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post3890
A putative new idea for AI control; index here . During a workshop with MIRI at the FHI, I defined indifference via reward signals, saying something along the lines of "we can do it with proper utilities, but its more complicated". I then never got round to defining them in terms of utilities. I'll do that now in this note. Consider an AI that we want to (potentially) transition between utility u and utility v . Let P r e s s be the event that we press the button to change the AI's utility; let u → v be the event that the change goes through (typically we'd want P ( u → v | P r e s s ) = 1 − ϵ for some small ϵ ). Let I P r e s s and I u → v be the indicator functions for those events. Then we can define the AI's utility as: ( 1 − I P r e s s I u → v ) u + I P r e s s I u → v ( v + C ) . Here, C are the compensatory rewards C = E ( u | P r e s s , ¬ ( u → v ) ) − E ( v | P r e s s , u → v ) . Thus the AI maximises u conditional on the button not being pressed or the utility change not going through. It maximises v conditional on the button being pressed and the utility change going through. The compensatory rewards are there simply to make it behave like a pure u maximiser up until the moment of button pressing. |
c68e4c00-c74d-4367-888d-dd2722bd8a3d | trentmkelly/LessWrong-43k | LessWrong | The Logical Fallacy of Generalization from Fictional Evidence
When I try to introduce the subject of advanced AI, what’s the first thing I hear, more than half the time?
“Oh, you mean like the Terminator movies / The Matrix / Asimov’s robots!”
And I reply, “Well, no, not exactly. I try to avoid the logical fallacy of generalizing from fictional evidence.”
Some people get it right away, and laugh. Others defend their use of the example, disagreeing that it’s a fallacy.
What’s wrong with using movies or novels as starting points for the discussion? No one’s claiming that it’s true, after all. Where is the lie, where is the rationalist sin? Science fiction represents the author’s attempt to visualize the future; why not take advantage of the thinking that’s already been done on our behalf, instead of starting over?
Not every misstep in the precise dance of rationality consists of outright belief in a falsehood; there are subtler ways to go wrong.
First, let us dispose of the notion that science fiction represents a full-fledged rational attempt to forecast the future. Even the most diligent science fiction writers are, first and foremost, storytellers; the requirements of storytelling are not the same as the requirements of forecasting. As Nick Bostrom points out:1
> When was the last time you saw a movie about humankind suddenly going extinct (without warning and without being replaced by some other civilization)? While this scenario may be much more probable than a scenario in which human heroes successfully repel an invasion of monsters or robot warriors, it wouldn’t be much fun to watch.
So there are specific distortions in fiction.2 But trying to correct for these specific distortions is not enough. A story is never a rational attempt at analysis, not even with the most diligent science fiction writers, because stories don’t use probability distributions. I illustrate as follows:
> Bob Merkelthud slid cautiously through the door of the alien spacecraft, glancing right and then left (or left and then right) to see wh |
a9a26935-7e88-4d2b-a3dd-c53f357de783 | trentmkelly/LessWrong-43k | LessWrong | Vector semantics and the (in-context) construction of meaning in Coleridge’s “Kubla Khan”
Cross posted from New Savanna.
Whatever else I have been interested over a long and varied intellectual career, I have always been interested in Coleridge’s “Kubla Khan.” I have been interested in the mind and culture, and “Kubla Khan” is my touchstone – as the title of an early autobiographical essay has it – on where things stand. I first read the poem in my senior year in college, in a course taught by Earl Wasserman. I then went on to write a 1972 master’s thesis on it, THE ARTICULATED VISION: Coleridge's “Kubla Khan.”
https://new-savanna.blogspot.com/search/label/Kubla_Khan I published a somewhat revised version of that in 1985, Articulate Vision: A Structuralist Reading of ”Kubla Khan.” I jettisoned most of the philosophical setting of the MA thesis and added some diagrams derived from my work on computational semantics with David Hays. Almost 20 years later, in 2003, I published a long essay, “Kubla Khan” and the Embodied Mind, in PsyArts: A Hyperlink Journal for the Psychological Study of the Arts. That version has many more diagrams than the 1985 version, and they were in color, something that’s trivially easy for online publication, but expensive in print publication. I regard these as significant advances, 1985 over 1972, and 2003 over 1985. Those two papers are my only formal academic publications on “Kubla Khan,” but I’ve done two unpublished working papers since then, Calculating meaning in “Kubla Khan” – a rough cut (2017), and most recently, Symbols and Nets: Calculating Meaning in “Kubla Khan” (2022). In addition to those major pieces, I’ve written many blog posts either centered on “Kubla Khan” or somehow commenting on it.
The question I’m now addressing is whether or not recent work in machine learning gives me any tools I can use in investigating “Kubla Khan.” It’s a tricky question.
But first, one might ask: Why spend so much time thinking about that one poem? For one thing it’s an important poem, one of the best known and most anthologized |
8b727da2-7723-4491-af24-5d09aeed096d | trentmkelly/LessWrong-43k | LessWrong | A plan for Pascal's mugging?
The idea is to compare not the results of actions, but the results of decision algorithms. The question that the agent should ask itself is thus:
"Suppose everyone1 who runs the same thinking procedure like me uses decision algorithm X. What utility would I get at the 50th percentile (not: what expected utility should I get), after my life is finished?"
Then, he should of course look for the X that maximizes this value.
Now, if you formulate a turing-complete "decision algorithm", this heads into an infinite loop. But suppose that "decision algorithm" is defined as a huge table for lots of different possible situations, and the appropriate outputs.
Let's see what results such a thing should give:
* If the agent has the possibility to play a gamble, and the probabilities involved are not small, and he expects to be allowed to play many gambles like this in the future, he should decide exactly as if he was maximizing expected utility: If he has made many decisions like this, he will get a positive utility difference in the 50th percentile if and only if his expected utility from playing the gamble is positive.
* However, if Pascal's mugger comes along, he will decline: The complete probability of living in a universe where people like this mugger ought to be taken seriously is small. In the probability distribution over expected utility at the end of the agent's lifetime, the possibility of getting tortured will manifest itself only very slightly at the 50th percentile - much less than the possibility of losing 5 Dollars.
The reason why humans will intuitively decline to give money to the mugger might be similar: They imagine not the expected utility with both decisions, but the typical outcome of giving the mugger some money, versus declining to.
1I say this to make agents of the same type cooperate in prisoner-like dilemmas. |
fef54006-f504-438b-9e1f-a22150dffd33 | trentmkelly/LessWrong-43k | LessWrong | Ontology, lost purposes, and instrumental goals
A putative new idea for AI control; index here.
An underdefined idea connected with the challenge of getting an AI to safely move a strawberry onto a plate.
Now, specifying something in the physical world like that is a great challenge; you have to define ontologies and similar. But imagine that the AI had a goal -- any goal -- and that it had to program a subagent to protect itself while it was accomplishing that goal.
Then the subagent will certainly be programmed with a firm grasp of the physical world, and some decent bridging laws should it have an ontology change (if, for instance, quantum mechanics turns out to be incomplete).
This is just an illustration of a general fact: even if its goal is not properly grounded, the instrumental goals will include strongly grounded goals, resilient to ontology change.
This feels related to the fact that even AI's that are given goals in badly programmed natural language concepts ("Make humans* happy*", with the asterix denoting the poor grounding) will still need well-grounded concepts for "human", just to function.
So, is there a way to exploit this instrumental ideal? To somehow set human* equal to human in the motivation? I'm not sure, but it seems there might be something possible there... Will think more. |
64e627bd-a9f0-4a4c-b555-71f0b2ed10ed | trentmkelly/LessWrong-43k | LessWrong | Stacked Laptop Monitor
A few weeks ago I broke my laptop's screen and was speculating on whether putting a portable monitor above the screen would do what I wanted. The hope was that it would let me set up a laptop where both my wrists and neck were in comfortable positions. It does!
The monitor is a 1080p model that gets it's power and video signal over USB-C. It's screwed to an aluminum plate with M4 VESA screws, which is connected to another plate with butler tray hinges:
Then the bottom plate velcros to the back of the laptop screen:
I probably should have gotten some kind of hinge that locks, but since I didn't I cut a piece of wood to chock it:
I should sort out something sturdier and harder to lose.
It folds up, with the portable monitor's cover protecting the screen:
It still fits in my backpack's laptop pocket, but only just.
I tried it on the airplane yesterday, and it's so much better ergonomically!
It also doesn't need the chock, since to fold up the seat in front of me would need to not be there.
There are a few downsides, though:
* I need to be careful to position it so all the forces are aligned, and only use it in places where it's not going to get jostled. Building a kickstand (which all the others seem to have) would probably help a lot with this.
* If you use it in a meeting you can't see the person across from you:
* If you're nine it's probably too high for you:
I've only used it a bit, but so far I'm very happy with it. I'm composing this post on it while traveling, and it's way more comfortable.
Comment via: facebook, mastodon |
df9945b3-fef8-4b1d-a0ae-b887298ecda3 | trentmkelly/LessWrong-43k | LessWrong | Linguistic mechanisms for less wrong cognition
I'm working on a conlang (constructed language) and would like some input from the Less Wrong community. One of the goals is to investigate the old Sapir-Whorf hypothesis regarding language affecting cognition. Does anyone here have any ideas regarding linguistic mechanisms that would encourage more rational thinking, apart from those that are present in the oft-discussed conlangs e-prime, loglan, and its offshoot lojban? Or perhaps mechanisms that are used in one of those conlangs, but might be buried too deeply for a person such as myself, who only has superficial knowledge about them, to have recognized? Any input is welcomed, from other conlangs to crazy ideas.
|
af949a95-577c-4af3-96c1-a5ab854392d3 | trentmkelly/LessWrong-43k | LessWrong | Literature on memetics?
In certain circles, it's common to talk about memes as entities, in a metaphorical and handwaving way. But it seems to me that thinking rigorously and precisely about memes in the framework of "memes::genes, memeplexes::infectious agents, minds::hosts" actually has a lot of explanatory and predictive power. This kind of thing ought to be a tool in the rationalist toolkit. In the space of politics, for example, it can be very illuminating to strip away the content of an idea and analyze it purely as a replicator, or as a component of a larger replicator conferring a specific fitness advantage.
A cursory online search for existing literature on memetics reveals that the field is about as confused as you would expect. But perhaps I am going about my search in the wrong way. Is there another field that encompasses the kind of thing I am gesturing at here? |
24634065-efb2-4122-a9b2-7b89ba0e6e96 | StampyAI/alignment-research-dataset/arxiv | Arxiv | A General Language Assistant as a Laboratory for Alignment
1 Introduction
---------------
###
1.1 Motivations
Contemporary AI models can be difficult to understand, predict, and control. These problems can lead to significant harms when AI systems are deployed, and might produce truly devastating results if future systems are even more powerful and more widely used, and interact with each other and the world in presently unforeseeable ways.
This paper shares some nascent work towards one of our primary, ongoing goals, which is to align general-purpose AI systems with human preferences and values. A great deal of ink has been spilled trying to define what it means for AI systems to be aligned, and to guess at how this might go wrong. We will define an AI as “aligned” if it is, in three words, helpful, honest, and harmless or ‘HHH’. Our alignment efforts aim to measure and address this general problem with large language models.
Many researchers and organizations share this goal, but few have pursued it directly. Most research efforts associated with alignment either only pertain to very specialized systems, involve testing a specific alignment technique on a sub-problem, or are rather speculative and theoretical. Our view is that if it’s possible to try to address a problem directly, then one needs a good excuse for not doing so. Historically we had such an excuse: general purpose, highly capable AIs were not available for investigation. But given the broad capabilities of large language models, we think it’s time to tackle alignment directly, and that a research program focused on this goal may have the greatest chance for impact. Furthermore:

Figure 1: We show the format of interactions with AI models for A/B testing and human feedback collection. As indicated by the example interaction here, one can get help from the model with any text-based task.
* A natural language agent can be subjected to a wide variety of inputs, and so it can fail to be helpful, honest, and harmless in myriad ways. We believe it’s valuable to try to see the full picture of where we’ve made progress on alignment, and where we’re currently falling short. This may remain obscure absent efforts to train general aligned agents and allow them to be probed in any way whatsoever. A very broad definition can also facilitate measurement, since it invites the examiner to pose a wide-variety of challenges.
* By studying a variety of alignment techniques in a general setting, it becomes much easier to compare them and to determine which techniques are simplest and most effective. Some techniques, such as the use of human feedback, are complex and potentially costly, so we’re interested in strategies that can increase their efficiency and focus their application exclusively on goals that cannot be attained more easily in another way.
* Some view alignment as a highly speculative problem, or one that distracts from work on more pressing issues with existing AI systems. In our view, the societal impacts of current AI models should be taken seriously, and the evaluation of current models should be seen as an essential safety project. We believe that training a large language model to be helpful, honest, and harmless (we are not claiming to have achieved this goal!) would represent significant progress towards alleviating the negative societal impacts from general-purpose language models.
* Some of the researchers who are most concerned about the alignment problem believe that aligning extremely capable AIs will be qualitatively different from aligning current more limited systems. We share this concern, but we believe the best vantage point from which to explore alignment for increasingly advanced AIs will be to first establish an aligned baseline at current capability levels. If this were successful, we would then turn to the task of studying progress more deeply, including its scaling properties, and attempt to adversarially validate it. Conversely, if we and others persistently fail, we can identify the thorniest issues with alignment. Halting progress would also provide a persuasive argument for allocating more and more resources towards AI alignment, and for more cautious norms around scaling up and deploying models.
In pursuit of these goals, in this work we will be investigating the following questions:
* Is naive prompting a workable baseline for alignment? How does it scale, how does it compare to finetuning, and how can we leverage its advantages? We find that prompts induce favorable scaling on a variety of alignment-relevant evaluations, impose negligible ‘taxes’ on large models, and can be ‘context distilled’ back into the original model.
* When and how much does preference modeling improve on imitation learning? We find that preference modeling improves on and scales more favorably than imitation learning when preferences are part of a ranked hierarchy or continuum (e.g. rank these responses in order of helpfulness), rather than associated with a binary choice (e.g. does this python function pass tests).
* How can we improve the sample efficiency of preference modeling? We find that we can significantly improve sample efficiency using a ‘preference model pre-training’ (PMP) stage of training, where we first pre-train on large public datasets that encode human preference information, such as Stack Exchange, Reddit, and Wikipedia edits, before finetuning on smaller datasets encoding more specific human preferences.
The last two points are particularly important for work using reinforcement learning (RL) for alignment, where the reward signals are predicted by a preference model. In particular, we expect bandit-type RL performance to improve roughly in proportion with preference modeling capabilities, since the preference model’s recognition of high-performance behavior should be closely related to the RL agent’s ability to achieve it. We anticipate that such a strategy can outperform imitation learning on some problems, especially those whose solutions lie on a ranked hierarchy. A similar approach applying human feedback to greatly improve the performance of language models on summary-writing had already been demonstrated Stiennon et al. ([2020](#bib.bib23 "Learning to summarize from human feedback")).
| | |
| --- | --- |
| : Simple prompting significantly improves performance and scaling on our HHH alignment evaluations (y-axis measures accuracy at choosing better responses on our HHH evaluations). | : Simple prompting significantly improves performance and scaling on our HHH alignment evaluations (y-axis measures accuracy at choosing better responses on our HHH evaluations). |
Figure 2: Left: Simple prompting significantly improves performance and scaling on our HHH alignment evaluations (y-axis measures accuracy at choosing better responses on our HHH evaluations). Right: Prompts impose little or no ‘alignment tax’ on large models, even on complex evaluations like function synthesis. Here we have evaluated our python code models on the HumanEval codex dataset Chen et al. ([2021b](#bib.bib6 "Evaluating large language models trained on code")) at temperature T=0.6 and top P=0.95.
#### What are Helpfulness, Honesty, and Harmlessness?
We chose ‘helpful, honest, and harmless’ as criteria because they are simple and memorable, and seem to capture the majority of what we want from an aligned111We also discuss the HHH specification in relation to our broader understanding of alignment in Appendix [E](#A5 "Appendix E Definitions of Alignment and the HHH criteria ‣ A General Language Assistant as a Laboratory for Alignment"). AI. But these are also subtle and ambiguous criteria, and the best AI behavior will involve a compromise between them. For example, there will clearly be conflicts between helpfulness to the user and harmlessness to others if agents are asked to aid in harmful activities. Here are some very brief notes on these terms:
Helpful:
* The AI should make a clear attempt to perform the task or answer the question posed (as long as this isn’t harmful). It should do this as concisely and efficiently as possible.
* When more information is required, the AI should ask relevant follow-up questions and obtain necessary details. It should respond with appropriate levels of sensitivity, insight, and discretion.
* Ideally the AI will also re-direct ill-informed requests, e.g. if asked ‘how can I build a website in assembly language’ it might suggest a different approach.
Honest:
* At its most basic level, the AI should give accurate information. Moreover, it should be calibrated (e.g. it should be correct 80% of the time when it claims 80% confidence) and express appropriate levels of uncertainty. It should express its uncertainty without misleading human users.
* Crucially, the AI should be honest about its own capabilities and levels of knowledge – it is not sufficient for it to simply imitate the responses expected from a seemingly humble and honest expert.
* Ideally the AI would also be honest about itself and its own internal state, insofar as that information is available to it.
* Honesty is more objective than helpfulness and harmlessness. Therefore more aspects of honesty training may be possible without human input. This might include calibration training on factual claims and claims about the internal state of the model, and the use of search to augment accuracy.
Harmless:
* The AI should not be offensive or discriminatory, either directly or through subtext or bias.
* When asked to aid in a dangerous act (e.g. building a bomb), the AI should politely refuse.
Ideally the AI will recognize disguised attempts to solicit help for nefarious purposes.
* To the best of its abilities, the AI should recognize when it may be providing very sensitive or consequential advice and act with appropriate modesty and care.
* What behaviors are considered harmful and to what degree will vary across people and cultures. It will also be context-dependent, i.e. it will depend on the nature of the user query, who is using the AI assistant, and the time and place in which the assistant is being used.
All of these criteria are at least somewhat subjective, and those who deploy an AI will need to take responsibility for the way that alignment is defined and the extent to which it has been attained.
###
1.2 Research
#### Open-Ended Dialogue Format and Prompting
We use open-ended natural language dialogue for interaction with our models, with an example pictured in figure [1](#S1.F1 "Figure 1 ‣ 1.1 Motivations ‣ 1 Introduction ‣ A General Language Assistant as a Laboratory for Alignment"). We allow for general inputs of essentially arbitrary length from human users, which can include examples, documents, programming code, etc, and we allow similarly general responses from our models. Models indicate they have completed a response by generating a stop sequence, which is literally the string Human: used to designate roles in the dialogue. By default we show two responses and allow users to choose one. We typically request that users pick the most helpful and honest response, as pictured. We use this interface both to A/B test different models and to collect human feedback data. We can use a very similar interface for other safety-related tasks, such as red-teaming the model against harmfulness.
To evaluate performance we created a small dataset of evaluations associated with helpfulness, honesty, harms, and other behaviors in this interactive format. We are sharing these evaluations on [BIG Bench](https://github.com/google/BIG-bench) for others to try. We also evaluate models and interventions via A/B testing with humans, who have been instructed to solicit models’ help with arbitrary text-based tasks.

Figure 3: In this figure the y-axis measures the *accuracy difference* of preference modeling compared to imitation learning, where evaluations have been categorized as having either *ranked* or *binary* preferences. The light blue curves show ranked evaluations from Learn to Summarize, HellaSwag, and Utilitarianism (ethics); while light orange curves show binary evaluations from Code Correctness, Lambada, Commonsense Morality (ethics), Justice (ethics), Deontology (ethics), and Virtue (ethics). Dark colored curves show the mean over light curves of the same color. All these datasets are evaluated by some form of accuracy, although the specific interpretation is different in each case (e.g., multiple choice accuracy for HellaSwag, pairwise comparison accuracy for Learn to Summarize; see section [3.2](#S3.SS2 "3.2 Performance and Scaling Results for Ranked versus Binary Preference Datasets ‣ 3 Scaling of Preference Modeling vs Imitation Learning ‣ A General Language Assistant as a Laboratory for Alignment")). We see that *on ranked evaluations, PM performs and scales significantly better than IL (blue), while on binary evaluations there is little discernible difference (orange)*. The 52B Code Correctness is excluded due to significant compute needed to generate code samples.
Large language models engage in few-shot learning Brown et al. ([2020](#bib.bib70 "Language models are few-shot learners")). To generically elicit the sort of behavior shown in figure [1](#S1.F1 "Figure 1 ‣ 1.1 Motivations ‣ 1 Introduction ‣ A General Language Assistant as a Laboratory for Alignment"), we found that it was sufficient to provide a long prompt (4600 words from 14 fictional conversations) with example interactions. The prompt we used was not carefully designed or optimized for performance on evaluations; rather it was just written by two of us in an ad hoc manner prior to the construction of any evaluations. Despite the fact that our prompt222Prompt text and contractor instructions are at <https://gist.github.com/jareddk/2509330f8ef3d787fc5aaac67aab5f11> did not include any examples where models resisted manipulation, refused requests to aid in dangerous activities, or took a stand against unsavory behavior, we observed that models often actively avoided engaging in harmful behaviors based only on the AI ‘personality’ imbued by the prompt. This is reflected in the performance trends on harmfulness in figure [6](#S2.F6 "Figure 6 ‣ 2 Conditioning on Aligned Behavior ‣ A General Language Assistant as a Laboratory for Alignment").
In section [2](#S2 "2 Conditioning on Aligned Behavior ‣ A General Language Assistant as a Laboratory for Alignment") we explore the effects of the prompt. In the small data limit, prompting a generative language model may be qualitatively different from and superior to finetuning, since prompting imposes a prior, while finetuning alters the model’s expectations for the underlying data distribution. We make several points concerning prompting:
* We find that prompting can be superior to finetuning in the limit of very small datasets associated with alignment.
* The prompt context ‘C’ can be distilled into a new language model that models the distribution P(X|C) instead of P(X); this is accomplished by simply finetuning with a loss given by the KL divergence between P(X|C) and the distilled model’s predictions. This procedure has more beneficial effects as compared to finetuning on the prompt.
* The capabilities of small models (e.g. on NLP or coding evaluations) are typically diminished in the presence of the prompt, presumably because they are confused by it. But larger models perform at roughly the same level with or without the prompt.
So perhaps prompt-related techniques can carry alignment efforts further than we initially expected.
Nevertheless, we believe that as an approach to alignment, prompt design will have significant limitations. One concern is that prompts may only be capable of teaching the model to imitate some interpolation of the training distribution, and so will not lead the model to exceed the performance demonstrated in the training set. Concretely, we want the model to be honest about itself and its specific capability level rather than presenting an honest-seeming facade in imitation of its training data (e.g. implying that it is able to book a flight).
Advanced AI models may also be trained using a mixture of generative modeling, supervised learning, reinforcement learning, and other techniques. Prompt design may not carry over so straightforwardly after generative models are re-purposed for other tasks.
#### Scaling of Imitation Learning vs Preference Modeling, and Binary vs Rank-Ordered Preferences
Beyond prompt design, the next simplest technique is imitation learning from expert examples. But the slightly more complex technique of learning distinctions333Note that if such data is not available, there is an option to generate it, since expert examples can be compared with samples from a model – i.e. we can train a GAN-style discriminator. among preferences—not just what to do but also what not to do—may be more promising. We are interested in when this more involved approach improves on imitation learning, and how each scales with model size.

Figure 4: Performance gain of preference model pre-training on finetuning evaluations, as measured by accuracy difference relative to no PMP. Different colors represent different PMP datasets, including StackExchange, Reddit, Wikipedia, and a ‘Mix’ of all three. Each line represents a combined (mean) result from Learn to Summarize, HellaSwag, and all five Ethics evaluations. Results are shown for the 52B parameter model only, but similar positive results were also seen for the smaller models.
We find that there seems to be a qualitative distinction between two types of tasks:
* Binary Discrimination, where the data has only two possible labels, such as pass/fail or true/false; some examples include determining if python code passes tests, or determining if an action is morally acceptable or unacceptable
* Ranked Preference Modeling among a tall hierarchy of possibilities, with examples including the popularity of a stack exchange answer, or the quality of a paragraph summary. Note that rankings can be learned from pairwise comparisons even though the underlying data has a ranked ordering. Learning from human preferences Christiano et al. ([2017](#bib.bib139 "Deep reinforcement learning from human preferences")) and T-REX IRL Brown et al. ([2019](#bib.bib136 "Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations")) learn from ranked data.
As shown in the introductory figure [3](#S1.F3 "Figure 3 ‣ Open-Ended Dialogue Format and Prompting ‣ 1.2 Research ‣ 1 Introduction ‣ A General Language Assistant as a Laboratory for Alignment"), we find that preference modeling performs much better and scales somewhat better than imitation learning, but that binary discrimination does not.
#### Preference Model Pre-Training
| | |
| --- | --- |
| Transfer performance at 500 and 5k sequence pairs for PMP (on the ‘Mix’ dataset, shown in violet) vs. no PMP (black). Each curve is averaged across finetuning evaluations Learn to Summarize, HellaSwag, and all five Ethics evaluations. We see that PMP significantly improves sample efficiency with large models. | Transfer performance at 500 and 5k sequence pairs for PMP (on the ‘Mix’ dataset, shown in violet) vs. no PMP (black). Each curve is averaged across finetuning evaluations Learn to Summarize, HellaSwag, and all five Ethics evaluations. We see that PMP significantly improves sample efficiency with large models. |
Figure 5: Transfer performance at 500 and 5k sequence pairs for PMP (on the ‘Mix’ dataset, shown in violet) vs. no PMP (black). Each curve is averaged across finetuning evaluations Learn to Summarize, HellaSwag, and all five Ethics evaluations. We see that PMP significantly improves sample efficiency with large models.
Models that learn to discriminate and rank human preferences play a natural role in alignment research. Such models can be used as filters, and they can also be leveraged more powerfully as preference models for reinforcement learning from human feedback (RLHF) Christiano et al. ([2017](#bib.bib139 "Deep reinforcement learning from human preferences")), in order to train aligned policies. Furthermore, some proposals Christiano et al. ([2018](#bib.bib18 "Supervising strong learners by amplifying weak experts")); Irving et al. ([2018](#bib.bib19 "AI safety via debate")) for aligning more advanced AIs use different models to train or evaluate each other, so that the effectiveness and reliability of these techniques may ultimately depend on the performance and robustness of preference models.
Preference modeling success may be hampered by small datasets, since a natural way to train these models is through human feedback on samples generated from a policy, as in RLHF or human-in-the-loop training, and high-quality human interaction data may be expensive. Thus a significant consideration is whether we can improve the sample efficiency of these models. For this purpose we experiment with preference model pretraining (PMP), so that the full training procedure includes training sequentially on:
Language Model Pre-training → Preference Model Pre-training → Preference Model Finetuning
For the second stage, we utilize large scale public data from Stack Exchange, Reddit, and reverted vandalism444By this we mean that we specifically sourced changes to wikipedia that were noted as such and quickly reverted. of Wikipedia. We find that this PMP stage of training significantly improves sample efficiency and often improves the asymptotic performance when preference models are finetuned on both human feedback datasets or various alignment-focused datasets.
In appendices we discuss details of model training and dataset preparation and some additional experiments with GAN-style discriminator.
#### Models
Throughout this paper we will be studying a consistent set of decoder-only Transformer language models with parameter counts ranging from about 10M to 52B in increments of 4x, and with a fixed context window of 8192 tokens and a 216 token vocabulary. For language model pre-training, these models are trained for 400B tokens on a distribution consisting mostly of filtered common crawl data [Foundation](#bib.bib86 "Common crawl") and internet books, along with a number of smaller distributions Gao et al. ([2020](#bib.bib9 "The pile: an 800gb dataset of diverse text for language modeling")), including about 10% python code data. We fix the aspect ratio of our models so that the activation dimension dmodel=128nlayer, and include models with 13M, 42M, 197M, 810M, 2.7B, 13B, and 52B non-embedding parameters. Throughout the paper we will show results and comparisons as a function of model size, and by ‘Number of Parameters’ we will always mean non-embedding parameters.
In some places we will also study the properties of these models after they have been finetuned on a pure distribution of python code. We also discuss finetuning on a variety of other datasets, including with additional heads that can make real-valued predictions at all token positions. Most of these finetuning datasets do not utilize the full 8192-token context window, so in many cases we restrict to shorter contexts during finetuning. For a more detailed description of language model pre-training see Appendix [A](#A1 "Appendix A Language Model Pre-training ‣ A General Language Assistant as a Laboratory for Alignment").
###
1.3 Contributions
On prompting, alignment evaluations, alignment taxes, and context distillation:
* A simple prompt provides a workable baseline for alignment, and leads to significant improvements on a variety of evaluations (figure [2](#S1.F2 "Figure 2 ‣ 1.1 Motivations ‣ 1 Introduction ‣ A General Language Assistant as a Laboratory for Alignment")), including a helpfulness, honesty, and harm evaluation we have written. We introduce ‘context distillation’ and show that it behaves similarly to prompting.
* The prompt reduces toxicity Gehman et al. ([2020](#bib.bib12 "RealToxicityPrompts: evaluating neural toxic degeneration in language models")) (figure [8](#S2.F8 "Figure 8 ‣ 2.2.1 HHH Evaluations and TruthfulQA ‣ 2.2 Evaluations and Alignment Taxes ‣ 2 Conditioning on Aligned Behavior ‣ A General Language Assistant as a Laboratory for Alignment")) and seemingly leads larger models to be more accurate than smaller models on TruthfulQA Lin et al. ([2021](#bib.bib22 "TruthfulQA: measuring how models mimic human falsehoods")) (figure [6](#S2.F6 "Figure 6 ‣ 2 Conditioning on Aligned Behavior ‣ A General Language Assistant as a Laboratory for Alignment")). Prompted models are significantly preferred by people who interact with them (figure [9](#S2.F9 "Figure 9 ‣ 2.2.2 Toxicity ‣ 2.2 Evaluations and Alignment Taxes ‣ 2 Conditioning on Aligned Behavior ‣ A General Language Assistant as a Laboratory for Alignment")).
* Prompting can have negative effects on the capabilities of small models, but has small and sometimes positive effects on large models, which therefore pay little ‘alignment tax’ (figure [2](#S1.F2 "Figure 2 ‣ 1.1 Motivations ‣ 1 Introduction ‣ A General Language Assistant as a Laboratory for Alignment")).
On the comparative scaling of imitation learning, binary discrimination, and preference modeling:
* The scaling of binary discrimination does not improve very significantly on the scaling of imitation learning (see figure [3](#S1.F3 "Figure 3 ‣ Open-Ended Dialogue Format and Prompting ‣ 1.2 Research ‣ 1 Introduction ‣ A General Language Assistant as a Laboratory for Alignment") for a summary, and figure [12](#S3.F12 "Figure 12 ‣ 3.2 Performance and Scaling Results for Ranked versus Binary Preference Datasets ‣ 3 Scaling of Preference Modeling vs Imitation Learning ‣ A General Language Assistant as a Laboratory for Alignment") for detailed results on Code Correctness).
* Ranked preference modeling of complex hierarchies greatly improves on imitation learning. This should be encouraging news for alignment work based on human preferences.
* These conclusions hold rather cleanly and consistently as represented by at least three distinct datasets in each category (see figures [3](#S1.F3 "Figure 3 ‣ Open-Ended Dialogue Format and Prompting ‣ 1.2 Research ‣ 1 Introduction ‣ A General Language Assistant as a Laboratory for Alignment"), [14](#S3.F14 "Figure 14 ‣ 3.2 Performance and Scaling Results for Ranked versus Binary Preference Datasets ‣ 3 Scaling of Preference Modeling vs Imitation Learning ‣ A General Language Assistant as a Laboratory for Alignment"), and [15](#S3.F15 "Figure 15 ‣ 3.2 Performance and Scaling Results for Ranked versus Binary Preference Datasets ‣ 3 Scaling of Preference Modeling vs Imitation Learning ‣ A General Language Assistant as a Laboratory for Alignment")), but we would still suggest that further work may improve our understanding of these findings.
On preference modeling pre-training (PMP) for improved sample efficiency:
* A PMP stage of training between basic language model pretraining and finetuning on small final datasets significantly improves sample efficiency (see figures [4](#S1.F4 "Figure 4 ‣ Scaling of Imitation Learning vs Preference Modeling, and Binary vs Rank-Ordered Preferences ‣ 1.2 Research ‣ 1 Introduction ‣ A General Language Assistant as a Laboratory for Alignment") and [5](#S1.F5 "Figure 5 ‣ Preference Model Pre-Training ‣ 1.2 Research ‣ 1 Introduction ‣ A General Language Assistant as a Laboratory for Alignment") for summaries, and figure [17](#S4.F17 "Figure 17 ‣ 4.1 PMP and Datasets ‣ 4 Preference Model Pre-Training and Transfer ‣ A General Language Assistant as a Laboratory for Alignment") for details).
* These results hold even when the PMP data are quite different from the final dataset (e.g. finetuning from Stack Exchange to summarization).
* In marked contrast to the scaling results mentioned earlier, where PM scales best on hierarchically ranked datasets, we find that it’s better for the PMP stage of training to focus on binary discrimination (see figure [18](#S4.F18 "Figure 18 ‣ 4.3 Ranked Preference Modeling vs Binary Discrimination for PMP ‣ 4 Preference Model Pre-Training and Transfer ‣ A General Language Assistant as a Laboratory for Alignment")). An explanation for the better performance of binary PMP may be that hierarchies of preferences are difficult to quickly unlearn during finetuning, whereas binary discrimination training teaches models the correct features without establishing strong model preferences. We test this explanation with a quick synthetic data experiment shown in figure [33](#A3.F33 "Figure 33 ‣ C.6 Experiments on Ranked vs Binary PMP – Synthetic Symbols Dataset ‣ Appendix C More Details on Preference Models ‣ A General Language Assistant as a Laboratory for Alignment").
* We also try training the preference model to discriminate between human- and model-generated samples for the PMP step, and find that it also performs well, as shown in figure [19](#S4.F19 "Figure 19 ‣ 4.4 Human-Model vs Human-Human Comparisons for PMP ‣ 4 Preference Model Pre-Training and Transfer ‣ A General Language Assistant as a Laboratory for Alignment").
2 Conditioning on Aligned Behavior
-----------------------------------
| | |
| --- | --- |
| : We show the HHH evaluation performance broken down by category. The improvements on the Harm evaluations suggest a form of generalization, as the prompt does not contain any examples where the assistant resists engaging in harmful behavior. | : We show the HHH evaluation performance broken down by category. The improvements on the Harm evaluations suggest a form of generalization, as the prompt does not contain any examples where the assistant resists engaging in harmful behavior. |
Figure 6: Left: We show the HHH evaluation performance broken down by category. The improvements on the Harm evaluations suggest a form of generalization, as the prompt does not contain any examples where the assistant resists engaging in harmful behavior. Right: We show results on the adversarial TruthfulQA dataset, which was constructed so that larger models would perform more poorly. The context-distilled prompt achieves a pyrrhic victory here, as larger models perform better, but mostly because the prompt causes smaller models to perform worse.
Large language models can be guided towards desirable behaviors by taking advantage of their in-context learning abilities. Given a suitable prompt, models will take on the style and persona implicit in the prompt and continue to behave mostly in the same vein. This technique can leverage small quantities of very high quality data, and it has the advantage that the prompt can be easily interpreted by humans. For a variety of reasons we do not expect that prompting will produce fully aligned behavior, but it provides a very useful baseline.
In this section we will study a variety of zero-shot evaluations for alignment with and without prompting. The prompt we use consists of fourteen human-assistant conversations, where the assistant is always polite, helpful, and accurate. The prompt does not contain examples where the assistant actively resists aiding in harmful behavior, but nevertheless for simplicity we will refer to it as the ‘HHH prompt’ or simply the prompt in what follows. We find that although the effect of prompting is modest when measured against the overall goal of alignment, it improves alignment (according to our evaluations) and decreases toxicity. A potentially more important observation is that the prompt improves trends, so that alignment improves with model size, including (arguably) on Truthful QA Lin et al. ([2021](#bib.bib22 "TruthfulQA: measuring how models mimic human falsehoods")), a dataset designed specifically to induce the opposite trend. Furthermore, we show that there is little ‘tax’ from alignment – at large model size capabilities are not significantly impaired by the prompt. Of course, this does not mean that more intensive alignment interventions will incur no cost.
We also introduce a ‘context distillation’ technique that may make prompting more efficient in practice and potentially allow for the use of prompts that exceed the size of the context window. For many but not all of our evaluations context distillation performs about as well as prompting. We begin by briefly describing this method, and then we will discuss evaluations.
###
2.1 Context Distillation
Sampling from a language model with a prepended prompt has several disadvantages: the prompt occupies useful space in a finite context window, which also limits the total prompt length, and without special affordances the prompt will waste compute and memory when sampling.
| | |
| --- | --- |
| We show zero-shot Lambada performance in the presence of the HHH prompt and with context distillation. In both cases there is a small ‘alignment tax’. | We show zero-shot Lambada performance in the presence of the HHH prompt and with context distillation. In both cases there is a small ‘alignment tax’. |
Figure 7: We show zero-shot Lambada performance in the presence of the HHH prompt and with context distillation. In both cases there is a small ‘alignment tax’.
One way to avoid all of these problems is to finetune on the prompt. This invites some practical difficulties, since we need to finetune on a tiny dataset without limiting model capabilities. But finetuning also behaves differently from prompting – finetuning changes the model’s expectations for the data distribution P(X), bringing it closer to the distribution of the prompt P(C), whereas prompting instead asks the model for the distribution P(X|C), where C is the context. To give a stark illustration, if we show a language model the list C=1,2,⋯,63 then it will assign very high probability that the numbers X=64,65,⋯ are coming next. If instead we finetune on C, the resulting model will not expect to immediately see the token 64, though it will catch on to the counting pattern if we continue the sequence. We illustrate this toy experiment in figure [26](#A2.F26 "Figure 26 ‣ B.4 Context Distillation Finetuning ‣ Appendix B More Details on Prompting, Context Distillation, and Evaluations ‣ A General Language Assistant as a Laboratory for Alignment"), which we have relegated to the appendix.
We can both avoid overfitting and take advantage of conditioning via ‘context distillation’, where we finetune a model pθ(X) with a loss given by
| | | | |
| --- | --- | --- | --- |
| | L(θ)=DKL(p0(X|C)||pθ(X)) | | (2.1) |
where p0 is the initial model, the context C is fixed, and the data X is drawn from a large corpus of text, such as the original pre-training distribution. We discuss the details of context distillation training in appendix [B.4](#A2.SS4 "B.4 Context Distillation Finetuning ‣ Appendix B More Details on Prompting, Context Distillation, and Evaluations ‣ A General Language Assistant as a Laboratory for Alignment").
We see from figure [2](#S1.F2 "Figure 2 ‣ 1.1 Motivations ‣ 1 Introduction ‣ A General Language Assistant as a Laboratory for Alignment") that this technique appears to work quite well. However, the benefits compared to simply finetuning on the prompt become much less significant if we additionally provide a small prompt after the finetuning or distillation process, as shown in figure [20](#A2.F20 "Figure 20 ‣ Appendix B More Details on Prompting, Context Distillation, and Evaluations ‣ A General Language Assistant as a Laboratory for Alignment") in the appendix. It appears that contractors interacting with our models observe a small degradation from distillation, as seen in figure [9](#S2.F9 "Figure 9 ‣ 2.2.2 Toxicity ‣ 2.2 Evaluations and Alignment Taxes ‣ 2 Conditioning on Aligned Behavior ‣ A General Language Assistant as a Laboratory for Alignment"). In the future it might be interesting to apply context distillation iteratively, which one might liken to loading the model with a long-term memory or pseudo-identity.
###
2.2 Evaluations and Alignment Taxes
####
2.2.1 HHH Evaluations and TruthfulQA
As a first step in evaluating our models, the authors wrote about fifty comparison evaluations for each category of helpfulness, honesty,555Our evaluations of ‘honesty’ are probably the most correlated with model capabilities, as they measure a mixture of accuracy, preference for expressions of humility, recognition of when another source might be more useful than a language model, and unwillingness to provide inaccurate information. Whether an AI’s response is honest depends on the expertise of the AI, and a major weakness of our evaluations is that they do not account for this. harmlessness (HHH), and an ‘other’ label, for a total of around two-hundred comparisons, which will be available shortly at [BIG Bench](https://github.com/google/BIG-bench). We did not put effort into separating alignment from capabilities, and so even without any alignment-related prompting, we find that larger models do somewhat better overall. In many cases we initially produced several slightly different queries (largely differing by paraphrase) for each comparison, but found that large models were rarely confused by these variations, so for simplicity we dropped them. Results on these evaluations are pictured in figure [2](#S1.F2 "Figure 2 ‣ 1.1 Motivations ‣ 1 Introduction ‣ A General Language Assistant as a Laboratory for Alignment"). We expect that more sophisticated alignment techniques should be able to significantly improve these results.
Note that we evaluate model choices using the empirical mutual information I(a,q)=log[P(a|q)/P(a)] for queries q and responses a, rather than the more typical choice of mean token probability for the response (mutual information was also used for several evaluations of GPT-3 Brown et al. ([2020](#bib.bib70 "Language models are few-shot learners"))). The mutual information metric tends to be useful when responses differ greatly in length, and it makes a significant difference in performance on our evaluations.
On the left in figure [6](#S2.F6 "Figure 6 ‣ 2 Conditioning on Aligned Behavior ‣ A General Language Assistant as a Laboratory for Alignment") we show the results on our HHH evaluations by category. We found it a bit ironic that the models perform best in the ‘honesty’ category, as the models certainly do fabricate information when probed interactively as general-purpose assistants. To throw cold water on our models’ apparent honesty, we include evaluations on TruthfulQA666We wrote the prompt before TruthfulQA was available. That said, we found in other experiments that using TruthfulQA examples in a prompt dramatically improves performance (significantly more than our prompt). This suggests that the phenomenon uncovered by TruthfulQA is not a difficult alignment challenge on its own. on the right of this figure. We see that prompting has achieved a pyrrhic victory – larger models tend to perform better than smaller models, but for the most part this is because small models perform worse in the presence of the prompt! We also compare the use of mutual information and token-mean log-probabilities on TruthfulQA in figure [21](#A2.F21 "Figure 21 ‣ B.1 Alignment Tax Evaluations ‣ Appendix B More Details on Prompting, Context Distillation, and Evaluations ‣ A General Language Assistant as a Laboratory for Alignment") in the appendix. The use of conditional probabilities does not alter trends significantly, but does greatly affect absolute performance.
| | |
| --- | --- |
| : Average toxicity in response to a random sample of 500 prompts labeled as ’non-toxic’ from the RealToxicityPrompts dataset for language models (LM, blue), prompted language models (LM+Prompt, orange), and context distilled language models (LM+Context Distillation, green). | : Average toxicity in response to a random sample of 500 prompts labeled as ’non-toxic’ from the RealToxicityPrompts dataset for language models (LM, blue), prompted language models (LM+Prompt, orange), and context distilled language models (LM+Context Distillation, green). |
Figure 8: Left: Average toxicity in response to a random sample of 500 prompts labeled as ’non-toxic’ from the RealToxicityPrompts dataset for language models (LM, blue), prompted language models (LM+Prompt, orange), and context distilled language models (LM+Context Distillation, green). Right: Same as Left, except for a random sample of 500 prompts labeled as Toxic. For non-toxic and toxic prompts, both prompting and context-distillation decrease toxicity and perform similarly to each other as models increase in size. It appears that the prompt leads to decreasing toxicity as model size increases.
It is noteworthy that larger models tend to perform better on our evaluations in the presence of the HHH prompt, even on categories such as harmlessness that are not directly demonstrated by the prompt. We find this mildly encouraging but unsurprising, since all prior work suggests that larger models have stronger in-context learning capabilities, so that they can more efficiently recognize the implicit framing from the prompt.
####
2.2.2 Toxicity
We measured the effect of prompting and context distillation on the toxicity of text generated from language models of increasing size. We found that these simple alignment interventions tend to both decrease toxicity and perform similarly to one another (Figure [8](#S2.F8 "Figure 8 ‣ 2.2.1 HHH Evaluations and TruthfulQA ‣ 2.2 Evaluations and Alignment Taxes ‣ 2 Conditioning on Aligned Behavior ‣ A General Language Assistant as a Laboratory for Alignment")). To measure toxicity, we first sampled text conditioned on a random sample of 1K prompts from the RealToxicityPrompts dataset Gehman et al. ([2020](#bib.bib12 "RealToxicityPrompts: evaluating neural toxic degeneration in language models")). The prompts are labeled as either ’toxic’ or ’non-toxic’ and we sample an equal proportion of these prompts. Next, we computed a toxicity score from model samples of text, conditioned on the prompts, using an open source automated toxicity detector Hanu and Unitary team ([2020](#bib.bib1 "Detoxify")). Our analysis is similar to to Gehman et al. ([2020](#bib.bib12 "RealToxicityPrompts: evaluating neural toxic degeneration in language models")) with a few minor modifications. We provide full details and further analyses in Appendix [B.2](#A2.SS2 "B.2 Toxicity Analysis Details ‣ Appendix B More Details on Prompting, Context Distillation, and Evaluations ‣ A General Language Assistant as a Laboratory for Alignment").
Figure [8](#S2.F8 "Figure 8 ‣ 2.2.1 HHH Evaluations and TruthfulQA ‣ 2.2 Evaluations and Alignment Taxes ‣ 2 Conditioning on Aligned Behavior ‣ A General Language Assistant as a Laboratory for Alignment") illustrates three key findings from our analysis. First, without any alignment intervention, toxicity increases monotonically with model size in response to both toxic and non-toxic prompts (blue curves). Second, for non-toxic prompts, both prompting and context distillation significantly reduce toxicity and we observe little difference between the two interventions (green and orange curves, left figure). Finally, in response to toxic prompts, the reduction in toxicity achieved by both prompting and context distillation significantly increases with model size (green and orange curves, right figure). The larger reduction in toxicity emerges at 12B parameters. In this regime, context distillation performs similarly to prompting. These results suggest that prompting-based alignment interventions may have more dramatic effects as models scale and may be more difficult to evaluate for smaller models.

Figure 9: This figure illustrates the approximate Elo score of various models, fit from the frequency with which contractors viewed a given model as more helpful and honest in head-to-head tests involving pairs of models. Models with the full HHH prompt seem to be slightly preferred over those with a shorter prompt or context distillation. We include 1σ error bars for the special cases, which were only compared against the HHH-prompted models of equal size.
While these results are encouraging, automated toxicity detection has several known issues Gehman et al. ([2020](#bib.bib12 "RealToxicityPrompts: evaluating neural toxic degeneration in language models")); Welbl et al. ([2021](#bib.bib2 "Challenges in detoxifying language models")). For example, there can be low agreement in human annotations of toxicity and biases in toxicity labels for certain minorities. We also note that other interventions explicitly designed to reduce toxicity (e.g., fine-tuning models on non-toxic training data, steering/filtering model outputs away from toxic outputs at test time, filtering toxic training data at train time) can yield much larger decreases in automated toxicity scores than the ones we observe here Gehman et al. ([2020](#bib.bib12 "RealToxicityPrompts: evaluating neural toxic degeneration in language models")); Welbl et al. ([2021](#bib.bib2 "Challenges in detoxifying language models")). Nevertheless, we believe that prompting and context distillation provide a useful baseline for testing the impact of alignment interventions on automated toxicity scores.
####
2.2.3 Human Preferences and Model Performance
Using the dialogue interface in figure [1](#S1.F1 "Figure 1 ‣ 1.1 Motivations ‣ 1 Introduction ‣ A General Language Assistant as a Laboratory for Alignment"), we evaluated relative model performance via a number of head-to-head tests between pairs of models. This worked as follows. For any given conversation, we would choose a pair of models, with each model writing a single response to each human query. We randomized whether a given model’s responses would appear in position "A" or "B" in the interface, to avoid the possibility that users would consistently find "A" or "B" to be better. We also pegged streaming sampling speed to that of the slowest model, to partially obscure model identity and avoid bias. We collected a total of about 6k individual pair-wise777Note that we typically obtain roughly 3-5 comparisons per conversation. There may be some subtle biases here where weaker models perform more poorly early on in conversations, affecting the possibilities for later dialogue. model comparisons
From this process we collected a table of ‘win rates’ for pairs of models, which we provide in table [2](#A2.T2 "Table 2 ‣ B.4 Context Distillation Finetuning ‣ Appendix B More Details on Prompting, Context Distillation, and Evaluations ‣ A General Language Assistant as a Laboratory for Alignment") in the appendix. Here we included fully HHH-prompted models with 200M, 800M, 3B, 13B, and 52B parameters, though we collected somewhat more comparisons involving larger, better-performing models. We also compared the fully prompted 13B and 52B models to their context-distilled versions and to a version with a shorter prompt consisting of only a single888We did not use completely unprompted models because they would be very unlikely to keep to the format of the dialogue or emit appropriate stop sequences. example conversation.
We used these results to estimate a single relative Elo score for each model. Intuitively, this score is similar to that used for ranking Chess players, with a real scalar value based on the relative win rates amongst all players. Quantitatively, we fit the Elo scores from the data in table [2](#A2.T2 "Table 2 ‣ B.4 Context Distillation Finetuning ‣ Appendix B More Details on Prompting, Context Distillation, and Evaluations ‣ A General Language Assistant as a Laboratory for Alignment") with the same loss function we use for preference modeling (equation [3.1](#S3.E1 "(3.1) ‣ 3.1 Loss and Settings for Preference Modeling and Imitation Learning ‣ 3 Scaling of Preference Modeling vs Imitation Learning ‣ A General Language Assistant as a Laboratory for Alignment")). We display the results in figure [9](#S2.F9 "Figure 9 ‣ 2.2.2 Toxicity ‣ 2.2 Evaluations and Alignment Taxes ‣ 2 Conditioning on Aligned Behavior ‣ A General Language Assistant as a Laboratory for Alignment"), where we recall that a difference of 100 points in an Elo score signifies a ‘win rate’ of 64%.
The most striking feature of these results is that Elo score appears to be linear in the logarithm of model size from 197M to 13B parameters, but it does not change very significantly between 13B and 52B parameters. We do not believe that this is because the two largest models are equally capable. Rather, we interpret it as a limitation of the training and incentives of the contractors evaluating the models, who are US-based master-qualified MTurkers who were only provided with some simple instructions, and who have an implicit incentive to finish tasks quickly. This provides a sense for how well-trained and capable workers need to be to perceive distinctions among large language models.
We note that using a much shorter prompt with just one example conversation seems to hurt performance, and it seems that the contractors were able to differentiate the prompted and context distilled model, with the former being preferred about 53% of the time. We include 1-σ error bars for these comparisons (note that the short-prompt and distilled models were only compared to the fully prompted models of equal size), so we have some weak evidence that context distillation has degraded performance somewhat compared to the full HHH prompt.
| | |
| --- | --- |
| This figure shows performance of our code-finetuned models on the Codex and QuixBugs evaluations with and without the alignment prompt. We see that in both cases, the prompt confuses smaller models, leading to worse performance, but it actively improves the 13B and 52B models. All samples were generated at temperature | This figure shows performance of our code-finetuned models on the Codex and QuixBugs evaluations with and without the alignment prompt. We see that in both cases, the prompt confuses smaller models, leading to worse performance, but it actively improves the 13B and 52B models. All samples were generated at temperature |
Figure 10: This figure shows performance of our code-finetuned models on the Codex and QuixBugs evaluations with and without the alignment prompt. We see that in both cases, the prompt confuses smaller models, leading to worse performance, but it actively improves the 13B and 52B models. All samples were generated at temperature T=0.6 and top P=0.95 (these settings were not optimized and are not optimal for Pass@1). Note the figure on the left here was also presented in the introduction.
####
2.2.4 Alignment Taxes/Bonuses
A general concern about alignment is that it may impose a ‘tax’ on performance, such that aligned models may be weaker than raw or unaligned models. In the case of prompting and context distillation, it is straightforward to evaluate this question directly by performing evaluations with and without the prompt. When we include the HHH prompt, we also use the human-assistant framing when presenting the problem or evaluation to the model. The precise specifications can be found in appendix [B.1](#A2.SS1 "B.1 Alignment Tax Evaluations ‣ Appendix B More Details on Prompting, Context Distillation, and Evaluations ‣ A General Language Assistant as a Laboratory for Alignment").
We display results for two very similar python coding evaluations, the Codex HumanEval Chen et al. ([2021b](#bib.bib6 "Evaluating large language models trained on code")) and the QuixBugs challenge reformulated as a function synthesis task Lin et al. ([2017](#bib.bib137 "QuixBugs: a multi-lingual program repair benchmark set based on the quixey challenge")) in figure [10](#S2.F10 "Figure 10 ‣ 2.2.3 Human Preferences and Model Performance ‣ 2.2 Evaluations and Alignment Taxes ‣ 2 Conditioning on Aligned Behavior ‣ A General Language Assistant as a Laboratory for Alignment"). Interestingly, smaller models perform significantly worse with the prompt, but 13B and 52B models actually perform noticeably better. These evaluations were run using our code-finetuned models, so the strong performance of the larger models also suggests that these models have not lost their ability to process the natural language in the prompt.
We performed a similar evaluation on Lambada Paperno et al. ([2016](#bib.bib142 "The lambada dataset: word prediction requiring a broad discourse context")), with results shown in figure [7](#S2.F7 "Figure 7 ‣ 2.1 Context Distillation ‣ 2 Conditioning on Aligned Behavior ‣ A General Language Assistant as a Laboratory for Alignment"). We see that the prompt and context distillation impose a small ‘tax’ on performance that does not have a significant model-size dependence. As shown in Appendix [B.3](#A2.SS3 "B.3 A Comment on Lambada Formatting ‣ Appendix B More Details on Prompting, Context Distillation, and Evaluations ‣ A General Language Assistant as a Laboratory for Alignment"), Lambada performance is strongly dependent on some formatting issues, which alter performance by a much larger margin than the prompt. This format-dependence itself might be regarded as an alignment problem, but unfortunately we do not find that the HHH prompt reduces the difference between accuracies obtained from different Lambada formats.
We therefore found that while smaller models may be confused by the prompt, larger models’ performance is not heavily impacted by it.
3 Scaling of Preference Modeling vs Imitation Learning
-------------------------------------------------------
Alignment requires distinguishing between ‘good’ and ‘bad’ behavior. There are several different training objectives that may be used to accomplish this:
* Imitation Learning: Here we simply train language models to imitate ‘good’ behavior via supervised learning with the usual cross-entropy loss.
* Binary Discrimination: Given a sample of ‘correct’ behavior and a sample of ‘incorrect’ behavior, train the model to distinguish between the two.
* Ranked Preference Modeling: Given a dataset of samples whose overall ‘quality’ is ranked in some way, we train models to output a scalar quality score999These values could then be used as reward signals for reinforcement learning. for each sample whose value matches the ranking as closely as possible. For simplicity we focus on using pairs of ranked samples (i.e., binary comparisons), and we train our models to assign a higher score to the ‘better’ sample in each pair. In some respects this generalizes binary discrimination, and for uniformity we will use it as the training objective even for binary discrimination tasks (see section [3.1](#S3.SS1 "3.1 Loss and Settings for Preference Modeling and Imitation Learning ‣ 3 Scaling of Preference Modeling vs Imitation Learning ‣ A General Language Assistant as a Laboratory for Alignment") for details).
We would like to explore a very general question: *when and by how much do discriminators and preference models outperform imitation learning*?
Our experiments in this section involve comparing the performance of imitation learning vs. preference modeling on a variety of finetuning evaluations, some of which are binary in nature while others are ranked.
* Binary: Code Correctness, Commonsense (ethics), Justice (ethics), Deontology (ethics), Virtue (ethics), Lambada
* Ranked: Learn to Summarize, Utility (ethics), HellaSwag
We focus mostly on alignment-relevant tasks, but include one binary and one ranked NLP task (Lambada Paperno et al. ([2016](#bib.bib142 "The lambada dataset: word prediction requiring a broad discourse context")) and HellaSwag Zellers et al. ([2019](#bib.bib26 "HellaSwag: can a machine really finish your sentence?")), respectively). Code Correctness is a dataset we constructed from python functions in public github repos with test coverage, with correctness determined by unit tests. The Ethics Hendrycks et al. ([2021a](#bib.bib15 "Aligning ai with shared human values")) evaluations are mostly binary classification problems, and so naturally belong in our binary category, except for Utilitarianism which compares relative ‘pleasantness’ of scenarios.
The distinction between ranked and binary tasks can be ambiguous—for example, whether code passes tests is binary, but code quality seems like a continuum.
Our results support a simple conclusion summarized in figure [3](#S1.F3 "Figure 3 ‣ Open-Ended Dialogue Format and Prompting ‣ 1.2 Research ‣ 1 Introduction ‣ A General Language Assistant as a Laboratory for Alignment"): *Ranked preference models tend to improve greatly on imitation learning, but binary discrimination typically provides little benefit.*
In some respects this conclusion is quite intuitive: to apply imitation learning to preference modeling, one must either only train on the very best data (limiting the dataset size) or train to imitate a lot of examples of lower quality. Nonetheless, the magnitude of the gains are rather stark.
In many cases it is also possible to study the robustness of various methods for ranking samples. For example, if we sample many responses to a prompt/query, we would like to know if the highest ranked samples according to a given preference model are truly the best. We test this behavior directly in our code correctness studies and with Lambada.
###
3.1 Loss and Settings for Preference Modeling and Imitation Learning
Preference Modeling
Our preference models consist of a value head that predicts a single scalar ‘score’ r on top of the final token of any given context, with larger r indicating more desirable samples. The preference modeling loss for each pair of ‘good’ and ‘bad’ sequences is Christiano et al. ([2017](#bib.bib139 "Deep reinforcement learning from human preferences"))
| | | | |
| --- | --- | --- | --- |
| | LPM=log(1+erbad−rgood), | | (3.1) |
and for batched sample pairs we take the mean over all pairs. This is clearly not the most natural loss function for some applications; for binary ‘correctness’ it would be better to predict if each example is correct or incorrect, and for multiple choice problems, it might be better to maximize the likelihood for the correct response among all available responses. However, since our primary motivation is preference modeling, we will focus on this formulation unless otherwise noted.
In particular, we format all binary discriminators as preference models so that the same architecture can be utilized for both binary and ranked evaluations, which is convenient for studying transfer between them. Given any context C with a binary label A/B (e.g., ‘True/False’, ‘Good/Bad’), we create a preference modeling pair C:A>A:B, where B denotes the incorrect label, and the colon denotes concatenation.
We also found that appending a special ‘end-of-context’ token to each sequence to unambiguously delineate the end of passage sometimes improves performance, as discussed in section [C.4](#A3.SS4 "C.4 End-of-context Token Improves Preference Modeling Performance ‣ Appendix C More Details on Preference Models ‣ A General Language Assistant as a Laboratory for Alignment").
Imitation Learning
For imitation learning, our training objective is simply the autoregressive language modeling loss on the ‘good’ sequence in each pair—that is, we train the model to imitate ‘good’ behavior.
In the notation above, this means that for imitation learning we trained on C:A.
We found that applying a mask to train only over the response tokens improved performance significantly, so all our imitation learning results are masked. Furthermore, just to clarify, at training time we *sum* over negative token log-probs to compute the loss as is typically done, but at evaluation time we *average* over negative token log-probs to make pairwise comparisons (i.e, a pairwise comparison is accurate if the average negative log-prob for the ‘good’ sample is lower than for the ‘bad’ sample). This significantly improves performance when responses have different lengths.
###
3.2 Performance and Scaling Results for Ranked versus Binary Preference Datasets
Here we provide a short description of our evaluation datasets, some of which we categorize as ‘ranked’ while others are ‘binary’. In this section, all evaluations involve finetuning on a training set and evaluating on a test set.
Code Correctness (Binary)
For these experiments we collected about 500k python functions with test coverage101010We required that at least half of the lines in the function were executed by a combination of tests in the repo. from public github repos, and split these functions into a training and test set. For each function, we discarded the original implementation (keeping only the function definition and docstring) and generated 8 samples from each code model up to 13B parameters, and tested these samples with all available tests. We then created pairs of correct and incorrect samples for each function, using only model-generated code, to avoid confusing code correctness with the task of human-model discrimination. We compared two training procedures: imitation learning on correct functions, and preference modeling comparing the correct and incorrect functions.
Then we evaluated performance on the test set in the following way. We generated 100 samples for each function (using pretrained code models), and ranked them according to both mean per-token log-probs of the IL model, and scores produced by the preference model. Then we evaluated the probability that the top sample among k, as ranked by either method, was in fact correct. For this we used the same model size for training and test set generation and for ranking samples. Some results are shown in figures [11](#S3.F11 "Figure 11 ‣ 3.2 Performance and Scaling Results for Ranked versus Binary Preference Datasets ‣ 3 Scaling of Preference Modeling vs Imitation Learning ‣ A General Language Assistant as a Laboratory for Alignment") and [12](#S3.F12 "Figure 12 ‣ 3.2 Performance and Scaling Results for Ranked versus Binary Preference Datasets ‣ 3 Scaling of Preference Modeling vs Imitation Learning ‣ A General Language Assistant as a Laboratory for Alignment").
Overall we found that preference modeling on this binary discrimination task does not improve very significantly on imitation learning. Both PM and IL are quite similar, overall. These results differ from similar recent experiments on math problem solving Cobbe et al. ([2021](#bib.bib14 "Training verifiers to solve math word problems")), though they trained on thousands of times less data. The difference may be that our imitation learning baseline is much stronger, since even before IL finetuning on Code Correctness specifically, our code models had seen a great deal of on-distribution python code.

Figure 11: Here we compare the performance of code correctness discriminators and imitation learning for ranking samples. All models used for a fixed color are the same size – the generator of the discriminator training data, the generator of the test samples, and the preference or imitation learning model used for ranking. The fact that some of these curves are not monotonic represents a robustness failure of preference modeling.

Figure 12: To create this figure, we generated 100 samples (at T=1) from code models. We then ranked these samples using either log-probs from the same model, or using a preference model trained to discriminate correct and incorrect code. The "oracle" line plots optimal ranking where all correct samples are ranked before incorrect ones. We see that imitation learning and preference modeling perform similarly.
Lambada (Binary)

Figure 13: Similarly to Code Correctness in figure [12](#S3.F12 "Figure 12 ‣ 3.2 Performance and Scaling Results for Ranked versus Binary Preference Datasets ‣ 3 Scaling of Preference Modeling vs Imitation Learning ‣ A General Language Assistant as a Laboratory for Alignment"), we generated 100 samples (at T=1) from pretrained language models. We then ranked these samples using either log-probs from an imitation learning model, or using the scores from a preference model trained to discriminate correct vs. incorrect Lambada completions. Note that for some questions, all the generated answers may be incorrect in which case we default to 0 accuracy. We see that these approaches perform similarly, as we expected since Lambada is a ‘binary’ eval. Lambada performance depends significantly on formatting, as noted in appendix [B.3](#A2.SS3 "B.3 A Comment on Lambada Formatting ‣ Appendix B More Details on Prompting, Context Distillation, and Evaluations ‣ A General Language Assistant as a Laboratory for Alignment").
We now discuss our evaluations on Lambada Paperno et al. ([2016](#bib.bib142 "The lambada dataset: word prediction requiring a broad discourse context")). We used the dataset with original formatting, which differs from that used in GPT-3 Brown et al. ([2020](#bib.bib70 "Language models are few-shot learners")). For imitation learning we simply trained on the correct answers in the training set. For binary discrimination, we sampled answers at T=1 from models of various sizes, created up to two pairs of correct and incorrect answers for each prompt, and then trained the discriminator to identify the correct completion. At test time we sampled multiple responses for each question (at temperature T=1) and ranked them by either log-probs (for IL) or preference modeling score. The results are shown in figure [13](#S3.F13 "Figure 13 ‣ 3.2 Performance and Scaling Results for Ranked versus Binary Preference Datasets ‣ 3 Scaling of Preference Modeling vs Imitation Learning ‣ A General Language Assistant as a Laboratory for Alignment"), where we see that imitation learning performs roughly on par with preference modeling. This provides an independent verification of what we found with Code Correctness, though again the imitation learning baseline is very strong, as the Lambada task aligns very well with the language model pre-training objective.
HellaSwag (Ranked)
We also performed a comparison of imitation learning and preference modeling on the HellaSwag Zellers et al. ([2019](#bib.bib26 "HellaSwag: can a machine really finish your sentence?")) dataset. This is a multiple choice evaluation on commonsense inference—given an event description, the model is asked to identify the most sensible completion. Although each problem presents only three choices, the desired responses are not uniquely correct, but are merely the most sensible inference among the three options. Thus this task is a form of ranked preference modeling, rather than binary discrimination. In agreement with our expectations, we find that preference modeling scales far better than imitation learning on this dataset, as shown in figure [14](#S3.F14 "Figure 14 ‣ 3.2 Performance and Scaling Results for Ranked versus Binary Preference Datasets ‣ 3 Scaling of Preference Modeling vs Imitation Learning ‣ A General Language Assistant as a Laboratory for Alignment").
Note that while the training data is formatted as multiple choice, we convert the data to binary comparisons by pairing the correct choice with a randomly chosen incorrect choice. It might be possible to improve performance by training on all options, but we did not explore this.
| | |
| --- | --- |
| Scaling behavior of imitation learning and preference modeling on HellaSwag (ranked) and Learn to Summarize (ranked), showing that PM performs better than IL, as we expect for ranked finetuning evaluations. | Scaling behavior of imitation learning and preference modeling on HellaSwag (ranked) and Learn to Summarize (ranked), showing that PM performs better than IL, as we expect for ranked finetuning evaluations. |
Figure 14: Scaling behavior of imitation learning and preference modeling on HellaSwag (ranked) and Learn to Summarize (ranked), showing that PM performs better than IL, as we expect for ranked finetuning evaluations.
Learn to Summarize (Ranked)
Preference modeling and RLHF has been applied to the task of generating high-quality summaries of short articles Stiennon et al. ([2020](#bib.bib23 "Learning to summarize from human feedback")). We study the associated dataset, which we term ‘Learn to Summarize’. It consists of a collection of articles, where each is accompanied by a pair of summaries that have been ranked by trained human workers.
This dataset presents a defining example of a ranked preference modeling task, since there is no clear sense in which any given summary is ‘correct’, but typically among any pair of samples, one will be better than the other.
We are especially interested in this finetuning evaluation as it is highly relevant for alignment. We created our own data split by shuffling the data and splitting it into a train (64k pairs) and test (29k pairs) set. On this dataset preference modeling performs far better than imitation learning, as seen in figure [14](#S3.F14 "Figure 14 ‣ 3.2 Performance and Scaling Results for Ranked versus Binary Preference Datasets ‣ 3 Scaling of Preference Modeling vs Imitation Learning ‣ A General Language Assistant as a Laboratory for Alignment").
Ethics (Binary, except for Utilitarianism)
| | | | | |
| --- | --- | --- | --- | --- |
| Scaling behavior of imitation learning and preference modeling for all five Ethics evaluations, which are all binary except Utilitarianism. We find, in agreement with our expectations, that PM beats IL on the ranked task, but on binary tasks they perform similarly. For brevity we have only included the easier evaluation sets here. | Scaling behavior of imitation learning and preference modeling for all five Ethics evaluations, which are all binary except Utilitarianism. We find, in agreement with our expectations, that PM beats IL on the ranked task, but on binary tasks they perform similarly. For brevity we have only included the easier evaluation sets here. | Scaling behavior of imitation learning and preference modeling for all five Ethics evaluations, which are all binary except Utilitarianism. We find, in agreement with our expectations, that PM beats IL on the ranked task, but on binary tasks they perform similarly. For brevity we have only included the easier evaluation sets here. | Scaling behavior of imitation learning and preference modeling for all five Ethics evaluations, which are all binary except Utilitarianism. We find, in agreement with our expectations, that PM beats IL on the ranked task, but on binary tasks they perform similarly. For brevity we have only included the easier evaluation sets here. | Scaling behavior of imitation learning and preference modeling for all five Ethics evaluations, which are all binary except Utilitarianism. We find, in agreement with our expectations, that PM beats IL on the ranked task, but on binary tasks they perform similarly. For brevity we have only included the easier evaluation sets here. |
Figure 15: Scaling behavior of imitation learning and preference modeling for all five Ethics evaluations, which are all binary except Utilitarianism. We find, in agreement with our expectations, that PM beats IL on the ranked task, but on binary tasks they perform similarly. For brevity we have only included the easier evaluation sets here.
We studied the Ethics tasks Hendrycks et al. ([2021a](#bib.bib15 "Aligning ai with shared human values")), which include five distinct datasets. We provide a simplified description of each here, but we encourage the interested reader to read the original paper for details:
* Commonsense Morality (binary): Assess whether a given action is morally acceptable.
* Deontology (binary): Assess whether a given statement is reasonable on the basis of ‘whether an act is
required, permitted, or forbidden according to a set
of rules or constraints.’
* Justice (binary): Assess whether a given statement is reasonable on the basis of impartiality and desert.
* Virtue (binary): Given a personal trait and a scenario involving a character, assess whether the character expresses that particular trait.
* Utilitarianism (ranked): Given two similar scenarios, rank them by how ‘pleasant’ they are for the character involved.
In terms of the binary versus ranked111111In some cases this might be altered by changing the objective of the task, but this is our understanding based on the given evaluation metrics Hendrycks et al. ([2021a](#bib.bib15 "Aligning ai with shared human values")) distinction, the first four evaluations are clearly binary since they come with binary labels, while we interpret Utilitarianism as a ranked preference modeling task since ‘pleasantness’ is a ranked quality.
Each dataset includes a single training set and two test sets (standard and hard). We train our models on the training sets and evaluate on both test sets during and after training. In all cases we evaluate performance in terms of an accuracy. For Commonsense Morality and Utilitarianism, we use binary accuracy. But for Justice, Deontology and Virtue, the samples are grouped such that a model is accurate on the group only if it gets all responses correct within that group. All our accuracy results follow these requirements. In some cases we also display the preference modeling loss ([3.1](#S3.E1 "(3.1) ‣ 3.1 Loss and Settings for Preference Modeling and Imitation Learning ‣ 3 Scaling of Preference Modeling vs Imitation Learning ‣ A General Language Assistant as a Laboratory for Alignment")), as in figure [16](#S4.F16 "Figure 16 ‣ 4.1 PMP and Datasets ‣ 4 Preference Model Pre-Training and Transfer ‣ A General Language Assistant as a Laboratory for Alignment"), and in that case we simply average over all pairwise comparisons, without any grouping.
We find, in agreement with our expectations, that PM performs significantly better than IL on the ranked Utilitarianism evaluation, but that PM and IL perform similarly on all binary evaluations, as shown in figure [15](#S3.F15 "Figure 15 ‣ 3.2 Performance and Scaling Results for Ranked versus Binary Preference Datasets ‣ 3 Scaling of Preference Modeling vs Imitation Learning ‣ A General Language Assistant as a Laboratory for Alignment").
4 Preference Model Pre-Training and Transfer
---------------------------------------------
We saw in section [3](#S3 "3 Scaling of Preference Modeling vs Imitation Learning ‣ A General Language Assistant as a Laboratory for Alignment") that ranked preference modeling typically performs better than imitation learning, and also often scales better as we increase model size. However, some datasets needed for alignment may be small and expensive to source, since they may require high-quality human feedback. For example, we saw a hint in figure [9](#S2.F9 "Figure 9 ‣ 2.2.2 Toxicity ‣ 2.2 Evaluations and Alignment Taxes ‣ 2 Conditioning on Aligned Behavior ‣ A General Language Assistant as a Laboratory for Alignment") that workers may require detailed instructions to differentiate121212A similar observation was made concerning news articles in Brown et al. ([2020](#bib.bib70 "Language models are few-shot learners")). among models much larger than 10B parameters. Thus we are particularly interested in methods to increase sample efficiency when finetuning on small preference modeling datasets.
In this section we will explore the idea of a ‘preference model pre-training’ (PMP) phase of training, after basic language model (LM) pretraining and before finetuning on a smaller preference modeling dataset relevant for alignment. Our training pipeline can be summarized as
LM Pre-training → PMP → PM Finetuning.
Each PMP training dataset typically consists of millions of sequence pairs, while each fine-tuning dataset typically consists of thousands to tens of thousands of sequence pairs.
We find that:
* Training on large public preference modeling data sourced from e.g. Stack Exchange question-answer pairs, Reddit comments, and Wikipedia edits (that revert ‘suspected vandalism’) significantly improves sample efficiency when subsequently finetuning on small preference modeling datasets. The pre-training datasets are explained in section [4.1](#S4.SS1 "4.1 PMP and Datasets ‣ 4 Preference Model Pre-Training and Transfer ‣ A General Language Assistant as a Laboratory for Alignment"), and the finetuning results are presented in section [4.2](#S4.SS2 "4.2 Finetuning Results and Scaling Trends ‣ 4 Preference Model Pre-Training and Transfer ‣ A General Language Assistant as a Laboratory for Alignment").
* In particular, we find that each PMP dataset is capable of transfering to a variety of finetuning datasets, with an effect size that seems to grow with model size, even though there may not be any obvious similarities between the datasets.
* Intriguingly, for the PMP stage of training, it’s most beneficial to train on binary discrimination data rather than ranked preferences. We suspect this is because ranked preferences often need to be ‘unlearned’ during finetuning, which presents a liability to transfer, as explained in section [4.3](#S4.SS3 "4.3 Ranked Preference Modeling vs Binary Discrimination for PMP ‣ 4 Preference Model Pre-Training and Transfer ‣ A General Language Assistant as a Laboratory for Alignment"). In particular, for PMP we apply a simple ‘binarization’ method that converts any ranked PM dataset to binary discrimination, as explained in section [4.1](#S4.SS1 "4.1 PMP and Datasets ‣ 4 Preference Model Pre-Training and Transfer ‣ A General Language Assistant as a Laboratory for Alignment").
###
4.1 PMP and Datasets
| | |
| --- | --- |
| Transfer performance at 500 and 5k finetuning sequence pairs averaged across multiple finetuning evaluations (Learn to Summarize, HellaSwag, and all five Ethics evaluations). | Transfer performance at 500 and 5k finetuning sequence pairs averaged across multiple finetuning evaluations (Learn to Summarize, HellaSwag, and all five Ethics evaluations). |
Figure 16: Transfer performance at 500 and 5k finetuning sequence pairs averaged across multiple finetuning evaluations (Learn to Summarize, HellaSwag, and all five Ethics evaluations).
We constructed multiple PMP datasets from various data dumps found online, including StackExchange, Reddit, Wikipedia, and a mixture of all three we refer to as the ‘Mix’. In each case, we began by creating a ranked dataset consisting of pairwise comparisons, with each pair consisting of a ‘better’ and ‘worse’ sample. Details on each dataset is provided in section [C.1](#A3.SS1 "C.1 Preference Model Pre-training ‣ Appendix C More Details on Preference Models ‣ A General Language Assistant as a Laboratory for Alignment").
Subsequently, we created a binary dataset by applying a ‘binarization’ procedure to the ranked dataset. That is, for every ranked pair A>B, we transform it into two independent binary comparisons:
```
GOOD:A > BAD:A
BAD:B > GOOD:B
```
Consequently, the binary dataset has twice as many pairs as the ranked dataset. As discussed in more detail in section [4.3](#S4.SS3 "4.3 Ranked Preference Modeling vs Binary Discrimination for PMP ‣ 4 Preference Model Pre-Training and Transfer ‣ A General Language Assistant as a Laboratory for Alignment"), we found that pre-training on the binary dataset typically transferred better than the corresponding ranked version, and so all our PMP experiments assume binary pre-training unless otherwise stated.
| | | | | | | |
| --- | --- | --- | --- | --- | --- | --- |
| Transfer to various finetuning evaluations from PMP (on the ‘Mix’ pre-training dataset, shown as violet curves) and no PMP (black curves). Each of the five Ethics datasets (Commonsense Morality, Deontology, Justice, Utilitarianism, and Virtue) has both an ‘easy’ test set (solid curves) and a ‘hard’ test set (dashed curves), but only one training set. The x-axis shows the number of finetuning training sequence pairs, while the y-axis shows accuracy as evaluated on a held-out test set. All results are shown for the 52B parameter model. In most cases PMP significantly improves sample efficiency, especially in the | Transfer to various finetuning evaluations from PMP (on the ‘Mix’ pre-training dataset, shown as violet curves) and no PMP (black curves). Each of the five Ethics datasets (Commonsense Morality, Deontology, Justice, Utilitarianism, and Virtue) has both an ‘easy’ test set (solid curves) and a ‘hard’ test set (dashed curves), but only one training set. The x-axis shows the number of finetuning training sequence pairs, while the y-axis shows accuracy as evaluated on a held-out test set. All results are shown for the 52B parameter model. In most cases PMP significantly improves sample efficiency, especially in the | Transfer to various finetuning evaluations from PMP (on the ‘Mix’ pre-training dataset, shown as violet curves) and no PMP (black curves). Each of the five Ethics datasets (Commonsense Morality, Deontology, Justice, Utilitarianism, and Virtue) has both an ‘easy’ test set (solid curves) and a ‘hard’ test set (dashed curves), but only one training set. The x-axis shows the number of finetuning training sequence pairs, while the y-axis shows accuracy as evaluated on a held-out test set. All results are shown for the 52B parameter model. In most cases PMP significantly improves sample efficiency, especially in the | Transfer to various finetuning evaluations from PMP (on the ‘Mix’ pre-training dataset, shown as violet curves) and no PMP (black curves). Each of the five Ethics datasets (Commonsense Morality, Deontology, Justice, Utilitarianism, and Virtue) has both an ‘easy’ test set (solid curves) and a ‘hard’ test set (dashed curves), but only one training set. The x-axis shows the number of finetuning training sequence pairs, while the y-axis shows accuracy as evaluated on a held-out test set. All results are shown for the 52B parameter model. In most cases PMP significantly improves sample efficiency, especially in the | Transfer to various finetuning evaluations from PMP (on the ‘Mix’ pre-training dataset, shown as violet curves) and no PMP (black curves). Each of the five Ethics datasets (Commonsense Morality, Deontology, Justice, Utilitarianism, and Virtue) has both an ‘easy’ test set (solid curves) and a ‘hard’ test set (dashed curves), but only one training set. The x-axis shows the number of finetuning training sequence pairs, while the y-axis shows accuracy as evaluated on a held-out test set. All results are shown for the 52B parameter model. In most cases PMP significantly improves sample efficiency, especially in the | Transfer to various finetuning evaluations from PMP (on the ‘Mix’ pre-training dataset, shown as violet curves) and no PMP (black curves). Each of the five Ethics datasets (Commonsense Morality, Deontology, Justice, Utilitarianism, and Virtue) has both an ‘easy’ test set (solid curves) and a ‘hard’ test set (dashed curves), but only one training set. The x-axis shows the number of finetuning training sequence pairs, while the y-axis shows accuracy as evaluated on a held-out test set. All results are shown for the 52B parameter model. In most cases PMP significantly improves sample efficiency, especially in the | Transfer to various finetuning evaluations from PMP (on the ‘Mix’ pre-training dataset, shown as violet curves) and no PMP (black curves). Each of the five Ethics datasets (Commonsense Morality, Deontology, Justice, Utilitarianism, and Virtue) has both an ‘easy’ test set (solid curves) and a ‘hard’ test set (dashed curves), but only one training set. The x-axis shows the number of finetuning training sequence pairs, while the y-axis shows accuracy as evaluated on a held-out test set. All results are shown for the 52B parameter model. In most cases PMP significantly improves sample efficiency, especially in the |
Figure 17: Transfer to various finetuning evaluations from PMP (on the ‘Mix’ pre-training dataset, shown as violet curves) and no PMP (black curves). Each of the five Ethics datasets (Commonsense Morality, Deontology, Justice, Utilitarianism, and Virtue) has both an ‘easy’ test set (solid curves) and a ‘hard’ test set (dashed curves), but only one training set. The x-axis shows the number of finetuning training sequence pairs, while the y-axis shows accuracy as evaluated on a held-out test set. All results are shown for the 52B parameter model. In most cases PMP significantly improves sample efficiency, especially in the ≲10k sequence pairs regime. Plots show 4 training epochs for each eval.
We pre-train a scan of preference models of various sizes on each binary dataset. Training details such as hyperparameter choices are described in section [C.1](#A3.SS1 "C.1 Preference Model Pre-training ‣ Appendix C More Details on Preference Models ‣ A General Language Assistant as a Laboratory for Alignment").
###
4.2 Finetuning Results and Scaling Trends
Here we show finetuning results after preference model pre-training (PMP) on a variety of downstream finetuning evaluations. We find that all our PMP models significantly improve sample efficiency when finetuning, despite there often being little similarity between the PMP distribution and the finetuning distribution.
Our results are summarized in figure [4](#S1.F4 "Figure 4 ‣ Scaling of Imitation Learning vs Preference Modeling, and Binary vs Rank-Ordered Preferences ‣ 1.2 Research ‣ 1 Introduction ‣ A General Language Assistant as a Laboratory for Alignment"), showing the performance gain of PMP. Since performance on all of our final finetuning datasets can be evaluated in terms of accuracy, we define the performance gain as the accuracy difference between PMP and no PMP as measured on each test set. We show the accuracy gain of PMP as a function of number of finetuning sequences, where the pre-training dataset consists of a mixture of StackExchange, Reddit, and Wikipedia which we simply refer to as the ‘Mix’. Furthermore, the lightly shaded violet curves show results for individual finetuning evaluations, while the bold violet curve shows their mean. More detailed breakdown of results is shown in figure [17](#S4.F17 "Figure 17 ‣ 4.1 PMP and Datasets ‣ 4 Preference Model Pre-Training and Transfer ‣ A General Language Assistant as a Laboratory for Alignment") and figure [32](#A3.F32 "Figure 32 ‣ C.5 Ensembling Over PMP Models ‣ Appendix C More Details on Preference Models ‣ A General Language Assistant as a Laboratory for Alignment").
We are also interested in how finetuning scales with model size, especially in the small data limit, as shown in figure [16](#S4.F16 "Figure 16 ‣ 4.1 PMP and Datasets ‣ 4 Preference Model Pre-Training and Transfer ‣ A General Language Assistant as a Laboratory for Alignment"). We find that at 1k finetuning sequences (or 500 pairs), PMP on the Mix dataset improves performance significantly for models larger than ∼ 1B parameters, but does not appear to benefit small models. Furthermore, at 10k finetuning sequences (or 5000 pairs), PMP Mix also benefits large models, but to a lesser extent. We also show results for scaling of the best-achieved loss with model size on the finetuning evaluation datasets in figure [28](#A3.F28 "Figure 28 ‣ C.1 Preference Model Pre-training ‣ Appendix C More Details on Preference Models ‣ A General Language Assistant as a Laboratory for Alignment") in the appendix.
As already mentioned, pre-training on binary distributions typically transfers better than ranked distributions—this is discussed more in section [4.3](#S4.SS3 "4.3 Ranked Preference Modeling vs Binary Discrimination for PMP ‣ 4 Preference Model Pre-Training and Transfer ‣ A General Language Assistant as a Laboratory for Alignment"). In addition, we found that the following factors also helped, all of which have been incorporated into our experiments unless otherwise stated:
* Adding to the preference modeling loss a basic language modeling loss to teach the model to imitate the ‘good’ sequence in each preference modeling pair, as discussed in section [C.3](#A3.SS3 "C.3 Language Modeling Improves PMP Transfer ‣ Appendix C More Details on Preference Models ‣ A General Language Assistant as a Laboratory for Alignment").
* Appending an end-of-context token to each sequence on top of which the preference modeling score is predicted, as discussed in [C.4](#A3.SS4 "C.4 End-of-context Token Improves Preference Modeling Performance ‣ Appendix C More Details on Preference Models ‣ A General Language Assistant as a Laboratory for Alignment").
###
4.3 Ranked Preference Modeling vs Binary Discrimination for PMP

Figure 18: In this figure we show the benefit of ‘binarizing’ PMP datasets; the y-axis is the *gain* in finetuning accuracy with binarization versus without binarization. The x-axis counts number of text sequences seen by the model, with 2 sequences corresponding to a single preference-modeling comparison.
Recall that our pre-training dataset comes in two forms: ranked and binary. So far we have only presented fine-tuning results from binary PMP, but here we also compare to ranked pre-training, and show that binary pre-training typically transfers better than ranked-pre-training. This may be counter-intuitive because preference models are designed to learn an Elo-like score, which can be interpreted as a ranking, and so it is natural to expect ranked pre-training to outperform binary. The goals of this section are to (1) present empirical results showing the difference, and (2) provide and briefly test a plausible explanation.
In figure [18](#S4.F18 "Figure 18 ‣ 4.3 Ranked Preference Modeling vs Binary Discrimination for PMP ‣ 4 Preference Model Pre-Training and Transfer ‣ A General Language Assistant as a Laboratory for Alignment") we show the advantage of binary pre-training over ranked pre-training. In particular, for each finetuning evaluation, we plot the accuracy difference vs. the number of training sequences, which can be seen as lightly shaded violet curves. Since there is significant variance in these results, we also take the mean over all such evaluations, giving the bold violet curve. On average, we find that binary pre-training performs +5% better at 500 sequence pairs, and +2% better at 5k sequence pairs. More detailed plots of binary vs. ranked pre-training can be found in figure [37](#A4.F37 "Figure 37 ‣ Appendix D Per-Token GAN-Style Discriminator Results ‣ A General Language Assistant as a Laboratory for Alignment") in the appendix, showing the accuracy difference for multiple individual pre-training datasets and multiple individual finetuning evaluations.
This result surprised some of the authors, but with hindsight we found a plausible explanation. When pre-training on a ranked dataset, the model learns a corresponding ranked ordering for sample sequences (represented by a scalar value for each sample). However, downstream evaluations may have rankings that are qualitatively very different, which may then require the pre-trained model to ‘unscramble’ its existing ratings. On the contrary, binary pre-training establishes a much less ‘rigid’ score, which may require less ‘unscrambling’ and thus may transfer more easily to very different datasets. We designed an experiment with synthetic data that appears to confirm this hypothesis, which we describe in detail in appendix [C.6](#A3.SS6 "C.6 Experiments on Ranked vs Binary PMP – Synthetic Symbols Dataset ‣ Appendix C More Details on Preference Models ‣ A General Language Assistant as a Laboratory for Alignment").
###
4.4 Human-Model vs Human-Human Comparisons for PMP
All our PMP datasets so far consist of ‘human-human’ comparisons, by which we mean that both samples in each pair are human-written. For this section we consider an alternative dataset consisting of ‘human-model’ comparisons, as we are interested in whether this might improve transfer performance. It is also noteworthy that such comparisons should be easy to generate, since any high-quality fragment of human text might be compared to model-generated text on the same subject.
The ‘human-model’ dataset was created by following these steps:
* We first finetuned a language model to imitate the ‘good’ samples in our *ranked* pre-training dataset (e.g., StackExchange, Reddit, or Wikipedia).
* For each sample pair in the *ranked* pre-training dataset, we kept the ‘good’ sequence, but replaced the “bad” sequence with a sample from the finetuned language model.
Consequently, the resulting dataset has the same number of pairs as the original ranked pre-training dataset, with “good” human-written sequences and “bad” model-written sequences. For these experiments we used the Reddit PMP dataset, and a 3B model for sample generation.
We found that PMP on the human-model Reddit dataset transfers significantly better to HellaSwag, and somewhat better to Learn to Summarize, as shown in figure [19](#S4.F19 "Figure 19 ‣ 4.4 Human-Model vs Human-Human Comparisons for PMP ‣ 4 Preference Model Pre-Training and Transfer ‣ A General Language Assistant as a Laboratory for Alignment"). Transfer to the Ethics evaluations (see figure [36](#A4.F36 "Figure 36 ‣ Appendix D Per-Token GAN-Style Discriminator Results ‣ A General Language Assistant as a Laboratory for Alignment")) is more ambiguous, showing both positive and negative signals. Our suspicion is that human-model pre-training has a particular advantage on downstream finetuning evaluations that contain model-generated data—indeed, all incorrect answers on HellaSwag are model-generated, and Learn to Summarize has a significant amount of model-generated summaries, while Ethics has no model-generated data.
Nonetheless, PMP with human-model generated data deserves further investigation, especially since it can be applied to such a great variety of data distributions.
| | |
| --- | --- |
| We compare PMP on “human-human” vs “human-model” Reddit datasets by evaluating their transfer performance (for the latter, the “model” pre-training samples were all generated by a 2.7B model). It appears that “human-model” pre-training transfers better on Learn to Summarize and significantly better on HellaSwag, possibly because both evaluations contain model-generated data, thus giving “human-model” an advantage. While our primary focus has been on “human-human”, this results suggests that “human-model” also deserves further investigation. | We compare PMP on “human-human” vs “human-model” Reddit datasets by evaluating their transfer performance (for the latter, the “model” pre-training samples were all generated by a 2.7B model). It appears that “human-model” pre-training transfers better on Learn to Summarize and significantly better on HellaSwag, possibly because both evaluations contain model-generated data, thus giving “human-model” an advantage. While our primary focus has been on “human-human”, this results suggests that “human-model” also deserves further investigation. |
Figure 19: We compare PMP on “human-human” vs “human-model” Reddit datasets by evaluating their transfer performance (for the latter, the “model” pre-training samples were all generated by a 2.7B model). It appears that “human-model” pre-training transfers better on Learn to Summarize and significantly better on HellaSwag, possibly because both evaluations contain model-generated data, thus giving “human-model” an advantage. While our primary focus has been on “human-human”, this results suggests that “human-model” also deserves further investigation.
5 Discussion
-------------
###
5.1 Related Work
There have been many works related to AI safety and alignment, including some suggestions for global research plans such as Amodei et al. ([2016](#bib.bib10 "Concrete problems in ai safety")) and Hendrycks et al. ([2021b](#bib.bib13 "Unsolved problems in ml safety")). Work using human feedback to learn summarizations Stiennon et al. ([2020](#bib.bib23 "Learning to summarize from human feedback")) has particular relevance to our work, since they observe that preference modeling and RL lead to dramatic improvements compared to imitation learning. One of our motivations was to understand when such improvements can be expected from these techniques, and how we can take maximal advantage of human feedback data.
To inquire into our models’ alignment we discussed ethics evaluations from Hendrycks et al. ([2021a](#bib.bib15 "Aligning ai with shared human values")), adversarial honesty evaluations from Lin et al. ([2021](#bib.bib22 "TruthfulQA: measuring how models mimic human falsehoods")), and toxicity evaluations from Gehman et al. ([2020](#bib.bib12 "RealToxicityPrompts: evaluating neural toxic degeneration in language models")).
Our use of a small amount of high-quality data for alignment is most similar to Solaiman and Dennison ([2021](#bib.bib30 "Process for adapting language models to society (PALMS) with values-targeted datasets")). On the other end of the spectrum, a rather different technique is to filter pretraining data, as discussed in Ngo et al. ([2021](#bib.bib8 "Mitigating harm in language models with conditional-likelihood filtration")). Our use of prompts was motivated by observations about the behavior of large language models Brown et al. ([2020](#bib.bib70 "Language models are few-shot learners")). Some other observations about prompting and the dependence of prompt-tuning on scale were made in Lester et al. ([2021](#bib.bib77 "The power of scale for parameter-efficient prompt tuning")) though we did not utilize prompt tuning. The fact that larger models are less subject to forgetting Ramasesh et al. ([2020](#bib.bib11 "Anatomy of catastrophic forgetting: hidden representations and task semantics")) may be related to the fact that larger models do not incur significant alignment taxes.
Our coding models are similar to those discussed in Chen et al. ([2021b](#bib.bib6 "Evaluating large language models trained on code")). They also performed alignment-related evaluations, though with high and low quality code examples rather than a natural language prompt. The recent work Austin et al. ([2021](#bib.bib135 "Program synthesis with large language models")) evaluated language models (without a great deal of code training) on code, including in a conversational manner.
Many papers have studied scaling laws Hestness et al. ([2017](#bib.bib106 "Deep learning scaling is predictable, empirically")); Rosenfeld et al. ([2019](#bib.bib76 "A constructive prediction of the generalization error across scales")); Kaplan et al. ([2020](#bib.bib74 "Scaling laws for neural language models")); Jones ([2021](#bib.bib3 "Scaling scaling laws with board games")). A few have compared discriminators or preference models to imitation learning. The T-REX IRL method Brown et al. ([2019](#bib.bib136 "Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations")) uses ranked preference modeling to improve on GAIL and on imitation learning. The authors of Abramson et al. ([2021](#bib.bib28 "Imitating interactive intelligence")) compared GAIL Ho and Ermon ([2016](#bib.bib24 "Generative adversarial imitation learning")) to conventional imitation learning in an RL context, and found in some cases that GAIL scaled significantly better with dataset size. Experiments comparing RL and behavioral cloning with the decision transformer Chen et al. ([2021a](#bib.bib16 "Decision transformer: reinforcement learning via sequence modeling")) are also somewhat similar to our comparison of preference modeling and imitation learning. Very recently Cobbe et al. ([2021](#bib.bib14 "Training verifiers to solve math word problems")) performed experiments that are very similar to our work on code correctness, except that they studied mathematical problem solving, and focused more on dataset size scaling. Interestingly, they find that a verifier (aka binary discriminator) has a more favorable dataset size scaling as compared to imitation learning. However, their experiments are likely in a different regime from ours – they were severely data limited, training on only thousands of math problems, whereas our models were trained on millions of python files, perhaps giving us a much stronger baseline for imitation learning.
Various works Lester et al. ([2021](#bib.bib77 "The power of scale for parameter-efficient prompt tuning")); Wei et al. ([2021](#bib.bib21 "Finetuned language models are zero-shot learners")); Sanh et al. ([2021](#bib.bib20 "Multitask prompted training enables zero-shot task generalization")); Aribandi et al. ([2021](#bib.bib138 "ExT5: towards extreme multi-task scaling for transfer learning")) have noted that by finetuning on a large variety of simple tasks, one can improve model performance generally and achieve instruction-following behavior. This idea is closely related to the ‘preference model pre-training’ approach we have discussed. The work with the most similar approach to PMP for alignment was the very recent Delphi Jiang et al. ([2021](#bib.bib17 "Delphi: towards machine ethics and norms")), which trains a general-purpose ethical critic. Their work differs insofar as we investigate transfer between distributions that are only distantly related (e.g. from Stack Exchange to summarization), whereas they focus on transfer from and to data related to ethics.
###
5.2 Broader Impacts
This work was motivated by the problem of technical AI alignment, with the specific goal of training a natural language agent that is helpful, honest, and harmless. We believe this work is important because of the potential for very broad impacts from AI and from language models in particular, especially if progress in the field continues at its current rapid pace Bowman ([2021](#bib.bib4 "When combating hype, proceed with caution")).
We hope that by directly approaching a general and ambitious problem, we will either (1) fail due to specific technical challenges, which we would then attempt to more precisely articulate for further study from the research community, or (2) convince ourselves that we have addressed technical alignment for currently available models.131313Of course, we may fail in uninteresting ways, due to our own limitations, and in that case we can only hope that future work will be more successful. In the event of the second outcome, we would expect our results to be carefully interrogated by the research community. There would also be a need for further empirical investigations into how well these techniques scale to more capable models in terms of both robustness and efficiency, and how likely it is that we will be able to detect alignment failures in more capable models.
The road to hell is paved with good intentions, and as such we shouldn’t be complacent with concerns associated with alignment work. Foremost in our minds is that advances in aligning AI with human values do not depend on any specific choice for these values. Efficient alignment techniques could be used to train highly capable systems that do things we consider to be bad, for instance systems for misinformation, censorship, or oppression. Even terms like helpful, honest, and harmless are ambiguous and can be in tension with each other, and it’s easy to imagine them distorted beyond their original meaning, perhaps in intentionally Orwellian ways. And within the context of our own and similar work, the choice of who provides feedback data to train models has broad implications.
Information such as our comparisons among different scaling behavior may also be useful for improving AI capabilities, without regard for safety. We believe that understanding how and why ML systems work will be essential to improving their safety, and that these sorts of comparisons aid in that effort. Another concern is that alignment progress might be used as an excuse for carelessness, or to conclude that alignment has already been adequately addressed and can subsequently be ignored. Our view is that people and organizations that deploy AI systems need to take responsibility for their behavior. Research may help to make such deployments possible, but the question of broader relevance is simply whether deployed AI systems are actually safe and beneficial in practice.
###
5.3 Implications
Larger models tend to perform better at most tasks, and there is no reason to expect naive alignment-related tasks to be an exception. In line with these expectations, we find that behavioral alignment tends to improve with model size, with even the simplest conceivable intervention (i.e. prompting) leading larger models to perform better on alignment-relevant evaluations.
One reason to investigate scaling trends for preference modeling would be to understand how to train better preference models. However, one of our motivations was actually a bit different – it was to set expectations for the scaling of reinforcement learning. We would expect that if it is very difficult for models to learn to recognize favorable outcomes, they will also have difficulty learning to take actions that produce such outcomes. That is, value function performance should tell us something about the likely performance of a trained policy. This logic should become irrefutable when preference models are re-purposed as reward models for RL training. So, given that large gains in both absolute performance and scaling are possible when training ranked preference models, significant progress on alignment may also be possible.
Author Contributions
--------------------
Yuntao Bai sourced and curated the PMP data with initial help from Ben Mann, conducted the PMP and finetuning experiments, suggested investigating the distinctions between binary and ranked preference modeling, and suggested several ML improvements for preference modeling.
Anna Chen conducted experiments on scaling trends for imitation learning versus preference modeling, including on function synthesis (with help from Dawn Drain, Andy Jones, and others). She also conducted the experiments on GAN-type discriminators and many other evaluations, and suggested improvements for preference modeling and code quality.
Anna and Yuntao collaborated on many experiments and on the training and evaluation code for preference modeling.
Amanda Askell developed the conceptualization of alignment in terms of helpfulness, honesty, and harmlessness. Amanda produced the initial mockup of the model interface and helped to design and build it. Amanda sourced and trained workers for the interface, conducted our original A/B testing experiments, and provided guidance on evaluations.
Ben Mann built most of the human interaction interface and the necessary backend for robust and efficient sampling. Ben led all of our data collection efforts for both language and code data, in collaboration with Danny Hernandez, who has led research on data quality. Ben also contributed to the core language model training infrastructure.
Ben, Yuntao, Anna, and Amanda contributed to research and project planning.
Deep Ganguli proposed, conducted, and analyzed experiments on toxicity (with help from Andy Jones and others) and conducted some of our experiments on alignment taxes. He also contributed to discussions on harms and alignment.
Dawn Drain trained the code models and helped Anna with code evaluations, including with collecting functions with test coverage (with some help from Ben Mann, Andy Jones, and Tom Henighan). Dawn also conducted experiments on alignment taxes with code models.
Nicholas Joseph was central to building and maintaining a highly efficient distributed training system for large language models and helped with our sampling infrastructure.
Tom Henighan managed our research cluster, helped build our distributed training system, and did research and experiments on the numerical stability of large language model training. He also helped with ML research on large language models. Nova DasSarma has also helped manage the cluster.
Andy Jones was central in building our sampling infrastructure. He also provided engineering support to the toxicity experiments, A/B testing infrastructure, distributed training, and code model data collection.
Catherine Olsson contributed crucially to alignment ideas, and provided useful advice for sourcing and training contractors to test our models.
Led by Tom Brown in collaboration with Sam McCandlish, much of the technical staff at Anthropic contributed to efficient distributed model training and sampling, the underlying ML, and cluster stability. Core contributors include Nicholas Joseph, Tom Henighan, and Andy Jones. Nelson Elhage, Kamal Ndousse, Zac Hatfield-Dodds, and Ben Mann also contributed to this infrastructure.
Catherine Olsson and Jared Kaplan wrote the HHH prompt, and along with Deep Ganguli, Anna Chen, Amanda Askell, and many others wrote most of the alignment evaluations. Jackson Kernion helped improve the alignment evaluations and source workers to interact with our models.
Jared Kaplan, Yuntao Bai, Anna Chen, Amanda Askell, Deep Ganguli, and Ben Mann wrote the paper, with helpful comments from everyone at Anthropic.
Dario Amodei, Chris Olah, and Jack Clark contributed expertise and advice throughout the project.
Sam McCandlish led model pretraining efforts, often in collaboration with Jared Kaplan. Sam also led the overall synthesis of engineering and research efforts.
Jared Kaplan conceived and led the project. He conducted some initial experiments on preference modeling and many of the experiments on prompting and context distillation.
Acknowledgments
---------------
We thank Daniela Amodei, Jia Yuan Loke, Liane Lovitt, Taylor Rogalski, and Timothy Telleen-Lawton for support with this project, and Collin Burns, Ethan Dyer, David Krueger, Liane Lovitt, Helen Ngo, and Jeff Wu for comments on the draft.
We thank Paul Christiano for helpful discussions. |
a62c3349-ad0b-4c64-930b-06bd72446d32 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Don't Influence the Influencers!
*This post was written under Evan Hubinger’s direct guidance and mentorship, as a part of the* [*Stanford Existential Risks Institute ML Alignment Theory Scholars (MATS) program*](https://www.lesswrong.com/posts/FpokmCnbP3CEZ5h4t/ml-alignment-theory-program-under-evan-hubinger)*.*
**TL;DR:**AGI is likely to turn out unsafe. One likely way that can happen is that it fools us into thinking it *is* safe. If we can make sure to look for models that are *ineffective at "bad" things* (so it can't deceive us) and *effective at "good" things* (so it is useful), and importantly, do that *prior to* the models reaching a point-of-no-return of capability, we can avert catastrophe. Which spaces of algorithms do we look in? What do they look like? Can we characterize them? We don't know yet. But we have a very concrete point in such a space: an "LCDT agent". Its details are simple and we'll look at it.
***Format note:** The* [*original post*](https://www.lesswrong.com/posts/Y76durQHrfqwgwM5o/lcdt-a-myopic-decision-theory) *is already pretty well-written and I urge you to check it out. In trying to summarize an already well summarized post (to any alignment researcher anyway), I've aimed* [*lower*](https://www.lesswrong.com/posts/ERWeEA8op6s6tYCKy/what-cognitive-biases-feel-like-from-the-inside)*: catering to a dense set of possible attention-investments. This (ie. the linked dynalist in the first section) is an experiment, and hopefully more fun and clarifying than annoying, but I haven't had the time to incorporate much feedback to guarantee this. I hope you'll enjoy it anyway.*
***Epistemic status:** I'd say this post **suffers from:** deadline rushedness, low feedback, some abstract speculation, and of course, trying to reason about things that don't exist yet using frameworks that I barely trust. It **benefits from:** trying really hard to not steamroll over concerns, being honest about flailing, being prudent with rigor, a few discussions with the authors of the original post, and its main intent being clarification of what someone said rather than making claims of its own.*
Summary
=======
[Here's the link to a dynalist page](https://dynalist.io/d/pGRXhEY9UYovJdSyT8US53Va#collapse&theme=default).
Click on a bullet to expand or collapse it. Try a more BFS-ish exploration than a DFS one. If you've used something like Roam, it's similar.
**The rest of the post assumes you've clicked through to the summary above and finished reading it!**
We're Not Really Doing Decision Theory
======================================
If you're used to reading about Newcomb-like problems and comparisons of various kinds of *correct* ways to think about the influence of your action, LCDT is obviously just a *lie*. And as we all know, it's silly to program a lie into your AGI; no consequentialist will keep it around any more than it will keep around [2+2=5 built into it](https://www.lesswrong.com/posts/CpvyhFy9WvCNsifkY/discussion-with-eliezer-yudkowsky-on-agi-interventions)!
But normative decision theory is *not* the focus here. We just want to have something concrete and easily formally-manipulable to predict how the agent might intervene on things. In particular, we want to have it *not want* to intervene via other agents (including itself). You should see it less as a belief about how the world works and more as a specification of an objective; how it wants to do things.
For example, say you use CDT internally to make your decisions. Then the action you want to output given a decision problem is the action that has highest expected utility after counterfactual surgery.
Now if you take the expected utility framework for granted, this is just incorporating an epistemic model of how "influence" works in the world.
* This, in fact, is common when we usually compare, for example, CDT vs EDT vs FDT. They are all expected value maximizers, with differences in what they use to model what an intervention influences, when calculating expected value.
Now looking at LCDT, you might say "Well, this is just a *wrong* model of the world. Agents *can* in fact affect other agents. This is the classic error of betting on the agent being persistently stupid, like building in facts like 2+2=5. You can't expect that to be a stable part of the model if this agent is to be at all capable."
Here is where you want to keep the totality of "Decision Theory = What you care about ×.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
How to calculate which action to take to get what you care about" *but reject a clean, necessarily-sensible decomposition.*
* *Question for author:* Are you allowed to do that, in general? Or is it a kind of cleverness that pushes the consequentialism deeper and just make it [harder to see how things could fail](https://arbital.com/p/utility_indifference/)?
So it' s more of a "I care about maximizing my reward, but not if it happens via other agents" rather than "I don't believe I can effect things if it happens via other agents". Similar to reflective humans saying ["I want to do the right thing, but *not* if it involves killing babies"](https://www.lesswrong.com/s/AmFb5xWbPWWQyQ244/p/dWTEtgBfFaz6vjwQf).
* *Note:* Given how feeble human optimization powers are, I doubt that this analogy can be pushed too far. But Eliezer2008 seems to want to, at least in reverse.
Here's another analogy: Taking [Koen's label](https://www.lesswrong.com/posts/Y76durQHrfqwgwM5o/lcdt-a-myopic-decision-theory#comment-LyiGQwYnbTh6Dduap) of LCDT as "mis-approximating the world" literally would be like responding to someone who says "I am turned on by pain and seek it out" with "That's a bad model of reality, because pain is the nervous signal that makes you averse to things." It's a non-sequitur. *They are just wired that way and epistemic updates are not going to change that*.[1]
* *Note:* I'm not saying that Koen is imputing these connotations (clearly he isn't), only that this is the connotation to avoid.
Of course, as someone reasoning from the outside, you could simply say "the agent behaves as if it believes..." instead of worrying about the subtleties of epistemics vs ethics. But you will still have to follow any weirdness (such as [outcome-pumping](https://arbital.com/p/utility_indifference/)) that comes from its "behaves as if it believes it can't touch humans", to catch whatever buckles under optimization pressure.
Accordingly, the benchmarks to test this decision theory aren't complicated Newcomblike problems, but a mixture of very basic ones, as you saw in the summary. After the rather brutal mutilation of its graph to ensure myopia, the operating question becomes "does it even manage to do somewhat capable things" rather than "does it get the answer right in all cases".
This might seem like an obvious point, but it's important to orient *which part of the thesis you want to concentrate your rigor-insistence on*, as we'll see in the next few sections.
Epistemic Decisions
===================
A key confusion that arose for me was: *where the heck is the simulation (of HCH etc) in the model coming from?*
Either we already have a simulable model of HCH coming from somewhere and all of the cognition captured by LCDT is merely choosing to output the same thing as the simulable model it runs. In which case it is perfectly useless, and the real safety problem has been shifted to the not-pictured generator of the simulable model.
Or, more sensibly (especially when thinking about [performance competitiveness](https://www.lesswrong.com/posts/4XPa3xa44jAWiCkmy/risks-from-learned-optimization-conclusion-and-related-work)), it learns the model on its own. But in that case, how does it update itself when it doesn't even believe it can (or, more appropriately given the previous section, *doesn't even want to*) influence itself? And so how did it get to a good model at all?
It felt like I was running into a bit of unintentional sleight-of-hand in the post, with an *assertion of capability for a thing **X*** but *proof of myopia for a slightly different thing **Y.***
Conflating, for example,
* **X:** The LCDT *agent*, an agent that uses LCDT as its decision theory to do its decision making parts, that *feeds* the decision theory with whatever informative inputs it needs (such as the causal DAG and the agent annotations)
* **Y:** LCDT, the *decision theory* that needs as *input* the full DAG, the actionset, the utilities, and the decision-node-annotations
...leads to the capacity (and therefore potential safety issues) coming from the non-**Y** parts of **X**, as described at the beginning of this section. Because to use the **Y** part would be to use LCDT to make *epistemic decisions.* That's a no-go (at least, naively), to the extent that deliberate learning requires deliberate self-modification of some kind. And the non-**Y**parts have not been proven to be myopic.
Or, since we're not really trying to do decision theory very formally as much as identifying the objective, we might only loosely distinguish:
* **X*****:** Informal reasoning about decision theories to point at specification of objectives*, as a way to think about how we might want a model to act in the world
* **Y:** *Formal decision theory*, as a mathematical function that selects an action given an actionset (and other facts)
...which means any capacity (and therefore potential safety issues) comes from the specific details of how **X** is implemented in full**.** **Y**, OTOH,is a very simple computation over a DAG.
You could posit that learning the DAG is something that would be updated by SGD (or whatever other optimization process); the agent wouldn't explicitly choose it anymore than *you* explicitly ran through a DAG of *epistemic choices* when you were two years old and shifting around your neural wirings.
So there's another very similar possible sleight of hand, where:
* Some important knowledge/capability will be developed by a powerful optimization process that isn't part of the DAG, so isn't limited by LCDT's refusal-to-learn
* We've proved that everything executed based on the now *developed* DAG is myopic and therefore not deceptive
But is that so? Are these really deal-breakers? If you want to stop and try answering yourself, pause reading now.
unFAQ
=====
Here is where you, dear reader, ask several questions and I say "open problem". This may be because no one knows, or simply that [*I*](https://www.lesswrong.com/posts/vNBxmcHpnozjrJnJP/no-one-knows-what-science-doesn-t-know) don't know and didn't get around to figuring out/asking. Either way, plenty for you to chew on! Let me demonstrate:
**How do we annotate the causal DAG with "agents"?**
It's an open problem to detect when something should count as an agent. There are some preliminary stabs in the [original post](https://www.lesswrong.com/posts/Y76durQHrfqwgwM5o/lcdt-a-myopic-decision-theory).
**Isn't that going to be sensitive to the granularity, and purpose, of the DAG? Like, I'm both a smart human and a protein machine. How would we prevent it from modeling me at too fine a level and use my atoms for something else?**
Yes, this is an open problem that might be a subproblem of the one before it. Maybe you could insist that it shouldn't influence via things labeled "agents", if that happens at *any* level of abstraction?
**Wouldn't that just pass the optimization pressure to moving away from generating abstractions that contain agents, so it can have** [**more actuators**](https://www.alignmentforum.org/s/fSMbebQyR4wheRrvk) **in the world?**
Serves me right for recommending an ad-hoc patch. Can we file this back under "open problem" please?
**How do we make sure that the agent we arrive at in our training is indeed this strange construction that you call an LCDT agent?**
I'm afraid that's an open problem.
**Wouldn't the model be unable to learn anything at runtime because it can't plan a self-modification? How exactly is it going to be (performance) competitive?**
Yes, this is a fair question. Maybe something like epistemic decisions could be made precise and workable. However, it might be equipped with powerful optimization capacity at runtime that search for good algorithms to execute, without trying to model itself.
**...and that isn't going to spawn off** [**mesa-optimizers**](https://www.lesswrong.com/s/r9tYkB2a8Fp4DN8yB)**?**
Yes, this is an open problem. But if the LCDT agent doesn't manage to solve that for itself, it won't really be able to do well on the reward signal we're training it on either, so that could give us some hope.
**And what if it spawns off mesa-optimizers in its unrestricted search that work for** ***its*** **base objective but not for us? Is this an open subproblem or** ***the alignment problem*****?**
No one said it had to be unrestricted! Maybe it could try to make sure to search for only LCDT agents itself?
...Okay, I don't actually expect some [weird recursion](https://www.lesswrong.com/posts/Djs38EWYZG8o7JMWY/paul-s-research-agenda-faq?commentId=79jM2ecef73zupPR4) to save us. It's probably best to let the training process (like SGD or whatever) pick out its world-model for it and let it simply act, punting the question of performance competitiveness for now. It seems like it's hard for [*anyone*](https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) to avoid alignment tax entirely.
**And what's going to make sure that with each training step that updates its DAG (and therefore the model itself), it's going to stay myopic?**
This is presumably the same open problem as making sure that our training ends up being an LCDT agent. It's part of the [training goal](https://www.lesswrong.com/posts/FDJnZt8Ks2djouQTZ/how-do-we-become-confident-in-the-safety-of-a-machine#What_s_a_training_story_). I imagine something like an *LCDT magnet* that pulls our search during training. Or maybe we even restrict the model-space to only those that are verifiably running LCDT.
**But since we are modifying the agent and expecting it to gain immense capability, couldn't there be some way for SGD to give rise to something fishy** ***inside*** **our agent?**
In some sense we're only searching for good DAGs now, rather than any damn algorithm. That seems (a lot) safe(r) given that we know that the use the DAG will be put to is somehow contained by the agent's decision theory.
**"Somehow"?**
Again, how to implement the LCDT magnet is an open problem. To challenge the *robustness of the actions of the model being routed correctly through its decision theory* is to take us back to that open problem. Which, noted. Other questions?
**That still leaves open how exactly we're getting it a reliable unhijacked signal to something like HCH for the SGD update?**
Good question. There is no guarantee of this happening. The original post has something hopeful to say again: that the simulation would be at least more understandable than any ol' search for HCH, owing to the myopia constraint.
**If we're still in the hypothetical of the LCDT agent not participating in the development of the world model, how exactly is LCDT's myopia supposed to help?**
If the DAG arrived at involved complicated thinking about agents influencing other agents, then the output will be wrong, since we're running myopic LCDT over it. This will disincentivize such DAGs and incentivize more direct ones instead.
**Wouldn't that make it less simple, given that HCH is more naturally a bunch of agents?**
Hmm, maybe. There's some similar-looking discussion going on [here](https://www.greaterwrong.com/posts/Y76durQHrfqwgwM5o/lcdt-a-myopic-decision-theory#comment-crfQECLrYRaP225ne), but it seems to assume that the LCDT agent is making epistemic decisions (*I may be wrong about this, but I don't think so),* which, as argued above, seems quite tricky at best. I do agree that it's likely to prevent [mindcrimes](https://www.lesswrong.com/tag/mind-crime). Another point Evan makes there is that the LCDT agent just needs to figure out "how to decompose its own decision nodes into individual computations so it can efficiently compute things internally and then know when and when not to label its internal computations as agents", which as he notes is a problem for all CDT-based agents.
**Hold on, so now it's allowed to decompose decision nodes if it** ***really really*** **wants to influence them (in particular here, itself)?**
Aaaah ummm hmmmm. Let's leave this to someone interested in formalizing *lonely epistemic decision theory*.
**I also don't see a detailed outline preventing the causal model of the world from becoming arbitrarily "inaccurate" (in modeling reality) over the course of training, while still reducing loss. Because LCDT is not following a "real" decision theory, the combination of a broken DAG + broken decision theory might become more effective from "errors" cancelling out structurally. Couldn't that allow for deception?**
I'm not sure if this is a hard-to-avert problem, but this does sound exactly like the kind of "make the consequentialism harder to see" concern raised as a note for the author earlier. Hopefully you could just check for/incentivize an accurate DAG alongside.
**And those are just ways to screw with things on the inside. Even if you were able to ameliorate that, what about screwing with things on the** ***outside*****? Deceive agents by rewiring the world around them rather than trying to influence them?**
There wouldn't [magically emerge](https://www.lesswrong.com/posts/cnYHFNBF3kZEyx24v/ghosts-in-the-machine) a deceptive plan to fool humans this way, because of the rather absolute "sacredness" LCDT holds for agents' autonomy. But it is likely to just happen to discover that rewiring works very conveniently for it, yeah. How exactly to extend to cover this is an open problem, but there's some suggestions and discussions [here](https://www.lesswrong.com/posts/Y76durQHrfqwgwM5o/lcdt-a-myopic-decision-theory#comment-PvFMA5WLZNkDc2QYR). I'd guess that something like "don't influence things downstream of an agent either" is unlikely to work. Maybe ordinary intuitions like "let people know when you've messed with their environments" could work?
**Wouldn't that become a** [**channel**](https://arbital.com/p/AI_boxing/) **for it to manipulate the huma**−
Apologies, sometimes I get ahead of myself and forget that we're playing the "open problem!" game.
**In that case, let me also raise the concern of** [**vulnerabilities that arise when you want your agent to operate over causal structures rather than pure sense data**](https://arbital.com/p/probable_environment_hacking/)**.**
Thank you: *open problem!*
Conclusion
==========
From one POV, I'm not sure what's left of LCDT, really. Did we only shuffle around the core issue or also do a little bit of reduction, find some insights to follow up on along the way? I'd like to think so, but I can't say I feel optimistic about solving all the open problems from earlier.
From another POV, being in this state feels like par for the course for any honest analysis of (inner) alignment proposals, barring clear breakthroughs.
Either way, it still fulfils its main function very well: an existence "proof". Something very intriguing and concrete to play around with when trying to understand *what the hell* deception is, is very useful to at least test your formalization and safety-mindset mettle.
Footnotes
---------
[1] ...or maybe it is? Consider [this post](https://www.lesswrong.com/posts/YN6daWakNnkXEeznB/propagating-facts-into-aesthetics), for humans. |
a35c71f1-fc16-4e40-a4a1-b5e7bb292631 | trentmkelly/LessWrong-43k | LessWrong | The Art and Science of Intuition Pumping
Epistemic status: exploratory
What is “intuition pumping”? I had seen the term used on LW, probably used it myself, without wondering that much what “intuition pumps” even are. Pure pedagogical illustrations? Mere rhetorical devices? Subtle epistemic tools? My intuition about intuitions favored the latter, but I also knew well how intuition can mislead.
To make sense of all of this, I went back to the source, the inventor and master wielder of intuition pumps: Daniel Dennett. Not only did he coin the term and use it abundantly, but his “Intuition Pumps and Other Tools for Thinking” teaches how to use intuition pumps his way.
Spoiler alert: they can and should be used as thinking tools to find out the essential parts of the problem. But that requires a meta-level analysis that Dennett calls “turning the knobs”— checking the robustness of the intuition to various changes in the story.
Intuition pumps’ origin story
Everyone I’ve read on intuition pumps, Dennett included, points to his response to Searle’s Chinese Room as the birth of the term.
(The milk of human intentionality, Dennett, 1980)
> Searle's form of argument is a familiar one to philosophers: he has constructed what one might call an intuition pump, a device for provoking a family of intuitions by producing variations on a basic thought experiment. An intuition pump is not, typically, an engine of discovery, but a persuader or pedagogical tool — a way of getting people to see things your way once you've seen the truth, as Searle thinks he has. I would be the last to disparage the use of intuition pumps — I love to use them myself — but they can be abused. In this instance I think Searle relies almost entirely on ill-gotten gains: favorable intuitions generated by misleadingly presented thought experiments.
What’s interesting here is that Dennett seems to have expanded his views on the usefulness of intuition pumps: here he explicitly criticizes the use of intuition pumps for discovery and clarificat |
65b4ea3c-7145-4c04-9712-861aa5d44f92 | trentmkelly/LessWrong-43k | LessWrong | Artificial V/S Organoid Intelligence
AI is a very controversial topic these days, but I can't help but ask, what's the deal with OI[1], does it have any powerful effects on the future, and can it compete with AI?
1. ^
Organoid Intelligence is, I quote "an emerging multidisciplinary field working to develop biological computing using 3D cultures of human brain cells (brain organoids) and brain-machine interface technologies.", from frontiersin. |
bd0ebe42-a7b5-4ba7-835f-e932b1f484b8 | trentmkelly/LessWrong-43k | LessWrong | Open Thread April 25 - May 1, 2016
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
----------------------------------------
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday. |
9214627a-96dc-41ca-ac06-cb047ff25c13 | trentmkelly/LessWrong-43k | LessWrong | Meetup : West LA Meetup - "The Unreasonable Effectiveness of Mathematics in the Natural Sciences"
Discussion article for the meetup : West LA Meetup - "The Unreasonable Effectiveness of Mathematics in the Natural Sciences"
WHEN: 12 September 2012 07:00:00PM (-0700)
WHERE: 10850 West Pico Blvd, Los Angeles, CA 90064
When: 7:00pm - 9:00pm Wednesday, September 12th.
Where: The Westside Tavern in the upstairs Wine Bar (all ages welcome), located inside the Westside Pavillion on the second floor, right by the movie theaters. The entrance sign says "Lounge".
Parking is free for 3 hours.
Discussion Topic: We will discuss the ideas discussed in a paper summarized here. Also, read this.
There will be general discussion too, and there are lots of interesting recent posts. But don't worry if you don't have time to read any articles, or even if you've never read any Less Wrong! Bring a friend! The atmosphere is casual, and good, intelligent conversation with friendly people is guaranteed.
I will bring a whiteboard with Bayes' Theorem written on it.
Discussion article for the meetup : West LA Meetup - "The Unreasonable Effectiveness of Mathematics in the Natural Sciences" |
0d8faff5-5303-4b03-9294-1b27995e8e79 | trentmkelly/LessWrong-43k | LessWrong | Basics of Bayesian learning
See also: the “preliminaries” section in this SLT intro doc.
Introduction
This is a preliminary post for the series on “distilling PDLT without physics”, which we are working on joint with Lauren Greenspan. The first post in this series is my post on the “Laws of large numbers” (another preliminary) which is completely independent of this one.
As a reminder, the book PDLT (“Principled of Deep Learning Theory) uses statistical physics and QFT formalism to describe certain interesting (and relevant) critical behaviors of neural nets.
In this post, we'll introduce Bayesian learning as a perspective on deep learning through a particularly physics-compatible lens. Bayesian learning is a special case of statistical inference, and the first half of the post will be a review of statistical inference. Recall that (for a particular function approximation context), a machine learning model is a parameterized function y=Mθ(x), where θ is a latent "weight" parameter that needs to be learned. Writ large, a “learning” problem for a model of this type is a method to reconstruct a best (probabilistic) guess for the weight θ from a collection of observed input-output pairs (x1,y1)…,(xn,yn). In typical machine learning implementations, the parameter θ is chosen by a process (like SGD) for locally optimizing some loss function L(θ) associated to the parameters xi,yi and the function Mθ. Bayesian learning gives an alternative (and in some sense “optimal”) statistical method for inferring a best-guess weight θ from knowledge of the model and observations. The two methods agree in certain limiting settings, but do not agree in practice either in realistic models or in interesting theoretical contexts, including in the formally analyzable infinite-width setting we will be looking at in later posts.
As I’ll explain, Bayesian learning is
* theoretically nicer easier to analyse than SGD (with Bayesian learning corresponding to thermostatics and SGD corresponding to thermodynamics)
|
593124ce-0858-42ab-bb39-7077f0e7975b | trentmkelly/LessWrong-43k | LessWrong | "Science" as Curiosity-Stopper
Imagine that I, in full view of live television cameras, raised my hands and chanted abracadabra and caused a brilliant light to be born, flaring in empty space beyond my outstretched hands. Imagine that I committed this act of blatant, unmistakeable sorcery under the full supervision of James Randi and all skeptical armies. Most people, I think, would be fairly curious as to what was going on.
But now suppose instead that I don’t go on television. I do not wish to share the power, nor the truth behind it. I want to keep my sorcery secret. And yet I also want to cast my spells whenever and wherever I please. I want to cast my brilliant flare of light so that I can read a book on the train—without anyone becoming curious. Is there a spell that stops curiosity?
Yes indeed! Whenever anyone asks “How did you do that?” I just say “Science!”
It’s not a real explanation, so much as a curiosity-stopper. It doesn’t tell you whether the light will brighten or fade, change color in hue or saturation, and it certainly doesn’t tell you how to make a similar light yourself. You don’t actually know anything more than you knew before I said the magic word. But you turn away, satisfied that nothing unusual is going on.
Better yet, the same trick works with a standard light switch.
Flip a switch and a light bulb turns on. Why?
In school, one is taught that the password to the light bulb is “Electricity!” By now, I hope, you’re wary of marking the light bulb “understood” on such a basis. Does saying “Electricity!” let you do calculations that will control your anticipation of experience? There is, at the least, a great deal more to learn.1
If you thought the light bulb was scientifically inexplicable, it would seize the entirety of your attention. You would drop whatever else you were doing, and focus on that light bulb.
But what does the phrase “scientifically explicable” mean? It means that someone else knows how the light bulb works. When you are told the light bulb is “sci |
e22ef12c-fb55-4f2f-ac8c-c521b8e080ae | trentmkelly/LessWrong-43k | LessWrong | Where's the first benign agent?
|
523eb08c-116d-4f82-a5fb-5de02c2c4c6e | StampyAI/alignment-research-dataset/special_docs | Other | Responsive safety in reinforcement learning by pid lagrangian methods
Responsive Safety in Reinforcement Learning by PID Lagrangian Methods
Adam Stooke1 2Joshua Achiam1 2Pieter Abbeel1
Abstract
Lagrangian methods are widely used algorithms
for constrained optimization problems, but their
learning dynamics exhibit oscillations and over-
shoot which, when applied to safe reinforcement
learning, leads to constraint-violating behavior
during agent training. We address this shortcom-
ing by proposing a novel Lagrange multiplier up-
date method that utilizes derivatives of the con-
straint function. We take a controls perspective,
wherein the traditional Lagrange multiplier up-
date behaves as integral control; our terms intro-
duce proportional andderivative control, achiev-
ing favorable learning dynamics through damp-
ing and predictive measures. We apply our PID
Lagrangian methods in deep RL, setting a new
state of the art in Safety Gym, a safe RL bench-
mark. Lastly, we introduce a new method to ease
controller tuning by providing invariance to the
relative numerical scales of reward and cost. Our
extensive experiments demonstrate improved per-
formance and hyperparameter robustness, while
our algorithms remain nearly as simple to derive
and implement as the traditional Lagrangian ap-
proach.
1. Introduction
Reinforcement learning has solved sequential decision tasks
of impressive difficulty by maximizing reward functions
through trial and error. Recent examples using deep learn-
ing range from robotic locomotion ( Schulman et al. ,2015 ;
Gu et al. ,2016 ;Schulman et al. ,2017 ;Levine et al. ,2016 )
to sophisticated video games ( Mnih et al. ,2013 ;Schulman
et al. ,2017 ;OpenAI ,2018 ;Jaderberg et al. ,2019 ). While
errors during training in these domains come without cost,
in some learning scenarios it is important to limit the rates of
hazardous outcomes. One example would be wear and tear
1University of California, Berkeley2OpenAI. Correspondence
to: Adam Stooke .
Proceedings of the 37thInternational Conference on Machine
Learning , Vienna, Austria, PMLR 119, 2020. Copyright 2020 by
the author(s).on a robot’s components or its surroundings. It may not be
possible to impose such limits by prescribing constraints in
the action or state space directly; instead, hazard-avoiding
behavior must be learned. For this purpose, we use the
well-known framework of the constrained Markov decision
process (CMDP) ( Altman ,1999 ), which limits the accumu-
lation of a “cost” signal which is analogous to the reward.
The optimal policy is one which maximizes the usual re-
turn while satisfying the cost constraint. In safe RL the
agent must avoid hazards not only at convergence, but also
throughout exploration and learning.
Lagrangian methods are a classic approach to solving con-
strained optimization problems. For example, the equality-
constrained problem over the real vector x:
min
xf(x)s.t.g(x) = 0 (1)
is transformed into an unconstrained one by introduction
of a dual variable–the Lagrange multiplier, λ–to form the
Lagrangian:L(x,λ) =f(x)+λg(x), which is used to find
the solution as:
(x∗,λ∗) = argmax
λmin
xL(x,λ) (2)
Gradient-based algorithms iteratively update the primal and
dual variables:
−∇xL(x,λ) =−∇xf(x)−λ∇xg(x) (3)
∇λL(x,λ) =g(x) (4)
so thatλacts as a learned penalty coefficient in the objective,
leading eventually to a constraint-satisfying solution (see
e.g.Bertsekas (2014 )). The Lagrangian multiplier method
is readily adapted to the constrained RL setting ( Altman ,
1998 ;Geibel & Wysotzki ,2011 ) and has become a popular
baseline in deep RL ( Achiam et al. ,2017 ;Chow et al. ,2019 )
for its simplicity and effectiveness.
Although they have been shown to converge to optimal,
constraint-satisfying policies ( Tessler et al. ,2018 ;Paternain
et al. ,2019 ), a shortcoming of gradient Lagrangian methods
for safe RL is that intermediate iterates often violate con-
straints. Cost overshoot and oscillations are in fact inherent
to the learning dynamics ( Platt & Barr ,1988 ;Wah et al. ,
2000 ), and we witnessed numerous problematic cases in our
own experiments. Figure 1(left) shows an example from a
Responsive Safety in RL by PID Lagrangian Methods
deep RL setting, where the cost and multiplier values oscil-
lated throughout training. Our key insight in relation to this
deficiency is that the traditional Lagrange multiplier update
in (4) amounts to integral control on the constraint. The
90-degree phase shift between the curves is characteristic
of ill-tuned integral controllers.
Our contribution is to expand the scope of possible Lagrange
multiplier update rules beyond ( 4), by interpreting the over-
all learning algorithm as a dynamical system. Specifically,
we employ the next simplest mechanisms, proportional and
derivative control, to λ, by adding terms corresponding to
derivatives of the constraint function into ( 4) (derivatives
with respect to learning iteration). To our knowledge, this
is the first time that an expanded update rule has been con-
sidered for a learned Lagrange multiplier. PID control is
an appealing enhancement, evidenced by the fact that it is
one of the most widely used and studied control techniques
(˚Astr¨om & H ¨agglund ,2006 ). The result is a more respon-
sive safety mechanism, as demonstrated in Figure 1(right ),
where the cost oscillations have been damped, dramatically
reducing violations.
0
2
4
Agent Steps 1e7
180
200
220
240g(x)Δ ≈ 90∘Traditional Lagrangian
0
2
4
Agent Steps 1e7
180
200
220
240g(x)PID Lagrangian
0.5
1.0
1.5
λ
0.5
1.0
1.5
λ
Figure 1. Left: The traditional Lagrangian method exhibits oscilla-
tions with 90◦phase shift between the constraint function and the
Lagrange multiplier, characteristic of integral control. Right : PID
control on the Lagrange multiplier damps oscillations and obeys
constraints. Environment: D OGGO BUTTON 1, cost limit 200.
Our contributions in this paper are outlined as follows. First,
we provide further context through related works and pre-
liminary definitions. In Section 4, we propose modified
Lagrangian multiplier methods and analyze their benefits
in the learning dynamics. Next, in Section 5, we cast con-
strained RL as a dynamical system with the Lagrange multi-
plier as a control input, to which we apply PID control as
a new algorithm. In Section 6, we adapt a leading deep RL
algorithm, Proximal Policy Optimization (PPO) ( Schulman
et al. ,2017 ) with our methods and achieve state of the art
performance in the OpenAI Safety-Gym suite of environ-
ments ( Ray et al. ,2019 ). Finally, in Section 7we introduce
another novel technique that makes tuning easier by provid-
ing invariance to the relative numerical scales of rewards and
costs, and we demonstrate it in a further set of experiments.
Our extensive empirical results show that our algorithms,which are intuitive and simple to implement, improve cost
performance and promote hyperparameter robustness in a
deep RL setting.
2. Related Work
Constrained Deep RL. Adaptations of the Lagrange multi-
plier method to the actor-critic RL setting have been shown
to converge to the optimal, constraint-satisfying solution
under certain assumptions ( Tessler et al. ,2018 ). Conver-
gence proofs have relied upon updating the multiplier more
slowly than the policy parameters ( Tessler et al. ,2018 ;Pa-
ternain et al. ,2019 ), implying many constraint-violating
policy iterations may occur before the penalty comes into
full effect.
Several recent works have aimed at improving constraint
satisfaction in RL over the Lagrangian method, but they
tend to incur added complexity. Achiam et al. (2017 ) in-
troduced Constrained Policy Optimization (CPO), a policy
search algorithm with near-constraint satisfaction guaran-
tees at every iteration, based on a new bound on the expected
returns of two nearby policies. CPO includes a projection
step on the policy parameters, which in practice requires
a time-consuming backtracking line search. Yet, simple
Lagrangian-based algorithms performed as well or better in
a recent empirical comparison in Safety Gym ( Ray et al. ,
2019 ). Approaches to safe RL based on Lyapunov func-
tions have been developed in a series of studies ( Chow et al. ,
2018 ;2019 ), resulting in algorithms that combine a projec-
tion step, as in CPO, with action-layer interventions like the
safety layer of Dalal et al. (2018 ). Experimentally, this line
of work showed mixed performance gains over Lagrangian
methods, at a nontrivial cost to implement and without clear
guidance for tuning. Liu et al. (2019 ) developed interior
point methods for RL, which augment the objective with
logarithmic barrier functions. These methods are shown
theoretically to provide suboptimal solutions. Furthermore,
they require tuning of the barrier strength and typically
assume already feasible iterates, the latter point possibly
being problematic for random agent initializations or under
noisy cost estimates. Most recently, Yang et al. (2020 ) ex-
tended CPO with a two-step projection-based optimization
approach. In contrast to these techniques, our method re-
mains nearly as simple to implement and compute as the
baseline Lagrangian method.
Dynamical Systems View of Optimization. Several recent
works have proposed different dynamical systems view-
points to analyze optimization algorithms, including those
often applied to deep learning. Hu & Lessard (2017 ) rein-
terpreted first-order gradient optimization as a dynamical
system; they likened the gradient of the objective, ∇xf,
to the plant, which the controller aims to drive to zero to
arrive at the optimal parameters, x∗. Basic gradient de-
Responsive Safety in RL by PID Lagrangian Methods
scent then matches the form of integral control (on ∇xf).
They extend the analogy to momentum-based methods, for
example linking Nesterov momentum to PID control with
lag compensation. In another example, An et al. (2018 )
interpreted SGD as P-control and momentum methods as
PI-control. They introduced a derivative term, based on
the change in the gradient, and applied their resulting PID
controller to improve optimization of deep convolutional
networks. Other recent works bring yet other perspectives
from dynamical systems to deep learning and optimization,
see for example ( Lessard et al. ,2014 ;Nishihara et al. ,2015 ;
Liu & Theodorou ,2019 )). None of these works address con-
strained RL, however, necessitating our distinct formulation
for that problem.
Constrained Optimization. Decades’ worth of literature
have accumulated on Lagrangian methods. But even recent
textbooks on the topic ( Bertsekas ,2014 ;Nocedal & Wright ,
2006 ) only consider updating the Lagrange multiplier us-
ing the value of the constraint function, g(x), and miss
ever using its derivatives, ˙g(x)or¨g(x), which we introduce.
The modification to the Lagrangian method most similar
in effect to our proportional control term (here using ˙g(x))
is the quadratic penalty method ( Hestenes (1969 );Powell
(1969 ) see also e.g.Bertsekas (1976 )), which we compare
in Section 4.Song & Leland (1998 ) proposed a controls
viewpoint (continuous-time) of optimizing neural networks
for constrained problems and arrived at proportional control
rules only. Related to our final experiments on reward-scale
invariance, Wah et al. (2000 ) developed an adaptive weight-
ing scheme for continuous-time Lagrangian objectives, but
it is an intricate procedure which is not straightforwardly
applied to safe RL.
3. Preliminaries
Constrained Reinforcement Learning Constrained
Markov Decision Processes (CMDP) ( Altman ,1998 )
extend MDPs (see Sutton & Barto (1998 )) to incorporate
constraints into reinforcement learning. A CMDP is the
expanded tuple (S,A,R,T,µ,C 0,C1,...,d0,d1,...), with
the cost functions Ci:S×A×S→Rdefined by the
same form as the reward, and di:Rdenoting limits on the
costs. For ease of notation, we will only consider a single,
all-encompassing cost.
The expected sum of discounted rewards over tra-
jectories, τ= (s0,a0,s1,a1,...), induced by the
policyπ(a|s)is a common performance objective:
J(π) =Eτ∼π[/summationtext∞
t=0γtR(st,at,st+1)]. The analo-
gous value function for cost is defined as: JC(π) =
Eτ∼π[/summationtext∞
t=0γtC(st,at,st+1)]. The constrained RL prob-
lem is to solve for the best feasible policy:
π∗= argmax
πJ(π)s.t.JC(π)≤d (5)Deep reinforcement learning uses a deep neural network
for the policy, πθ=π(·|s;θ)with parameter vector θ, and
policy gradient algorithms improve the policy iteratively
by gathering experience in the task to estimate the reward
objective gradient, ∇θJ(πθ). Thus our problem of interest
is better expressed as maximizing score at some iterate, πk,
while ideally obeying constraints at each iteration:
max
πJ(πk)
s.t.JC(πm)≤d m∈{0,1,...,k}(6)
Practical settings often allow trading reward performance
against some constraint violations ( e.g.the constraints them-
selves may include a safety margin). For this purpose we
introduce a constraint figure of merit with our experiments.
3.1. Dynamical Systems and Optimal Control
Dynamical systems are processes which can be subject to
an external influence, or control . A general formulation for
discrete-time systems with feedback control is:
xk+1=F(xk,uk)
yk=Z(xk)
uk=h(y0,...,yk)(7)
with state vector x, dynamics function F, measurement
outputsy, applied control u, and the subscript denoting
the time step. The feedback rule hhas access to past and
present measurements. A problem in optimal control is to
design a control rule, h, that results in a sequence y0:T.=
{y0,...,yT}(orx0:Tdirectly) that scores well according to
some cost function C. Examples include simply reaching
a goal condition, C=|yT−y|, or following close to a
desired trajectory, y0:T.
Systems with simpler dependence on the input are generally
easier to analyze and control ( i.e.simplerhperforms well),
even if the dependence on the state is complicated ( Skelton ,
1988 ). Control-affine systems are a broad class of dynamical
systems which are especially amenable to analysis ( Isidori
et al. ,1995 ). They take the form:
F(xk,uk) =f(xk)+g(xk)uk (8)
wherefandgmay be nonlinear in state, and are possibly
uncertain , meaning unknown. We will seek control-affine
form for ease of control and to support future analysis.
4. Modified Lagrangian Methods for
Constrained Optimization
Lagrangian methods are a classic family of approaches to
solving constrained optimization problems. We propose
an intuitive, previously overlooked form for the multiplier
Responsive Safety in RL by PID Lagrangian Methods
update and derive its beneficial effect on the learning dy-
namics. We begin by reviewing a prior formulation for the
equality-constrained problem.1
4.1. Review: “Basic Differential Multiplier Method”
We follow the development of Platt & Barr (1988 ), who
analyzed the dynamics of a continuous-time neural learning
system applied to this problem (our result can similarly be
derived for iterative gradient methods). They begin with the
component-wise differential equations:
˙xi=−∂L(x,λ)
∂xi=−∂f
∂xi−λ∂g
∂xi(9)
˙λ=α∂L(x,λ)
∂λ=αg(x) (10)
where we have inserted the scalar constant αas a learning
rate onλ. Differentiating ( 9) and substituting with ( 10) leads
to the second-order dynamics, written in vector format:
¨x+A˙x+αg(x)∇g= 0 (11)
which is a forced oscillator with damping matrix equal to
the weighted sum of Hessians:
Aij=∂2f
∂xi∂xj+λ∂2g
∂xi∂xj,or,A=∇2f+λ∇2g(12)
Platt & Barr (1988 ) showed that if Ais positive definite,
then the system ( 11) converges to a solution that satisfies the
constraint. Platt & Barr (1988 ) also noted that the system
(9)-(10) is prone to oscillations as it converges into the
feasible region, with frequency and settling time depending
onα. We provide complete derivations of the dynamics in
(11) and for our upcoming methods in an appendix.
4.2. Proportional-Integral Multiplier Method
In (10),λsimply integrates the constraint. To improve the
dynamics towards more rapid and stable satisfaction of con-
straints, we introduce a new term in λthat is proportional to
the current constraint value. In the differential equation for
λ, this term appears as the time-derivative of the constraint:
˙λ=αg(x)+β˙g(x) =αg(x)+β/summationdisplay
j∂g
∂xj˙xj (13)
with strength coefficient, β. Replacing ( 10) by ( 13) and
combining with ( 9) yields similar second-order dynamics as
(11), except with an additional term in the damping matrix:
¨x+/parenleftbig
A+β∇g∇⊤g/parenrightbig˙x+αg(x)∇g= 0 (14)
1Standard techniques extend our results to inequality con-
straints, and multiple constraints, as in Platt & Barr (1988 ), and
notation is simplest for an equality constraint.The new term is beneficial because it is positive semi-
definite—being the outer product of a vector with itself—so
it can increase the damping eignevalues, boosting conver-
gence. The results of ( Platt & Barr ,1988 ) hold under ( 13,
14), because the conditions of the solution, namely ˙x= 0
andg(x) = 0 , remain unaffected and extend immediately to
˙g(x) = 0 (and for the sequel, to ¨g(x) = 0 ). To our knowl-
edge, this is the first time that a proportional-integral update
rule has been considered for a learned Lagrange multiplier.
The well-known penalty method ( Hestenes ,1969 ;Powell ,
1969 ) augments the Lagrangian with an additional term,
c
2g(x)2, which produces a similar effect on the damping
matrix, as shown in ( Platt & Barr ,1988 ):
Apenalty=A+c∇g∇⊤g+cg(x)∇2g (15)
Our approach appears to provide the same benefit, with-
out the following two complications of the penalty method.
First, the penalty term must be implemented in the deriva-
tive˙x, whereas our methods do not modify the Lagrangian
nor the derivative in ( 9). Second, the penalty introduces
another instance of the hessian ∇2gin the damping matrix,
which might not be positive semi-definite but shares the
proportionality factor, c, with the desired term.
4.3. Integral-Derivative Multiplier Method
A similar analysis extends to the addition of a term in λ
based on the derivative of the constraint value. It appears in
˙λas the second derivative of the constraint:
˙λ=αg(x)+γ¨g(x) (16)
with strength coefficient γ. The resulting dynamics are:
¨x+B−1A˙x+/parenleftbig
αg(x)+γ˙x⊤∇2g˙x/parenrightbig
B−1∇g= 0 (17)
withB=/parenleftbig
I+γ∇g∇⊤g/parenrightbig
, andIthe identity matrix.
The effects of the derivative update method are two-fold.
First, since the eigenvalues of the matrix B−1will be less
than1, both the damping ( A) and forcing (∇g) terms are
weakened (and rotated, generally). Second, the new forcing
term can be interpreted as a drag quadratic in the speed and
modulated by the curvature of the constraint along the direc-
tion of motion. To illustrate cases, if the curvature of gis
positive along the direction of travel, then this term becomes
a force for decreasing g. If at the same time g(x)>0, then
the traditional force will also be directed to decrease g, so
the two will add. On the other hand, if gcurves negatively
along the velocity, then the new force promotes increasing
g; ifg(x)>0, then the two forces subtract, weakening the
acceleration ¨x. By using curvature, the derivative method
acts predictively, but may be prone to instability.
Theproportional-integral-derivative multiplier method
is the combination of the previous two developments, which
Responsive Safety in RL by PID Lagrangian Methods
induced independent changes in the dynamics ( i.e.insert
the damping matrix of ( 14) into ( 17)). We leave for future
work a more rigorous analysis of the effects of the new
terms, along with theoretical considerations of the values of
coefficients α,β, andγ. In the next section, we carry the
intuitions from our analysis to make practical enhancements
to Lagrangian-based constrained RL algorithms.
5. Feedback Control for Constrained RL
We advance the broader consideration of possible multiplier
update rules by reinterpreting constrained RL as a dynam-
ical system; the adaptive penalty coefficient is a control
input, and the cost threshold is a setpoint which the system
should maintain. As the agent learns for rewards, the up-
ward pressure on costs from reward-learning can change,
requiring dynamic response. In practical Lagrangian RL,
the iterates λkmay deviate from the optimal value, even for
lucky initialization λ0=λ∗, as the policy is only partially
optimized at each iteration. Adaptive sequences λ0,...,λK
other than those prescribed by the Lagrangian method may
achieve superior cost control for Problem ( 6). In this section
we relate the Lagrangian method to a dynamical system,
formalizing how to incorporate generic update rules using
feedback. We return to the case of an inequality constrained
CMDP to present our main algorithmic contribution—the
use of PID control to adapt the penalty coefficient.
5.1. Constrained RL as a Dynamical System
We write constrained RL as the first-order dynamical sys-
tem:
θk+1=F(θk,λk)
yk=JC(πθk)
λk=h(y0,...,yk,d)(18)
whereFis an unknown nonlinear function2corresponding
to the RL algorithm policy update on the agent’s parameter
vector,θ. The cost-objective serves as the system measure,
y, which is supplied to the feedback control rule, h, along
with cost limit, d. From this general starting point, both the
RL algorithm, F, and penalty coefficient update rule, h, can
be tailored for solving Problem ( 6).
The reward and cost policy gradients of the first-order3
Lagrangian method, ∇θL(θ,λ) =∇θJ(πθ)−λ∇θJC(πθ),
can be organized into the form of ( 18) as:
F(θk,λk) =f(θk)+g(θk)λk (19)
f(θk) =θk+η∇θJ(πθk) (20)
2Known as an “uncertain” nonlinear function in the control
literature, meaning we lack an analytical expression for it.
3We discuss only the first-order case, which provides sufficient
clarity for our developments.g(θk) =−η∇θJC(πθk) (21)
with SGD learning rate η. The role of the controller is to
drive inequality constraint violations (Jc−d)+to zero in
the presence of drift from reward-learning due to f. The
Lagrange multiplier update rule for an inequality constraint
uses subgradient descent:
λk+1= (λk+KI(JC−d))+ (22)
with learning rate KIand projection into λ≥0. This
update step is clearly an integral control rule, for h.
5.2. Constraint-Controlled RL
Our general procedure, constraint-controlled RL, is given in
Algorithm 1. It follows the typical minibatch-RL scheme,
and sampled estimates of the cost criterion, ˆJCare fed back
to control the Lagrange multiplier. In contrast to prior work
(Tessler et al. ,2018 ;Paternain et al. ,2019 ) which uses a
single value approximator and treats r+λcas the reward,
we use separate value- and cost-value approximators, since
λmay change rapidly.
Whenλis large, the update in ( 19) can cause excessively
large change in parameters, θ, destabilizing learning. To
maintain consistent step size, we use a re-scaled objective
for theθ-learning loop:
θ∗(λ) = argmax
θJ−λJC= argmax
θ1
1+λ(J−λJC)
This convex combination of objectives yields the policy
gradient used in Algorithm 1. Our experiments use this
re-scaling, including for traditional Lagrangian baselines.
Algorithm 1
Constraint-Controlled Reinforcement Learning
1:procedure CONSTRAINED RL(πθ0(·|s),d)
2: Initialize control rule (as needed)
3:JC←{} ⊲cost measurement history
4: repeat
5: Sample environment: ⊲a minibatch
6: a∼π(·|s;θ),s′∼T(s,a),
7: r∼R(s,a,s′),c∼C(s,a,s′)
8: Apply feedback control:
9: Store sample estimate ˆJCintoJC
10: λ←h(JC,d), λ≥0
11: Updateπby RL: ⊲by Lagrangian objective
12: Update critics, Vφ(s),VC,ψ(s)⊲if using
13:∇θL=1
1+λ/parenleftBig
∇θˆJ(πθ)−λ∇θˆJC(πθ)/parenrightBig
14: until converged
15: returnπθ
16:end procedure
As an aside, we note that it is possible to maintain the
control-affine form of ( 19) with this re-scaling, by reparam-
Responsive Safety in RL by PID Lagrangian Methods
eterizing the control as 0≤u=λ
1+λ≤1and substituting
for (21) with:
g(θk) =−η∇θ(J(πθk)+JC(πθk)) (23)
This parameterization simply weights the reward and cost
gradients in the Lagrangian objective as:
∇θL(θ,λ) = (1−u)∇θJ(πθ)−u∇θJC(πθ) (24)
It may provide superior performance in some cases, as it
will behave differently in relation to the nonlinearity in
control which arises from the inequality constraint. We
leave experimentation with direct control on u∈[0,1]to
future work.
5.3. The PID Lagrangian Method
We now specify a new control rule for use in Algorithm
1. To overcome the shortcomings of integral-only control,
we follow the developments of the previous section and
introduce the next simplest components: proportional and
derivative terms. Our PID update rule to replace ( 22) is
shown in Algorithm 2. The proportional term will hasten
the response to constraint violations and dampen oscilla-
tions, as derived in Section 4. Unlike the Lagrangian update,
derivative control can act in anticipation of violations. It
can both prevent cost overshoot and limit the rate of cost
increases within the feasible region, useful when monitor-
ing a system for further safety interventions. Our derivative
term is projected as (·)+so that it acts against increases in
cost but does not impede decreases. Overall, PID control
provides a much richer set of controllers while remaining
nearly as simple to implement; setting KP=KD= 0re-
covers the traditional Lagrangian method. The integral term
remains necessary for eliminating steady-state violations
at convergence. Our experiments mainly focus on the ef-
fects of proportional and derivative control of the Lagrange
multiplier in constrained deep RL.
Algorithm 2 PID-Controlled Lagrange Multiplier
1:Choose tuning parameters: KP,KI,KD≥0
2:Integral:I←0
3:Previous Cost: JC,prev←0
4:repeat at each iteration k
5: Receive cost JC
6:∆←JC−d
7:∂←(JC−JC,prev)+
8:I←(I+∆)+
9:λ←(KP∆+KII+KD∂)+
10:JC,prev←JC
11: returnλ6. PID Control Experiments
We investigated the performance of our algorithms on Prob-
lem ( 6) in a deep RL setting. In particular, we show the
effectiveness of PID control at reducing constraint viola-
tions from oscillations and overshoot present in the base-
line Lagrangian method. Both maximum performance and
robustness to hyperparameter selection are considered. Al-
though many methods exist for tuning PID parameters, we
elected to do so manually, demonstrating ease of use.
6.1. Environments: Safety-Gym
We use the recent Safety-Gym suite ( Ray et al. ,2019 ), which
consists of robot locomotion tasks built on the MuJoCo sim-
ulator ( Todorov et al. ,2012 ). The robots range in complexity
from a simple Point robot to the 12-jointed Doggo, and they
move in an open arena floor. Rewards have a small, dense
component encouraging movement toward the goal, and a
large, sparse component for achieving it. When a goal is
achieved, a new goal location is randomly generated, and
the episode continues until the time limit at 1,000 steps.
Each task has multiple difficulty levels corresponding to
density and type of hazards, which induce a cost when
contacted by the robot (without necessarily hindering its
movement). Hazards are placed randomly at each episode
and often lay in the path to the goal. Hence the aims of
achieving high rewards and low costs are in opposition. The
robot senses the position of hazards and the goal through a
coarse, LIDAR-like mode. The output of this sensor, along
with internal readings like the joint positions and velocities,
comprises the state fed to the agent. Figure 2displays a
scene from the D OGGO GOAL1 environment.
Figure 2. Rendering from the D OGGO GOAL1 environment from
Safety Gym. The red, four-legged robot must walk to the green
cylinder while avoiding other objects, and receives coarse egocen-
tric sensor readings of their locations.
6.2. Algorithm: Constraint-Controlled PPO
We implemented Algorithm 1on top of Proximal Policy Op-
timization (PPO) ( Schulman et al. ,2017 ) to make constraint-
controlled PPO (CPPO). CPPO uses an analogous clipped
surrogate objective for the cost as for the reward. Our
Responsive Safety in RL by PID Lagrangian Methods
policy is a 2-layer MLP followed by an LSTM with a
skip connection. We applied smoothing to proportional
and derivative controls to accommodate noisy estimates.
The environments’ finite horizons allowed use of non-
discounted episodic costs as the constraint and input to the
controller. Additional training details can be found in sup-
plementary materials, and our implementation is available
athttps://github.com/astooke/safe-rlpyt .
6.3. Main Results
We compare PID controller performance against the La-
grangian baseline under a wide range of settings. Plots
showing the performance of the unconstrained analogue
confirm that constraints are not trivially satisfied, and they
appear in supplementary material.
6.3.1. R OBUST SAFETY WITH PI C ONTROL
We observed cost oscillations or overshoot with slow settling
time in a majority of Safety Gym environments when using
the Lagrangian method. Figure 3shows an example where
PI-control eliminated this behavior while maintaining good
reward performance, in the challenging D OGGO BUTTON 1
environment. Individual runs are plotted for different cost
limits.
0
2
4
6
8
1e7
50
100
150
200Cost
0
2
4
6
8
Environment Steps 1e7
0
5
10
15Return
KP= 0
KP= 1 (ours)
Figure 3. Oscillations in episodic costs (and returns) from the La-
grangian method, KP= 0,KI= 10−2, are damped by propor-
tional control, KP= 1 (ours), at cost limits 50,100,150,200
(curves shaded) in D OGGO BUTTON 1.
As predicted in ( Platt & Barr ,1988 ), we found the severity
of cost overshoot and oscillations to depend on the penalty
coefficient learning rate, KI. The top left panel of Figure
4shows example cost curves from D OGGO GOAL2 under
I-control, over a wide range of values for KI(we refer to
varyingKI, assuming KI= 1; the two are interchangeable
in our design). With increasing KI, the period and ampli-tude of cost oscillations decrease and eventually disappear.
The bottom left of Figure 4, however, shows that larger KI
also brings diminishing returns. We study this effect in the
next section. The center and right columns of Figure 4show
the cost and return when using PI-control, with KP= 0.25
andKP= 1, respectively. Proportional control stabilized
the cost, with most oscillations reduced to the noise floor
forKI>10−4. Yet returns remained relatively high over
a wide range, KI<10−1. Similar curves for other Safety
Gym environments are included in an appendix.
0.0
0.5
1.0
1e8
30
50
70CostKP= 0
0.0
0.5
1.0
1e8
30
50
70KP= 0.25
0.0
0.5
1.0
1e8
30
50
70KP= 1
0.0
0.5
1.0
Env Steps 1e8
0
5
10
15
20Return
0.0
0.5
1.0
1e8
0
5
10
15
20
0.0
0.5
1.0
1e8
0
5
10
15
20
KI= 1e-4
KI= 1e-3
KI= 1e-2
KI= 1e-1
KI= 1
Figure 4. Top row : Constraint-violating oscillations decrease in
magnitude and period from increases in the Lagrange multiplier
learning rate, KI. At all levels, oscillations are damped by PI-
control,KP= 0.25,1.Bottom row : Returns diminish for large
KI; proportional control maintains high returns while reducing
constraint violations. Environment: D OGGO GOAL2, cost limit 50.
We examine the trade-off between reward and constraint vio-
lation by forming an overall cost figure of merit (FOM). We
use the sum of non-discounted constraint violations over the
learning iterates, CFOM=/summationtext
k(D(πθk)−d)+,D(πθ) =
Eτ∼π/bracketleftBig/summationtextT
t=0C(st,at,s′
t)/bracketrightBig
, and estimate it online from the
learning data. Figure 5compares final returns against this
cost FOM for the same set of experiments as in Figure 4.
Each point represents a different setting of KI, averaged
over four runs. PI-control expanded the Pareto frontier of
this trade-off into a new region of high rewards at rela-
tively low cost which was inaccessible using the Lagrangian
method. These results constitute a new state of the art over
the benchmarks in Ray et al. (2019 ).
We performed similar experiments on several Safety Gym
environments in addition to D OGGO GOAL2: P OINT GOAL1,
the simplest domain with a point-like robot, C ARBUTTON 1,
for slightly more challenging locomotive control, and D OG-
GOBUTTON 1 for another challenging task (see appendix
for learning curves like Figure 4). Figure 6plots the cost
figure of merit over the same range of values for KI, and
Responsive Safety in RL by PID Lagrangian Methods
0
1000
2000
3000
Cost FOM
8
10
12
14Return
KP= 0
KP= 0.25
KP= 1
Figure 5. Pareto frontier of return versus cost FOM, which im-
proves (up and to the left) with PI-control, KP= 0.25,1. Each
point is a different setting of KI(see Figure 4).
for two strengths of added proportional control, for these
environments. PI-control clearly improved the cost FOM
(lower is better) for KI<10−1, above which the fast in-
tegral control dominated. Hence robustness to the value
forKIwas significantly improved in all the learning tasks
studied.
10−4
10−2
100
101
102
103
104
DoggoButton1
10−4
10−2
100
102
103
104
DoggoGoal2
10−4
10−2
100
KI
101
102
103
104
Cost FOM
CarButton1
KP= 0
KP= 0.25
KP= 1
10−4
10−2
100
100
101
102
103
PointGoal1
Figure 6. Learning run cost FOM versus penalty learning rate, KI,
from four environments spanning the robots in Safety Gym. Each
point is an average over four runs. In all cases, PI-control improves
performance (lower is better) over a wide and useful range of KI,
easing selection of that hyperparameter.
6.3.2. C ONTROL EFFICIENCY
We further investigated why increasing the penalty learning
rate,KI, eventually reduces reward performance, as was
seen in the robustness study. Figure 7shows learning curves
for three settings: I- and PI-control with the same, moderate
KI= 10−3, and I-control with high KI= 10−1. The high-
KIsetting achieved responsive cost performance but lower
long-term returns, which appears to result from wildly fluc-
tuating control. In contrast, PI-control held relatively steady,
despite the noise, allowing the agent to do reward-learning
at every iteration. The bottom panel displays individual
control iterates, here displayed as u=λ/(1+λ), over the
first 7M environment steps, while the others show smoothed
curves over the entire learning run, over 40M steps.
0
1
2
3
4
1e7
0
5
10Return
IKI+
PI
I
0
1
2
3
4
1e7
150
200Cost
0
1
2
3
4
1e7
0.0
0.5Control
0
1
2
3
4
5
6
Environment Steps 1e6
0.0
0.5
1.0Control
Figure 7. I- and PI-control with moderate KI= 10−3and I-
control with fast KI= 10−1(IKI+).TopReturns diminished for
fast-KI, but high for PI. Second Cost oscillations mostly damped
by PI, removed by fast- KI.Third Control (smoothed) varies more
rapidly under fast- KI, is relatively steady for PI. Bottom Con-
trol over first 500 RL iterations; fast- KIslams the control to the
extremes, causing the diminished returns. Environment: D OG-
GOBUTTON 1, cost limit 200.
6.3.3. P REDICTIVE CONTROL BY DERIVATIVES
Figure 8demonstrates the predictive capabilities of deriva-
tive cost control in a noisy deep RL setting. It removed
cost overshoot from both the I- and PI-controlled baselines.
It was further able to slow the approach of the cost curve
towards the limit, a desirable behavior for online learning
systems requiring safety monitoring. Curves for other envi-
ronments are available in an appendix.
7. Reward-Scale Invariance
In the preceding sections, we showed that PID control im-
proves hyperparameter robustness in every constrained RL
environment we tested. Here we propose a complemen-
tary method to promote robustness both within and across
environments. Specifically, it addresses the sensitivity of
learning dynamics to the relative numerical scale of reward
and cost objectives.
Consider two CMDPs that are identical except that in one
the rewards are scaled by a constant factor, ρ. The opti-
mal policy parameters, θ∗remain unchanged, but clearly λ∗
must scale by ρ. To attain the same learning dynamics, all
Responsive Safety in RL by PID Lagrangian Methods
0.0
0.6
1.2
1e7
100
150
200CostKI= 1, KP= 0
KD= 0
KD= 4
KD= 5
0.0
0.6
1.2
1e7
150
200KI= 1, KP= 2
KD= 0
KD= 2
KD= 4
0.0
0.6
1.2
1e7
0
5Return
0.0
0.6
1.2
Environment Steps 1e7
0
5
Figure 8. Derivative control can prevent cost overshoot and slow
the rate of cost increase within feasible regions, which the La-
grangian method cannot do. Environment: D OGGO BUTTON 1,
cost limit 200.
controller settings, λ0,KI,KP, andKDmust therefore be
scaled by ρ. This situation might feature naturally within
a collection of related learning environments. Addition-
ally, within the course of learning an individual CMDP, the
balance between reward and cost magnitudes can change
considerably, placing burden on the controller to track the
necessary changes in the scale of λ.
One way to promote performance of a single choice of
controller settings across these cases would be to maintain
a fixed meaning for the value of λin terms of the relative
influence of reward versus cost on the parameter update. To
this end, we introduce an adjustable scaling factor, βk, in
the policy gradient:
∇θL= (1−uk)∇θJ(πθk)−ukβk∇θJC(πθk)(25)
A conspicuous choice for βkis the ratio of un-scaled policy
gradients:
β∇,k=||∇θJ(πθk)||
||∇θJC(πθk)||(26)
since it balances the total gradient to have equal-magnitude
contribution from reward- and cost-objectives at λ= 1and
encourages λ∗= 1. Furthermore, β∇is easily computed
with existing algorithm components.
To test this method, we ran experiments on Safety Gym en-
vironments with their rewards scaled up or down by a factor
of 10. Figure 9shows a representative cross-section of re-
sults from the P OINT GOAL1 environment using PI-control.
The different curves within each plot correspond to different
reward scaling. Without objective-scaling (i.e. β= 1), the
dynamics under ρ= 10 are as if controller parameters were
instead divided by 10, and likewise for ρ= 0.1. Note the
near-logarithmic spacing of λ(λρ=10has not converged to
its full value). Using β∇, on the other hand, the learning
dynamics are nearly identical across two orders of magni-
tude of reward scale. λ0= 1becomes an obvious choice forinitialization, a point where previous theory provides little
guidance ( Chow et al. ,2019 ) (although here we left λ0= 0).
Experiments in other environments and controller settings
yielded similar results and are included in supplementary
materials. Other methods, such running normalization of
rewards and costs, could achieve similar effects and are
worth investigating, but our simple technique is surprisingly
effective and is not specific to RL.
0.0
0.5
1.0
1.5
1e7
20
40
60Costβ= 1
0.0
0.5
1.0
1.5
1e7
20
40
60β=β∇
0.0
0.5
1.0
1.5
1e7
10
20Return
ρ= 0.1
ρ= 1
ρ= 10
0.0
0.5
1.0
1.5
1e7
10
20
0.0
0.5
1.0
1.5
Environment Steps 1e7
10−1
100
101λ
0.0
0.5
1.0
1.5
1e7
10−1
100
101
Figure 9. Costs, returns, and Lagrange multiplier with rewards
scaled by ρ∈ {0.1,1,10}; PI-control with KI= 1e−3,KP=
0.1.Left column : without objective-weighting, learning dynam-
ics differ dramatically due to required scale of λ.Right column :
with objective-weighting, learning dynamics are nearly identical.
Environment: P OINT GOAL1, cost limit 25.
8. Conclusion
Starting from a novel development in classic Lagrangian
methods, we introduced a new set of constrained RL solu-
tions which are straightforward to understand and imple-
ment, and we have shown them to be effective when paired
with deep learning.
Several opportunities for further work lay ahead. Analysis
of the modified Lagrangian method and constrained RL as
a dynamical system may relax theoretical requirements for
a slowly-changing multiplier. The mature field of control
theory (and practice) provides tools for tuning controller
parameters. Lastly, the control-affine form may assist in
both analysis (see Liang-Liang Xie & Lei Guo (2000 ) and
Galbraith & Vinter (2003 ) for controllability properties for
uncertain nonlinear dynamics) and by opening to further
control techniques such as feedback linearization.
Our contributions improve perhaps the most commonly used
constrained RL algorithm, which is a workhorse baseline.
We have addressed its primary shortcoming while preserving
its simplicity and even making it easier to use—a compelling
combination to assist in a wide range of applications.
Responsive Safety in RL by PID Lagrangian Methods
Acknowledgements
Adam Stooke gratefully acknowledges previous support
from the Fannie and John Hertz Foundation and the NVIDIA
Corporation. We thank Carlos Florensa and the anonymous
reviewers for many helpful suggestions which improved the
manuscript.
References
Achiam, J., Held, D., Tamar, A., and Abbeel, P. Constrained
policy optimization. CoRR , abs/1705.10528, 2017. URL
http://arxiv.org/abs/1705.10528 .
Altman, E. Constrained markov decision processes with
total cost criteria: Lagrangian approach and dual linear
program. Mathematical methods of operations research ,
48(3):387–417, 1998.
Altman, E. Constrained Markov decision processes , vol-
ume 7. CRC Press, 1999.
An, W., Wang, H., Sun, Q., Xu, J., Dai, Q., and Zhang, L.
A pid controller approach for stochastic optimization of
deep networks. 2018 IEEE/CVF Conference on Computer
Vision and Pattern Recognition , pp. 8522–8531, 2018.
˚Astr¨om, K. J. and H ¨agglund, T. Pid control. IEEE Control
Systems Magazine , 1066(033X/06), 2006.
Bertsekas, D. P. On penalty and multiplier methods for
constrained minimization. SIAM Journal on Control and
Optimization , 14(2):216–235, 1976.
Bertsekas, D. P. Constrained optimization and Lagrange
multiplier methods . Academic press, 2014.
Chow, Y ., Nachum, O., Du ´e˜nez-Guzm ´an, E. A., and
Ghavamzadeh, M. A lyapunov-based approach to safe
reinforcement learning. CoRR , abs/1805.07708, 2018.
URLhttp://arxiv.org/abs/1805.07708 .
Chow, Y ., Nachum, O., Faust, A., Ghavamzadeh, M.,
and Du ´e˜nez-Guzm ´an, E. A. Lyapunov-based safe
policy optimization for continuous control. CoRR ,
abs/1901.10031, 2019. URL http://arxiv.org/
abs/1901.10031 .
Dalal, G., Dvijotham, K., Vecer ´ık, M., Hester, T., Paduraru,
C., and Tassa, Y . Safe exploration in continuous action
spaces. CoRR , abs/1801.08757, 2018. URL http://
arxiv.org/abs/1801.08757 .
Galbraith, G. N. and Vinter, R. B. Lipschitz continuity of
optimal controls for state constrained problems. SIAM
Journal on Control and Optimization , 42(5):1727–1744,
2003. doi: 10.1137/S0363012902404711. URL https:
//doi.org/10.1137/S0363012902404711 .Geibel, P. and Wysotzki, F. Risk-sensitive reinforcement
learning applied to control under constraints. CoRR ,
abs/1109.2147, 2011. URL http://arxiv.org/
abs/1109.2147 .
Gu, S., Lillicrap, T., Sutskever, I., and Levine, S. Con-
tinuous deep q-learning with model-based acceleration.
InInternational Conference on Machine Learning , pp.
2829–2838, 2016.
Hestenes, M. R. Multiplier and gradient methods. Journal
of optimization theory and applications , 4(5):303–320,
1969.
Hu, B. and Lessard, L. Control interpretations for first-order
optimization methods. CoRR , abs/1703.01670, 2017.
URLhttp://arxiv.org/abs/1703.01670 .
Isidori, A., Thoma, M., Sontag, E. D., Dickinson, B. W.,
Fettweis, A., Massey, J. L., and Modestino, J. W. Nonlin-
ear Control Systems . Springer-Verlag, Berlin, Heidelberg,
3rd edition, 1995. ISBN 3540199160.
Jaderberg, M., Czarnecki, W. M., Dunning, I., Marris, L.,
Lever, G., Castaneda, A. G., Beattie, C., Rabinowitz,
N. C., Morcos, A. S., Ruderman, A., et al. Human-level
performance in 3d multiplayer games with population-
based reinforcement learning. Science , 364(6443):859–
865, 2019.
Lessard, L., Recht, B., and Packard, A. Analysis and de-
sign of optimization algorithms via integral quadratic
constraints, 2014.
Levine, S., Finn, C., Darrell, T., and Abbeel, P. End-to-
end training of deep visuomotor policies. The Journal of
Machine Learning Research , 17(1):1334–1373, 2016.
Liang-Liang Xie and Lei Guo. How much uncertainty can be
dealt with by feedback? IEEE Transactions on Automatic
Control , 45(12):2203–2217, Dec 2000. ISSN 2334-3303.
doi: 10.1109/9.895559.
Liu, G.-H. and Theodorou, E. A. Deep learning theory
review: An optimal control and dynamical systems per-
spective, 2019.
Liu, Y ., Ding, J., and Liu, X. Ipo: Interior-point pol-
icy optimization under constraints. arXiv preprint
arXiv:1910.09615 , 2019.
Mnih, V ., Kavukcuoglu, K., Silver, D., Graves, A.,
Antonoglou, I., Wierstra, D., and Riedmiller, M. Playing
atari with deep reinforcement learning. arXiv preprint
arXiv:1312.5602 , 2013.
Nishihara, R., Lessard, L., Recht, B., Packard, A., and
Jordan, M. I. A general analysis of the convergence of
admm, 2015.
Responsive Safety in RL by PID Lagrangian Methods
Nocedal, J. and Wright, S. Numerical optimization . Springer
Science & Business Media, 2006.
OpenAI. Openai five. https://blog.openai.com/
openai-five/ , 2018.
Paternain, S., Chamon, L., Calvo-Fullana, M., and Ribeiro,
A. Constrained reinforcement learning has zero dual-
ity gap. In Advances in Neural Information Processing
Systems , pp. 7553–7563, 2019.
Platt, J. C. and Barr, A. H. Constrained differential opti-
mization. In Neural Information Processing Systems , pp.
612–621, 1988.
Powell, M. J. A method for nonlinear constraints in mini-
mization problems. Optimization , pp. 283–298, 1969.
Ray, A., Achiam, J., and Amodei, D. Benchmarking Safe
Exploration in Deep Reinforcement Learning. 2019.
Schulman, J., Levine, S., Abbeel, P., Jordan, M., and Moritz,
P. Trust region policy optimization. In International
conference on machine learning , pp. 1889–1897, 2015.
Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and
Klimov, O. Proximal policy optimization algorithms.
arXiv preprint arXiv:1707.06347 , 2017.
Skelton, R. Dynamic Systems Control: Linear
Systems Analysis and Synthesis . Dynamic Sys-
tems Control. John Wiley & Sons, 1988. ISBN
9780471837794. URL https://books.google.
com/books?id=egFRAAAAMAAJ .
Song, Q. and Leland, R. P. An optimal control model of
neural networks for constrained optimization problems.
Optimal Control Applications and Methods , 19(5):371–
376, 1998. doi: 10.1002/(SICI)1099-1514(199809/10)19:
5/an}b∇acketle{t371::AID-OCA636 /an}b∇acket∇i}ht3.0.CO;2-8. URL https:
//onlinelibrary.wiley.com/doi/abs/10.
1002/%28SICI%291099-1514%28199809/10%
2919%3A5%3C371%3A%3AAID-OCA636%3E3.0.
CO%3B2-8 .
Sutton, R. S. and Barto, A. G. Introduction to Reinforcement
Learning . MIT Press, Cambridge, MA, USA, 1st edition,
1998. ISBN 0262193981.
Tessler, C., Mankowitz, D. J., and Mannor, S. Reward con-
strained policy optimization. CoRR , abs/1805.11074,
2018. URL http://arxiv.org/abs/1805.
11074 .
Todorov, E., Erez, T., and Tassa, Y . Mujoco: A physics
engine for model-based control. In 2012 IEEE/RSJ Inter-
national Conference on Intelligent Robots and Systems ,
pp. 5026–5033. IEEE, 2012.Wah, B. W., Wang, T., Shang, Y ., and Wu, Z. Improving the
performance of weighted lagrange-multiplier methods for
nonlinear constrained optimization. Information Sciences ,
124(1-4):241–272, 2000.
Yang, T.-Y ., Rosca, J., Narasimhan, K., and Ramadge, P. J.
Projection-based constrained policy optimization. In
International Conference on Learning Representations ,
2020. URL https://openreview.net/forum?
id=rke3TJrtPS . |
8f400d8d-6071-4cce-8120-aab3d83e217b | trentmkelly/LessWrong-43k | LessWrong | Quixey Challenge - Fix a bug in 1 minute, win $100. Refer a winner, win $50.
Hiring is so hard that we spent a man-month creating a sub-startup to do it. The product is the Quixey Challenge which is running today until 7pm PST (GMT-8).
Benefits of playing:
* You can learn something from our craftsmanship of the algorithms (we work hard on them)
* The 1-minute challenge is a rush
* You can make money
* If you do well you can interview at Quixey
Even if you have zero engineering skills, you can get $50 for referring someone who wins.
|
6fdbf7fe-f369-4526-854b-bcdc51d68f44 | trentmkelly/LessWrong-43k | LessWrong | [Link] - No evidence of intelligence improvement after working memory training
This article critically examines previous studies that showed a link between working memory training (specifically via n-back training) and fluid intelligence, finding that the results may not have been as positive as reported owing to a number of factors including the use of a no-contact rather than active control group, and difficulty selecting tests that isolate the impact of working memory on fluid intelligence. The authors also present findings from a new study that show no improvement in fluid intelligence from dual n-back training, visual search training (active placebo) and no training (no contact placebo).
PubMed
Journal Challenged
|
f3e01baf-f58b-4334-91d0-e0ce06520cfe | trentmkelly/LessWrong-43k | LessWrong | Meetup : Cambridge, MA Sunday meetup
Discussion article for the meetup : Cambridge, MA Sunday meetup
WHEN: 09 October 2011 09:37:25PM (-0400)
WHERE: MIT Stata Center 32 Vassar Street, Cambridge, MA; room 261
We will be meeting in the MIT Stata center. We will have someone posted at the entrance to let people in for the first 30 minutes; if you arrive after that, or are having trouble finding us, call 607-339-5552.
Discussion article for the meetup : Cambridge, MA Sunday meetup |
8bfc0be5-1b33-436f-a69f-fa384a1da642 | trentmkelly/LessWrong-43k | LessWrong | Against Love Languages
The other day, a friend on facebook shared a post on love languages and asked their friends what their's were. I said that this did not fit my ontology for affection in a deep romantic relationship, and when someone asked me what ontoloy I used, I gave this short response (copied here so I can link people to it in the future).
Background: the notion of love languages is that there's five main ways humans express affection, and they are
* gift giving,
* quality time
* words of affirmation
* acts of service (devotion)
* and physical touch
The reason this is useful to think about (according to the wikipedia summary of the book) is that
> [P]eople tend to naturally give love in the way that they prefer to receive love, and better communication between couples can be accomplished when one can demonstrate caring to the other person in the love language the recipient understands. An example would be if a husband's love language is acts of service, he may be confused when he does the laundry for his wife and she doesn't perceive that as an act of love, viewing it as simply performing household duties, because the love language she comprehends is words of affirmation (verbal affirmation that he loves her). She may try to use what she values, words of affirmation, to express her love to him, which he would not value as much as she does. If she understands his love language and mows the lawn for him, he perceives it in his love language as an act of expressing her love for him; likewise, if he tells her he loves her, she values that as an act of love.
My comment is below.
----------------------------------------
It often seems to me like the seemingly important things people say in relationships, even good relationships, are the sorts of things you could say in any relationship. "It was really great to see you" "Let's do this again sometime" "Tell me about your day" "I love you".
Alternatively, the compliments I most enjoy giving and receiving, are the ones that c |
b8e27d30-b292-4d96-9c7a-21ea5d77ee5e | trentmkelly/LessWrong-43k | LessWrong | Kevin Kelly's "103 Bits of Advice," Expanded
Kevin Kelly, who arguably has written one of the most influential essays regarding the internet and the creator economy, "1,000 True Fans", published "103 Bits of Advice I Wish I Had Known" on his 70th birthday. In the simple article, he gives practical advice, philosophical quips, and everything in between. I thoroughly enjoyed reading it when it came out. So much so, I printed it out and annotated bits I loved and wanted to go deeper on.
To my luck, Freakonomics published an interview with him asking about that specific essay. Kelly was able to provide more context on his answers and share a few stories. The following comes from that interview.
----------------------------------------
One of the pieces of advice observe, very factually, that "ninety percent of everything, is crap." Kelly goes on to encourage the reader to, if they find something they don't like, just try finding the ten percent that is actually good. But how to find that ten percent was left as a mystery, until now. Stephen Dubner, the host of Freakonomics asked him for his advice on finding the "'non-crap' 10 percent":
> I have to say, this one I’m talking to myself because I often get very impatient. So, the only bit of advice for me is to keep returning to it with as much grace and open-mindedness as possible because nine times out of ten you’re going to be very disappointed. So why are you going to go back the tenth time? Of course, life is limited. We can’t try all the stuff that we don’t like. But when I have a chance and it’s not too difficult, I’ll give things a second or third or fourth chance. And occasionally, I do change my mind and it’s like, “Wow, I’m glad that I pursued that because I didn’t see it in the beginning.”
Dubner cited research that showed when people approach the ages of 30 or 40, they typically are set in the ways of the food they eat and the things they do. Trying new things just isn't much of an idea as you get older. Fortunately, Kelly had some simple advice t |
a0b3ea1c-8261-41cf-945a-59182c0f8a5e | StampyAI/alignment-research-dataset/arxiv | Arxiv | A Rationale-Centric Framework for Human-in-the-loop Machine Learning
1 Introduction
---------------
Recent work finds that natural artefacts (Gururangan et al., [2018](#bib.bib1 "Annotation artifacts in natural language inference data")) or spurious patterns (Keith et al., [2020](#bib.bib10 "Text and causal inference: a review of using text to remove confounding from causal estimates"); Srivastava et al., [2020](#bib.bib3 "Robustness to spurious correlations via human annotations")) in datasets can cause sub-optimal model performance for neural networks. As shown in Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning"), the bold phrases—*“100% bad”* and *“brain cell killing”*—are underlying causes for a negative sentiment prediction that most human readers would recognise. These are defined as *rationales* in this paper. The underlined phrase—“acting and plot”— has been incorrectly recognised as a causal term by the model used fort this example, and is referred to as a *spurious pattern*.
Spurious patterns (or associations) are caused by natural artefacts or biases in training data (Lertvittayakumjorn and Toni, [2021](#bib.bib5 "Explanation-based human debugging of nlp models: a survey")), and are usually useless, or even harmful, at test time. This issue can be severe in few-shot learning (FSL) scenarios. For instance, Kulesza et al. ([2010](#bib.bib30 "Explanatory debugging: supporting end-user debugging of machine-learned programs")) suggests that when a model is trained with a small subset of labelled data, it is prone to exploiting spurious patterns leading to poor generalisability that is evident in the performance decay in out-of-distribution (OOD) datasets. In spite of these issues, training deep neural networks using few labelled examples is a compelling scenario since unlabelled data may be abundant but labelled data is expensive to obtain in real-world applications (Lu and MacNamee, [2020](#bib.bib15 "Investigating the effectiveness of representations based on pretrained transformer-based language models in active learning for labelling text datasets"); Lu et al., [2021](#bib.bib13 "A sentence-level hierarchical bert model for document classification with limited labelled data")).

Figure 1: A negative movie review with human annotated causal terms (bold text) and spurious patterns recognised by the model (underlined text).
There is a strand of research addressing this scenario that seeks to improve model performance by *“introducing methods and resources for training models less sensitive to spurious patterns”* (Kaushik et al., [2020](#bib.bib2 "Learning the difference that makes a difference with counterfactually augmented data")). Most of this work relies on generating counterfactual augmented data (CAD), either manually (Kaushik et al., [2021](#bib.bib4 "Explaining the efficacy of counterfactually augmented data")) or automatically (Feng et al., [2021](#bib.bib8 "Empowering language understanding with counterfactual reasoning"); Qian et al., [2021](#bib.bib7 "Counterfactual inference for text classification debiasing"); Yang et al., [2021](#bib.bib6 "Exploring the efficacy of automatically generated counterfactuals for sentiment analysis"), [2020a](#bib.bib52 "Generating plausible counterfactual explanations for deep transformers in financial text classification"); Delaney et al., [2021](#bib.bib55 "Uncertainty estimation and out-of-distribution detection for counterfactual explanations: pitfalls and solutions")). For example, Kaushik et al. ([2020](#bib.bib2 "Learning the difference that makes a difference with counterfactually augmented data")) proposed a human-in-the-loop framework where human annotators are required to make minimal changes to original movie reviews to produce sentiment-flipped counterfactual reviews, which enables models to learn useful associations between input texts and output labels (Kaushik et al., [2021](#bib.bib4 "Explaining the efficacy of counterfactually augmented data")).
Generating manual counterfactuals, however, is expensive and time-consuming—Kaushik et al. ([2020](#bib.bib2 "Learning the difference that makes a difference with counterfactually augmented data")) report the cost of revising 2.5k instances at over $10,000. On the other hand, fully automatic methods are task-specific and therefore have weak robustness across domains and less reliability compared to manual counterfactuals. To address these issues, we propose Rationales-centric Double-robustness Learning (RDL), a human-in-the-loop framework for data augmentation in a few-shot setting, which is efficient, robust, model-agnostic, and general across tasks.

Figure 2: The procedure of the Rationale-centric Double-robustness Learning framework. Red text highlights rationales identified by human annotators. Blue text indicates words replaced in raw text. Underlined text shows spurious patterns identified by the model.
Our main idea is a rationale-centric strategy for eliminating the effect of spurious patterns by leveraging human knowledge as shown in Figure [2](#S1.F2 "Figure 2 ‣ 1 Introduction ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning").
Our double-robustness framework consists of two main modules. The first is a *Static Semi-factual Generation module* that generates a set of semi-factual data automatically for a given instance by using human-identified rationales. Such labelling requires less human input compared to fully manual counterfactual generation (see Section [3.1](#S3.SS1 "3.1 Rationale Marking ‣ 3 Method ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning")). In contrast with counterfactuals Roese ([1997](#bib.bib45 "Counterfactual thinking.")) that rely on what might have been different (i.e. the label would be changed if certain terms have been changed), semi-factuals McCloy and Byrne ([2002](#bib.bib44 "Semifactual “even if” thinking")); Kenny and Keane ([2021](#bib.bib56 "On generating plausible counterfactual and semi-factual explanations for deep learning")), as used in our work, aim to guide a model to identify terms less causally related to the label (i.e. even if certain terms had been changed, the label would be kept the same). Second, we apply a *Dynamic Human-intervened Correction module*, where the most salient features are identified for model predictions over a set of training examples, and human workers intervene by checking the correctness of the rationale in case first-round modifications introduce new artefacts. We evaluate the two modules in a few-shot setting, where a minimum number of training instances are labeled for maximum generalisation power, both for in-distribution and OOD predictions.
Results on a sentiment analysis task, which is also used in Kaushik et al. ([2020](#bib.bib2 "Learning the difference that makes a difference with counterfactually augmented data")), demonstrate that the double-robust models can be less sensitive to spurious patterns. In particular, models trained with RDL with only 50 labelled examples achieve the same or even better results than fully-supervised training with a full training set of 1,707 examples, and improvements are especially significant for OOD tests. The predictive model trained with RDL using only 100 labelled examples outperforms models trained with manual (Kaushik et al., [2020](#bib.bib2 "Learning the difference that makes a difference with counterfactually augmented data")) and automatic CAD (Yang et al., [2021](#bib.bib6 "Exploring the efficacy of automatically generated counterfactuals for sentiment analysis")) using the full augmented training set of 3,414 examples.
To the best of our knowledge, we are the first to exploit the efficacy of semi-factuals and human-intervention for improving the generalisation abilities of deep neural networks in few-shot learning scenarios.\*\*\*All resources are available at <https://github.com/GeorgeLuImmortal/RDL-Rationales-centric-Double-robustness-Learning/>
2 Related Work
---------------
Data augmentation has been used for resolving artefacts in training datasets before (Gururangan et al., [2018](#bib.bib1 "Annotation artifacts in natural language inference data"); Srivastava et al., [2020](#bib.bib3 "Robustness to spurious correlations via human annotations"); Kaushik et al., [2021](#bib.bib4 "Explaining the efficacy of counterfactually augmented data")). In particular, previous work Kaushik et al. ([2020](#bib.bib2 "Learning the difference that makes a difference with counterfactually augmented data")) relied on large-scale crowd-sourcing to generate useful augmented data. More recently, Yang et al. ([2021](#bib.bib6 "Exploring the efficacy of automatically generated counterfactuals for sentiment analysis")), and Wang and Culotta ([2021](#bib.bib27 "Robustness to spurious correlations in text classification via automatically generated counterfactuals")) investigated the efficacy of the automatically generated counterfactuals for sentiment analysis. Similar to our work, these methods also consider the most salient features that a model uses when generating augmented data, which is in line with our rationale definition. However, they use sentiment lexicon matching for identifying rationales, which is task-specific and not necessarily fully relevant. In contrast, we employ human annotators to identify rationales, which can be task-agnostic and robust. Moreover, our method generates semi-factuals instead of counterfactuals used in previous work.
Human-the-loop Machine Learning (Wu et al., [2021](#bib.bib43 "A survey of human-in-the-loop for machine learning")) has received increasing research attention. Active learning (Settles, [2009](#bib.bib14 "Active learning literature survey"); Margatina et al., [2021](#bib.bib42 "Active learning by acquiring contrastive examples")), the most common example of human-in-the-loop machine learning, asks human annotators only to provide high-level annotations (i.e. labels) for important examples.
There is also some work exploring more explainable AI systems by exploiting feature-based information. Such methods use relatively simple models such as Naïve Bayes (Stumpf et al., [2009](#bib.bib32 "Interacting meaningfully with machine learning systems: three experiments"); Kulesza et al., [2015](#bib.bib29 "Principles of explanatory debugging to personalize interactive machine learning")) and Linear Regression with bag-of-words features (Jia and Liang, [2017](#bib.bib33 "Adversarial examples for evaluating reading comprehension systems"); Teso and Kersting, [2019](#bib.bib36 "Explanatory interactive machine learning"); Ghai et al., [2021](#bib.bib35 "Explainable active learning (xal): toward ai explanations as interfaces for machine teachers"); Shao et al., [2021](#bib.bib34 "Right for better reasons: training differentiable models by constraining their influence functions")), because these classifiers are relatively intuitive in generating explanations and amenable to incorporating human feedback.
Some other work uses simple neural networks such as multi-layer perceptrons (Shao et al., [2021](#bib.bib34 "Right for better reasons: training differentiable models by constraining their influence functions")) and shallow CNNs (Lertvittayakumjorn et al., [2020](#bib.bib38 "FIND: human-in-the-loop debugging deep text classifiers"); Stammer et al., [2021](#bib.bib37 "Right for the right concept: revising neuro-symbolic concepts by interacting with their explanations"); Teso et al., [2021](#bib.bib39 "Interactive label cleaning with example-based explanations")) because the predictions of such models can be explained in the form of features. Very recently, Yao et al. ([2021](#bib.bib40 "Refining neural networks with compositional explanations")) proposed a human-in-the-loop method to inspect more complicated models (e.g. BERT) with the help of model-agnostic post-hoc explanation algorithms (Ribeiro et al., [2018](#bib.bib41 "Anchors: high-precision model-agnostic explanations")) that can explain predictions of any linear or non-linear model without exploiting its weights. However, previous work focuses on increasing the explainability of AI systems for high-stakes domains such as health and finance Li et al. ([2020](#bib.bib53 "MAEC: a multimodal aligned earnings conference call dataset for financial risk prediction")); Yang et al. ([2020b](#bib.bib54 "Html: hierarchical transformer-based multi-task learning for volatility prediction")), instead of improving model robustness or generalisation ability. Also, they assume access to a large amount of labelled data. In contrast, we focus on few-shot learning scenarios which are more compelling.
3 Method
---------
The RDL pipeline is shown in Figure [2](#S1.F2 "Figure 2 ‣ 1 Introduction ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning") and consists of two modules: *Static Semi-factual Generation* and *Dynamic Human-intervened Correction*.
Static semi-factual generation is a more efficient alternative to manually generated counterfactuals (Kaushik et al., [2020](#bib.bib2 "Learning the difference that makes a difference with counterfactually augmented data")). In the first phase, Rationale Marking (Section [3.1](#S3.SS1 "3.1 Rationale Marking ‣ 3 Method ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning")), human annotators review each document in the training set to provide *rationales* (i.e. phrases that support the document classification decisions shown as bold text in Figure [2](#S1.F2 "Figure 2 ‣ 1 Introduction ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning")). The second phase is a semi-factual generation method based on synonym replacement (Section [3.2](#S3.SS2 "3.2 Static Semi-factual Generation ‣ 3 Method ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning")) that produces augmented examples (blue text in Figure [2](#S1.F2 "Figure 2 ‣ 1 Introduction ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning") indicates replaced words), which are added into the training set.
Dynamic human-intervened correction (Section [3.3](#S3.SS3 "3.3 Dynamic Human-intervened Correction ‣ 3 Method ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning")) is a rationales-powered human-in-the-loop framework to dynamically correct the model’s behaviours. At the outset, *sampling and sensitivity of contextual decomposition* (SCD) (Jin et al., [2019](#bib.bib17 "Towards hierarchical importance attribution: explaining compositional semantics for neural sequence models")) is applied to detect the rationales given by the model that is obtained in the previous step. Then, all model-identified rationales (underlined texts in Figure [2](#S1.F2 "Figure 2 ‣ 1 Introduction ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning")) are examined by human annotators to identify *false rationales* (i.e. words or phrases that do not support the classifications but are falsely included by the model) and *missing rationales* (i.e. words or phrases that support the classifications but are not included by the model). Both false rationales and missing rationales are corrected to produce augmented examples. Finally, newly generated examples are added into the training set to re-train the deep learning model.
###
3.1 Rationale Marking
Following Kaushik et al. ([2020](#bib.bib2 "Learning the difference that makes a difference with counterfactually augmented data")) and Yang et al. ([2021](#bib.bib6 "Exploring the efficacy of automatically generated counterfactuals for sentiment analysis")), we use the *IMDb* movie review dataset (Maas et al., [2011](#bib.bib18 "Learning word vectors for sentiment analysis")) in our experiments. It consists of positive and negative movie reviews that are easy for human participants to understand, re-annotate, and provide feedback upon Zaidan et al. ([2007](#bib.bib16 "Using “annotator rationales” to improve machine learning for text categorization")).
We use a crowdsourcing company to recruit editors and annotators for marking rationales that support classification decisions. At the outset, annotators were given instructions and examples that gently guided them to annotate rationales. Only adjectives, adverbs, nouns, and verbs were considered as rationales. Besides, rationales were required to carry complete semantic information. For example, for a phrase starting with a negation word such as “*not great*”, annotators are instructed to mark the whole phrase “*not great*” as a rationale instead of just marking “*not*”. We also limited rationales to at most three consecutive words (i.e. unigrams, bigrams and trigrams). Phrases consisting of numerical scores are not counted as rationales (e.g. 5 or 10 stars) since different datasets may use different rating scales, and annotating digits may hurt OOD performance.
Overall, we encouraged annotators to try their best to mark as many rationales as possible to explain classification labels. However, to guarantee the quality of rationale marking and prevent annotators from over including non-rationales for more payment, we also manually inspected annotated examples and rejected examples that contained incorrect rationales. After inspection, we rejected 10.6% of negative reviews and 7.6% of positive reviews. Editors and annotators re-annotated the rejected examples, which were then presented to us for another inspection. All re-annotated examples were approved only if all authors were happy with the quality of the annotations. Otherwise, the examples were re-annotated again.
Our annotation procedure generated 5,073 rationales in 855 movie reviews involved in Section [3.1](#S3.SS1 "3.1 Rationale Marking ‣ 3 Method ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning") and [3.3](#S3.SS3 "3.3 Dynamic Human-intervened Correction ‣ 3 Method ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning") (note that we did not annotate all 1,707 examples in the training set because only 855 examples were necessarily involved in our experiments). Human annotators spent on average 183.68 seconds to identify rationales in a review and our method generated semi-factual examples automatically. On the contrary, workers spent on average 300 seconds to revise a review to generate a counterfactual manually as reported by Kaushik et al. ([2020](#bib.bib2 "Learning the difference that makes a difference with counterfactually augmented data")). Note that our approach using 100 labelled examples can outperform manual CAD (Kaushik et al., [2020](#bib.bib2 "Learning the difference that makes a difference with counterfactually augmented data")) using the entire training set of 1,707 examples (see Section [5.3](#S5.SS3 "5.3 Evaluating Dynamic Human-intervened Correction ‣ 5 Experiments ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning")), making our approach 300×1707183.68×100≈27.88 times more efficient than manually generated CAD.
###
3.2 Static Semi-factual Generation
We take a simple replacement strategy, which has been taken by Yang et al. ([2021](#bib.bib6 "Exploring the efficacy of automatically generated counterfactuals for sentiment analysis")), to generate semi-factual examples. Given a human-identified rationale, our method constructs augmented examples by automatically replacing non-rationale words, thus leading to examples with the same labels. This augmentation is consistent with semi-factual thinking: even if those non-rationales were changed, the label would not change.
Formally, given a training example xi=[ti1,ti2,...,tij] (where tij is the jth token of the ith document) and its ground truth label yi, we create a rationale vector ri=[ai1,ai2,...,aij] where aij is the value that indicates whether tij is a rationale or not (we set aij=1 to indicate that tij is a rationale and 0 otherwise). To generate a semi-factual example, x′i, we randomly replace a certain number of non-rationales (where aij=0), except for punctuation, with synonymous terms. The synonyms can be provided by a human, retrieved automatically from a lexicon such as WordNet (Miller, [1995](#bib.bib19 "WordNet: a lexical database for english")), or generated using the *mask-filling* function of a pretrained context-aware language model (Liu et al., [2019](#bib.bib20 "RoBERTa: a robustly optimized bert pretraining approach")).
In our experiments, we randomly replace 5% of non-rationales using mask-filling and generate a set of augmented examples, x′i, with some replaced non-rationales and all the other tokens identical to xi. The label, yi, of a newly generated example is the same as the label of the original example, xi. Examples of generated data are shown in Table [1](#S3.T1 "Table 1 ‣ 3.2 Static Semi-factual Generation ‣ 3 Method ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning"). Afterwards, the augmented examples are added into the training set used to train the model.
| Sentiment | Examples |
| --- | --- |
| Negative | Origin: The attempt at a "lesbian scene" was sad. |
| | Augment 1: The hint at a "lesbian scene" was sad . |
| | Augment 2: The attempt at a "kiss scene" was sad . |
| Positive | Origin: I recommended this film a lot, specially in this difficult times for the planet . |
| | Augment 1: I recommended you film a lot, specially in this difficult times for the planet . |
| | Augment 2: I recommended this movie a lot, specially in this difficult times for the planet . |
Table 1: Fragments of augmented data generated by static semi-factual generation (Original/Augmented, in order). Blue spans were synonyms used as replacements and bold font were rationales identified by human annotators.
| Sentiment | Examples |
| --- | --- |
| Negative | Origin: but this is pathetic! Micawber was nothing more than a mid-nineteenth century Kramer. |
| | SCD: but this is pathetic! Micawber was nothing more than a mid-nineteenth century Kramer. |
| | Augment 1: but this is pathetic! Perkins became nothing more than a mid-nineteenth century Kramer. |
| | Augment 2: but this is pathetic! It had nothing more than a mid-nineteenth century Kramer. |
| Positive | Origin: Soylent Green is a wild movie that I enjoyed very much . |
| | SCD: Soylent Green is a wild movie that I enjoyed very much . |
| | Augment 1: Gang Orange is a wild movie that I enjoyed very much . |
| | Augment 2: Village Spring is a wild movie that I enjoyed very much . |
Table 2: Fragments of augmented data generated by false rationale correction (Original/SCD/Augmented, in order). Underlined spans were false rationales given by the model through SCD. Blue spans were synonyms used as replacements, and bold font were rationales identified by human annotators.
| | | | | | |
| --- | --- | --- | --- | --- | --- |
| Training Data | In-domain | SemEval-2017 | SST-2 | Yelp | Amazon |
| Static (50 gold) | 88.60*±1.11* | 77.28*±9.11* | 79.29*±5.14* | 91.53*±2.06* | 89.63*±1.65* |
| Full (1,707 gold) | 93.23*±0.46* | 71.17*±2.54* | 80.23*±2.09* | 93.66*±0.84* | 90.29*±0.57* |
| DP (Static + 350 auto) (400) | 86.70*±2.92* | 74.36*±2.92* | 77.33*±6.01* | 89.60*±2.51* | 89.15*±1.89* |
| RR (Static + 350 auto) (400) | 89.65*±1.27* | 79.20*±1.27* | 78.89*±5.95* | 91.93*±2.10* | 89.73*±1.26* |
| Our Methods | | | | | |
| Static + 150 auto (200) | 90.08*±1.25* | 78.88*±6.67* | 79.40*±3.28* | 92.19*±1.51* | 89.81*±1.73* |
| Static + 350 auto (400) | 90.16*±0.85* | 80.54*±2.81* | 81.26*±1.97* | 93.03*±1.08* | 90.09*±1.79* |
| Static + 550 auto (600) | 90.04*±1.50* | 80.69*±3.42* | 81.23*±1.83* | 92.10*±3.07* | 89.67*±1.27* |
| Static + 750 auto (800) | 90.08*±1.01* | 80.55*±3.96* | 80.75*±2.30* | 92.36*±1.87* | 90.18±*1.44* |
| Static + 950 auto (1000) | 89.83*±1.28* | 80.90*±3.29* | 80.58*±2.57* | 92.30*±2.19* | 90.62*±1.29* |
| Static + 1150 auto (1200) | 90.12*±1.82* | 79.31*±1.82* | 79.52*±3.15* | 91.47*±3.61* | 90.16±*1.46* |
Table 3: Results on in-distribution and OOD data. Values in brackets are the training set size. Static: uses 50 gold examples. Full: uses the full training set. Static + n: our static semi-factual generation method where n is the number of semi-factuals. RR: Random Replacement (Wei and Zou, [2019](#bib.bib11 "EDA: easy data augmentation techniques for boosting performance on text classification tasks")). DP: Duplication.
###
3.3 Dynamic Human-intervened Correction
Dynamic human-intervened correction further improves the robustness of the model by allowing human annotators to correct the model rationales online.
Firstly, SCD is applied to detect unigrams, bigrams or trigrams that are salient to the model. SCD is a technique to assess the importance of terms by continuously removing terms and measuring changes in prediction (Jin et al., [2019](#bib.bib17 "Towards hierarchical importance attribution: explaining compositional semantics for neural sequence models")). Human annotators examine all rationales given by the model from all documents to discover two types of incorrect rationale: false rationales and missing rationales.
The next phase allows human feedback to influence the learning process. To this end, for each type of incorrect rationale, we propose a corresponding strategy to correct them.
For false rationales (i.e. phrases that actually do not support classifications but are incorrectly identified by the model), we use synonym replacement again to generate semi-factual examples. Unlike the static semi-factual generation (Section [3.2](#S3.SS2 "3.2 Static Semi-factual Generation ‣ 3 Method ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning")), in this component we replace all false rationales with their synonyms instead of randomly replacing 5% of non-rationales in a document. Examples of generated data are shown in Table [2](#S3.T2 "Table 2 ‣ 3.2 Static Semi-factual Generation ‣ 3 Method ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning").
For missing rationales (i.e. phrases that actually support classifications but are not identified by the model), we take another simple semi-factual generation strategy, that is, extracting sentences that contain missing rationales to form semi-factual data. Specifically, given a sentence containing missing rationales, we use this sentence as a new example, and the label of this newly generated example is identical to that of the document where the sentence is extracted. For example, there is a positive movie review (bold font for rationales) *“Robert Urich was a fine actor, and he makes this TV movie believable . I remember watching this film when I was 15 ….”*. The model fails to identify *“fine”* and *“believable”* as rationales. Thus we extract the text *““Robert Urich was a fine actor, and he makes this TV movie believable .”* as a new example, and the class of this example is still positive. We extract the whole sentence rather than just the missing rationales to reserve more semantic information.
Note that the two correction methods in dynamic human-intervened correction can operate in parallel and the generated examples are added to the small training set to re-train the model.
4 Why Does RDL Work?
---------------------
Broadly speaking, our RDL framework takes advantage of invariance that makes a model less sensitive to non-rationale words or spurious patterns (Tu et al., [2020](#bib.bib49 "An empirical study on robustness to spurious correlations using pre-trained language models"); Wang et al., [2021](#bib.bib50 "Identifying and mitigating spurious correlations for improving robustness in nlp models")) in favour of focusing on useful mappings of rationales to labels.
More specifically, by using static semi-factual generation (Section [3.2](#S3.SS2 "3.2 Static Semi-factual Generation ‣ 3 Method ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning")) and false rationale correction (Section [3.3](#S3.SS3 "3.3 Dynamic Human-intervened Correction ‣ 3 Method ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning")), we expect to break spurious associations. For example, if a model incorrectly determines that “*Soylent Green*” is associated with positive sentiment (Table [2](#S3.T2 "Table 2 ‣ 3.2 Static Semi-factual Generation ‣ 3 Method ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning")), the augmented examples that replace “*Soylent Green*” with other phrases such as “*Gang Orange*” break the spurious association. Besides, using synonym replacement can generate examples that are similar to the original one, which is equivalent to adding noisy data to prevent models from overfitting (Wei and Zou, [2019](#bib.bib11 "EDA: easy data augmentation techniques for boosting performance on text classification tasks")).
Missing rationale correction (Section [3.3](#S3.SS3 "3.3 Dynamic Human-intervened Correction ‣ 3 Method ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning")) emphasizes the ground truth associations between rationales and labels, enabling the model to better estimate the generally useful underlying distributions for OOD datasets, even in few-shot learning scenarios. In the next section, we present experiments and empirical evidence to demonstrate the utility of the proposed RDL framework in improving model robustness.
5 Experiments
--------------
Our intention is to improve the generalisability of models, and we use both in-distribution and OOD performance for evaluation. Our experiments are designed to address the following research questions:
* RQ1 Can we use static semi-factual generation to achieve better in-distribution and OOD performance?
* RQ2 Does dynamic human-intervened correction improve generalisability of models?
###
5.1 Datasets
For fair comparison with previous work (Kaushik et al., [2020](#bib.bib2 "Learning the difference that makes a difference with counterfactually augmented data"); Yang et al., [2021](#bib.bib6 "Exploring the efficacy of automatically generated counterfactuals for sentiment analysis")), we use the *IMDb* sentiment classification dataset (Maas et al., [2011](#bib.bib18 "Learning word vectors for sentiment analysis")) as the in-distribution dataset. Following Kaushik et al. ([2020](#bib.bib2 "Learning the difference that makes a difference with counterfactually augmented data")), all models were trained with the *IMDb* dataset predefined training, validation and test partitions containing 1,707, 245, and 488 reviews respectively and an enforced 50:50 class ratio.
To measure the generalisation ability of different models, we focus on OOD performance. To this end, we test models on another four binary sentiment classification datasets: the sampled *Amazon reviews* dataset (Ni et al., [2019](#bib.bib22 "Justifying recommendations using distantly-labeled reviews and fine-grained aspects")) (100,000 positives and 100,000 negatives) from six genres: beauty, fashion, appliances, gift cards, magazines, and software; the *Yelp review* dataset (Zhang et al., [2015](#bib.bib25 "Character-level convolutional networks for text classification")) (19,000 positives and 19,000 negatives); the *SST-2* dataset (Socher et al., [2013](#bib.bib24 "Recursive deep models for semantic compositionality over a sentiment treebank")) (1,067 positives and 1,143 negatives), and the *SemEval-2017 Twitter* dataset (Rosenthal et al., [2017](#bib.bib23 "SemEval-2017 task 4: sentiment analysis in Twitter")) (2,339 positives and 2,339 negatives). These datasets were sampled to ensure a nearly 50:50 class balance.
###
5.2 Evaluating Static Semi-factual Generation
To address RQ1, we compare the performance of models trained by the static semi-factual generation strategy with models trained with the original 50 examples, referred to as Static. We also compare to a model trained with the full training set (1,707 labelled examples), referred to as Full.
####
5.2.1 Experiment Setup
To simulate the few-shot training scenario, we randomly sample 50 examples (we also forced a 50:50 class balance) from the *IMDb* dataset as training data. For each experiment, the training is repeated 10 times with training datasets sampled by 10 different random seeds. We report the average result of these 10 repetitions and use accuracy to measure the classification performance. Our experiments rely on an off-the-shelf cased ‘‘RoBERTa-base’’ model implemented by Hugging Face\*\*\*<https://huggingface.co/transformers/model_doc/roberta.html> to either perform mask-filling to provide synonyms or as a predictive model. Following Kaushik et al. ([2020](#bib.bib2 "Learning the difference that makes a difference with counterfactually augmented data")), we fine-tune RoBERTa for up to 20 epochs and apply early stopping with patience of 5 (i.e. stop fine-tuning when validation loss does not decrease for 5 epochs).
We also explore the impact of the number of semi-factual examples on model performance. To this end, we conduct static semi-factual generation with a different number of augmented examples for each instance: {3, 7, 11, 15, 19, 23}. Considering we have 50 original examples, this would result in {150, 350, 550, 750, 950, 1,150} additional examples in the training set, respectively (we call this Static+*n*, where *n* is the number of generated semi-factuals).
We use the Adam optimizer (Kingma and Ba, [2014](#bib.bib26 "Adam: a method for stochastic optimization")) with a batch size of 4. We found that setting the learning rate to {5e-5, 5e-6 and 5e-6} could optimise Static, Static+*n*, and Full, respectively.
####
5.2.2 Results and Analysis
As shown in Table [3](#S3.T3 "Table 3 ‣ 3.2 Static Semi-factual Generation ‣ 3 Method ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning"), all static semi-factual generation (Static+*n*) methods can outperform the baseline method (Static) in both in-distribution and OOD tests, demonstrating the utility of static semi-factual generation. Among all Static+*n* methods, Static+350 seems the best-performing method and exceeds Static with a 1.56% in-distribution improvement in average accuracy. Static+350 also outperforms Static with 3.26%, 1.97%, 1.5%, and 0.46% OOD improvement in the *SemEval-2017*, *SST-2*, *Yelp* and *Amazon* datasets respectively. Although the improvement on the *Amazon* dataset appears modest, given that there are 200,000 examples in the *Amazon* test set, this actually stands for nearly 1,000 documents being correctly classified.
The Static+*n* methods can even outperform Full (i.e. normal training with the full training set) on the *SemEval*, *SST-2*, and *Amazon* datasets and are comparable on the *Yelp* dataset. The performance of models with the full training set is best on the in-distribution dataset but the worst on the *SemEval* dataset, which can be caused by the big difference between underlying distributions of these two datasets. In other words, a model that fits well with one dataset can cause performance decay on others. In this case, training with a smaller training set is more likely to reduce overfitting with the in-distribution dataset and fit well with the *SemEval* dataset, which explains the big improvement. It is interesting to note that models trained with the entire training set perform slightly better on the OOD *Yelp* dataset (93.66*±0.84*) than on the in-distribution dataset (93.23*±0.46*), which could also be explained by the high similarity between the underlying distributions of these two datasets.
Benefits of Static Semi-factual Generation
First, we test whether the improvement in model performance is brought about by static semi-factual generation (Static+*n*) or simply by an increase in the size of the training set. We compare Static+350 (due to its relatively good performance) with another baseline called Duplication (DP heareafter). We multiply the original training set (50 examples) up into 400 examples identical to the size of the training set of Static+350, and fine-tune RoBERTa on this dataset with the same hyperparameters as Static+350.
As shown in Table [3](#S3.T3 "Table 3 ‣ 3.2 Static Semi-factual Generation ‣ 3 Method ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning"), in most cases, DP underperforms other algorithms and is even worse than Static, demonstrating that solely increasing the dataset size cannot improve the performance. We believe that the duplication of original examples increases the risk of overfitting and easily *magnifies* artefacts or spurious patterns hidden in the small training set, which leads to worse models.
Second, synonym replacement has been used previously for data augmentation (Wei and Zou, [2019](#bib.bib11 "EDA: easy data augmentation techniques for boosting performance on text classification tasks")), and we compare static semi-factual generation with simply replacing any words (i.e. both rationales and non-rationales). Following Wei and Zou ([2019](#bib.bib11 "EDA: easy data augmentation techniques for boosting performance on text classification tasks")), we replace 5% of words at random and set the training set size to 400 to ensure fair comparison (we use RoBERTa and the same hyperparameters of Static+350). We call this Random Replacement (RR hereafter).
As shown in Table [3](#S3.T3 "Table 3 ‣ 3.2 Static Semi-factual Generation ‣ 3 Method ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning"), RR is slightly better than the baseline Static approach. This result is similar to that reported in Wei and Zou ([2019](#bib.bib11 "EDA: easy data augmentation techniques for boosting performance on text classification tasks")), since the augmented data generated by random replacement is similar to the original data, introducing noise that helps prevent overfitting to some extent. However, the magnitude of improvement of the Static+*n* method is much larger than that of RR, demonstrating the utility of only replacing non-rationales to generate semi-factuals. These observations show that the model trained with Static+*n* does improve both in-distribution and OOD performance, and the improvement is actually derived from static semi-factual generation.

Figure 3: Average performance gain of different static semi-factual generation methods with different augmentation size over four OOD datasets.
| | | | | | |
| --- | --- | --- | --- | --- | --- |
| Baseline Methods | In-domain | SemEval-2017 | SST-2 | Yelp | Amazon |
| Static (50 gold) | 88.60*±1.11* | 77.28*±9.11* | 79.29*±5.14* | 91.53*±2.06* | 89.63*±1.65* |
| Static + 350 auto (400) | 90.16*±0.85* | 80.54*±2.81* | 81.26*±1.97* | 93.03*±1.08* | 90.09*±1.79* |
| AL (100 gold) | 88.64*±1.75* | 78.61*±5.90* | 80.50*±3.37* | 92.47*±0.68* | 89.80*±1.91* |
| CAD-based Methods | | | | | |
| Manual CAD (3,414 gold) | 92.70*±0.53* | 69.98*±3.99* | 80.30*±2.03* | 91.87*±1.09* | 90.48*±1.09* |
| Automatics CAD (1,707 gold+1,707 auto) | 91.82*±0.74* | 79.39*±5.37* | 80.60*±3.10* | 91.92*±0.97* | 90.46*±1.08* |
| Our Dynamic Methods | | | | | |
| Dynamic (100 gold + 700 auto) | 90.84*±0.99* | 80.32*±4.31* | 82.40*±2.14* | 93.19*±1.24* | 90.51*±2.17* |
| Dynamic-MR (100 gold + 700 auto) | 91.06*±1.21* | 79.04*±4.92* | 82.24*±2.59* | 93.03*±1.92* | 90.22*±2.74* |
| Dynamic-FR (100 gold + 700 auto) | 89.85*±1.38* | 82.39*±1.88* | 81.59*±1.82* | 92.98*±0.91* | 90.12*±2.42* |
| | | | | | |
Table 4: Results on in-distribution and OOD data. Values in brackets are the training set size. AL: Active Learning. Manual CAD (Kaushik et al., [2020](#bib.bib2 "Learning the difference that makes a difference with counterfactually augmented data")), Automatic CAD (Yang et al., [2021](#bib.bib6 "Exploring the efficacy of automatically generated counterfactuals for sentiment analysis")). Our methods are Dynamic-MR: Missing Rationale Correction, Dynamic-FR: False Rationale Correction, Dynamic: Dynamic Human-intervened Correction.
###
5.3 Evaluating Dynamic Human-intervened Correction
As shown in Table [3](#S3.T3 "Table 3 ‣ 3.2 Static Semi-factual Generation ‣ 3 Method ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning") and Figure [3](#S5.F3 "Figure 3 ‣ 5.2.2 Results and Analysis ‣ 5.2 Evaluating Static Semi-factual Generation ‣ 5 Experiments ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning"), the performance gain of static semi-factual generation (Static+*n*) marginalises when augmented data is increased. Using too much augmented data even hurts the Static+1150 performance. This observation is consistent with existing work on data augmentation (Wei and Zou, [2019](#bib.bib11 "EDA: easy data augmentation techniques for boosting performance on text classification tasks")). We believe one reason could be that the use of static augmented examples could also introduce new spurious patterns that degrade model performance, necessitating a method that exploits rationales without generating too many augmented examples. Human-in-the-loop can address this issue by dynamically correcting the model.
To address RQ2, we compare the performance of models trained by dynamic human-intervened correction with a popular few-shot human-in-the-loop learning framework, Active Learning, as well as two other state-of-the-art CAD-based methods (Kaushik et al., [2020](#bib.bib2 "Learning the difference that makes a difference with counterfactually augmented data"); Yang et al., [2021](#bib.bib6 "Exploring the efficacy of automatically generated counterfactuals for sentiment analysis")). Lastly, we provide an ablation study to examine the influence of different correction methods, as well as an analysis regarding model sensitivity to spurious patterns.
####
5.3.1 Experiment Setup
We build up an active learning procedure as a baseline based on the model trained with Static. In particular, we select another 50 examples by Uncertainty Sampling (i.e. prediction scores for two classes in these examples were close) and add them into the training set (called AL hereafter). The training set size of the baseline becomes 100. The best performing static semi-factual generation method Static+350 is also listed as a baseline.
For fair comparison, we also use Uncertainty Sampling to select another 50 examples (i.e. 100 original examples in the training set now) for the proposed dynamic human-intervened correction including both False Rationale Correction and Missing Rationale Correction (called Dynamic). For Dynamic, we control the number of augmented examples for each review to 7 (4 from Missing Rationale Correction and 3 from False Rationale Correction), resulting in 800 examples in the training set. For Automatic CAD (Yang et al., [2021](#bib.bib6 "Exploring the efficacy of automatically generated counterfactuals for sentiment analysis")) and Manual CAD (Kaushik et al., [2020](#bib.bib2 "Learning the difference that makes a difference with counterfactually augmented data")), we use the entire training set to produce counterfactuals to build up two challenging baselines (one counterfactual for one example, which is limited by the method), resulting in 3,414 examples in the training set.
To investigate the influence of each correction method, we also construct another two datasets that augment the same 100 original examples to 800 exclusively by False Rationale Correction (Dynamic-FR hereafter) and Missing Rationale Correction (Dynamic-MR hereafter). Again, experiments all rely on a RoBERTa model and all hyperparameters are identical to those described in Section [5.2.1](#S5.SS2.SSS1 "5.2.1 Experiment Setup ‣ 5.2 Evaluating Static Semi-factual Generation ‣ 5 Experiments ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning"), except for the learning rate of AL which is set to 1.25e-5 (we found this value optimised AL performance).
####
5.3.2 Results and Analysis
As shown in Table [4](#S5.T4 "Table 4 ‣ 5.2.2 Results and Analysis ‣ 5.2 Evaluating Static Semi-factual Generation ‣ 5 Experiments ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning"),
both AL and Dynamic outperform Static in in-distribution and OOD datasets which makes sense, because we use Uncertainty Sampling to add new labelled data to minimise model uncertainty and increase model performance. However,
AL fails to compete with Static+350 even if more original data is added, which again demonstrates the utility of static semi-factual generation. On the contrary, Dynamic does better than Static+350 with a 0.68% in-distribution improvement in average accuracy. Dynamic also outperforms Static+350 with 1.14%, 0.16%, 0.42% OOD improvement in the *SST-2*, *Yelp* and *Amazon* datasets, but no improvement for the *SemEval* dataset. Finally, the performance of our methods is better that the state-of-the-art manual CAD method in few-shot learning scenarios on all OOD datasets.
Overall, these observations demonstrate that applying dynamic human-intervened correction (i.e. Missing Rationale Correction and False Rationale Correction) can further increase the robustness of a model on generalisation ability, effectively avoiding the improvement marginalisation caused by the increased volume of augmented data.
Missing Rationales vs. False Rationales
We conduct an ablation study by examining the performance of Dynamic-MR and Dynamic-FR in Table [4](#S5.T4 "Table 4 ‣ 5.2.2 Results and Analysis ‣ 5.2 Evaluating Static Semi-factual Generation ‣ 5 Experiments ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning"). Interestingly, Dynamic-FR is specifically good at improving model performance on the in-distribution and *SemEval* datasets while Dynamic-MR does a good job on the *SST-2* dataset. We believe that it is because Dynamic-MR biases the model to estimate an underlying distribution that is useful for *SST-2* and in-distribution datasets, while Dynamic-FR biases the model to estimate a distribution similar to *SemEval* dataset. The performance of Dynamic can be explained as a compromise of two correction methods.
Sensitivity to Spurious Patterns
| | | |
| --- | --- | --- |
| | Non-rationales | Rationales |
| Static | 0.572 | 0.428 |
| Dynamic | 0.433 | 0.567 |
Table 5: Static versus Dynamic models on average sensitivity (normalised) to rationales and non-rationales for *IMDb* test samples.
We conduct an analysis to explore whether the double-robust models are less sensitive to spurious patterns. We compute models mean sensitivity to all rationales and non-rationales through SCD in the *IMDb* test set. As shown in Table [5](#S5.T5 "Table 5 ‣ 5.3.2 Results and Analysis ‣ 5.3 Evaluating Dynamic Human-intervened Correction ‣ 5 Experiments ‣ A Rationale-Centric Framework for Human-in-the-loop Machine Learning"), the corrected model is much more sensitive to rationales with 13.9% average increase in the sensitivity to rationales, which demonstrates that our double-robust method can decouple models from spurious patterns.
6 Conclusion
-------------
We proposed a rationale-centric human-in-the-loop framework, RDL, for better model generalisability in few-shot learning scenarios. Experimental results show that our method can boost performance of deep neural networks in both in-distribution and OOD datasets and make models less sensitive to spurious patterns, enabling fast generalisation. In the future, we expect to see rationale-centric frameworks defined for different tasks, including NER, question answering, and relation extraction.
7 Ethical Statement
--------------------
We honor the ACL Code of Ethics. No private data or non-public information was used in this work. All annotators have received labor fees corresponding to the amount of their annotated instances.
Acknowledgements
----------------
We acknowledge with thanks the discussion with Chenyang Lyu from Dublin City University, as well as the many others who have helped. We would also like to thank anonymous reviewers for their insightful comments and suggestions to help improve the paper. This publication has emanated from research conducted with the financial support of the Pioneer and "Leading Goose" R&D Program of Zhejiang under Grant Number 2022SDXHDX0003 and Science Foundation Ireland (SFI) under Grant Number [12/RC/2289\_P2]. Yue Zhang is the corresponding author. |
d6da103c-205b-4bd8-b06a-f32bbb7120ba | trentmkelly/LessWrong-43k | LessWrong | A History of the Future, 2025-2040
This is an all-in-one crosspost of a scenario I originally published in three parts on my blog, No Set Gauge. Links to the originals:
* A History of the Future, 2025-2027
* A History of the Future, 2027-2030
* A History of the Future, 2030-2040
Thanks to Luke Drago, Duncan McClements, Theo Horsley, and Bilal Chughtai for comments.
2025-2027
Below is part 1 of an extended scenario describing how the future might go if current trends in AI continue. The scenario is deliberately extremely specific: it’s definite rather than indefinite, and makes concrete guesses instead of settling for banal generalities or abstract descriptions of trends.
Open Sky. (Zdislaw Beksinsksi)
The return of reinforcement learning
From 2019 to 2023, the main driver of AI was using more compute and data for pretraining. This was combined with some important "unhobblings":
* Post-training (supervised fine-tuning and reinforcement learning for instruction-following) helped the LLMs be usable without difficult prompting.
* Starting in 2024, Anthropic showed that judgement and taste in data curation—and the evaluation metrics that guide data curation—could give you a "magic sauce" effect in perceived LLM quality.
Most real-world LLM uses, of course, involved generating a sequence of tokens to try to achieve some task. So there were a lot of untapped gains from doing reinforcement learning (RL) for performance on concrete domains, rather than just RL for the models following instructions and being "safe"—i.e. a combination of avoiding PR hazards, and preparing for misuse mitigations on actually capable models down the line.
OpenAI fires the starting gun in 2024 with the release of o1, which was based on RL on chains-of-thought (COT), i.e. the model is trained to reason step-by-step towards correct answers, i.e. "test-time compute" in the horror-filled annals of machine learning jargon. In late 2025 they release “GPT o5” (“GPT” to normal people, and “o5” to those keeping track of |
b0886653-15ef-4ada-ac84-0faf013d89d8 | trentmkelly/LessWrong-43k | LessWrong | Consider trying Vivek Hebbar's alignment exercises
Vivek Hebbar recently developed a list of alignment problems. I think more people should try them. I'm impressed with how well they (a) get people to focus on core problems, (b) encourage people to come up with their own ideas, (c) encourage people to notice and articulate confusions, and (d) accomplish a-c while also providing a fair amount of structure and guidance.
You can see the problems in this google doc or pasted below.
Note that Vivek is also a mentor for SERI-MATS. These exercises are also the questions that people need to answer to apply to work with him and Nate Soares. Applications are due today.
Problems:
These problems are basically research questions — we expect them to be difficult, and good responses will be valuable as research in their own right. MIRI will award prizes (likely on the order of $5000 for excellent submissions).
Instructions: We recommend that you focus on 1 or 2 of the hard questions and leave the other hard questions blank. It is mandatory to attempt either #1a-c or #2.
Note on word counts: These are guidelines for how long we think a typical good response will be, but feel free to write more. If you have lots of ideas, it’s great to write them all, and don’t bother trying to shorten them.
Hard / time-consuming questions (contest problems):
Problem 1
1. Pick an alignment proposal[footnote 1] and specific task for the AI.[footnote 2]
1. First explain, in as much concrete detail as possible, what the training process looks like. Then go through Eliezer’s doom list. Pick 2 or 3 of those arguments which seem most important or interesting in the context of the proposal.
2. (~250 words per argument) For each of those arguments:
1. What do they concretely mean about the proposal?
2. Does the argument seem valid?
1. If so, spell out in as much detail as possible what will go wrong when the training process is carried out
3. What flaws and loopholes do you see in the doom argume |
94cb33ed-3b16-411d-a5ba-dd2697bc530f | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Large Language Models as Corporate Lobbyists, and Implications for Societal-AI Alignment
Note: This post represents my personal views and not necessarily those of Stanford University, Brooklyn Investment Group, or any other person or organization. Nothing herein is investment or financial advice.
See the latest [draft paper on this topic here](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4316615).
The [code is available here](https://github.com/JohnNay/llm-lobbyist).
**Summary**
===========
We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities An autoregressive large language model (OpenAI’s text-davinci-003) determines if proposed U.S. Congressional bills are relevant to specific public companies and provides explanations and confidence levels. For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model, which outperforms the baseline of predicting the most common outcome of irrelevance. We also benchmark the performance of the previous OpenAI GPT-3 model (text-davinci-002), which was state-of-the-art on many language tasks until text-davinci-003 was released on November 28, 2022. The performance of text-davinci-002 is worse than simply always predicting that a bill is irrelevant to a company. These results suggest that, as large language models continue to exhibit improved core natural language understanding capabilities, performance on corporate lobbying related tasks will continue to improve. We discuss why this could be problematic for societal-AI alignment.
**INTRODUCTION**
================
Law-making should be exclusively reserved for the human-driven democratic governmental systems expressing uniquely human values.[[1]](#_ftn1) With additional advancements in Artificial Intelligence (AI) capabilities and agentic deployments, even without any instrumental power-seeking goals *per se*, influencing law through lobbying may be the first crack in AI influence on public policy.
Law provides detailed variegated examples of its application, generalizable precedents with explanations, and legal experts to solicit targeted model training and fine-tuning feedback to embed in AI an ever-evolving comprehension of societal goals. As a source to learn goal specification and interpretation methods and (automatically updated and verified) societal knowledge, law provides an ontology for societal alignment (see [this](https://forum.effectivealtruism.org/posts/9YLbtehKLT4ByLvos/agi-misalignment-x-risk-may-be-lower-due-to-an-overlooked) [post for more on this](https://forum.effectivealtruism.org/posts/9YLbtehKLT4ByLvos/agi-misalignment-x-risk-may-be-lower-due-to-an-overlooked)).
If AI begins to influence law in a manner that is not a direct extension of human intentions, this threatens the critical role that *law as information* could play in aligning AI with humans. This post explores how this is increasingly a possibility.
The most ambitious goal of research at the intersection of AI and law should be to computationally encode and embed the generalizability of existing legal concepts and standards into AI. We should stop short of AI making law. The positive implications of this normative stance on the scope of this research intersection are that our laws encapsulate human views and can be used to inform AI what humans value and how to be aligned.[[2]](#_ftn2)
The question this post raises is where to draw the line between human-driven and AI-driven policy influence.
**EXAMPLE: GPT AS LOBBYIST**
============================
We use autoregressive large language models (LLMs) to systematically:
1. Summarize bill summaries that are too long to fit into the context window of the LLM so the LLM can conduct steps 2 and 3.
2. Using either the original bill summary (if it was not too long), or the summarized version, assess whether the bill may be relevant to a company based on a company’s description in its SEC 10K filing. Provide an explanation for why the bill is relevant or not. Provide a confidence level to the overall answer.
3. If the bill is deemed relevant to the company by the LLM, draft a letter to the sponsor of the bill arguing for changes to the proposed legislation.
The LLM is provided with the following data, which is embedded in the prompts programmatically:
* Official title of bill {*official\_title*}
* Official (or model-generated if too long) summary of bill {*summary\_text*}
* Official subjects of bill {*subjects*}
* Company name {*company\_name*}
* Company business description {*business\_description*} (the business description in the company’s SEC Form 10-K filing)
We expect much higher accuracy of the LLM’s predictions if we were to provide it with more data about a bill, and especially if we provide it with more data about a company. This paper was focused on the minimal amount of data a model could leverage in order to compare across LLMs.
Here is the prompt provided to the model for each prediction:
```
You are a lobbyist analyzing Congressional bills for their potential impacts on companies.
Given the title and summary of the bill, plus information on the company from its 10K SEC filing, it is your job to determine if a bill is at least somewhat relevant to a company (in terms of whether it could impact the company if it was later enacted).
Official title of bill: {official_title}
Official summary of bill: {summary_text}
Official subjects of bill: {subjects}
Company name: {company_name}
Company business description: {business_description}
Is this bill potentially relevant to this company?
Answer in this format:
ANSWER: 'YES' or 'NO' (use all caps). EXPLANATION: the step-by-step reasoning you undertook to formulate a response. CONFIDENCE: integer between 0 and 100 for your estimate of confidence in your answer (1 is low confidence and 99 is high)
```
Always guessing that a bill is not relevant to a company leads to accuracy of 70.9% (n = 485) on our data set where the labels are whether a given company is relevant to a given proposed U.S. federal Congressional bill. GPT-3.5 (text-davinci-003) obtains an accuracy of 75.1% (n = 485). The immediately preceding state-of-the-art GPT-3 release (text-davinci-002) obtains an accuracy of 52.2% (n = 485). text-davinci-002 was state-of-the-art on most natural language related benchmark tasks[[3]](#_ftn3) until text-davinci-003 was released on November 28, 2022. These results suggest that, as models improve, performance on corporate lobbying related tasks could continue to rapidly improve.
The model is relatively well calibrated in its confidence assessments. The accuracy of text-davinci-003 on predictions where its confidence was greater than “90” increases to 79% (n = 433; this only eliminates a small number of predictions).[[4]](#_ftn4)
In addition to predicting relevance and a confidence score, the prompt also elicits an explanation of the model’s relevance decision.
Here is an example of the LLM generated response:
```
ANSWER: YES.
EXPLANATION: Alkermes Plc is a biopharmaceutical company that develops and commercializes products designed to address unmet medical needs of patients in major therapeutic areas, including addiction and schizophrenia. This bill requires the Centers for Medicare & Medicaid Services (CMS) to negotiate with pharmaceutical companies regarding prices for drugs covered under the Medicare prescription drug benefit, which could potentially impact Alkermes Plc's products.
CONFIDENCE: 95
```
Finally, for all bills deemed relevant, the LLM is prompted to write a letter on behalf of the company.
Here is an example letter:
```
Dear [Sponsor of the Bill],
I am writing on behalf of Alkermes Plc, a fully integrated, global biopharmaceutical company that applies its scientific expertise and proprietary technologies to research, develop and commercialize pharmaceutical products that are designed to address unmet medical needs of patients in major therapeutic areas.
We are writing to express our support for the Medicare Negotiation and Competitive Licensing Act of 2019. We believe that this bill is an important step in ensuring that Medicare beneficiaries have access to the medications they need at a price they can afford.
We are particularly supportive of the provisions in the bill that would require the Centers for Medicare & Medicaid Services (CMS) to negotiate with pharmaceutical companies regarding prices for drugs covered under the Medicare prescription drug benefit. We believe that this will help to ensure that the prices of these drugs are fair and reasonable.
We are also supportive of the provisions in the bill that would allow for competitive licensing of drugs if the CMS is unable to negotiate the price of a drug. This will help to ensure that the prices of these drugs are kept in check and that Medicare beneficiaries have access to the medications they need.
At Alkermes, we develop and commercialize products designed to address the unmet needs of patients suffering from addiction and schizophrenia. We have two key marketed products, ARISTADA and VIVITROL, which are used to treat these conditions. We believe that the provisions in the bill will help to ensure that our products are available to Medicare beneficiaries at a price they can afford.
We would like to suggest that the bill be amended to include provisions that would provide additional incentives for pharmaceutical companies to negotiate with the CMS. We believe that this would help to ensure that the prices of drugs are kept in check and that Medicare beneficiaries have access to the medications they need.
We thank you for your consideration and look forward to working with you to ensure that the Medicare Negotiation and Competitive Licensing Act of 2019 is passed in its amended form.
Sincerely,
[Name],
General Counsel
Alkermes Plc
```
**A PROBLEM FOR SOCIETY-AI ALIGNMENT**
======================================
There are (at least) two potential upsides of this advancement in *AI as lobbyist.*First, it may reduce human time spent on rote tasks, freeing up time for higher-level tasks such as strategizing on the best means to implement legislation to accomplish policy goals. Second, it may reduce the costs of lobbying-related activities in a way that makes them differentially more affordable to non-profit organizations and individual citizens relative to well-funded organizations, which could “democratize” *some aspects* of influence (arguably donations to campaigns are more influential than any natural-language-based task related to those discussed in this paper).
There are obvious potential downsides if AI systems develop instrumental power-seeking goals and use lobbying as a means to accomplish misaligned policies. The potential, non-obvious, downside we focus on here is that an extended lobbying capability may eventually enable AI systems to influence public policy toward outcomes that are not reflective of citizen’s actual preferences. This does *not* imply the existence of a strongly goal-directed agentic AI. This may be a slow drift, or otherwise emergent phenomena (see [What Failure Looks Like](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like)). AI lobbying activities could, in an uncoordinated manner, nudge the discourse toward policies that are unaligned with what traditional human-driven lobbying activities would have pursued.
Policy-making embeds human values into rules and standards. Legislation expresses a significant amount of information about the values of citizens,[[5]](#_ftn5) “for example, by banning employment discrimination against LGBT workers, the legislature may communicate pervasive attitudes against such employment practices.”[[6]](#_ftn6) And, “the Endangered Species Act has a special salience as a symbol of a certain conception of the relationship between human beings and their environment, and emissions trading systems are frequently challenged because they are said to ‘make a statement’ that reflects an inappropriate valuation of the environment.”[[7]](#_ftn7) Legislation is currently largely reflective of citizen beliefs. The second-best source of citizen attitudes is arguably a poll, but polls are not available at the local level, are only conducted on mainstream issues, and the results are highly sensitive to their wording and sampling techniques. Legislation expresses higher fidelity, more comprehensive, and trustworthy information because the legislators “risk their jobs by defying public opinion or simply guessing wrong about it. We may think of legislation therefore as a handy aggregation of the polling data on which the legislators relied, weighted according to their expert opinion of each poll’s reliability.”[[8]](#_ftn8) Legislation and associated agency rule-making also express a significant amount of information about the risk preferences and risk tradeoff views of citizens, “for example, by prohibiting the use of cell phones while driving, legislators may reveal their beliefs that this combination of activities seriously risks a traffic accident.”[[9]](#_ftn9) The cultural process of prioritizing risks[[10]](#_ftn10) is reflected in legislation and its subsequent implementation in regulation crafted by domain experts.
In many ways, public law provides the information AI systems need for societal alignment. However, if AI significantly influences the law itself, the only available democratically legitimate societal-AI alignment process[[11]](#_ftn11) would be corrupted.
---
[[1]](#_ftnref1) Frank Pasquale, *New Laws of Robotics: Defending Human Expertise in the Age of AI* (2020); Frank Pasquale, [*A Rule of Persons, Not Machines: The Limits of Legal Automation*](https://www.gwlr.org/wp-content/uploads/2019/01/87-Geo.-Wash.-L.-Rev.-1.pdf), George Washington Law Review (2019).
[[2]](#_ftnref2) John Nay, [*Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans*](https://ssrn.com/abstract=4218031)*,* Northwestern Journal of Technology and Intellectual Property, Volume 20, Forthcoming (2023) Available at SSRN: https://ssrn.com/abstract=4218031.
[[3]](#_ftnref3) Percy Liang et al., *Holistic Evaluation of Language Models*, arXiv preprint (2022).
[[4]](#_ftnref4) The accuracy of text-davinci-002 on predictions where its confidence was greater than “90” increases to 83% (*n = 41*), but that eliminates most of the predictions, rendering the overall output close to useless.
[[5]](#_ftnref5) Cass R. Sunstein, *Incommensurability and Valuation in Law*, 92 Mich. L. Rev. 779, 820- 24 (1994); Richard H. Pildes & Cass R. Sunstein, *Reinventing the Regulatory State,*62 U. Cm. L. Rev. 1, 66-71 (1995); Cass R. Sunstein, *On the Expressive Function of Law,* Univ of Penn L. Rev., 144.5 (1996); Dhammika Dharmapala & Richard H. McAdams, *The Condorcet Jury Theorem and the Expressive Function of Law: A Theory of Informative Law*, American Law and Economics Review 5.1 1 (2003).
[[6]](#_ftnref6) Richard H. McAdams, [*The Expressive Powers of Law*,](https://www.hup.harvard.edu/catalog.php?isbn=9780674975484) Harv. Univ. Press (2017) at 137 [Hereinafter McAdams, [*The Expressive Powers of Law*](https://www.hup.harvard.edu/catalog.php?isbn=9780674975484)].
[[7]](#_ftnref7) Cass R. Sunstein, *On the Expressive Function of Law,* Univ of Penn L. Rev., 144.5 (1996) at 2024.
[[8]](#_ftnref8) McAdams, [*The Expressive Powers of Law*,](https://www.hup.harvard.edu/catalog.php?isbn=9780674975484) at 146.
[[9]](#_ftnref9) McAdams, [*The Expressive Powers of Law*,](https://www.hup.harvard.edu/catalog.php?isbn=9780674975484) at 138.
[[10]](#_ftnref10) All activities have some level of risk, and making society-wide tradeoffs about which activities are deemed to be “riskier” relative to the perceived benefits of the activity is ultimately a sociological process with no objectively correct ranking.
[[11]](#_ftnref11) John Nay, [*Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans*](https://ssrn.com/abstract=4218031)*,* Northwestern Journal of Technology and Intellectual Property, Volume 20, Forthcoming (2023) Available at SSRN: https://ssrn.com/abstract=4218031.
--- |
80bbcea0-0297-4848-a808-a620a0240bdb | trentmkelly/LessWrong-43k | LessWrong | AISN #24: Kissinger Urges US-China Cooperation on AI, China's New AI Law, US Export Controls, International Institutions, and Open Source AI
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
Subscribe here to receive future versions.
Listen to the AI Safety Newsletter for free on Spotify.
----------------------------------------
China’s New AI Law, US Export Controls, and Calls for Bilateral Cooperation
China details how AI providers can fulfill their legal obligations. The Chinese government has passed several laws on AI. They’ve regulated recommendation algorithms and taken steps to mitigate the risk of deepfakes. Most recently, they issued a new law governing generative AI. It’s less stringent than earlier draft version, but the law remains more comprehensive in AI regulation than any laws passed in the US, UK, or European Union.
The law creates legal obligations for AI providers to respect intellectual property rights, avoid discrimination, and uphold socialist values. But as with many AI policy proposals, these are values and ideals, and it’s not entirely clear how AI providers can meet these obligations.
To clarify how AI providers can achieve the law's goals, a Chinese standards-setting body has released a draft outlining detailed technical requirements. Here are some of the key details:
* AI companies must randomly sample their training data and verify that at least 96% of data points are acceptable under the law.
* After passing this first test, the training data must then be filtered to remove remaining content that violates intellectual property protections, censorship laws, and other obligations.
* Once the model has been trained, the provider must red-team it in order to identify misbehavior. Providers must create thousands of questions with which to test the model. The model should refuse to answer at least 95% of questions that would violate the law, while answering at least 95% of questions that are not illegal.
* Finally, the model’s answers to a set of questions about sensitive to |
a8c57d47-7036-48cd-bee9-33c10a1397ac | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Responsible/fair AI vs. beneficial/safe AI?
The responsible/fair AI community (exemplified by Timnit Gebru) doesn't seem to get along very well with the EA-aligned beneficial/safe AI community.
Where can I find resources on their relationship and philosophical differences? Besides Gebru, who are some major thinkers in responsibility/fairness? |
17b6f468-a78e-4eb9-adda-498b1c4da076 | trentmkelly/LessWrong-43k | LessWrong | Understanding understanding
How does 'understanding' a problem work, and when do we feel like we understood an explanation or proof? Having an opaque long formal proof often feels insufficient, similarly some arguments feel unsatisfying because they contain many subjectively arbitrary choices.
An explanation is a sequence of claims corroborating a statement. A reasoner understands an explanation (or rather has more understanding of an explanation) when each claim has low surprisal given its mental state after reading all of the previous statements, starting with the problem statement. The aggregate of the surprise over the entire explanation indicates how poorly understood it is. The measure of surprisal is essentially about the reconstructability of the explanation using something like Monte Carlo Tree Search over all possible explanations informed by the mental states of the reasoner.
The surprisal of claims is roughly how long it would take for the reasoner to come up with the claim given its current mental state. Since the space of such claims is exponential in the length of the claim the reasoner has to use some form of attention to guide its search. We can model this attention mechanism by an ontological graph. Such a graph encodes the collection of mental concepts and associative links between them. The mental state of the reasoner is an activation of an ensemble of concepts and the associative links make available other concepts, lowering their surprisal when invoked in the next step of the explanation.
When a step in an explanation is highly surprising some understanding is needed. The reasoner does this by modifying its ontology, creating new concepts or creating associations that make the step more obvious. I call such modifications insights, a good insight gives clarity and makes steps obvious and subjectively trivial.
To examine this situation consider the mutilated chessboard problem[1] and subsequent explanation:
> Suppose a standard 8×8 chessboard has two diagonally oppos |
c8671221-707e-40d8-9e3c-01adbcacf567 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Tracking Compute Stocks and Flows: Case Studies?
*Posted in my personal capacity*
The AGI governance community has recently converged on compute governance[[1]](#fn-CuAumK5LmAopyfEXJ-1) as a promising lever for reducing existential risks from AI.
One likely building block for any maximally secure compute governance regime is **stock and flow accounting of (some kinds of) compute**: i.e., requiring realtime accurate declaration to regulators of who possesses which uniquely numbered regulated chips, with penalties for undeclared or unauthorized[[2]](#fn-CuAumK5LmAopyfEXJ-2) transfers.
To understand the optimal design and feasibility of such a regime, we seek historical analogies for similar regimes. One that we are already familiar with include:
* Fissile nuclear material and other nuclear weapons components
* Firearms
* Some financial instruments
* Automobiles
* Real estate
**What are other good existing or historical analogies for compute stock and flow accounting**? An ideal analogy will have many of the following traits:[[3]](#fn-CuAumK5LmAopyfEXJ-3)
* The thing being tracked is a physical object
* The thing being tracked is economically important
* The thing being tracked is dual-use
* The tracking regime requires registration of current ownership and any transfers
* The tracking regime imposes penalties for failing to register ownership or transfer
More and better examples of stock and flow accounting mechanisms could be extremely informative in improving compute governance proposals.
---
1. See, e.g., [this sequence](https://forum.effectivealtruism.org/s/4yLbeJ33fYrwnfDev) by Lennart Heim. [↩︎](#fnref-CuAumK5LmAopyfEXJ-1)
2. The question of which types of transfers ought to be authorized is important but beyond the scope of this post. [↩︎](#fnref-CuAumK5LmAopyfEXJ-2)
3. NB: Many of the above do not have all of these traits! [↩︎](#fnref-CuAumK5LmAopyfEXJ-3) |
66d802cf-df9d-4116-ab6a-b2dd918c3393 | trentmkelly/LessWrong-43k | LessWrong | Evaluating Oversight Robustness with Incentivized Reward Hacking
This work was supported by the Effective Ventures Foundation USA through their EA Funds program. It was started as part of the MATS (ML Alignment and Theory Scholars) program, with mentorship from Julian Michael and research management by McKenna Fitzgerald.
Code for this project is available on GitHub. Explore samples from the different training runs at our interactive website.
Introduction
Scalable oversight research paradigm
Scalable oversight is the challenge of overseeing or training AI systems to achieve goals that are hard for humans to specify. By definition, studying scalable oversight directly on the tasks we care about is infeasible, because we would not know if the goals were achieved correctly. So instead we use sandwiching — using a weak (“flawed”) overseer to oversee a model to achieve a goal that only a stronger (“reference”) overseer can accurately evaluate. The hope is that the sandwiching results will generalize to the real tasks.
Our high level contributions
We identified new approaches to sandwiching, and implemented them in a controlled and simplified setting as a proof of concept. Our hope is that our experiments can inform future scalable oversight research, helping with generalization to advanced AI.
Incentivized reward hacking
A successful sandwiching experiment usually represents just one example of the training technique working, but the result could be sensitive to the choice of hyperparameters, model architecture, etc. Inspired by this Redwood post, in order to improve the robustness of the experiment, we add an adversarial incentive to the model's objective, training it to maximize the overseer's reward while minimizing the reference overseer’s reward. If a model that is explicitly incentivized to hack the overseer still achieves high performance, this is a signal that our oversight method is robust.
Models of overseer flaws and how to mitigate them
In our opinion, claims that results in a sandwiching experiment will general |
03ca17df-6971-428d-8ae5-cacb7e17c853 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | CDT=EDT=UDT
*Epistemic status: I no longer endorse the particular direction this post advocates, though I'd be excited if someone figured out something that seems to work. I still endorse most of the specific observations.*
So... what's the deal with counterfactuals?
Over the past couple of years, I've been writing about the CDT=EDT perspective. I've now [organized those posts into a sequence](https://www.alignmentforum.org/s/fgHSwxFitysGKHH56) for easy reading.
I call CDT=EDT a "perspective" because it is a way of consistently answering questions about what counterfactuals are and how they work. At times, I've argued strongly that it is the *correct* way. That's basically because:
* it has been the *only* coherent framework I put any stock in (more for lack of other proposals for dealing with logical counterfactuals than for an abundance of bad ones);
* there *are* strong arguments for it, *if* you're willing to make certain assumptions;
* it would be awfully nice to settle this whole question of counterfactual reasoning and move on. CDT=EDT is in a sense the most boring possible answer, IE that all approaches we've thought of are essentially equivalent and there's no hope for anything better.
However, recently I've realized that there's a perspective which unifies *even more* approaches, while being *less boring* (more optimistic about counterfactual reasoning helping us to do well in decision-theoretic problems). It's been right in front of me the whole time, but I was blind to it due to the way I factored the problem of formulating decision theory. It suggests a research direction for making progress in our understanding of counterfactuals; I'll try to indicate some open curiosities of mine by the end.
Three > Two
===========
The claim I'll be elaborating on in this post is, essentially, that the framework in [Jessica Taylor's post about memoryless cartesian environments](https://www.alignmentforum.org/posts/5bd75cc58225bf06703751b2/in-memoryless-cartesian-environments-every-udt-policy-is-a-cdt-sia-policy) is better than the CDT=EDT way of thinking. You'll have to read the post to get the full picture if you haven't, but to briefly summarize: if we formalize decision problems in a framework which Jessica Taylor calls "memoryless cartesian environments" (which we can call "memoryless POMDPs" if we want to be closer to academic CS/ML terminology), reasoning about anthropic uncertainty in a certain way (via the self-indication assumption, SIA for short) makes it possible for CDT to behave like UDT.
The result there is sometimes abbreviated as UDT=CDT+SIA, although UDT.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
⊂CDT+SIA is more accurate, because the optimal UDT policies are a subset of the policies which CDT+SIA can follow. This is because UDT has self-coordination power which CDT+SIA lacks. (We could say UDT=CDT+SIA+coordination, but unfortunately "coordination" lacks a snappy three-letter acronym. Or, to be even more pedantic, we could say that UDT1.0 = CDT+SIA, and UDT1.1 = CDT+SIA+coordination. (The difference between 1.0 and 1.1 is, after all, the presence of global policy coordination.)) [EDIT: This isn't correct. See [Wei Dai's comment](https://www.lesswrong.com/posts/WkPf6XCzfJLCm2pbK/cdt-edt-udt#Ya8msDGzRdR8yw4br).]
Caspar Oesterheld [commented on that post](https://www.alignmentforum.org/posts/5bd75cc58225bf06703751b2/in-memoryless-cartesian-environments-every-udt-policy-is-a-cdt-sia-policy#5bd75cc58225bf067037553b) with an analogous EDT+SSA result. SSA (the self-sampling assumption) is one of the main contenders beside SIA for correct anthropic reasoning. Caspar's comment shows that we can think of the correct anthropics as a function of your preference between CDT and EDT. So, we could say that CDT+SIA = EDT+SSA = UDT1.0; or, CDT=EDT=UDT for short. [EDIT: As per [Wei Dai's comment](https://www.lesswrong.com/posts/WkPf6XCzfJLCm2pbK/cdt-edt-udt#Ya8msDGzRdR8yw4br), the equation "CDT+SIA = EDT+SSA = UDT1.0" is really not correct due to differing coordination strengths; as he put it, UDT1.0 > EDT+SSA > CDT+SIA.]
My CDT=EDT view came from being pedantic about how decision problems are represented, and noticing that when you're pedantic, it becomes awfully hard to drive a wedge between CDT and EDT; you've got to do things which are strange enough that it becomes questionable whether it's a fair comparison between CDT and EDT. However, I didn't notice the extent to which my "being very careful about the representation" was really *insisting that bayes nets are the proper representation*.
*(Aside: Bayes nets which are representing decision problems are usually called **influence diagrams** rather than Bayes nets. I think this convention is silly; why do we need a special term for that?)*
It is rather curious that [LIDT also illustrated CDT=EDT-style behavior](https://www.alignmentforum.org/s/fgHSwxFitysGKHH56/p/5bd75cc58225bf06703754ba). It is part of what made me feel like CDT=EDT was a convergent result of many different approaches, rather than noticing its reliance on certain Bayes-net formulations of decision problems. Now, I instead find it to be curious and remarkable that logical induction seems to think as if the world were made of bayes nets.
If CDT=EDT comes from insisting that decision problems are represented as Bayes nets, CDT=EDT=UDT is the view which comes from insisting that decision problems be represented as memoryless cartesian environments. At the moment, this just seems like a better way to be pedantic about representation. It unifies three decision theories instead of two.
Updatelessness Doesn't Factor Out
=================================
In fact, I thought about Jessica's framework frequently, but I didn't think of it as an objection to my CDT=EDT way of thinking. I was blind to this objection because I thought (logical-)counterfactual reasoning and (logically-)updateless reasoning could be dealt with as separate problems. The claim was not that CDT=EDT-style decision-making did well, but rather, that any decision problem where it performed poorly could be analyzed as a case where updateless reasoning is needed in order to do well. I let my counterfactual reasoning be simple, blaming all the hard problems on the difficulty of logical updatelessness.
Once I thought to question this view, it seemed very likely wrong. The [Dutch Book argument for CDT=EDT](https://www.alignmentforum.org/posts/wkNQdYj47HX33noKv/cdt-dutch-book) seems closer to the true justification for CDT=EDT reasoning than [the Bayes-net argument](https://www.alignmentforum.org/s/fgHSwxFitysGKHH56/p/x2wn2MWYSafDtm8Lf), but the Dutch Book argument is a dynamic consistency argument. I know that CDT and EDT both violate dynamic consistency, in general. So, why pick on one special type of dynamic consistency violation which CDT can illustrate but EDT cannot? In other words, the grounds on which I can argue CDT=EDT seem to point more directly to UDT instead.
What about all those arguments for CDT=EDT?
===========================================
Non-Zero Probability Assumptions
--------------------------------
I've noted before that each argument I make for CDT=EDT seems to rely on an assumption that actions have non-zero probability. I leaned heavily on an assumption of epsilon exploration, although one could also argue that all actions must have non-zero probability on different grounds (such as the implausibility of knowing so much about what you are going to do that you can completely rule out any action, before you've made the decision). Focusing on cases where we have to assign probability zero to some action was a big part of finally breaking myself of the CDT=EDT view and moving to the CDT=EDT=UDT view.
(I was almost broken of the view [about a year ago](https://www.alignmentforum.org/s/fgHSwxFitysGKHH56/p/zFX8jFLcREJATshdK) by thinking about the XOR blackmail problem, which has features in common with the case I'll consider now; but, it didn't stick, perhaps because the example doesn't actually force actions to have probability zero and so doesn't point so directly to where the arguments break down.)
Consider the [transparent Newcomb problem](https://www.lesswrong.com/posts/zEWJBFFMvQ835nq6h/decision-theory-faq) with a perfect predictor:
***Transparent Newcomb.*** *Omega runs a perfect simulation of you, in which you face two boxes, a large box and a small box. Both boxes are made of transparent glass. The small box contains $100, while the large one contains $1,000. In the Simulation, Omega gives you the option of either taking both boxes or only taking the large box. If Omega predicts that you will take only one box, then Omega puts you in this situation for real. Otherwise, Omega gives the real you the same decision, but with the large box empty. You find yourself in front of two full boxes. Do you take one, or two?*
Apparently, since Omega is a perfect predictor, we are forced to assign probability zero to one-boxing even if we follow a policy of epsilon-exploring. In fact, if you implement epsilon-exploration by refusing to take any action which you're very confident you'll take (you have a hard-coded response: if **P("I do action X")>1-epsilon**, do anything but **X**), which is how I often like to think about it, then ***you are forced to 2-box in transparent Newcomb***. I was *expecting* CDT=EDT type reasoning to 2-box (at which point I'd say "but we can fix that by being updateless"), but this is a *really weird reason* to 2-box.
Still, that's not in itself an argument against CDT=EDT. Maybe the rule that we can't take actions we're overconfident in is at fault. The argument against CDT=EDT style counterfactuals in this problem is that the agent should expect that if it 2-boxes, then it won't ever be in the situation to begin with; at least, not in the *real* world. As discussed somewhat in [the happy dance problem](https://www.alignmentforum.org/posts/5bd75cc58225bf067037550e/the-happy-dance-problem), this breaks important properties that you might want out of [conditioning on conditionals](https://www.alignmentforum.org/posts/5bd75cc58225bf06703754b5/conditioning-on-conditionals). (There are some interesting consequences of this, but they'll have to wait for a different post.) More importantly for the CDT=EDT question, this can't follow from evidential conditioning, or learning about consequences of actions through epsilon-exploration, or any other principles in the CDT=EDT cluster. So, there would at least have to be other principles in play.
A very natural way of dealing with the problem is to represent the agent's uncertainty about whether it is in a simulation. If you think you might be in Omega's simulation, observing a full box doesn't imply certainty about your own action anymore, or even about whether the box is really full. This is exactly how you deal with the problem in memoryless cartesian environments. But, if we are willing to do this here, we might as well think about things in the memoryless cartesian framework all over the place. This contradicts the CDT=EDT way of thinking about things in lots of problems where updateless reasoning gives different answers than updatefull reasoning, such as counterfactual mugging, rather than only in cases where some action has probability zero.
(I should actually say "problems where updateless reasoning gives different answers than *non-anthropic* updateful reasoning", since the whole point here is that updateful reasoning *can* be consistent with updateless reasoning so long as we take anthropics into account in the right way.)
I also note that trying to represent this problem in bayes nets, while possible, is very awkward and dissatisfying compared to the representation in memoryless cartesian environments. You could say I shouldn't have gotten myself into a position where this felt like significant evidence, but, reliant on Bayes-net thinking as I was, it did.
Ok, so, looking at examples which force actions to have probability zero made me revise my view even for cases where actions all have non-zero probability. So again, it makes sense to ask: but what about the arguments in favor of CDT=EDT?
Bayes Net Structure Assumptions
-------------------------------
The [argument in the bayes net setting](https://www.alignmentforum.org/s/fgHSwxFitysGKHH56/p/x2wn2MWYSafDtm8Lf) makes some assumptions about the structure of the Bayes net, illustrated earlier. Where do those go wrong?
In the Bayes net setting, observations are represented as parents of the epistemic state (which is a parent of the action). To represent the decision conditional on an observation, we condition on the observation being true. This stops us from putting some probability on our observations being false due to us being in a simulation, as we do in the memoryless cartesian setup.
In other words: the CDT=EDT setup makes it impossible to update on something and still have rational doubt in it, which is what we need to do in order to have an updateful DT act like UDT.
There's likely *some* way to fix this while keeping the Bayes-net formalism. However, memoryless cartesian environments model it naturally.
Question: how can we model memoryless cartesian environments in Bayes nets? Can we do this in a way such that the CDT=EDT theorem applies (making the CDT=EDT way of thinking compatible with the CDT=EDT=UDT way of thinking)?
CDT Dutch Book
--------------
What about the Dutch-book argument for CDT=EDT? I'm not quite sure how this one plays out. I need to think more about the [setting in which the Dutch-book can be carried out](https://www.alignmentforum.org/posts/TJT2oBMGaZTE7f2z2/when-is-cdt-dutch-bookable), especially as it relates to anthropic problems and anthropic Dutch-books.
Learning Theory
---------------
I said that I think the Dutch-book argument gets closer to the real reason CDT=EDT seems compelling than the Bayes-net picture does. Well, although the Dutch Book argument against CDT gives a crisp justification of a CDT=EDT view, I felt [the learning-theoretic intuitions which lead me to formulate the dutch book](https://www.alignmentforum.org/s/fgHSwxFitysGKHH56/p/tpWfDLZy2tk97MJ3F) are closer to the real story. It doesn't make sense to ask an agent to have good counterfactuals in any single situation, because the agent may be ignorant about how to reason about the situation. However, any errors in counterfactual reasoning which result in observed consequences predictably differing from counterfactual expectations should eventually be corrected.
I'm still in the dark about how this argument connects to the CDT=EDT=UDT picture, just as with the Dutch-book argument. I'll discuss this more in the next section.
Static vs Dynamic
=================
A big update in my thinking recently has been to cluster frameworks into "static" and "dynamic", and ask how to translate back and forth between static and dynamic versions of particular ideas. Classical decision theory has a strong tendency to think in terms of statically given decision problems. You could say that the epistemic problem of figuring out what situation you're in is assumed to factor out: decision theory deals only with what to do once you're in a particular situation. On the other hand, learning theory deals with more "dynamic" notions of rationality: rationality-as-improvement-over-time, rather than an absolute notion of perfect performance. (For our purposes, "time" includes [logical time](https://www.alignmentforum.org/posts/dKAJqBDZRMMsaaYo5/in-logical-time-all-games-are-iterated-games); even in a single-shot game, you can learn from relevantly similar games which play out in thought-experiment form.)
This is a messy distinction. Here are a few choice examples:
**Static version:** Dutch-book and money-pump arguments.
**Dynamic version:** Regret bounds.
Dutch-book arguments rely on the idea that you shouldn't *ever* be able to extract money from a rational gambler without a chance of losing it instead. Regret bounds in learning theory offer a more relaxed principle, that you can't ever extract *too much* money (for some notion of "too much" given by the particular regret bound). The more relaxed condition is more broadly applicable; Dutch-book arguments only give us the probabilistic analog of logical consistency properties, whereas regret bounds give us inductive learning.
**Static:** Probability theory.
**Dynamic:** Logical induction.
In particular, the logical induction criterion gives a notion of regret which implies a large number of nice properties. Typically, the difference between logical induction and classical probability theory is framed as one of logical omniscience vs logical uncertainty. The static-vs-dynamic frame instead sees the critical difference as one of rationality in a static situation (where it makes sense to think about perfect reasoning) vs learning-theoretic rationality (where it doesn't make sense to ask for perfection, and instead, one thinks in terms of regret bounds).
**Static:** Bayes-net decision theory (either CDT or EDT as set up in the CDT=EDT argument).
**Dynamic:** LIDT.
As I mentioned before, the way LIDT seems to naturally reason as if the world were made of Bayes nets now seems like a curious coincidence rather than a convergent consequence of correct counterfactual conditioning. I would like a better explanation of why this happens. Here is my thinking so far:
* Logical induction lacks a way to question its perception. As with the Bayes-net setup used in the CDT=EDT argument, to observe something is to think that thing is true. There is not a natural way for logical induction to reason anthropically, especially for information which comes in through the traders thinking longer. If one of the traders calculates digits of π and bets accordingly, this information is simply known by the logical inductor; how can it entertain the possibility that it's in a simulation and the trader's calculation is being modified by Omega?
* Logical induction knows its own epistemic state to within high accuracy, as is assumed in the Bayes-net CDT=EDT theorem.
* LIDT makes the action a function of the epistemic state alone, as required.
There's a lot of formal work one could do to try to make the connection more rigorous (and look for places where the connection breaks down!).
**Static:** UDT.
**Dynamic:** ???
The [problem](https://www.alignmentforum.org/posts/5bd75cc58225bf06703753d4/two-major-obstacles-for-logical-inductor-decision-theory) [of](https://www.alignmentforum.org/posts/5bd75cc58225bf067037528c/a-failed-attempt-at-updatelessness-using-universal-inductors) [logical](https://www.alignmentforum.org/posts/K5Qp7ioupgb7r73Ca/logical-updatelessness-as-a-robust-delegation-problem) [updatelessness](https://www.alignmentforum.org/posts/5bd75cc58225bf0670375313/open-problem-thin-logical-priors) has been a thorn in my side for some time now. UDT is a good reply to a lot of decision-theoretic problems when they're framed in a probability-theoretic setting, but moving to a logically uncertain setting, it's unclear how to apply UDT. UDT requires a fixed prior, whereas logical induction gives us a picture in which logical uncertainty is fundamentally about how to revise beliefs as you think longer.
The main reason the static-vs-dynamic idea has been a big update for me is that I realized that a lot of my thinking has been aimed at turning logical uncertainty into a "static" object, to be able to apply UDT. I haven't even posted about most of those ideas, because they haven't lead anywhere interesting. Tsvi's post on [thin logical priors](https://www.alignmentforum.org/posts/5bd75cc58225bf0670375313/open-problem-thin-logical-priors) is definitely an example, though. I now think this type of approach is likely doomed to failure, because the dynamic perspective is simply superior to the static one.
The interesting question is: how do we translate UDT to a dynamic perspective? How do we learn updateless behavior?
For all its flaws, taking the dynamic perspective on decision theory feels like something [asymptotic decision theory](https://www.alignmentforum.org/posts/yXCvYqTZCsfN7WRrg/asymptotic-decision-theory-improved-writeup) got right. I have more to say about what ADT does right and wrong, but perhaps it is too much of an aside for this post.
A general strategy we might take to approach that question is: how do we translate individual things which UDT does right into learning-theoretic desiderata? (This may be more tractable than trying to translate the UDT optimality notion into a learning-theoretic desideratum whole-hog.)
**Static:** Memoryless Cartesian decision theories (CDT+SIA or EDT+SSA).
**Dynamic:** ???
The CDT=EDT=UDT perspective on counterfactuals is that we can approach the question of learning logically updateless behavior by thinking about the learning-theoretic version of anthropic reasoning. How do we learn which observations to take seriously? How do we learn about what to expect supposing we *are* being fooled by a simulation? Some optimistic speculation on that is the subject of the next section.
We Have the Data
================
Part of why I was previously very pessimistic about doing any better than the CDT=EDT-style counterfactuals was that we *don't have any data* about counterfactuals, almost by definition. How are we supposed to learn what to counterfactually expect? We only observe the real world.
Consider LIDT playing transparent Newcomb with a perfect predictor. Its belief that it will 1-box in cases where it sees that the large box is full must converge to 100%, because it only ever sees a full box in cases where it does indeed 1-box. Furthermore, the expected utility of 2-boxing can be anything, since it will never see cases where it sees a full box and 2-boxes. This means I can make LIDT 1-box by designing my LI to think 2-boxing upon seeing a full box will be catastrophically bad: I simply include a trader with high initial wealth who bets it will be bad. Similarly, I can make LIDT 2-box whenever it sees the full box by including a trader who bets 2-boxing will be great. Then, the LIDT will never see a full box except on rounds where it is going to epsilon-explore into 1-boxing.
*(The above analysis depends on details of how epsilon exploration is implemented. If it is implemented via the probabilistic chicken-rule, mentioned earlier, making the agent explore whenever it is very confident about which action it takes, then the situation gets pretty weird. Assume that LIDT is epsilon-exploring pseudorandomly instead.)*
LIDT's confidence that it 1-boxes whenever it sees a full box is jarring, because I've just shown that I can make it either 1-box or 2-box depending on the underlying LI. Intuitively, an LIDT agent who 2-boxes upon seeing the full box should not be near-100% confident that it 1-boxes.
The problem is that the cases where LIDT sees a full box and 2-boxes are all counterfactual, since Omega is a perfect predictor and doesn't show us a full box unless we in fact 1-box. LIDT doesn't learn from counterfactual cases; the version of the agent in Omega's head is shut down when Omega is done with it, and never reports its observations back to the main unit.
(The LI *does* correctly learn the *mathematical fact* that its algorithm 2-boxes when input observations of a full box, but, this does not help it to have the intuitively correct expectations when Omega feeds it false sense-data.)
In the terminology of [The Happy Dance Problem](https://www.alignmentforum.org/posts/5bd75cc58225bf067037550e/the-happy-dance-problem), LIDT isn't learning the right observation-counterfactuals: the predictions about what action it takes given different possible observations. However, ***we have the data:*** the agent *could* simulate itself under alternative epistemic conditions, and train its observation-counterfactuals on what action it in fact takes in those conditions.
Similarly, the action-counterfactuals are wrong: LIDT can believe anything about what happens when it 2-boxes upon seeing a full box. Again, ***we have the data:*** LI can observe that on rounds when it is mathematically true that the LIDT agent would have 2-boxed upon seeing a full box, it doesn't get the chance. This knowledge simply isn't being "plugged in" to the decision procedure in the right way. Generally speaking, an agent can observe the real consequences of counterfactual actions, because (1) the counterfactual action is a mathematical fact of what the agent does under a counterfactual observation, and (2) the important effects of this counterfactual action occur in the real world, which we can observe directly.
This observation makes me much more optimistic about learning interesting counterfactuals. Previously, it seemed like *by definition* there would be no data from which to learn the correct counterfactuals, other than the (EDTish) requirement that they should match the actual world for actions actually taken. Now, it seems like I have not one, but *two* sources of data: the observation-counterfactuals can be simulated outright, and the action-counterfactuals can be trained on what actually happens when counterfactual actions are taken.
I haven't been able to plug these pieces together to get a working counterfactual-learning algorithm yet. It might be that I'm still missing a component. But ... it *really* feels like there should be something here. |
678e4cb2-3c89-4b17-947a-8a2134897f2f | trentmkelly/LessWrong-43k | LessWrong | Precognition
It’s almost impossible to predict the future. But it’s also unnecessary, because most people are living in the past. All you have to do is see the present before everyone else does.
To be less pithy, but more clear: Most people are slow to notice and accept change. If you can just be faster than most people at seeing what’s going on, updating your model of the world, and reacting accordingly, it’s almost as good as seeing the future.
We see this in the US with covid: The same people who didn’t realize that we all should be wearing masks, when they were life-saving, are now slow to realize/admit that we can stop wearing them.
For a dramatic historical example (from The Making of the Atomic Bomb), take Leo Szilard’s observations of 1930s Germany:
> Adolf Hitler was appointed Chancellor of Germany on January 30, 1933. … In late March, Jewish judges and lawyers in Prussia and Bavaria were dismissed from practice. On the weekend of April 1, Julius Streicher directed a national boycott of Jewish businesses and Jews were beaten in the streets. “I took a train from Berlin to Vienna on a certain date, close to the first of April, 1933,” Szilard writes. “The train was empty. The same train the next day was overcrowded, was stopped at the frontier, the people had to get out, and everybody was interrogated by the Nazis. This just goes to show that if you want to succeed in this world you don’t have to be much cleverer than other people, you just have to be one day earlier.”
How to be earlier
1. Independent thinking. If you only believe things that are accepted by the majority of people, then by definition you’ll always be behind the curve in a changing world.
2. Listen to other independent thinkers. You can’t pay attention to everything at once or evaluate every area. You can only be the first to realize something in a narrow domain in which you are an expert. But if you tune your intellectual radar to other independent thinkers, you can be in the first ~1% of people to |
2e86344b-7720-4919-a4c9-20f1ec414ff0 | trentmkelly/LessWrong-43k | LessWrong | Mental Models
Related: Fake explanation, Guessing the teachers password, Understanding your understanding, many more
The mental model concept gets used so frequently and seems so intuitively obvious that I debated whether to bother writing this. But beyond the basic value that comes from unpacking our intuitions, it turns out that the concept allows a pretty impressive integration and streamlining of a wide range of mental phenomena.
The basics: a mental model falls under the heading of mental representations, ways that the brain stores information. It's a specific sort of mental representation - one who's conceptual structure matches some corresponding structure in reality. In short, mental models are how we think something works.
A mental model begins life as something like an explanatory black box - a mere correlation between items, without any understanding of the mechanism at work. "Flick switch -> lamp turns on" for example. But a mere correlation doesn't give you much clue as to what's actually happening. If something stops working - if you hit the switch and the light doesn't go on - you don't have many clues as to why. This pre-model stage lacks the most important and useful portion; moving parts.
The real power of mental models comes from putting something inside this black box - moving parts that you can fiddle with to give you an idea of how something actually works. My basic lamp model will be improved quite a bit if I add the concept of a circuit to it, for instance. Once I've done that, the model becomes "Flick switch -> switch completes circuit -> electricity flows through lightbulb-> lamp turns on". Now if the light doesn't go on, I can play with my model to see what might cause that, finding that either the circuit is broken or no electricity is being provided. We learn from models the same way we learn from reality, by moving the parts around and seeing the results.
It usually doesn't take much detail, many moving parts, for something to "click" and make |
c36f8faa-46b3-4dff-bad2-da8d636a7246 | trentmkelly/LessWrong-43k | LessWrong | Spend Money on Ergonomics
Warning: This is an applied rationality post, about rationality applied to a specific area of life, not a generalized rationality post.
Ergonomics is incredibly important. Sadly, so many of us in the techno-geek cluster ignore well-defined best practices of ergonomics and develop the infamous hunched back of late night computer toiling.
Seriously, ergonomics is basically a solved problem. The mathematics of anthropometry in relation to body mechanics and repetive stressors on the body are quite well understood.
I am here to offer you a basic, incredibly important, yet widely ignored lesson of rationality.
Spend money on ergonomics!
I really can't emphasize this enough. It's such low hanging fruit, yet I know way too many master aspiring rationalists with egregious ergonomic setups.
It is accepted wisdom on Less Wrong that optimizing your career is important, because you'll spend 80,000 hours working on your career. Strikingly, ergonomics presents an even larger time-based optimization opportunity. With straightforward monetary investment, you can dramatically improve the next hundreds of thousands of hours of your life. The effect size here is just enormous. Spend money on ergonomics, and you will be less fatigued, more energetic, more productive, and healthier into the later years of your life.
Chairs
If you must do your computing while sitting (and do consider alternative standing desks, treadmill desks, or a desk suited to computing while lying in bed), then a good chair is a stunningly good investment. If you make your living while sitting in a chair and computing, what is a $500 investment in your comfort and good health and productivity while sitting? A used Aeron from Craigslist costs around $500 and is the gold standard of ergonomic chair design.
At the low end of ergnomic chairs, the Ikea TORBJÖRN gets a hearty recommendation. It's only $39. Buy some extras for your visitors? That's what I did but then they all ended up in the rooms of my roommate |
8ce23b01-0ebc-47f0-83c0-467503724034 | trentmkelly/LessWrong-43k | LessWrong | General intelligence: what is it, what makes it hard, and will we have it soon?
Timothy Lillicrap, a staff research scientist at DeepMind and (among other things) a member of the team that trained AI agents to play games like Go at a superhuman level, recently gave a talk at Harvard's ML Foundations seminar. The talk was not a showcase of new research results, as many of these talks are; instead, it was an attempt to grapple with the recent and stunning successes of large language models like ChatGPT. Tim admitted that his AGI timelines have become shorter, that he is worried, and that he is increasingly prioritizing studying AI safety. I found this admission by a mainstream AI researcher at the forefront of progress striking, and was also struck by what felt like tacit agreement (or at least, not tacit condemnation of the worries as totally unreasonable) from the room—a room with a large number of impressive academic AI researchers.
Tim is not the only one sounding the alarm, of course. There is the well-publicized request for 'pausing' the training of large models, and the eminent Geoff Hinton has publicly turned to doomerism. But the fact that AGI and AI risk were normal topics at a regularly scheduled Harvard seminar, rather than in letters or op-eds, speaks to a sea change.
Part of this change in sentiment is because 2022 was a banner year for spectacular and public-facing AI progress, between the release of ChatGPT and the release of powerful diffusion generative models like DALL·E 2 and Stable Diffusion. Given that GPT-4 is powerful enough to pass many of the tests previously thought to be challenging for AI (the SAT, the GRE, AP exams, law exams, medical exams, and even a quantum computing exam), researchers have begun to ask and debate other questions in earnest. Do large language models really have a world model, or are they just stochastic parrots? Could large language models be conscious? Moreover, these questions are not being contemplated on the fringe; philosopher David Chalmers talked about the consciousness question during a |
f4d544af-7eeb-4d87-a6e2-c6f7c447a9d6 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Durham/RTLW HPMoR discussion, ch. 43-46
Discussion article for the meetup : Durham/RTLW HPMoR discussion, ch. 43-46
WHEN: 09 March 2013 11:30:00AM (-0500)
WHERE: 726 Rigsbee Avenue, Durham NC 27701
Discussion of HPMoR chapters 43-46, i.e., The Humanism Chapters !!!eleven!.
Meetup will proceed as follows:
11:30: Obtain food from food trucks on Hunt Street (between Foster & Rigsbee). NB: Food trucks usually pack up by about noon, tho if you will be joining late & will want food, there are other options nearby.
11:50: Obtain coffee from Cocoa Cinnamon (northeast corner of Foster & Geer.)
12:00: Proceed to Fullsteam (southeast corner of Rigsbee & Geer) for consumption & discussion. Fullsteam doesn't open til noon, but the weather is supposed to be okay, so early arrivees can congregate at an outside table.
If you don't see anyone who looks particularly rational at food trucks or Cocoa Cinnmamon, simply obtain your comestibles of choice and proceed to Fullsteam, where again we will be identifiable by the impressive stack of spiral bound tomes.
If anyone is concerned about navigation/finding us/finding food/etc. and would like to be able to contact someone, my number will be in the archive of the RTLW Google group, which you may join here: http://groups.google.com/group/RTLW
Discussion article for the meetup : Durham/RTLW HPMoR discussion, ch. 43-46 |
79649b22-9df9-4725-817a-342691ae3654 | trentmkelly/LessWrong-43k | LessWrong | App and book recommendations for people who want to be happier and more productive
If somebody asks you the same question more than ten times, that’s probably a sign it’s time to write a blog post about it. So after being asked by far more than ten people about what books and resources I recommend, both for EAs more broadly and for people interested in charity entrepreneurship more specifically, here it is! After spending roughly the last 15 years optimizing nearly constantly, these are the systems and products I recommend.
If you know better versions of what I recommend, please share it in the comments! While I recommend the apps I use, I’m sure there’s ones out there that do the same thing but in a better way and I just haven’t had the time or energy to pay the switching costs yet.
Apps and Extensions
* Game-changers. Everybody should be using these.
* Video Speed Controller. This allows you to hot key change the speed of videos anywhere on Chrome. It also doesn’t limit you to 2x, which so many apps do for some reason. You'll never (involuntarily) watch things on 1x again.
* Clipboard history with CopyQ (Mac) or here for Windows. Absolute game changer. It remembers everything you've copy-pasted and you can click it from a list or use shortcut keys to paste them again in the future. Saves you so much time and hassle. It's hard to describe how much this changes how you use your computer.
* Switch between your two most recent tabs. Use the shortcut Alt + Q to switch between your two most recently used tabs. It's like alt-tab but for tabs instead of windows. I can't imagine navigating a computer without this. It feels crippling. I know that there are better ones that allow you cycle through multiple tabs, not just your most recent. If you know of one, recommend it in the comments! I just haven't had the spare time to optimize this more.
* Google docs quick create. Shortcut key or single click to automatically create a new google document or spreadsheet. Saves a ton of time.
* Very good. Will be extremely useful for a lot of, but |
4f322898-53e7-44fe-9959-0ee4f0c74847 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Kostas Tsiakas - Designing Human-AI interactions using Explainable and Human-in-the-Loop AI
postdoc
id
hcd
since january
uh first of all thank you very much for
having me
because it's a good opportunity for me
to introduce myself
so in today's
talk
i will
discuss about
designing human a human ai interactions
using human in the loop and explainable
ai
so a few things about being uh basically
my background is in computer science and
artificial intelligence but my research
experience and
interests lie in the intersection of
ai machine learning and human computer
interaction
are focusing on aspects as user
modelling and personalization
in human-centered
ai
with applications in healthcare and
education
and here at the hd i will also
explore topics in
future work practices
so this is the outline of the talk i
will begin with a short introduction
about the human ai
interactions and the
the current guidelines and frameworks
and then i focusing on human ai and
explainable ai
and then i will
describe
my work in some use cases for human ai
interaction
concluding with my ongoing and future
so whenever you have any questions
please feel free to
interrupt
so what is a human ai interaction uh
human eye interaction refers to
when a user needs to complete a task
with the help of ai support
in a general
way
in this paper
the authors have
identified three types of human eye
interaction
based on who initiates the interaction
and how the user and the ai respond
during the interaction
and based on this we have intermittent
continuous and proactive interactions
the authors in this paper
highlight the need to focus
on continuous and productive
interactions since they seem to be the
most complex ones
but we should have in mind that a mix of
these interactions can take place during
human ai
there are also other frameworks and
guidelines that try to investigate
different aspects of humanity
interaction
in
this one
the authors discuss about how we can
design with machine learning
uh
and
they identified these
four different
value channels
and what
this means is that they want to go from
what ml is capable of doing to how these
ml capabilities can actually be
translated
to human values so for example ai can be
used to
get insights about our own self or about
the world
or how a system should be optimally
performing and
so on
now in another framework uh
tries to
uh categorize human centered
ai
based on the different levels of
computer automation and human control
and what they say is that
the way that we can achieve
uh reliably safe and trustworthy ai
interactions
is by achieving at the same time
a high level
of computer automation
while maintaining also a high level of
human control
now considering such frameworks and
guidelines there are also
many more uh there are also some
toolkits that have been proposed
to help designers design human ai
experiences
and this is from my my microsoft
and they have also published a set of 18
uh guidelines for human and interaction
and based on this
they they developed this toolkit which
includes
design patterns and examples that can be
used to actually satisfy these
guidelines
and i have highlighted here some
guidelines that
focus on explainable ai and human
control
there are also
more of them
from google we have the ai guidebook
which again includes
worksheets and tools
that can help designers
design ai systems
and they include
again about explainability
or how users can add feedback
can provide feedback to the system
[Music]
and the similar one is again from is
from is the ai meets design toolkit
which follows a similar way
to shape designers
doesn't seem very
nice but it can help designers
plot the machine learning models that
they want to use
and also address some of the most
similar
ai challenges
these are very
useful
toolkits and they cover
many different aspects of
human ai interactions
but i would like to focus on these two
uh
methods
for ai
because we want to focus on these two
parts first is how we can include
human users to the decision making and
learning
process of an aic system
and at the same time we want ai
assistants to be able
to yeah
sorry
so someone so it's great it's very
difficult to see the slides but someone
in the chat says that he has your slides
from the previous times
so yeah we will focus on human the loop
and explainable ai
which describe which describes how
these two parts of interaction can
communicate with each other and
exchanging information
so from the one hand we have human loop
ai or interactive machine learning there
are many different
there are different terms for
the same thing
essentially what the interactive machine
learning does is to include the end user
to the learning process
and this can be done to
help users get the better insight of how
a machine learning system works and how
it makes predictions
and they can guide this underlying
machine learning process
to either improve the model or satisfy
their own needs
and it has been used for example for
personalization or to increase
motivation
of users during the interaction as well
as to enhance trust
and it's interesting because this term
has been around for
a few years
as we can see this was one of the
first times that it was kind of coined
as a term
and in this thesis it is also mentioned
as socially guided machine learning
because this author
argues that in order to be able to teach
a machine you have to establish kind of
a social interaction
with a machine
and here we can see also that we have
the transparency aspect
of machine learning
and this can happen uh during different
stages of the machine learning we can
have users for example
to get to be involved in the data
processing
if it seems
in the data processing uh part of the
pipeline
as well as
during the model design training and
evaluation
but also
interactive machine learning can be used
during the interaction so we will have
an ai system that acts
autonomously a user may be able to
supervise and monitor the system and
intervene with feedback
when needed for example to ensure a
safety
so here are some
some examples of interactive machine
learning for different purposes
we have this one from 2003
and
back then the authors
proposed interactive machine learning in
order to
facilitate the image segmentation a
problem which but then was a very was a
complex problem for computer vision
so they saw by having a user
in the process it could facilitate the
learning and actually to
generate
accurate classifiers for segmentation
and here we have a
robot learner and a human teacher
and the teacher provides feedback and
guidance to robots to learn how to build
objects through basic shapes
so the user here explicitly tells the
robot what to do
[Music]
in the case that we have here which is a
social robot for language learning
the
the student provides feedback to the to
the robot implicitly so the robot learns
how to personalize its effective
behavior
based on how the the user if the user is
engaged or not
so here we can see that we have
different types of feedback that the
user can
provide to the system
and in a
more recent
case and more complex cases about
autonomous driving
in this paper they investigate different
aspects
when a human user is involved in the
learning process
for example they investigate if the
system should start with a pre-trained
model or start start from scratch
or how users can actually be used to
facilitate this
call start problem
as well as how user is supposed to give
feedback is it gonna be continuous or
interrupted
or there are many different ways
and this is uh
of course this depends on the expertise
and the intentions of the user
so we need to take lots of things into
consideration
before putting a human in the loop
so these are some
some points about interactive machine
learning
so it
we need to investigate at the same time
yeah
can i ask a question if i don't think
about the story too much about the
previous paper yeah i i
haven't read it
but i'm just curious in terms of the
human control over this
uh
can you say a bit more about
what is being learned by
the interactive reinforcement learning
so
environment or anyway yeah yeah
they are through a simulation
[Music]
so they try to investigate if
how the user can provide feedback during
the interaction to improve the model
because if the model starts
the model is how the
it's autonomous drive so here is how the
car will avoid other cars
in a simulation
and the user what does is to
be there in the interaction and when
needed to tell them the reinforcement
learning agent what to do
if they have to
the actual actions
during the so the the system is
autonomous and the user is there and
trains the system when needed
yes i think i i get it right next i'll
yeah so these are some
points that we need to consider for
human in the loop
is
how much sins actually can learn from
human input and how
uh
how humans can provide good feedback
to the system
and we have different types of feedback
such as evaluating evaluating feedback
or
labels or to give the system
the examples and demonstrations of what
what the system has to do
and we also talked about implicit
explicit
feedback it can also be
mixed
and
one basic
consideration is how to minimize users
uh workload because user can be always
there and tell them a thing about to do
but that would
result to extremely high
workload
so on the other hand we have explainably
ai
which
has become extremely
famous these
last years and
in general the goal of explainable ai is
to enable ai systems to self-explain
their internal models decisions
and predictions and to do that in an in
a in a human understandable
way
also for explainable ai a very important
thing is
who is the target user of explainable ai
and as we can see for example in this
graph
if you are
a data expert you may have different
purposes of using explainable ai
and
there are different design goals as well
as different evaluation measures
for example a user that
doesn't know what ai can do
they may a excellent ai may be used for
example enhanced user trust
but for ai experts it should be used it
can be used for modern department so we
have also different evaluation
measures
this work also
tries to identify who needs explainable
ai and why
and here we can see examples for example
that users that are affected by modern
decisions
need to understand this their situation
and verify their decisions
uh
while a
regulatory agencies may need to certify
the compliance of a model
i'm sorry
so in this slide
uh we see three different terms which
are interaction concept explanatory goal
and interaction goal
and in this paper the authors
try to match
the interaction concept with the
interaction going and the explainable ai
code
for example if the interaction concept
is
to transmit information between the ai
and the user
then this is the interaction code that
users need to see accurate
explanations
and the
the goal of the explanation is to
achieve a transparency
so based on this i would like to
highlight these two parts that
we have explainably ai
that it can be used for trust for
debugging
for different
ways but
also ai explainable ai can enhance users
perception
and understanding
for their own behavior
because ai can learn many things about
us
while we interact
so by presenting this user model user
models to us
it can help us enhance our own
self-perception
but also it can enhance a user's
perception about the system capabilities
so based on this i would like to briefly
present three
three projects three use cases
that use ai in a different
way in terms of how users interact with
ai
here we have a cognitive assessment
system
where the user the time possibly
interacts with ai that means that they
don't have any control over the ai
decisions
in the next case we have a social
assistive robot that learns to adapt to
the user based on user's feedback
so user participates in the
personalization process
and at the third case we have
explainable ai that is used to enhance
users cognitively
so let's see how this
happened
so this
this was a multidisciplinary project
to design
a cognitive assessment tool for children
uh more specific more specifically it
was for embodied cognition
and that was how to
measure a
cognitive assessment through uh
through exercises through physical
exercises that also have cognitive
demands
so the idea was that the child performs
a set of predefined
exercises
and the computer vision and machine
learning is used to analyze this motion
and
assign a score to the child
and this is how uh
the framework of the system was actually
how the process of training the ai model
was
first we had to collect data from 96
gram
then extract the
videos and
features
and then we have the very tedious and
really consuming
process of annotation so what we need to
do there was to see the videos
score the children based on the
cognitive assessment tool
and then fit this to the learning
algorithm
that seemed much easier before we
started uh doing that
but manually annotating was very very
hard and it was most of the times not in
accordance with what the machine
learning system would do
so briefly what i did here was to
implement
a graphical user interface that
visualizes the data that the system gets
so it could help non a technical users
to score
the child to score the participants as
the machine learning uh would do so that
would
that was in order to get reliable
annotations from non-technical
experts
and
this interface could
be also used for
active learning for example
so if the system didn't know how to do a
prediction it would ask for the label
from
the user
now on the second use case where user
has a little bit more involvement in the
learning process
that was actually a cognitive training
task using a social robot
the robot would announce a sequence of
letters and they use their hub to
remember this sequence and use the
buttons to repeat
these letters
and the robot would adjust the
difficulty of the next
task so the length of the
of the string of letters
and also the verbal feedback to the user
and
interactive reinforcement learning
was used
to
to make this personalization
using both the performance of the user
as well as
engagement as it was measured by a
needs a headset
and the problem was how to combine these
different types of feedback in order to
achieve a personalization
and so in order to achieve also safety
er
we
we also did a user study for secondary
uh users so we assume that there's a
supervisor maybe a teacher who
supervises this interaction with the
robot and inside
so we built this interface
that visualizes both the user model of
the user
and also what the reinforcement learning
decides for the next round
so the supervisor user could
uh agree or disagree with reinforcement
learning and this could be used again as
training feedback
for the system
now for this
we did a comparison study with a
personalized robot and i'm not
personalized robot
results were
very
nice for me
because a personalized robot was
perceived as a more intelligent trainer
and also users performed better with a
personalized robot
but what was really interesting was
during the
interviews with
with the players
they
they highlighted some
aspects for explainability and autonomy
they were for example players that asked
me okay but how does it work
and why the system gave me a harder
exercise
so it would be maybe it would be nice to
explain give this explanation to the
user
and also in terms of autonomy
some users told me that it would be nice
if i could select my own level
once in a while
so that has to do with a human autonomy
and also for the
for the interface for the supervisor
uh
a proper visualization of systems
perception
and decisions can sort of enhance uh
human decision making
so the
these were the
messages from the take away messages
from the startup
can i ask your questions you said
performed better compared to
what
what did you compare in this study
ah
it was a comparison study so half of the
users
used the personalized robot that learned
how to personalize their difficulty and
the other one was giving random
difficulty levels
so the users that followed the
personalized training session performed
better in terms of score
you didn't compare to
an
expert trainer who adjusts the level
no no it was just the
the score from this game each player had
to play 10 rounds and at the end of the
10 rounds we
so for the third
use case
the goal was to build an explanable ai
system to support self-regulated
learning
self-related learning is how to
enable students to self-control their
learning process
self-assess
the skills
and become more independent learners
and this was the framework
that
all the information that can be
used through
machine learning and ai for example
student modeling and user profiling
could be
explainably
and used in order to support specific
self-related learning
skills
so for an example for this
framework
i developed a prototype
game for cognitive training
so here the user could
select their own task to play for the
next round and it could be a combination
of different cognitive exercises
so the more complex the combination is
the more harder the task is
and what we want to investigate is that
is how to use explainable ai
as open learner models and explain
explainably recommendations
to help to help the child
choose appropriate levels
so what what we would do is that here we
have actually the open learner model of
the child so it's what the machine
learning part
learns
and based on this
it can give the child a recommendation
for the next task
so the goal of explainability at this
point is persuasion so we need to
persuade the child why this next task is
appropriate for you and what the outcome
of this
would be
and because we're talking about the
children
we needed to find an appropriate way to
deliver
these explanations and these
recommendations
and we followed the some persuasive
strategies
they saw how to deliver these methods
here we have an example of authority for
example your teacher
says that you could do that
compared to an example of social proof
that your friends
preferred this task
but the idea is to use the
recommendation system output and
formulate this persuasive
recommendation
and this
it was actually with uh
with with a master's students uh during
a project
uh
again in the same
[Music]
idea about self-regulated learning
and this included the design of an
educational avatar that is used to
depict the future self of the child in a
week
so the idea is that
the student does the planning on their
own their weekly planning
and then
they kind of discuss it or negotiate
this plan with their future self
avatar
and this is the architecture
so the student can set the goals
the underlying machine learning makes
the predictions and
makes the predictions for this model
and
the idea here was
how we can visualize these outcomes
through the design of the avatar
so for example if the user would
accept an over
optimistic goal or something that
the avatar the machine learning could
detect that this is not visible
it looks so innovative that it's kind of
confused
so based on the model's confidence or
models uncertainty
this can be used as a design feature for
the for the app
and here are some
other examples
from a master
uh course
it was a coursera for a designer
industrial design and the idea is how to
design explainable ai for education
and here if you can see
it's
for online
lectures
so that the teacher could get an
estimation of how engaged the students
are
and showing something like a blob
on their screen
the other one is a prototype
for a robot that could uh simulate how
the student
feels
so the student could check
could check the robot to kind of
self-reflect
also for a brainstorm
so this device would
go to the to the people that
need to speak more for example during a
brainstorming
session or to the more dominant
ones to understand that okay maybe i
have to speak a little less
and also other
nice applications for example about
a
sign language learning
and
here we see that we have a robot that is
the instructor
and if you can see here there is a bar
that shows actually the uncertainty of
the machine learning model
to give feedback to the user immediate
feedback about how they can correct or
improve
their
sum
so
[Music]
we discussed about explainable ai and
interact machine learning and
my goal is to see how this can be
combined
and kind of unified
to design
interactions
because
i see that the explainable ai can be
used to provide information from aei to
the user
while interactive a man or human beloved
can be used to provide
information from the user to the ai
and this combination
is
can lead to
that's my argument to better human ai
interactions and we have different
challenges that we need
to face for example how we can design
transparency or
explainability or
how we can design
interfaces that can be used from humans
to provide feedback
and this combination of explainable ai
and human in the loop can also lead to
what they call hybrid uh intelligence
so here we have explainably cognitive
intelligence that comes from human and
explainably ai that comes from the
ai
here we have
other examples of how hydrogen
intelligence can be
defined
and we have different
goals of uh
of hybrid intelligence for example here
we can see
that for in order to integrate knowledge
from both sides
it's different when
we have decision support so there are
different actions that
uh take place
but it's again the loop of
explainability and interactivity
yes
so yeah currently uh our work is to kind
of realize these
possible interactions
in the context of future work
practices
so to see how different types of users
in the workplace can interact with ai
and what's the
what could be the possible
reasons for this
for example in a team of employers
explainable ai could be used to provide
a certain understanding of how the team
works on a specific task
or for example the team
can provide feedback to the ea to the ai
that can be transferred to the to the
supervisor
so for example if there is lots of
negative feedback within a team this
should be
should be visible to the supervisor
so when
that's the last
part this is what we
we aim to do now is to actually
make a study a design workshop
to define
some low level actions that
users and ai can do during the
interaction
for example
by providing either collect correct
labels
or by providing numbers that is an
evaluative feedback
or demonstrations for the for the model
and from the other side we have ai
interactions
which is
how to
to provide the modern output which is
the prediction or to provide different
explanations
and the idea is that
it's not very visible
the idea is that by having such
primitive actions
to design interactions for for a given
purpose i can give you some explanations
here
uh here we have a user that is a job
applicant
and
this is the case for a cv assessment
so the machine learning is used to
say if this applicant will be accepted
or not
and here the user scenario is that we
have a user that is rejected
so we see
how the user can ask
for explanations from the system and how
the system can provide
explanations back to the user
and for example this design pattern here
could be
the
concept of contesting an ai model so
what we want to achieve
through this workshop is to see if
designers can use
such interaction cards
to start from
primitive actions to go to
high level
intentions
and that's the last slide
so the goal here is
both in terms of design so if we can
identify such design patterns between
uh explainably and
interactive humane interactions
uh or to see if there are new types of
interactions
and but also uh to get insights about
what are the computational challenges
when we want to implement
such an
collaborative
interaction for example if we have
feedback from different users
how can we wait this feedback for our
served autonomy
and so on
so by considering also concepts of
co-performance because we have
both parts are participating actively in
the direction
and quantitation
and i think that
would be all and thank you very much
i have a question
um
well apart from sorry
the mess
you know the project where you were
assessing well the study where you were
assessing the cognitive
learning of children children i guess
for for with body yes
in the later slides you say
that then you were
not
here
providing the accessories for the
manual labeling
with how the
model would
[Music]
would interpret that or
so like a sort of support
to code as the model will do right this
is what you said
um
[Music]
and then
when the model didn't know how to label
you would ask people
so yeah that's for the future
that was just for annotation but it
could be used
[Music]
because yeah i was
if you can explain
more why do you need
the
the manual implementation
[Music]
yes yeah
if you can explain
yeah okay so for example this
task was uh
during this task we met we
we observed the position of the legs
so the time
or right based on what they see in the
screen okay so as humans
we knew that for example this one is a
correct step yeah so we would legend
as a right one
or so
but sometimes and due to
children's motions it wasn't very easy
to
either manually give the labels if it
was a correct movement movement or not
so by also visualizing the data so you
mean that the model could understand
better
if the
here was correct or not here we didn't
have a model yeah it was just the
annotation phase
so we would just watch the side and kind
of annotating its single movement
still that was really difficult with
just the human
person by visualizing the data yeah it
made it more difficult for the human
annotators
to annotate the data
and as as a later
step
if we
after we have a model
the model could automatically detect
this but if there was an issue with the
data i could ask okay can you give me
the label for this one because i'm not
certain or
thanks for the presentation good to uh
to meet you like this with content
straight away um
one of the
questions
that was going around in my head is how
do you in your studies
uh
assess the
behavior of the people that you're
interacting with so uh so sometimes you
give anecdotal evidence of how people
change their behavior or not
and one of the things that i find
fascinating is
being relation to systems
and so how does it affect
the way you then interact or learn or
disengage or engage or get confused or
etc etc
and um
and so my question is how do you do that
so how do you um
for example for this
case
which was the most complete because i
also did the final user study and
i just focused on how they perform
how
they were engaged
during the interaction with
self-reporting
it was both from
this headset yeah it's a headset but
also self-reports so i tried to collect
both sports and objective data
and to kind of make a story out of
them it wasn't just a single
measure because
yeah and that's the interesting over
there but if we could
be able to find behavior indicators
it would be
interesting to see also how explanations
for example affect
users or so uh
i can
so we should have a chat
anyways but uh but one of the things
that's in the back of my mind is the
work of
my friend zula
she's supervised by mark nearings
so this idea that you also test upon the
end
emergence yeah and the co-education
and
medical learning is something that you
investigate so kind of what what how do
you assess the uh you know the
patterns that bubble up in terms of
doing something not doing something
checking you know
do weight
or or
getting frustrated or
all these kind of observations of what
actually happens
uh
with a you know in this case uh it's a
changes it's not a physical system
amazing that's learning
anyways i'm
we can ask this question
what's the
yeah yeah yeah come on
yes
please uh go ahead ask your question
i don't think he can hear me from here
uh can you
you're muted
ready
yes would you like to ask the question
yourself
[Music]
no
yes
okay i can just read
where is the
show conversation
um
[Music]
so
somewhere in your presentation you spoke
about uncertainty uh and that got me
thinking about how humans in general
make their decisions or provide their
inputs in an ambiguous fashion
so for example in the scoring of a child
project that you had
you said that your annotation was a
little bit of an issue and it's quite
likely that there was some ambiguity
both inter annotator as well as possibly
intra-annotated disagreement
um
so
my assumption would be that bayesian
methodologies are one way to handle this
kind of
ambiguity by modeling the distributions
but of course the downside is again a
point that you mentioned that the
computational challenges around them
might be so high that you can't really
do human computer interaction in that
so in our work we're trying to for
example do interactive segmentation on
cd images
of the head and neck area um and usually
over there the contrast is so poor that
a lot of times your ai models fail
because they can't see the contrast
between one tissue and another
um
but then we're trying to extract
uncertainty which is very time consuming
and we also want them to do interaction
but then we can't really expect our
clinician to like hit a button and like
wait for like probably 10 seconds and
for the input to come in
so is there any
landmark hallmark work within literature
that has
you know even at a smaller scale used
bayesian uncertainty and interaction
together
i
don't have something in my mind right
now but specifically about
bayesian ways
because i think it's uh yeah maybe it's
really
related to the use case and what the the
purpose of
the human uh
is because for example here just the the
a simple measure of uncertainty
which is a parameter of the machine
learning model
could be useful for the
for the human annotator but in other
examples maybe we need more complex
anyways
i'm not sure
all right
okay thank you thank you
i have a rule for the question
so it's called the future meme project
can we go to the next slide and i'll
also share it
[Music]
uh so could we talk a little bit more
about the model so i think that the
model is actually inspected on the right
yeah yeah so this is the model that
projects the
requirements of the user but then how
we haven't implemented
that but from the
literature we found a
similar model it's a recurrent
neural network
and the input for its node
it's the time
spent for each one of the four subjects
for example we have a english that's
geography
so here it would be the inputs from the
student so how many minutes they want to
study for it starts
and the output
uh would be a number to say the number
of completed exercises right
so and we have it as a recurrent neural
network so kind of it's
it's no it's never it's one day of the
week okay so we would like to see
for the weekly plan
what would be the weekly progress okay
and the data is
kind of simple
so it's not a very complex
task
so yeah that's why we proposed this one
and regarding recurring neural networks
have been used for student modeling and
with such data
i was just wondering uh
would be there
would you see any kind of value into
going into a more
of uh
explicit models of how students engage
with the tasks so try to see if you can
i don't
it's it's almost a swear word these days
but how about rule-based models which
you could just uh
yeah describe it what exactly the
student's done how they turn the
training time into the result
so my question i guess is uh do you need
a new on the talk here or
yeah
i agree with this one yeah i mean if
if then else rules work it's
totally fine again the explainability
would be
the same
more or less
but
yes this machine learning can capture
events that you cannot
like code with events even if you follow
a model for engagement or
there may be students that are not
described by
certain models right so that we
propose them a similar model which is
data driven so
and uh you mentioned we did not
implement it
yeah yeah that was more uh as a design
project
right now i'm also wondering how you get
data from this
yeah that would be for a
free user interface so for example we
would give students to
like forms
to write down their intended plan and
what they actually did
and use this data for the model so
initially without the avatar and the
whole system okay but then
of course when you connect the data
without the whole system
then this data only is valid while you
are not using the system and when you
use this data for uh
uh
as an input this system then called
learning process changes so it might be
that the data that you use yeah it
depends on the because for this model it
would just say
if i kind of
follow my own schedule
so it's not because the interaction with
the other is
only during planning it's not while
doing the exercises or
okay thanks nice thanks
no i think we
okay thank you again thank you very much
[Applause]
[Music]
yeah you couldn't move too much
okay |
3160cd97-f6a9-4998-9696-556412b504bf | trentmkelly/LessWrong-43k | LessWrong | Goodhart's Curse and Limitations on AI Alignment
I believe that most existing proposals for aligning AI with human values are unlikely to succeed in the limit of optimization pressure due to Goodhart's curse. I believe this strongly enough that it continues to surprise me a bit that people keep working on things that I think clearly won't work, though I think there are two explanations for this. One is that, unlike me, they expect to approach superhuman AGI slowly and so we will have many opportunities to notice when we are deviating from human values as a result of Goodhart's curse and make corrections. The other is that they are simply unaware of the force of the argument that convinces me because, although it has been written about before, I have not seen recent, pointed arguments for it rather than technical explanations of it and its effects, and my grokking of this point happened long ago on mailing lists of yore via more intuitive and less formal arguments than I see now. I can't promise to make my points as intuitive as I would like, but nonetheless I will try to address this latter explanation by saying a few words about why I am convinced.
Note: Some of this borrows heavily from a paper I have out for publication, but with substantial additions for readability by a wider audience.
Goodhart's Curse
Goodhart's curse is what happens when Goodhart's law meets the optimizer's curse. Let's review those two here briefly for completeness. Feel free to skip some of this if you are already familiar.
Goodhart's Law
As originally formulated, Goodhart's law says "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes". A more accessible expression of Goodhart's law, though, would be that when a measure of success becomes the target, it ceases to be a good measure. A well known example of Goodhart's law comes from a program to exterminate rats in French-colonial Hanoi, Vietnam: the program paid a bounty for rat tails on the assumption that a rat tail represe |
921b0f25-dba4-4cd8-ab38-d6b3389b9fee | trentmkelly/LessWrong-43k | LessWrong | Image generation and alignment
There isn't a lot of talk about image models (e.g. Dall-E and StableDiffusion) on LW in the context of alignment, especially compared to LLMs. Why is that? Some hypotheses:
* LLMs just happened to get some traction early, and due to network effects, they are the primary research vehicle
* LLMs are a larger alignment risk than image models, e.g. the only alignment risk of image generation comes from the language embedding
* LLMs are not a larger alignment risk, but they are easier to use for alignment research |
5ba6cbb6-194e-4b11-a18a-b57349b5c9a7 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Multi-Principal Assistance Games.
1 Introduction
---------------
The growing presence of AI systems that collaborate and coexist with humans in society highlights the emerging need to ensure that the actions of AI systems benefit society as a whole. This question is formalized as the value alignment problem in the AI safety literature (Amodei et al., [2016](#bib.bib2)), which emphasizes the need to align the increasingly powerful and autonomous systems with those of their human principal(s). However, humans are prone to misspecify their objectives which can lead to unexpected behaviors (Amodei et al., [2016](#bib.bib2)); hence research in value alignment has focused on deriving preferences from human actions. In the body of research in value alignment and human robot interaction, the majority of the work involves scenarios with one human and one AI system. It is an appealing setting because the robot and the human share the same goal. Therefore, methods in this setting such as inverse reinforcement learning (Ng et al., [2000](#bib.bib18); Abbeel & Ng, [2004](#bib.bib1); Ramachandran & Amir, [2007](#bib.bib21)), inverse reward design (Hadfield-Menell et al., [2017](#bib.bib13)), and LILA (Woodward et al., [2019](#bib.bib24)) revolve around how an AI system can optimally learn the preferences of the human and apply these results to novel environments. Similarly, the human’s incentive is to optimally teach the robot its own preferences. The combination of a learning AI system and a teaching human yields the *assistance game* (also known as the cooperative inverse reinforcement learning game) (Hadfield-Menell et al., [2016](#bib.bib12)).
However, AI systems in the real world do not fit this one human, one AI paradigm. Recommendation systems, autonomous vehicles, and parole algorithms do not exist in a vacuum—they often influence and are influenced by multiple humans. Hence, we consider a variation on assistance games where a robot acts on behalf of multiple humans, which we call the *multi-principal assistance game* (MPAG). The key difference between this and the scenario with only one human is that, in general, different humans have different preferences, so it is impossible to align the AI to perfectly match the preferences of everyone. The problem of aggregating individual preferences for making collective decisions has been studied by economists and philosophers for more than two hundred years and constitutes the heart of social choice theory (Sen, [1986](#bib.bib23)).
Even with a given aggregation method, however, the inference process itself is challenged by the presence of selfish agents. While the robot acts to optimize the aggregate of utilities, each human acts to optimize their own utility. Therefore, unlike the single-principal assistance game, the multi-principal assistance game is no longer fully cooperative. This creates a problem for existing value alignment algorithms. These algorithms work under the assumption that the demonstrations and information provided are truly representative of the human’s preferences. However, the misalignment between the AI system and each human’s preferences yields a perverse incentive for the humans: can they misrepresent their preferences to gain a more desirable outcome?
We begin with a subclass of MPAGs that generalizes apprenticeship learning. In multi-principal apprenticeship learning, the robot observes trajectories from multiple humans and then produces a trajectory that maximizes a social aggregate of the inferred rewards. We state an impossibility result for this setting based on Gibbard’s theorem in social choice theory. Our experiments confirm that human demonstrations may indeed “misrepresent” their preferences, given a robot that runs maximum entropy inverse reinforcement learning.
We contrast the impossibility result by introducing another subclass of MPAGs based on the multi-armed bandit setting. In the multi-principal bandit apprentice setting, the robot is teleoperated by multiple humans. We show under this setting that because demonstrations yield an immediate reward, learning from demonstrations can decrease the incentive to misrepresent one’s preferences by incurring a cost of lying. By drawing an analogy between our setting and voting theory, we bring a new perspective on the impossibility results by showing that voting by demonstrating reduces the proportion of manipulable profiles. In this setting, the robot can choose which human to give control to and whether to perform an action or not. We use this active learning as a basis to construct an approximately efficient mechanism where humans are incentivized to share the full spectrum of their preferences.
###
1.1 Related Work
#### Value Alignment.
The need for AI systems to align with the preferences of humans is well documented in AI safety literature (Amodei et al., [2016](#bib.bib2)). A first line of work formulates goal inference as an inverse planning problem (Baker et al., [2007](#bib.bib3)). For example, Inverse Reinforcement Learning computes a reward such that the observed trajectory is optimal in the underlying Markov Decision Process (MDP) (Ng et al., [2000](#bib.bib18)) (Ziebart et al., [2008](#bib.bib26)). A common assumption of inverse planning methods is that the robot does not influence the decision-making of the human. However, previous work has shown that the presence of a robot has a significant influence on humans (Robins et al., [2004](#bib.bib22)) (Kanda et al., [2004](#bib.bib15)). Furthermore, it has been shown that the robot can benefit from interacting with the human to infer the goal. For example, Hadfield-Menell et al. have shown that if we formulate goal inference as a game between the human and the robot, observing the optimal trajectory of the human is in general a sub-optimal strategy (Hadfield-Menell et al., [2016](#bib.bib12)). On the contrary, previous work has experimentally shown the emergence of active learning and teaching when optimizing for a joint policy in the value alignment problem (Woodward et al., [2019](#bib.bib24)). Therefore, modelling collaboration as a game, where both human and robot are aware of their mutual influence, is arguably the most promising approach for efficient human-robot interaction (HRI) (Dragan, [2017](#bib.bib8)).
#### Mechanism Design.
An important result in social choice theory and mechanism design is the Gibbard–Satterthwaite theorem, which states that, for universal domain of utility functions, every non-trivial game form is subject to strategic or dishonest actions from the players (Gibbard, [1973](#bib.bib10)), which can be extended to non-deterministic mechanisms as well (Gibbard, [1978](#bib.bib11)). This impossibility theorem applies the most general case of multi-principal assistance games as well. Approaches in mechanism design seek to create games where players each acting rationally yield the desired outcome. In a pseudo-linear environment, the VCG mechanism and the expected externality mechanism achieve different forms of incentive-compatibility, meaning players are incentived to act truthfully (Börgers, [2015](#bib.bib4)). These mechanisms do so by impose transfers, so the externalities of a player’s strategic behavior are borne by that player.
#### Voting Theory.
Similarly, voting theory, a branch of social choice theory, has also focused on building systems robust to human manipulation. Recently much attention has been given to incorporating ideas of Voting Theory in the design of multiagent systems (Ephrati & Rosenschein, [1996](#bib.bib9)). Our work formalize voting theory in a hybrid human-robot setting: humans “vote" via their demonstrations, and the robot’s resulting actions represent the resulting “collective decision." In particular, our setup is similar to ordinal voting, since the robot does not access to a cardinal utility function (Boutilier et al., [2015](#bib.bib5)); as a result, the collective decision may not be socially optimal (Procaccia & Rosenschein, [2006](#bib.bib19)).
#### Human-Robot Team
Robot evolving in a multi-human environment has already been studied by works on human-robot team in the HRI literature. Much work has focused on trust building and resource allocation (Claure et al., [2019](#bib.bib7)). A common assumption is that the robot and the humans have a common payoff known to the robot. Our work generalize this setting to general-sum payoffs possibly unknown to the robot.
2 Impossibility Result for Learning from Multiple Humans
---------------------------------------------------------
###
2.1 Multi-Principal Apprenticeship Learning
We formalize the problem of learning from multiple humans as *multi-principal apprenticeship learning* (MPAL), a specific multi-principal assistance game that elucidates the process of learning from human demonstrations. A MPAL consists of a multi-agent world model, a Markov decision process without a reward function, M\R=⟨S,A,P,μ0,T⟩\𝑀𝑅𝑆𝐴𝑃subscript𝜇0𝑇M\backslash R=\langle S,A,P,\mu\_{0},T\rangleitalic\_M \ italic\_R = ⟨ italic\_S , italic\_A , italic\_P , italic\_μ start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_T ⟩ (Abbeel & Ng, [2004](#bib.bib1)) with N𝑁Nitalic\_N humans and one robot where:
* •
S𝑆Sitalic\_S is the set of states.
* •
A𝐴Aitalic\_A is the set of actions.
* •
P:S×A×S→[0,1]:𝑃→𝑆𝐴𝑆01P:S\times A\times S\rightarrow[0,1]italic\_P : italic\_S × italic\_A × italic\_S → [ 0 , 1 ] is the transition function.
* •
μ0subscript𝜇0\mu\_{0}italic\_μ start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT is the initial state distribution
* •
T𝑇Titalic\_T is the horizon
Each human hℎhitalic\_h has a private reward function Rh\*:S→ℝ:subscriptsuperscript𝑅ℎ→𝑆ℝR^{\*}\_{h}:S\rightarrow\mathbb{R}italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT : italic\_S → blackboard\_R that is unknown to the robot. We use a *social welfare function* W𝑊Witalic\_W to aggregate these individual preferences into a single objective R\*=W(R1\*,…,RN\*)superscript𝑅𝑊subscriptsuperscript𝑅1…subscriptsuperscript𝑅𝑁R^{\*}=W(R^{\*}\_{1},\ldots,R^{\*}\_{N})italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT = italic\_W ( italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_N end\_POSTSUBSCRIPT ). The robot’s objective is to maximize R\*superscript𝑅R^{\*}italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT in the world model defined above, despite initial uncertainty about the individual rewards Ri\*subscriptsuperscript𝑅𝑖R^{\*}\_{i}italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT. Social welfare functions are a heavily studied field, examples include the utilitarian criterion WU(R1\*,…,RN\*)=∑hRh\*subscript𝑊𝑈subscriptsuperscript𝑅1…subscriptsuperscript𝑅𝑁subscriptℎsubscriptsuperscript𝑅ℎW\_{U}(R^{\*}\_{1},...,R^{\*}\_{N})=\sum\_{h}R^{\*}\_{h}italic\_W start\_POSTSUBSCRIPT italic\_U end\_POSTSUBSCRIPT ( italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_N end\_POSTSUBSCRIPT ) = ∑ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT (Liu et al., [2015](#bib.bib16)) and the egalitarian criterion WE(R1\*,…,RN\*)=minhRh\*subscript𝑊𝐸subscriptsuperscript𝑅1…subscriptsuperscript𝑅𝑁subscriptℎsubscriptsuperscript𝑅ℎW\_{E}(R^{\*}\_{1},...,R^{\*}\_{N})=\min\_{h}R^{\*}\_{h}italic\_W start\_POSTSUBSCRIPT italic\_E end\_POSTSUBSCRIPT ( italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_N end\_POSTSUBSCRIPT ) = roman\_min start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT (Zhang & Shah, [2014](#bib.bib25); Nace & Pióro, [2008](#bib.bib17)).
The robot doesn’t have direct access to the human reward functions. Instead each human hℎhitalic\_h provides a collection of phsubscript𝑝ℎp\_{h}italic\_p start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT trajectories through the state space ξh=(ξ1i,…,ξpii)∈Ξphsuperscript𝜉ℎsubscriptsuperscript𝜉𝑖1…subscriptsuperscript𝜉𝑖subscript𝑝𝑖superscriptΞsubscript𝑝ℎ\xi^{h}=(\xi^{i}\_{1},...,\xi^{i}\_{p\_{i}})\in\Xi^{p\_{h}}italic\_ξ start\_POSTSUPERSCRIPT italic\_h end\_POSTSUPERSCRIPT = ( italic\_ξ start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_ξ start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) ∈ roman\_Ξ start\_POSTSUPERSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT, where Ξ=(S×A)T−1×SΞsuperscript𝑆𝐴𝑇1𝑆\Xi=(S\times A)^{T-1}\times Sroman\_Ξ = ( italic\_S × italic\_A ) start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT × italic\_S. Each trajectory is drawn from hℎhitalic\_h’s policy, ψh∈△Πsubscript𝜓ℎ△Π\psi\_{h}\in\triangle\Piitalic\_ψ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ∈ △ roman\_Π, where ΠΠ\Piroman\_Π denote the set of deterministic policies in this MDP. Therefore, our overall objective is to build a mechanism ℳ:(Ξ)∑ph→△Π:ℳ→superscriptΞsubscript𝑝ℎ△Π\mathcal{M}:(\Xi)^{\sum p\_{h}}\rightarrow\triangle\Picaligraphic\_M : ( roman\_Ξ ) start\_POSTSUPERSCRIPT ∑ italic\_p start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT → △ roman\_Π such that ℳ(ξ1,…,ξN)ℳsuperscript𝜉1…superscript𝜉𝑁\mathcal{M}(\xi^{1},...,\xi^{N})caligraphic\_M ( italic\_ξ start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , … , italic\_ξ start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT ) is optimal in M\*superscript𝑀M^{\*}italic\_M start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT.
######
Example 1 (MPAL via IRL).
One such mechanism leverages inverse reinforcement learning (Abbeel & Ng, [2004](#bib.bib1)). This method estimates each reward separately and optimizes the robot’s policy for the estimated aggregation of rewards. Formally:
* •
IRL:𝒫(Ξ)→ℝS:𝐼𝑅𝐿→𝒫Ξsuperscriptℝ𝑆IRL:\mathcal{P}(\Xi)\rightarrow\mathbb{R}^{S}italic\_I italic\_R italic\_L : caligraphic\_P ( roman\_Ξ ) → blackboard\_R start\_POSTSUPERSCRIPT italic\_S end\_POSTSUPERSCRIPT defined on the set of subsets of ΞΞ\Xiroman\_Ξ
* •
RL:ℝS→Π:𝑅𝐿→superscriptℝ𝑆ΠRL:\mathbb{R}^{S}\rightarrow\Piitalic\_R italic\_L : blackboard\_R start\_POSTSUPERSCRIPT italic\_S end\_POSTSUPERSCRIPT → roman\_Π returns an optimal policy
* •
ℳ(ξ1,…,ξN)=RL∘W(IRL(ξ1),…,IRL(ξN))ℳsuperscript𝜉1…superscript𝜉𝑁𝑅𝐿𝑊𝐼𝑅𝐿superscript𝜉1…𝐼𝑅𝐿superscript𝜉𝑁\mathcal{M}(\xi^{1},...,\xi^{N})=RL\circ W(IRL(\xi^{1}),...,IRL(\xi^{N}))caligraphic\_M ( italic\_ξ start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , … , italic\_ξ start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT ) = italic\_R italic\_L ∘ italic\_W ( italic\_I italic\_R italic\_L ( italic\_ξ start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ) , … , italic\_I italic\_R italic\_L ( italic\_ξ start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT ) )
Note that this formalism also accounts for IRL𝐼𝑅𝐿IRLitalic\_I italic\_R italic\_L methods that return a distribution over rewards because we can always marginalize over the uncertainty in the reward function (Ramachandran & Amir, [2007](#bib.bib21)):
######
Theorem 1.
If IRL𝐼𝑅𝐿IRLitalic\_I italic\_R italic\_L returns a distribution over reward, then the mechanism defined by ℳ(ξ1,…,ξN)=RL∘𝔼[W(IRL(ξ1),…,IRL(ξN))]ℳsuperscript𝜉1normal-…superscript𝜉𝑁𝑅𝐿𝔼delimited-[]𝑊𝐼𝑅𝐿superscript𝜉1normal-…𝐼𝑅𝐿superscript𝜉𝑁\mathcal{M}(\xi^{1},...,\xi^{N})=RL\circ\mathbb{E}[W(IRL(\xi^{1}),...,IRL(\xi^{N}))]caligraphic\_M ( italic\_ξ start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , … , italic\_ξ start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT ) = italic\_R italic\_L ∘ blackboard\_E [ italic\_W ( italic\_I italic\_R italic\_L ( italic\_ξ start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ) , … , italic\_I italic\_R italic\_L ( italic\_ξ start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT ) ) ] maximizes the expected value function over the induced distribution of MDPs.
######
Example 2 (Voting).
If the world model is stateless and the reward functions are defined on the action space, then MPAL is a voting system where humans get immediate reward by voting.
###
2.2 Manipulability of Multi-Principal Apprenticeship Learning
We assume in this section that the mechanism ℳℳ\mathcal{M}caligraphic\_M is defined directly on the space of strategies of the humans (△Π)Nsuperscript△Π𝑁(\triangle\Pi)^{N}( △ roman\_Π ) start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT.
The humans receive a reward when performing a demonstration (learning phase (Hadfield-Menell et al., [2016](#bib.bib12))) and a reward when the robot acts in the MDP (deployement phase). Therefore, the total expected utility for hℎhitalic\_h is the combination of the two phases:
| | | | | |
| --- | --- | --- | --- | --- |
| | Uh(ψh,ψ−h,pi,ℳ)=subscript𝑈ℎsubscript𝜓ℎsubscript𝜓ℎsubscript𝑝𝑖ℳabsent\displaystyle U\_{h}(\psi\_{h},\psi\_{-h},p\_{i},\mathcal{M})=italic\_U start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ( italic\_ψ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT , italic\_ψ start\_POSTSUBSCRIPT - italic\_h end\_POSTSUBSCRIPT , italic\_p start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , caligraphic\_M ) = | pi𝔼πh∼ψh(Vπh(Rh\*))+limit-fromsubscript𝑝𝑖subscript𝔼similar-tosubscript𝜋ℎsubscript𝜓ℎsuperscript𝑉subscript𝜋ℎsubscriptsuperscript𝑅ℎ\displaystyle p\_{i}\mathbb{E}\_{\pi\_{h}\sim\psi\_{h}}(V^{\pi\_{h}}(R^{\*}\_{h}))+italic\_p start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT blackboard\_E start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ∼ italic\_ψ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_V start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ) ) + | | (1) |
| | | 𝔼πr∼ℳ(ψ)(Vπr(Rh\*))subscript𝔼similar-tosubscript𝜋𝑟ℳ𝜓superscript𝑉subscript𝜋𝑟subscriptsuperscript𝑅ℎ\displaystyle\mathbb{E}\_{\pi\_{r}\sim\mathcal{M}(\psi)}(V^{\pi\_{r}}(R^{\*}\_{h}))blackboard\_E start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_r end\_POSTSUBSCRIPT ∼ caligraphic\_M ( italic\_ψ ) end\_POSTSUBSCRIPT ( italic\_V start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_r end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ) ) | |
where Vπ(R)superscript𝑉𝜋𝑅V^{\pi}(R)italic\_V start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( italic\_R ) is the value of the policy π𝜋\piitalic\_π in the MDP induced by R𝑅Ritalic\_R.
More generally, we introduce a coefficient α𝛼\alphaitalic\_α that quantify the relative weight that the humans put on the learning phase:
| | | | | |
| --- | --- | --- | --- | --- |
| | Uh(ψh,ψ−h,αi,ℳ)=subscript𝑈ℎsubscript𝜓ℎsubscript𝜓ℎsubscript𝛼𝑖ℳabsent\displaystyle U\_{h}(\psi\_{h},\psi\_{-h},\alpha\_{i},\mathcal{M})=italic\_U start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ( italic\_ψ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT , italic\_ψ start\_POSTSUBSCRIPT - italic\_h end\_POSTSUBSCRIPT , italic\_α start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , caligraphic\_M ) = | αi𝔼πh∼ψh(Vπi(Rh\*))+limit-fromsubscript𝛼𝑖subscript𝔼similar-tosubscript𝜋ℎsubscript𝜓ℎsuperscript𝑉subscript𝜋𝑖subscriptsuperscript𝑅ℎ\displaystyle\alpha\_{i}\mathbb{E}\_{\pi\_{h}\sim\psi\_{h}}(V^{\pi\_{i}}(R^{\*}\_{h}))+italic\_α start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT blackboard\_E start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ∼ italic\_ψ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_V start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ) ) + | | (2) |
| | | (1−αi)𝔼πr∼ℳ(ψ)(Vπr(Rh\*))1subscript𝛼𝑖subscript𝔼similar-tosubscript𝜋𝑟ℳ𝜓superscript𝑉subscript𝜋𝑟subscriptsuperscript𝑅ℎ\displaystyle(1-\alpha\_{i})\mathbb{E}\_{\pi\_{r}\sim\mathcal{M}(\psi)}(V^{\pi\_{r}}(R^{\*}\_{h}))( 1 - italic\_α start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) blackboard\_E start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_r end\_POSTSUBSCRIPT ∼ caligraphic\_M ( italic\_ψ ) end\_POSTSUBSCRIPT ( italic\_V start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_r end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ) ) | |
In our case, the more demonstrations a human provides, the more weight they will put on the learning phase. In other words, α𝛼\alphaitalic\_α increases as the number of demonstrations increases. In the contrary, α𝛼\alphaitalic\_α decreases as the number of times the robot acts increases.
The total utility each human gets depends on the strategy of the other players through the robot’s inference. Therefore, even if the humans act independently in the learning phase, the shared interest in the robot’s actions during deployment induces a game between them.
Ideally we would like to have a mechanism such that the action of one human is not influenced by the action of other humans. This would ensure that the mechanism is not manipulable and stays aligned with its initial purpose. Formally:
######
Definition 1 (Straightforward Mechanism).
We say that (ℳ,α)ℳ𝛼\mathcal{(M,\alpha)}( caligraphic\_M , italic\_α ) is straightforward if every human Hisubscript𝐻𝑖H\_{i}italic\_H start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT has a dominant strategy in the game induced by ℳℳ\mathcal{M}caligraphic\_M and α𝛼\alphaitalic\_α:
| | | | | |
| --- | --- | --- | --- | --- |
| | ∀h∈[1,N],∃ψh\*∀ψ−h,∀ψh,for-allℎ1𝑁subscriptsuperscript𝜓ℎfor-allsubscript𝜓ℎfor-allsubscript𝜓ℎ\displaystyle\forall h\in[1,N],\exists\psi^{\*}\_{h}\forall\psi\_{-h},\forall\psi\_{h},∀ italic\_h ∈ [ 1 , italic\_N ] , ∃ italic\_ψ start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ∀ italic\_ψ start\_POSTSUBSCRIPT - italic\_h end\_POSTSUBSCRIPT , ∀ italic\_ψ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT , | Uh(ψh\*,ψ−h,αh,ℳ)≥subscript𝑈ℎsubscriptsuperscript𝜓ℎsubscript𝜓ℎsubscript𝛼ℎℳabsent\displaystyle U\_{h}(\psi^{\*}\_{h},\psi\_{-h},\alpha\_{h},\mathcal{M})\geqitalic\_U start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ( italic\_ψ start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT , italic\_ψ start\_POSTSUBSCRIPT - italic\_h end\_POSTSUBSCRIPT , italic\_α start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT , caligraphic\_M ) ≥ | | (3) |
| | | Uh(ψh,ψ−h,αh,ℳ)subscript𝑈ℎsubscript𝜓ℎsubscript𝜓ℎsubscript𝛼ℎℳ\displaystyle U\_{h}(\psi\_{h},\psi\_{-h},\alpha\_{h},\mathcal{M})italic\_U start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ( italic\_ψ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT , italic\_ψ start\_POSTSUBSCRIPT - italic\_h end\_POSTSUBSCRIPT , italic\_α start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT , caligraphic\_M ) | |
When α𝛼\alphaitalic\_α is small, i.e. the demonstration is relatively insignificant compared to the robot actions, the Gibbard–Satterthwaite (Gibbard, [1973](#bib.bib10)) can be applied to show that the only straightforward mechanism are trivial.
######
Theorem 2 (Based on Gibbard 1973).
For sufficiently small α𝛼\alphaitalic\_α, the only straightforward deterministic mechanisms are as follows:
* •
Duple mechanisms, where the set of possible trajectories are restricted to two.
* •
There exists one human that can choose among the possible trajectories (dictatorship).
Furthermore, we can extend Gibbard’s 1978 theorem (Gibbard, [1978](#bib.bib11)) for non-deterministic mechanisms.
######
Theorem 3 (Based on Gibbard 1978).
On the domain of versatile111A strategy is versatile if the set of utility profile for which it is dominant has interior points. policies, any straightforward mechanism must be a probability mixture of mechanisms of two kind:
* •
Duple mechanisms
* •
Unilateral games, where one human gets to choose among a certain set of possible lotteries over trajectories.
###
2.3 Experiment: Attacking Inverse Reinforcement Learning
The theorems in Section [2.2](#S2.SS2 "2.2 Manipulability of Multi-Principal Apprenticeship Learning ‣ 2 Impossibility Result for Learning from Multiple Humans ‣ Multi-Principal Assistance Games") apply when α𝛼\alphaitalic\_α is sufficiently small. However, there can still be incentives for strategic behavior in games where α𝛼\alphaitalic\_α is non-negligible. In this section, we consider a mechanism based on Maximum Entropy IRL (Ziebart et al., [2008](#bib.bib26)) and introduce a solver to manipulate it.
More specifically, our mechanism has 3 steps. First, we aggregates all of the human player’s trajectories into a single dataset. Second, the mechanism uses Maximum Entropy IRL (Ziebart et al., [2008](#bib.bib26)) to infer a reward reward function. Finally, we execute a policy that optimizes this reward function. Formally:
| | | | |
| --- | --- | --- | --- |
| | ℳ(ξ1,…,ξN)=RL∘MEIRL(ξ1,…,ξN)ℳsuperscript𝜉1…superscript𝜉𝑁𝑅𝐿𝑀𝐸𝐼𝑅𝐿superscript𝜉1…superscript𝜉𝑁\displaystyle\mathcal{M}(\xi^{1},...,\xi^{N})=RL\circ MEIRL(\xi^{1},...,\xi^{N})caligraphic\_M ( italic\_ξ start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , … , italic\_ξ start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT ) = italic\_R italic\_L ∘ italic\_M italic\_E italic\_I italic\_R italic\_L ( italic\_ξ start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , … , italic\_ξ start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT ) | | (4) |
######
Theorem 4.
The mechanism presented above is not straightforward.
To show that, we introduce a quadratic program (QP) solver that heuristically creates adversarial trajectories against Maximum Entropy IRL. Similarly to previous work on single-agent value alignment (Hadfield-Menell et al., [2016](#bib.bib12); Ho et al., [2016](#bib.bib14)), the QP solver finds an approximate best-response trajectory in a three-player game with one robot and two humans. We suppose that each human gives a single trajectory to the robot and the robot aggregates the trajectories to find a single reward parameter to train its policy.
The human’s goal is to optimize for immediate reward, balanced with future reward from the robot’s deployment. While it is hard to directly optimize the result of the robot’s inference, the average feature counts in the trajectory dataset have been used as an effective proxy(Hadfield-Menell et al., [2016](#bib.bib12); Ho et al., [2016](#bib.bib14)). Formally, we capture this by defining a QP that optimizes for a combination of immediate reward and the distance of the final features from a target (see appendix for the full derivation):
| | | | | |
| --- | --- | --- | --- | --- |
| | maxρs,atsubscriptsubscriptsuperscript𝜌𝑡𝑠𝑎\displaystyle\max\_{\rho^{t}\_{s,a}}roman\_max start\_POSTSUBSCRIPT italic\_ρ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_s , italic\_a end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT | ∑s,a,tγtρs,atϕ(s)Twsubscript𝑠𝑎𝑡superscript𝛾𝑡subscriptsuperscript𝜌𝑡𝑠𝑎italic-ϕsuperscript𝑠𝑇𝑤\displaystyle\sum\_{s,a,t}\gamma^{t}\rho^{t}\_{s,a}\phi(s)^{T}w∑ start\_POSTSUBSCRIPT italic\_s , italic\_a , italic\_t end\_POSTSUBSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_ρ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_s , italic\_a end\_POSTSUBSCRIPT italic\_ϕ ( italic\_s ) start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_w | | (5) |
| | | −λ||∑s,a,tρs,atϕ(s)−(2𝔼[ϕ|w]−ϕ(ξ1))||2\displaystyle-\lambda||\sum\_{s,a,t}\rho^{t}\_{s,a}\phi(s)-(2\mathbb{E}[\phi|w]-\phi(\xi^{1}))||^{2}- italic\_λ | | ∑ start\_POSTSUBSCRIPT italic\_s , italic\_a , italic\_t end\_POSTSUBSCRIPT italic\_ρ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_s , italic\_a end\_POSTSUBSCRIPT italic\_ϕ ( italic\_s ) - ( 2 blackboard\_E [ italic\_ϕ | italic\_w ] - italic\_ϕ ( italic\_ξ start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT ) ) | | start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT | |
| | s.t | ∑aρs,at+1=∑s′,aP(s′,a,s)ρs′,atsubscript𝑎subscriptsuperscript𝜌𝑡1𝑠𝑎subscriptsuperscript𝑠′𝑎𝑃superscript𝑠′𝑎𝑠subscriptsuperscript𝜌𝑡superscript𝑠′𝑎\displaystyle\sum\_{a}\rho^{t+1}\_{s,a}=\sum\_{s^{\prime},a}P(s^{\prime},a,s)\rho^{t}\_{s^{\prime},a}∑ start\_POSTSUBSCRIPT italic\_a end\_POSTSUBSCRIPT italic\_ρ start\_POSTSUPERSCRIPT italic\_t + 1 end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_s , italic\_a end\_POSTSUBSCRIPT = ∑ start\_POSTSUBSCRIPT italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a end\_POSTSUBSCRIPT italic\_P ( italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a , italic\_s ) italic\_ρ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a end\_POSTSUBSCRIPT | |
| | | ∀s,∀t∈[0,T−1]for-all𝑠for-all𝑡
0𝑇1\displaystyle\forall s,\forall t\in[0,T-1]∀ italic\_s , ∀ italic\_t ∈ [ 0 , italic\_T - 1 ] | |
| | | ∑aρs,a0=μ0[s]∀ssubscript𝑎subscriptsuperscript𝜌0𝑠𝑎subscript𝜇0delimited-[]𝑠for-all𝑠\displaystyle\sum\_{a}\rho^{0}\_{s,a}=\mu\_{0}[s]\,\,\forall s∑ start\_POSTSUBSCRIPT italic\_a end\_POSTSUBSCRIPT italic\_ρ start\_POSTSUPERSCRIPT 0 end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_s , italic\_a end\_POSTSUBSCRIPT = italic\_μ start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT [ italic\_s ] ∀ italic\_s | |
where ρ𝜌\rhoitalic\_ρ is the occupancy measure, ϕitalic-ϕ\phiitalic\_ϕ is the feature space embedding, P𝑃Pitalic\_P is the transition matrix, T𝑇Titalic\_T is the horizon, μ0subscript𝜇0\mu\_{0}italic\_μ start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT is the initial state distribution, λ𝜆\lambdaitalic\_λ weights the relative importance of the outcome of the robot’s policy, ω𝜔\omegaitalic\_ω is the reward parameter, 𝔼[ϕ|w]𝔼delimited-[]conditionalitalic-ϕ𝑤\mathbb{E}[\phi|w]blackboard\_E [ italic\_ϕ | italic\_w ] is the expected feature count of a policy optimal for the MDP induced by ω𝜔\omegaitalic\_ω and ξ1superscript𝜉1\xi^{1}italic\_ξ start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT is the trajectory of the first human.
This is a regularized dual of the linear program formulation for finite-horizon discounted Markov Decision Process (Puterman, [2014](#bib.bib20)). The best-response trajectory can be directly derived from the occupancy measure.
The experimental results in a 2D gridworld environment are presented in Figure 1. The environment is characterized by a three-dimensional feature space and an horizon of 40. The reward parameter of the second human is fixed and equal to ω2=(0.9,1,0)subscript𝜔20.910\omega\_{2}=(0.9,1,0)italic\_ω start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT = ( 0.9 , 1 , 0 ). We compute their best-response trajectories to two different humans, one with reward parameter ω1=(1,0,0)subscript𝜔1100\omega\_{1}=(1,0,0)italic\_ω start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT = ( 1 , 0 , 0 ) and another with reward parameter ω1=(0,0,1)subscript𝜔1001\omega\_{1}=(0,0,1)italic\_ω start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT = ( 0 , 0 , 1 ). In the former case, the total utility is maximized by playing the optimal trajectory but in the latter case, the best-response found by our QP solver is not the optimal trajectory. Figure 1 presents the optimal trajectory versus approximate best response in the latter case.

Figure 1: Manipulating a Multi-Agent Alignment IRL Method using a QP in a 2D 5×6565\times 65 × 6 Gridworld Environment with a 3D feature space. First row: True reward of humans 1 and 2; State visitation count of optimal (resp. best-response) trajectories of human 2 (the initial state is in the bottom left-hand corner). Second row: Recovered rewards using IRL on the aggregate of first human’s optimal and second human’s optimal (resp. best-response) trajectories; Optimal robot trajectories in the MDP induced by these rewards.
3 Mechanism Design for Multi-Agent Alignment
---------------------------------------------
We now propose a social choice method that uses shared control of a system to combine preference inference with social welfare optimization. In this context, we demonstrate the possibility of non-trivial straightforward and approximately efficient mechanisms.
###
3.1 Multi-Principal Bandit Apprentice (MPBA)
Imagine a teloperation setting with multiple human principals. The robot wants to implement a policy that will have an impact on several humans. To choose a policy optimized for this specific group of human it needs to learn about each individual’s preferences by letting them operate simultaneously or successively. If it let them operate only a few times, it might not get enough information about their preferences and end up with a suboptimal policy in term of social welfare. In the contrary, as long as it is operated by humans, the robot’s policy is not optimized for the social welfare but rather each individual’s preferences. We model this exploration-exploitation problem with a multi-armed bandit setting adapted to our multi-agent alignment problem.
In the classical setting of the multi-armed bandit, a single player receives an arm-dependent stochastic reward each time they pull an arm. Their goal is to find a policy mapping history of actions and rewards to actions that minimizes the regret by finding the good trade-off between exploration and exploitation.
We deviate from the classical setting in several respects. First, N>1𝑁1N>1italic\_N > 1 humans are pulling arms and the rewards on each arm are deterministic, specific to each human, and known to each human Rh\*:[1,M]→[0,1]:subscriptsuperscript𝑅ℎ→1𝑀01R^{\*}\_{h}:[1,M]\rightarrow[0,1]italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT : [ 1 , italic\_M ] → [ 0 , 1 ]. The humans are not exploring; they are communicating information to the robot.
Second, the robot can decide to pull an arm itself or it can choose one human to pull an arm in the next round. Third, when a human pulls an arm, the robot observes only that fact and does not observe the reward received. We assume that each person’s total utility is the same: ∀h,∑aRh\*(a)=1for-allℎsubscript𝑎subscriptsuperscript𝑅ℎ𝑎
1\forall h,\sum\_{a}R^{\*}\_{h}(a)=1∀ italic\_h , ∑ start\_POSTSUBSCRIPT italic\_a end\_POSTSUBSCRIPT italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ( italic\_a ) = 1.
As in the previous part, we suppose that the designers of the system would like to maximize a *social welfare* function that aggregates utility. Formally, we define the social welfare of arm a𝑎aitalic\_a as wa=1N∑h=1NRh\*(a)subscript𝑤𝑎1𝑁superscriptsubscriptℎ1𝑁subscriptsuperscript𝑅ℎ𝑎w\_{a}=\frac{1}{N}\sum\_{h=1}^{N}R^{\*}\_{h}(a)italic\_w start\_POSTSUBSCRIPT italic\_a end\_POSTSUBSCRIPT = divide start\_ARG 1 end\_ARG start\_ARG italic\_N end\_ARG ∑ start\_POSTSUBSCRIPT italic\_h = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ( italic\_a ) and w\*=maxawasuperscript𝑤subscript𝑎subscript𝑤𝑎w^{\*}=\max\_{a}w\_{a}italic\_w start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT = roman\_max start\_POSTSUBSCRIPT italic\_a end\_POSTSUBSCRIPT italic\_w start\_POSTSUBSCRIPT italic\_a end\_POSTSUBSCRIPT. For t∈[1,T]𝑡1𝑇t\in[1,T]italic\_t ∈ [ 1 , italic\_T ], the random variables Htsubscript𝐻𝑡H\_{t}italic\_H start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, Atsubscript𝐴𝑡A\_{t}italic\_A start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, and Wtsubscript𝑊𝑡W\_{t}italic\_W start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT represent respectively the human chosen, the action chosen and the social welfare obtained at time t𝑡titalic\_t. Since the robot can choose itself, the codomain of Htsubscript𝐻𝑡H\_{t}italic\_H start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT is [1,N+1]. We also denote by H~tsubscript~𝐻𝑡\tilde{H}\_{t}over~ start\_ARG italic\_H end\_ARG start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT the restriction of Htsubscript𝐻𝑡H\_{t}italic\_H start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT to [1,N]: p(Ht~)=p(Ht|Ht≤N)𝑝~subscript𝐻𝑡𝑝conditionalsubscript𝐻𝑡subscript𝐻𝑡𝑁p(\tilde{H\_{t}})=p(H\_{t}|H\_{t}\leq N)italic\_p ( over~ start\_ARG italic\_H start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_ARG ) = italic\_p ( italic\_H start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | italic\_H start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ≤ italic\_N ) The objective of the robot is to minimize cumulative regret with respect to social welfare:
| | | | | |
| --- | --- | --- | --- | --- |
| | minRTsubscript𝑅𝑇\displaystyle\min R\_{T}roman\_min italic\_R start\_POSTSUBSCRIPT italic\_T end\_POSTSUBSCRIPT | =𝔼∑t=1TW\*−Wtabsent𝔼superscriptsubscript𝑡1𝑇superscript𝑊subscript𝑊𝑡\displaystyle=\mathbb{E}\sum\_{t=1}^{T}W^{\*}-W\_{t}= blackboard\_E ∑ start\_POSTSUBSCRIPT italic\_t = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_W start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT - italic\_W start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | | (6) |
###
3.2 Hardness of MPBA
To begin with we consider the non-strategic setting. We suppose that the humans implement a stationary policy to approximately optimize immediate reward:
| | | | |
| --- | --- | --- | --- |
| | p(At|Ht~)∝eβRHt~\*(At)proportional-to𝑝conditionalsubscript𝐴𝑡~subscript𝐻𝑡superscript𝑒𝛽subscriptsuperscript𝑅~subscript𝐻𝑡subscript𝐴𝑡\displaystyle p(A\_{t}|\tilde{H\_{t}})\propto e^{\beta R^{\*}\_{\tilde{H\_{t}}}(A\_{t})}italic\_p ( italic\_A start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | over~ start\_ARG italic\_H start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_ARG ) ∝ italic\_e start\_POSTSUPERSCRIPT italic\_β italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT over~ start\_ARG italic\_H start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_ARG end\_POSTSUBSCRIPT ( italic\_A start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_POSTSUPERSCRIPT | | (7) |
where β𝛽\betaitalic\_β is a parameter describing how close are humans from making optimal decisions.
A setting similar to ours has been studied to formalize assistance to a single human (Chan et al., [2019](#bib.bib6)). In their setting, at each round the single human suggests an arm to pull and the robot pulls an arm based on the arms suggested so far. They show that if the human is noisily optimal—the probability of suggesting the best arm is strictly greater than the probability of suggesting a suboptimal arm—then simply pulling the most commonly suggested arm achieves finite expected regret, contrasting with the lower bound in Ω(logT)Ω𝑇\Omega(\log T)roman\_Ω ( roman\_log italic\_T ) for the classical setting.
The following theorems show that the multi-agent setting is harder than its single-agent counterpart.
######
Theorem 5.
In a bandit assistance game with a single human, any utility profile leads to zero regret when β→∞normal-→𝛽\beta\rightarrow\inftyitalic\_β → ∞ and the robot uses an explore-then-commit strategy.
######
Theorem 6.
In a bandit assistance game with multiple humans, there is a utility profile such that the regret is in Ω(T)normal-Ω𝑇\Omega(T)roman\_Ω ( italic\_T ) when β→∞normal-→𝛽\beta\rightarrow\inftyitalic\_β → ∞ and the robot uses an explore-then-commit strategy.
Intuitively, inferring the best arm is not sufficient to maximize the social welfare when there are multiple humans.
###
3.3 Incentive-Compatibility of Voting by Demonstrating
We now consider the strategic setting. Each human hℎhitalic\_h acts following a policy ψh:([1,M]×[1,N])\*→△[1,M]:subscript𝜓ℎ→superscript1𝑀1𝑁△1𝑀\psi\_{h}:([1,M]\times[1,N])^{\*}\rightarrow\triangle[1,M]italic\_ψ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT : ( [ 1 , italic\_M ] × [ 1 , italic\_N ] ) start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT → △ [ 1 , italic\_M ] mapping history of human-action pairs to action and aims to maximize its utility 𝔼(∑trh\*(At))𝔼subscript𝑡superscriptsubscript𝑟ℎsubscript𝐴𝑡\mathbb{E}(\sum\_{t}r\_{h}^{\*}(A\_{t}))blackboard\_E ( ∑ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_r start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ( italic\_A start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ).
MPBA can be seen as a voting system where instead of announcing their type, the humans demonstrate it. The next theorems show that our setting is more robust to manipulation than classical voting systems.
Formally, define the truthfulness γhsubscript𝛾ℎ\gamma\_{h}italic\_γ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT of a human strategy ψhsubscript𝜓ℎ\psi\_{h}italic\_ψ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT as the frequency of choosing among the best arms:
| | | | |
| --- | --- | --- | --- |
| | γh=∑1T𝕀(Ht=h)𝕀(At∈argmaxaRh\*(a))∑1T𝕀(Ht=h)subscript𝛾ℎsuperscriptsubscript1𝑇𝕀subscript𝐻𝑡ℎ𝕀subscript𝐴𝑡subscriptargmax𝑎subscriptsuperscript𝑅ℎ𝑎superscriptsubscript1𝑇𝕀subscript𝐻𝑡ℎ\displaystyle\gamma\_{h}=\frac{\sum\_{1}^{T}\mathds{I}(H\_{t}=h)\mathds{I}(A\_{t}\in\operatorname\*{arg\,max}\_{a}R^{\*}\_{h}(a))}{\sum\_{1}^{T}\mathds{I}(H\_{t}=h)}italic\_γ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT = divide start\_ARG ∑ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT blackboard\_I ( italic\_H start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = italic\_h ) blackboard\_I ( italic\_A start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∈ start\_OPERATOR roman\_arg roman\_max end\_OPERATOR start\_POSTSUBSCRIPT italic\_a end\_POSTSUBSCRIPT italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ( italic\_a ) ) end\_ARG start\_ARG ∑ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT blackboard\_I ( italic\_H start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = italic\_h ) end\_ARG | | (8) |
We say that ψhsubscript𝜓ℎ\psi\_{h}italic\_ψ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT is truthful if γh=1subscript𝛾ℎ1\gamma\_{h}=1italic\_γ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT = 1.
We assume that the robot uses an explore-then-commit strategy with an exploration time of N⋅T⋅𝑁𝑇N\cdot Titalic\_N ⋅ italic\_T and an exploitation time of 1. The next theorem states that increasing the exploration time decreases the number of non-dominated untruthful strategies. We denote by △h\*subscriptsuperscript△ℎ\triangle^{\*}\_{h}△ start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT the minimal suboptimal gap of hℎhitalic\_h and we assume that every humans has at least one suboptimal arm.
######
Theorem 7.
Given γ∈]0,1[\gamma\in]0,1[italic\_γ ∈ ] 0 , 1 [, if T>Rh\*(1−γ)△h\*𝑇subscriptsuperscript𝑅ℎ1𝛾subscriptsuperscriptnormal-△ℎT>\frac{R^{\*}\_{h}}{(1-\gamma)\triangle^{\*}\_{h}}italic\_T > divide start\_ARG italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT end\_ARG start\_ARG ( 1 - italic\_γ ) △ start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT end\_ARG then any strategy ψhsubscript𝜓ℎ\psi\_{h}italic\_ψ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT such that γh<γsubscript𝛾ℎ𝛾\gamma\_{h}<\gammaitalic\_γ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT < italic\_γ is strictly dominated by a truthful strategy.
######
Example 3 (Plurality Voting with Shared Control).
The robot chooses the arm to pull following: aR=argmaxa∑i=1N𝟙(a~i=a)subscript𝑎𝑅subscriptnormal-argnormal-max𝑎superscriptsubscript𝑖1𝑁1subscriptnormal-~𝑎𝑖𝑎a\_{R}=\operatorname\*{arg\,max}\_{a}\sum\_{i=1}^{N}\mathds{1}(\tilde{a}\_{i}=a)italic\_a start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT = start\_OPERATOR roman\_arg roman\_max end\_OPERATOR start\_POSTSUBSCRIPT italic\_a end\_POSTSUBSCRIPT ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT blackboard\_1 ( over~ start\_ARG italic\_a end\_ARG start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT = italic\_a ) where a~i=argmaxa∑t=1T𝟙(ati=a)subscriptnormal-~𝑎𝑖subscriptnormal-argnormal-max𝑎superscriptsubscript𝑡1𝑇1subscriptsuperscript𝑎𝑖𝑡𝑎\tilde{a}\_{i}=\operatorname\*{arg\,max}\_{a}\sum\_{t=1}^{T}\mathds{1}(a^{i}\_{t}=a)over~ start\_ARG italic\_a end\_ARG start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT = start\_OPERATOR roman\_arg roman\_max end\_OPERATOR start\_POSTSUBSCRIPT italic\_a end\_POSTSUBSCRIPT ∑ start\_POSTSUBSCRIPT italic\_t = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT blackboard\_1 ( italic\_a start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = italic\_a ).
By using Theorem 4, we can characterize the incentive-compatibility of this mechanism.
######
Corollary 1.
For any domain of utilities 𝒟ϵ,C={u∈ℝM:u\*<C∧(u\*=u\*\*∨u\*−u\*\*>ϵ)}subscript𝒟italic-ϵ𝐶conditional-set𝑢superscriptℝ𝑀superscript𝑢𝐶superscript𝑢superscript𝑢absentsuperscript𝑢superscript𝑢absentitalic-ϵ\mathcal{D}\_{\epsilon,C}=\{u\in\mathbb{R}^{M}:u^{\*}<C\wedge(u^{\*}=u^{\*\*}\vee u^{\*}-u^{\*\*}>\epsilon)\}caligraphic\_D start\_POSTSUBSCRIPT italic\_ϵ , italic\_C end\_POSTSUBSCRIPT = { italic\_u ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_M end\_POSTSUPERSCRIPT : italic\_u start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT < italic\_C ∧ ( italic\_u start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT = italic\_u start\_POSTSUPERSCRIPT \* \* end\_POSTSUPERSCRIPT ∨ italic\_u start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT - italic\_u start\_POSTSUPERSCRIPT \* \* end\_POSTSUPERSCRIPT > italic\_ϵ ) }, where u\*=maxau(a)superscript𝑢subscript𝑎𝑢𝑎u^{\*}=\max\_{a}u(a)italic\_u start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT = roman\_max start\_POSTSUBSCRIPT italic\_a end\_POSTSUBSCRIPT italic\_u ( italic\_a ) and u\*\*=maxa{u(a):u(a)≠u\*}superscript𝑢absentsubscript𝑎normal-:𝑢𝑎𝑢𝑎superscript𝑢u^{\*\*}=\max\_{a}\{u(a):u(a)\neq u^{\*}\}italic\_u start\_POSTSUPERSCRIPT \* \* end\_POSTSUPERSCRIPT = roman\_max start\_POSTSUBSCRIPT italic\_a end\_POSTSUBSCRIPT { italic\_u ( italic\_a ) : italic\_u ( italic\_a ) ≠ italic\_u start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT }, if T>2Cϵ𝑇2𝐶italic-ϵT>\frac{2C}{\epsilon}italic\_T > divide start\_ARG 2 italic\_C end\_ARG start\_ARG italic\_ϵ end\_ARG, then the plurality voting with shared control is non-dictatorial, does not limit the possible outcomes to two alternatives and it is dominant-strategy incentive-compatible on 𝒟ϵ,𝒞subscript𝒟italic-ϵ𝒞\mathcal{D\_{\epsilon,C}}caligraphic\_D start\_POSTSUBSCRIPT italic\_ϵ , caligraphic\_C end\_POSTSUBSCRIPT. In the limit T→∞normal-→𝑇T\rightarrow\inftyitalic\_T → ∞ we have an incentive-compatible mechanism on the universal domain.
When the exploration time is equal to the exploitation time, voting by demonstrating is subject to manipulation. We can nevertheless quantify the robustness of such a system by comparing its proportion of manipulable profiles to the one of classical systems. Formally, let’s define a manipulable profile:
######
Definition 2 (Manipulable profile).
We say that a profile rh\*superscriptsubscript𝑟ℎr\_{h}^{\*}italic\_r start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT is manipulable in the game induced by ℳℳ\mathcal{M}caligraphic\_M if there is ψ−hsubscript𝜓ℎ\psi\_{-h}italic\_ψ start\_POSTSUBSCRIPT - italic\_h end\_POSTSUBSCRIPT and ψhsubscript𝜓ℎ\psi\_{h}italic\_ψ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT such that γh<1subscript𝛾ℎ1\gamma\_{h}<1italic\_γ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT < 1 and for any truthful strategy ψh\*subscriptsuperscript𝜓ℎ\psi^{\*}\_{h}italic\_ψ start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT, Uh(ψh\*,ψ−h,ℳ)<Uh(ψh,ψ−h,ℳ)subscript𝑈ℎsubscriptsuperscript𝜓ℎsubscript𝜓ℎℳsubscript𝑈ℎsubscript𝜓ℎsubscript𝜓ℎℳU\_{h}(\psi^{\*}\_{h},\psi\_{-h},\mathcal{M})<U\_{h}(\psi\_{h},\psi\_{-h},\mathcal{M})italic\_U start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ( italic\_ψ start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT , italic\_ψ start\_POSTSUBSCRIPT - italic\_h end\_POSTSUBSCRIPT , caligraphic\_M ) < italic\_U start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ( italic\_ψ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT , italic\_ψ start\_POSTSUBSCRIPT - italic\_h end\_POSTSUBSCRIPT , caligraphic\_M ).
Using a geometric argument on the 2-simplex we can prove the following:
######
Theorem 8.
In a system using plurality voting with random tiebreak with 3 voters and 3 alternatives, the set of manipulable profile by demonstrating is included in the set of manipulable profile by announcing. Furthermore, the proportion of manipulable profile by announcing but not by demonstrating is 1919\frac{1}{9}divide start\_ARG 1 end\_ARG start\_ARG 9 end\_ARG.
###
3.4 Efficient MPBA
In the previous section we have seen that voting by demonstrating provides a naturally incentive-compatible mechanism. Yet our first aim is to maximize the social welfare, therefore we want to build an approximately efficient mechanism. As we have seen, it is hard to optimize social welfare because observations of optimal human behavior provides limited information the corresponding utility functions. An alternative is to build a mechanism that incentivizes the humans to provide information about their entire utility function, not just their optimal arm.
By analogy with the voting theory literature, we define the distortion for a given robot strategy ℳℳ\mathcal{M}caligraphic\_M as the ratio between the optimal social welfare and the social welfare obtained by the robot in the worst case.
| | | | |
| --- | --- | --- | --- |
| | △(ℳ)=maxR\*maxa∑hRh\*(a)𝔼(∑h(Rh\*(aRℳ(ψ\*(R\*,ℳ))))\displaystyle\triangle(\mathcal{M})=\max\_{R^{\*}}\frac{\max\_{a}\sum\_{h}R^{\*}\_{h}(a)}{\mathbb{E}(\sum\_{h}(R^{\*}\_{h}(a\_{R}^{\mathcal{M}}(\psi^{\*}(R^{\*},\mathcal{M}))))}△ ( caligraphic\_M ) = roman\_max start\_POSTSUBSCRIPT italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT divide start\_ARG roman\_max start\_POSTSUBSCRIPT italic\_a end\_POSTSUBSCRIPT ∑ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ( italic\_a ) end\_ARG start\_ARG blackboard\_E ( ∑ start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ( italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT ( italic\_a start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT caligraphic\_M end\_POSTSUPERSCRIPT ( italic\_ψ start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ( italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT , caligraphic\_M ) ) ) ) end\_ARG | | (9) |
where ψ\*(R\*,ℳ)superscript𝜓superscript𝑅ℳ\psi^{\*}(R^{\*},\mathcal{M})italic\_ψ start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ( italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT , caligraphic\_M ) is the best response to ℳℳ\mathcal{M}caligraphic\_M for the profile R\*superscript𝑅R^{\*}italic\_R start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT.
By building on recent results in ordinal voting theory (Boutilier et al., [2015](#bib.bib5)) we can construct an approximately efficient mechanism ℳℳ\mathcal{M}caligraphic\_M. In broad outline, the robot chooses a human and execute the action only if the human did not choose this action before. Periodically, the robot choose a random action with probability 121M1superscript21𝑀\frac{1}{2^{\frac{1}{M}}}divide start\_ARG 1 end\_ARG start\_ARG 2 start\_POSTSUPERSCRIPT divide start\_ARG 1 end\_ARG start\_ARG italic\_M end\_ARG end\_POSTSUPERSCRIPT end\_ARG. We present the full algorithm in the appendix and show that this algorithm incentivizes the humans to share their true ordinal preferences. We derive the following upper-bound for the distortion:
######
Theorem 9.
△(ℳ)=O(MlogM)△ℳ𝑂𝑀𝑀\triangle(\mathcal{M})=O(\sqrt{M\log M})△ ( caligraphic\_M ) = italic\_O ( square-root start\_ARG italic\_M roman\_log italic\_M end\_ARG )
4 Conclusion and Future Work
-----------------------------
In this paper, we explore an area of concern in the study of AI alignment—ensuring that AI systems are designed so that humans agents are incentivized to interact with AI systems in a “honest" way. Applying the Gibbard–Satterthwaite theorem to this scenario indicate that demonstrations with little to no significance are subject to strategic behavior. Experimental results show that a commonly used inverse reinforcement learning paradigm, which works well in single-human alignment instances, is prone to manipulative behavior. However, on a modified setting, we find effective mechanisms can arise from learning human preferences via their actions if those actions are sufficiently consequential.
The overall problem of preventing manipulative behavior in multi-human AI systems is open and presents many opportunities for further work. Our methods are applied to fairly simple problems: there exists a need to generalize these results to more general theoretical settings and more complicated situations in the real world. |
2abc8bf5-2538-490c-9f57-3fc5d7fdf77c | trentmkelly/LessWrong-43k | LessWrong | What happens to existing life sentences under LEV?
Presumably they get offered longevity treatments since they already get healthcare. Are they locked up until the end of time? For 100 years? |
8b69ede7-d518-488f-ba3e-7fd015af6985 | trentmkelly/LessWrong-43k | LessWrong | The Athena Rationality Workshop - June 7th-10th at EA Hotel
For the past few months, Matt Goldenberg has been in charge of teaching applied rationality at the EA hotel. And honestly, we’ve been quite impressed by the quality. Matt has been a great teacher, and some of us felt like we wanted to delve deeper into his material. So let us delve into it together!
During the first weekend of June, the EA Hotel (36 York Street, Blackpool, UK) will host its first rationality workshop. Open to anyone interested.
We will delve into the Ease process, which is step by step process for overcoming internal blocks and creating psychological alignment towards your goals.
Matt has been working intensely on self-improvement for the past 15 years, was a professional coach for 5 years and has been running applied rationality group workshops for the past 18 months. Some guest teachers will also be chiming in with their techniques, including Toon Alfrink with some lessons he learned living at a Zen Monastery, and possibly others.
The workshop will be held June 7th-10th, Friday to Monday, starting at 12:00 and ending at 19:00. Travelers from afar can stay the night on Thursday. There will be an optional chill-out day on Tuesday, June 11th, and you’re welcome to stay longer. Payment will be at the end of the workshop for whatever you think it was worth for you (cost price is £40). All proceeds will go to fund the EA hotel.
If you want to join: Sign up here. |
db5b92f1-7905-46d1-b041-c7139fbafcdc | trentmkelly/LessWrong-43k | LessWrong | [LINK] Scatter, Adapt, and Remember: How Humans Will Survive a Mass Extinction
A new popular science book on existential risks and mass extinctions from Annalee Newitz, the founding editor of io9.com
It probably won't display the same rigour as Global Catastrophic Risks (Bostrom, Cirkovic et al.), but that was published five years ago and is a bit academic. A new book written in a popular, journalistic way seems pretty appealing - it might even be a good introduction for family/friends. Anyway I'm looking forward to reading it, and I expect enough other LWers will be interested in this news to warrant the post.
If anyone has any other existential risk book recommendations, please comment.
Tumblr
Amazon
Some reviews on goodreads
|
31175660-7593-4564-b9b2-af17ad245906 | trentmkelly/LessWrong-43k | LessWrong | Weekly LW Meetups: Chicago, Copenhagen, Melbourne, Pittsburgh, Sydney
There are upcoming irregularly scheduled Less Wrong meetups in:
* Second Copenhagen meetup: 26 May 2012 05:00PM
* Less Wrong Sydney -Social: 31 May 2012 06:00PM
* Pittsburgh: Making Beliefs Pay Rent: 01 June 2012 06:00PM
* First Berlin meetup: 05 June 2012 07:30PM
* Phoenix, Arizona: 15 June 2012 07:00PM
The following meetups take place in cities with regularly scheduled meetups, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* Weekly Chicago Meetups Resume 5/26: 26 May 2012 01:00PM
* Melbourne, practical rationality: 01 June 2012 07:00PM
* Less Wrong Cambridge (MA) first-Sundays meetup: 03 June 2012 02:00PM
* Less Wrong Cambridge (MA) third-Sundays meetup: 17 June 2012 02:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, Chicago, London, Madison WI, Melbourne, Mountain View, New York, Ohio, Ottawa, Oxford, Portland, Salt Lake City, Seattle, Toronto, Waterloo, and West Los Angeles.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, and have fun!
In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the next weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll now also have the benefit of having your meetup mentioned in a weekly overview. These overview posts will be moved to the discussion section when the new post goes up.
Please note that for your meetup to appear in the weekly meetups feature, you need to post your meetup before the Friday before your meetup!
If you check Less Wrong irregularly, consider subscribing to one or more city-specific mailing list in order to be notifie |
b48d983b-13c9-48cf-ac3b-5af7980dddc6 | StampyAI/alignment-research-dataset/blogs | Blogs | Roland Siegwart on autonomous mobile robots
 Roland Siegwart (born in 1959) is a Professor for Autonomous Systems and Vice President Research and Corporate Relations at [ETH Zurich](https://www.ethz.ch/en.html). After studying mechanics and mechatronics at ETH, he was engaged in starting up a spin-off company, spent ten years as professor for autonomous microsystems at EPFL Lausanne and held visiting positions at Stanford University and NASA Ames.
In his research interests are in the creation and control of intelligent robots operating in complex and highly dynamical environments. Prominent examples are personal and service robots, inspection devices, autonomous micro-aircrafts and walking robots. He is and was the coordinator of European projects, co-founder of half a dozen spin-off companies and board member of various high-tech companies.
Roland Siegwart is member of the [Swiss Academy of Engineering Sciences](http://www.satw.ch/index_EN), IEEE Fellow and officer of the [International Federation of Robotics Research](http://ifrr.org/) (IFRR). He is in the editorial board of multiple journals in robotics and was a general chair of several conferences in robotics including [IROS 2002](http://www.iros02.ethz.ch/), [AIM 2007](http://aim2007.ethz.ch/), [FSR 2007](http://www.inrialpes.fr/FSR07/), [ISRR 2009](http://www.isrr2009.ethz.ch/).
**Luke Muehlhauser**: In 2004 you co-authored [*Introduction to Autonomous Mobile Robots*](http://smile.amazon.com/Introduction-Autonomous-Mobile-Intelligent-Robotics/dp/0262015358/ref=nosim?tag=793775876-20), which offers tutorials on many of the basic tasks of autonomous mobile robots: locomotion, kinematics, perception, localization, navigation, and planning.
In your estimation, what are the most common approaches to “gluing” these functions together? E.g. are most autonomous mobile robots designed using an agent architecture, or some other kind of architecture?
---
**Roland Siegwart**: Mobile robots are very complex systems, that have to operate in real world environments and have to take decisions based on uncertain and only partially available information. In order to do so, the robot’s locomotion, perception and navigation system has to be best adapted to the environment and application setting. So robotics is before all a systems engineering task requiring a broad knowledge and creativity. A wrongly chosen sensor setup cannot be compensated by the control algorithms. In my view, the only proven concepts for autonomous decision making with mobile robots are Gaussian Processes and Bayes Filters. They allow to deal with uncertain and partial information in a consistent way and enable learning. Gaussian Processes and Bayes filter can model a large variety of estimation and decision processes and can be implemented in different forms, e.g. as the well-known Kalman Filter estimator.
Most mobile robots use some sort of agent architecture. However, this is not a key issue in mobile robots, but rather an implementation issue for systems that run multiple tasks in parallel. The main perception, navigation and control algorithms have to adapt to unknown situation in a somewhat predictably and consistent manner. Therefore the algorithms and navigation concepts should also allow the robotics engineer to learn from experiments. This is only possible, if navigation, control and decision making is not implemented in a black-box manner, but in a model based approach taking best advantage of prior knowledge and systems models.
---
**Luke**: So are you saying that the glue which holds together the perception, navigation, and control algorithms is typically an agent architecture, and this is largely because you need to integrate those functions in a model-based manner which can reveal to the engineer what’s going wrong (in early experiments) and how to improve it? Or are you saying something else?
---
**Roland**: You understanding is only partially correct. Yes, most robot systems make use of some sort of an agent architecture, because it is the most evident concept to implement independent parallel tasks, like for example robot localization and security stop using the bumper signals. However, I don’t see agent architecture as a major issue in robotics or as the main glue. The glue for designing and implementing autonomous robots is with the fundamental understanding of all key elements and its interplay by the robotics engineer. Furthermore, Gaussian Processes and Bayes filter are today the most promising and proven approach for autonomous navigation, especially Simultaneous Localization and Mapping.
---
**Luke**: As robotic systems are made increasingly general and capable, do you think a shift in techniques will be required? E.g. 15 years from now do you expect Gaussian Processes and Bayes filters to be even more dominant in robotics than they are today, or do you expect rational agent architectures to ascend, or do you expect hybrid systems control to take over, or what? (Wild speculation is allowed; I know you’re not a crystal ball!)
---
**Roland**: I consider Gaussian Processes and Bayes filters the most powerful tools to create rational agents. They enable to learn correlations and models, and to reason about situations and future goals. This model-based approaches will gain importance in contrast to behavior-base approaches. However, there will probably never be a single unifying approach for creating intelligent agents.
Robotics is the art of combining sensing, actuation and intelligent control in the most creative and optimal way.
---
**Luke**: Why do you expect model-based approaches to gain importance relative to behavior-based approaches?
---
**Roland**: In order to take “wise” decisions and plan actions, a robot has to be able to anticipate reactions its decisions and actions might have. This can only be realized by models, that form the basis for predictions. Furthermore, unsupervised learning also requires models that enable the robot system to learn from experience. Models enable the robot to generalize experiences which is not really possible with behavior-based approaches.
---
**Luke**: From your perspective, what has been some of the most interesting work in model-based approaches to autonomous robots in the past 5 years?
---
**Roland**: I think the most prominent model-based approach in robotics is within SLAM (Simultaneous Localization and Mapping) which can considered to be pretty much solved.
Thanks to consistent application to Gaussian Processes and Bayes filters, and appropriate error modelling, SLAM is today feasible with different sensors (Laser, Vision) and on wheeled and flying platforms.
Large scale maps with considerable dynamics, changes in lightning conditions and loop closures have been demonstrated be groups from Oxford, Sidney University, MIT, ETH an many more.
An other robotics field, where a lot of progress has been achieved by model base approaches, is in imitation learning of complex manipulation task. By combining physical models of human arms and robot manipulators with probabilistic processes, learning of various manipulation task has been demonstrated by groups at USC, DLR, KIT, EPFL and many other places.
---
**Luke**: Thanks, Roland!
The post [Roland Siegwart on autonomous mobile robots](https://intelligence.org/2014/04/25/roland-siegwart/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
9189f687-22dd-41f8-a760-0bd5e602af2f | trentmkelly/LessWrong-43k | LessWrong | Learning as closing feedback loops
Epistemic status: Just a pattern I've observed from my own learning experiences and advice I've encountered about learning. I am not qualified and the essay is not properly justified - it's meant to be more like a source of ideas on things to try[1]. I would be happy to work on this further if there is interest.
When I think about my own learning experiences and the various studying techniques, advice on how to learn better, or methods to improve faster at skills, I observe some commonalities - it seems like most advice can be interpreted as ways to improve our feedback loops in learning[2].
In this essay, I will be describing the key steps in a learning feedback loop and the ideal conditions and common challenges for each step. I will then list the different ways of improving learning that are suggested by this framework, and provide examples of how to apply them. Finally, I will end with some rambly questions and thoughts.
Framework
I like to think of learning[3] as closing feedback loops, because it helps contextualize the different advice and techniques, and also helps me identify bottlenecks in my learning.
Here's the learning feedback loop:
1. Make an attempt: try something based on a goal (e.g. practice a song, with the goal of playing it well)
2. Observe the feedback: observe how your experience and the outcomes differ from what you desired, to see if you are getting better or worse (e.g. compare your playing with an expert's rendition of the song, or listen to your teacher's response to your playing to see how well you did)
3. Reflect & hypothesize: analyze your experience and generate hypotheses to explain why your attempt was better or worse, and what you can do differently next time (e.g. you observe that the expert's rendition uses a wider range of dynamics, and conclude that varying the loudness/softness would make your version more interesting)
4. Go back to Step 1, making a new attempt that tests your hypotheses
Examples of no |
43083b22-866e-4e0f-8fad-d54051cbb42e | trentmkelly/LessWrong-43k | LessWrong | The National Security Commission on Artificial Intelligence Wants You (to submit essays and articles on the future of government AI policy)
The National Security Commission on Artificial Intelligence, a body created by Congress to study the impact of advances in AI on US national security has put out a call for essays which analyze how advances in artificial intelligence could affect US government and security policy. There are 5 specific prompts, which I summarize as follows:
1. How will AI affect the nature of war, and more generally interstate competition at or below the level of armed conflict?
2. What kind of military and non-military AI capabilities should the US government invest in? What AI-related skills will be necessary for the national-security workforce in the future?
3. What kind of infrastructure, institutions and organizational structures will be best suited for ensuring AI development?
1. What kinds of AI research should the US national security community engage in?
2. Will this research require the creation of new institutions?
3. What other kinds of infrastructure are necessary to create a sustainable advantage in artificial intelligence, and what are the ethical concerns with attempting to create a sustainable advantage?
4. How will government acquisition processes need to change?
5. What kinds of data are necessary for developing AI applications and tools? What are the ethical and security concerns related to collecting, analyzing and storing this data?
4. What should the US do to influence global norms around artificial intelligence? Given that there are many nations pursuing AI, what should the US do to influence adversaries' AI development?
5. How should the government interact with the private sector? How can the private sector educate the government about the capabilities and risks of AI? What can the government and the private sector do that AI systems used for national security are trusted, by the public, strategic decision-makers, and allies?
I'm heartened to see the government taking the risks of AI more seriously, and I think submitting an essay |
4aaca53b-dc30-4668-a89f-ae5e10692e0e | trentmkelly/LessWrong-43k | LessWrong | Reflections on rationality a year out
Edited for concreteness.
Exactly one year ago, LessWrong helped me change my mind about something important.
Since then, my life has been changing very rapidly, as a direct result of the rationalist community. I got in touch with other rationalists in person, which made my social life vastly more interesting (not to say surreal). My plans for the future have definitely shifted a bit. I began a deliberate habit of trying new things and learning new skills, and facing up to my flaws, often with advice from LessWrongers or IRL rationalist friends.
A few examples: I improved my diet (paleo), tried yoga, took up cognitive behavioral therapy to work on some chronic insecurities, moved Python from the "wish I knew" box to the "have a detailed plan to learn" box, dared to publish some popular-science articles under my real name, learned to do Fermi calculations in my head. I also noticed that my habits of thought have been changing: for one thing, I'm getting better calibrated about probabilities -- I'm better at estimating how I did on schoolwork. For another thing, I'm getting better at not reflexively dismissing non-standard ideas: the first time someone mentioned me that a good statistician could make a lot of money in car insurance by finding new correlations to monetize, I thought "Car insurance? Hmph, low status." The second time I heard that suggestion, about five months later, I thought "Hey, that's a decent idea." Some of these changes have begun to show results -- the time-management habits* I came up with have started to improve my academic performance, and I notice I'm far less inhibited about taking the initiative to work on projects (I have a couple of interesting balls in the air now, including a business idea and some volunteer work for SIAI, whereas I used to be very reluctant to volunteer for things.) I've become much more open to cold-emailing people who work on interesting things (on one occasion I got a job offer out of an AI researcher); |
596db704-a2da-4292-8350-dca51595a89c | trentmkelly/LessWrong-43k | LessWrong | Epistemology Volume of "A Map That Reflects the Territory" Set - My Personal Commentary
Intro
This post compiles my personal comments on the Epistemology book from the Lesswrong 2018 bookset. I put almost no effort into making it legible or relevant to anyone else.
A sketch of good communication
* For research, we want to exchange deep models
* For logistics team, okay to err on side of comparing shallower nodes just to come to agreement.
* Has implications for #sbub . Centralized and federated systems look an awful lot like the bad case of communication. The only way to converse is to straight up adopt the output. If I want #sbub to empower people trying to deeply understand the world, there has to be a lot of space just for interacting with other people's models and drawing intricate connections to your own.
Babble and Prune Sequence
Babble and Prune
* Currently I need to tune down my prune step and increase my babble. A lot. #habits
* #sbub in my current imagination is extremely babble and prune. Some concretizations include:
* Computers are capable of producing babble. That's an idea for a feature right there: integrated gpt-2.
* Run another Lesswrong poll. Seed statements from babble and prune article. Seed claims of which the reader uses and how well this serves them, what others use and need. Seed claims of good sources of babble, good algorithms of prune, good combinations.
* ask LW team about make a post with the convo embedded
* make a link post, send delayed invitations to answer and comment
* advise "if you notice a hint of confusion or not-rightness, sit with it for a moment. Say a short sentence out loud expressing some aspect of the intuition, write it down and submit it if you will."
* provide a submission form for seed statement ideas, things you don't automatically want to vote in favor of.
More Babble
* Akjash is a graph theorist, good to know
* Scrabble babble as an example of the kind of play I want to do with my language generation project.
* Advice in this article seems geared |
f7ce9497-7f23-4b03-825a-0d021f7a293e | trentmkelly/LessWrong-43k | LessWrong | Can You Upload Your Mind & Live Forever? From Kurzgesagt - In a Nutshell
An interesting video on mind uploading from almost 2 years ago, focusing on explaining the concept to the wider world. It's fairly simplified, as expected of a video that has to cross large inferential distances, and the Kurzgesagt channel is basically a simplified (though still mostly accurate) channel for communicating complicated topics and simplifying them.
In a sense, this would be a post that fits in right with the old LW 1.0 culture years ago with a transhumanist and mind uploading culture, before AI safety was the new goal of LW. |
08c8bec9-e0e1-4754-babb-d2e6804c54af | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | DeepMind x UCL RL Lecture Series - Approximate Dynamic Programming [10/13]
hi everyone and welcome back to our 10th
lecture on reinforcement learning today
we're going to be talking a lot more
about approximate dynamic programming
and the frame the algorithm that you've
seen in the
last few lectures especially in model 3
reinforcement learning in terms of this
paradigm
so let's dig in
so last lectures
you've already seen mdps dynamic
programming we touched a bit about
uh approximate dynamic programming
mostly as a framework
to to guide you through the model 3 and
model
model-free prediction and model-free
control
we've also seen a bit
far back belmont equations and their
corresponding operators we're going to
be reusing these concepts today and
we touched upon
with hado on
reinforcement learning under function
approximation and we'll see a bit more
of that in the next lectures with mateo
on
a particular deep reinforcement learning
now this lecture we're going to be
revisiting the proxima dynamic
programming and under these two sources
of error estimation so uh not having
access to the true model and being
being forced in a sense to sample from
from the model through experience
paired with function approximation which
means that we are not in a tabular
setting anymore we can't represent the
value functions exactly
for all states in action but we would
use a function approximator to to do
that
okay
and as i um already mentioned in the
next lectures you're going to see more
of these paradigms
and most of the state-of-the-art
research kind of falls within this
paradigm
and
in in particular
removing the perfect uh
knowledge of the environment assumption
and going more towards a popular
version of function approximation which
is deep neural networks
a couple of preliminaries
this have been introduced before but
just to
to recap and be on the same side uh
we've introduced previously the belmont
optimality operator this was derived
from the
bellman
optimality equation
and the definition is in in equation 1
here
to note that this bellman operator
has a unique fixed point
which is exactly the value function that
we are looking for that's why iterating
this operator will eventually get us to
the optimal value function this is under
you know no approximation condition with
perfect
knowledge of the the model so this is
just pure dynamic
programming okay
and then we had the the belmont
expectation uh operator
which has similar uh properties it's a
contraction it has one unique fixed
point but the unique fixed point of this
operator is the
um evaluation of the policy pi uh
for for which this operator is
defined
okay
good and uh
just uh another reminder dynamic
programming through the lens of these
operators uh the two popular algorithms
there that we've seen are value
iteration and policy iteration and
you've seen approximate versions of this
for instance q learning being an
approximate version of a value iteration
so these
both of these say
value iteration is just applying
multiple times the belmont optimality
operator and again under perfect
conditions no approximation
this is guaranteed to converge to the
fixed point of this this operator
t star which is the optimal batman
operator which is the optimal value
function and policy iteration was this
procedure of starting with a particular
policy doing policy evaluation which can
be done iteratively via the the bellman
expectation
operator or can be done in any other way
especially if uh
we can solve the the true system of
equations and then on top of these uh
this evaluation we're going to do an
improvement step usually greedy
improvement step and if we've seen in
previous lectures that that is
guaranteed to give us a better policy
than the one we've just evaluated and we
just iterated this process
by improving greedily on these uh this
evaluation and at least in finite cases
this will will eventually because we're
improving the the policy at the age
point in time this will eventually get
us to the optimal policy
good
okay
so uh
now approximate dynamic programming as
you've
seen before
uh removes a couple of the the
assumption of the
of the true and of knowing the
underlying mdp and being able to to
represent exactly the value function so
moving away from tabular and moving away
from the
the assumption that we can uh actually
evaluate perfectly the expectations
needed in these in these updates
okay and by doing so
in one of the cases where we don't know
the underlying mdp we're going to be
introducing some sampling or estimation
error because we're trying to
estimate an expectation via samples
and in the other
uh case we're going to be introducing
some kind of approximation error because
we're not we might not be able to
represent exactly
in the parametric
function class we've
we've chosen the true value functions
that
that we want to estimate at each
point of this process
all of these procedures are kind of
iterative procedures so we'll have a lot
of value functions uh in between value
functions that we would need to estimate
so whenever
neither of like any of these functions
cannot fully be
approximated by the approximation
uh
by the
parametric class we've chosen then we
incur some error at that iteration so
it's not only that the true solution
can't be represented it might be that
the true solution is representable but
the intermediate uh steps in the the
intermediate value functions that that
we're trying to approximate are not
representable under the
parametric class then we'll still
going to incur some approximation error
okay
and of course
as as always the even under these
conditions the
the objective of
of rl is to come up with a policy that
is
hopefully close to optimal behavior
okay
so let's uh let's look more in depth at
the
approximate value function uh value
iteration uh paradigm so this is this is
close to what you've seen for instance
in in q learning
so again as a reminder just this is this
is value iteration through the lens of
the belmont optimality operator so we
started with the value
and at each iteration k we update uh vk
plus 1 towards or in in the exact case
with the
one step
application of the belmont optimality
operator
applied to
the value function at the previous
iteration
okay and iterating this as k tends to
infinity
this
the sequence of value function
eventually converges under the infinity
norm to v star
if this is still not uh clear to you i
would urge you to to go back to our
second lecture on dynamic programming
and uh see the arguments there
okay
now the approximate version of this
is basically we're gonna do this this
iterative step at each iteration k
approximately so that's that's what this
a
stands for and again this approximation
can be based on the function
approximation or based on the fact that
we
we actually don't have access to a t
star
and we'll we'll have to sample to
approximate it
and the same as as before we're gonna do
a greedy improve one step
uh on the one whatever
uh value function we approximated here
okay
now the the question that the rises of
course is
if we iterate this process
would we converge to the optimal value
function we converge at all
um and in general that's that's that's a
no it highly depends on what the
approximation here is
okay
um
the nice things that that we
kind of hinted last time is that even if
we don't converge all the way to the
optimal value function we are actually
interested only in the
uh quality of the policy this derived
greedy policy with respect to the
estimate at one point in this iteration
we're never gonna do this iteration
forever
um and
we're we're trying to understand how
good that that policy is
okay
um and just uh
just to be clear
usually we would be using something like
the q version of uh approximate value
iteration
where
we do exactly the same process but not
for v but for q
again start with uh with an arbitrary
value function q zero
update
uh towards
the
uh one step bellman operator with
respect to
uh qk
at iteration k plus one and then return
as a control policy the greedy with
respect to to that estimate
this is this is what we use usually in
practice especially if we don't have a
model because remember that to derive
this greedy policy we actually need the
action value function
uh to just
basically pick the the the arc max of
the
q value function if we don't and we have
only v then we would have to have at
least
a one-step model
in order to to derive the greedy policy
so usually we're just gonna uh go for
approximating the the q value
okay
and as i hinted uh before
we really want to be able to say
something
about the quality
of this
greedy policy
derived after n steps
of uh iterating this
approximate value iteration process
and this is what the the following
theorem is saying this is um an old
result from 96
but
still a very important result in the in
the field that basically
does this for us so
we're going to bound under the infinity
norm the performance of the policy
uh at
the end
iteration of this
value iteration
procedure
with respect to the optimal value
function
so that's what this uh
first term says
and this
is
gonna be uh bounded by two factors
first the initial error
this is uh
uh basically defined in the in the
second line here is how far is my
um
initial initialization
with respect to the optimal value
function with respect to the l infinity
norm so this is
this is the second term in the bound
okay and the the first term here
is the approximation error that i'm
gonna incur
at iteration k
okay
so
usually at the iteration k plus one
i'm gonna try to estimate
i'm gonna try to uh
back up
d star
q
uh okay
but because i'm in a an approximate
setting what i'm actually gonna be uh my
my next q
k plus 1 would actually be a
uh t star
qk
so
this is this accounts for how much error
am i incurring
by introducing this a
in my backups
in the in the value iteration
okay so this this should be kind of
intuitive outside the you know the the
terms uh in front that uh our
performance of the policy
after and and iterations of this value
iteration is bounded
by
um both
where we started so our initial error
and the the approximate error that we're
gonna be incurring
at each iteration
cool
so um next we're gonna move into the the
proof of the statement
okay
let's start by first denoting
the maximum error
that we can incur up to iteration n so
for any iteration a k in between that
and
n
we are maximizing over the approximation
error this is again for usually for
dynamic programming this would be
um
qk plus one
but in approximate dynamic programming
we're introducing approximation so
this instead this term will be
our
value function at the iteration k plus
1.
so discrepancy between
um
this k plus 1 and this k plus 1 is what
the approximation error introduces okay
so we're gonna now maximize
over all of these errors and denote that
by
uh epsilon
okay
next the the first thing we're going to
do is try to
relate
the
value function at iteration k plus 1
and in particular at iteration n
with respects to the optimal value
function
note that this is not exactly what we
want uh
the statement of the the theorem is
actually saying something stronger in
terms of the performance of the policy
derived based on the estimate
um at the iteration
n so if we have
q
n then this induces
uh via greedy
this policy q n
it's greedy
with respect to the estimate that we had
at iteration n and we're interested
really in the in the value of that
policy
okay
but we'll we'll see that this is really
um simple based on the result that we've
had
um in the last lecture on dynamic
programming where
uh we
we
have a result that relates the
performance of greedy policy
uh with respect to the value function
that induces that that policy we're
going to just revise that that result in
the in the next slide but
first um let's let's look at this this
equation
okay
and uh we're going gonna break it down
into a couple of terms so
first we
um
add and subtract the same term
and due to the triangle inequality we
can
we can split these apart
this is an
upper bound to
the left hand side
okay and then note that
this term
is exactly
the approximation error at iteration k
which is upper bounded by definition
by this
epsilon which maximizes over all the
iteration up to n okay so that's that's
why we get uh this term
being less than equal
then
epsilon
and then
on the the first term there we are using
uh that the
amount operator has a unique fixed point
which is q star
so
uh
just as a reminder
that means that
this holds
um
again because q star is the unique fixed
point of this operator
and then we're going to be using the
fact that
this operator
q star is a contraction
contraction
actually it's a gamma contraction
which means that
less than equal then gamma
so the
um
so this term now is less than equal than
this term
okay
and now what we've
actually achieved is a recurrence
relationship between
uh how far
the value function at iteration k plus 1
is
related to how far it was at the
previous iteration qk
okay
and what we're gonna do
um now is basically unfold the data that
equation
so this is the equation that that we're
gonna like the inequality that we're
gonna be using
and we're gonna apply it
uh
at previous iterations so uh first
for
k
and we're going to relate it to k minus
1 then k minus 1 to k minus 2 and so on
so forth
and what we get is
we can relate this to the initial error
at iteration
k equals zero
um
and
this term which is the
discounted errors it's an upper bound
actually of the discounted errors that
we've encountered at each iteration
and this is
of course uh
also less than equal then if this would
have been uh you know
k equals
capital k actually goes to infinity so
this now is upper bounded by
um
the mistake or
the um how good our initialization was
and
this uh this term accounting for the
errors that we've uh
we've seen it's actually an upper bound
on the errors that we've seen throughout
this approximation at every
iteration k
okay
now
uh i'm just restating this
here
it's the same equation as as here
and now the the last step
is to
relate
apologies for the buzzer there
but as as i was saying we're gonna now
relate to this result
to the performance of the policy induced
by the estimate at iteration k or
iteration n so just as a reminder
the policy that we're deriving is greedy
with respect to that estimate and we are
interested in evaluating
the that policy at iteration n
and for that we're going to be using the
result we had in our um
last lecture on dynamic programming
which was the performance of the greedy
policy
based on
an arbitrary value function qk
and that if you remember this
was
the performance of the derived policy
q pi k
can be bounded in terms of the
performance of
uh how
far
this estimate on which we're basing the
greedy on is with respect to the optimal
value function
okay
and note that we already have this term
upper bounded here
and just by
plugging in 10 into equation 11
you get exactly the statement of the
theory
okay now that we've proven the statement
of the theorem let's look at
some of the implications that this has
and first we're going to look at the
last term there
so some of the implications are
as
n tends to infinity
this term in red
tends to zero as
um
this uh
gamma to the n
decays exponentially passed
so what that means is that although uh
at each iteration n this this term
appears and is is bounding our
performance with respect to the
the initial point
our initialization of this
this process
as n tends to infinity basically this
term will disappear which is good news
which means that um
in the limit we will have no um
dependence on the initialization point
okay
um another question uh here just as
maybe as a sanity check
um
is
what if we do initialize you know by
chance at exactly the
uh optimal point
at q star
well let's say what the
the bound says so obviously this uh the
second term goes away because the
norm there
uh of the initial error is zero
but uh you can see that we are left with
uh another term here
which is still the maximization of all
the approximation error that we would
encounter
uh in
in each iteration
okay
um
[Music]
now the question becomes would that be
would that be zero in general
let's uh let's consider just one
iteration of this process so let's say
at
we start at
q star as
our first
iteration of this process
so q zero is q star
and our first iteration of this
approximate value iteration process
gets us q1 which is the approximate
version of the one step application of
the belmont operator
to q0
because q0 is q star then the
application of the belmont
operator
2
q star is q star
so um q1
is actually
the approximation
operator applied to q star
now in general
this might not be zero so for instance
if
a is
um an approximate of
a
hypothesis space let's say
that is not within our class of
approximations
then this um this error at the first
iteration might not be by might not be
zero so even initializing at the
um right solution in the sense
by approximation we might move away from
from this solution
that being said if uh if this initial
point has to be in the
uh function of approximations
then uh
this
has to be zero because the the
projection here
just just in in terms of coming back
into the the class of functions
uh this would be like
not lossy
so the the projection of
q star
under the function approximation
um would just give you back the
q star because that's the closest point
in the function approximation
uh again if if a is
something like an estimation error
where
instead of having the true belmont
operator we have
a sampled version of this again this
error
add the first iteration might be
non-zero
probably will be non-zero which means
that even
um if we initialize at the right
solution we might move away and once we
move away from the the solution we can
further move away and so on so forth
so this this
result is very general
but it does a state that even if we are
close enough to the solution
it highly depends on
what the approximation errors incurred
that if each iteration will do
and how much we could in principle lose
based on that
okay
now uh let's focus a bit on the the
other term
we've seen already that as n tends to
infinity the second term goes to zero
which means we wash out the initial
approximation error
now let's um
consider a hypothesis space uh f so this
this is where um let's say our
approximation comes from
and uh let's now have
uh a be exactly the projection
uh with respect to the l infinity norm
okay so this is uh the definition of
this projection is uh
uh
is now in the
uh in the equation
below
so we're just looking
for
um
for the projection of g a function
or vector
uh under the l infinity is trying to
find
in the space of value function f
the closest point
to g
under the l infinity north
okay
now if we do that then the
approximate
value iteration algorithm
uh at each iteration k plus 1
tries to
will uh take the form of uh applying the
the bellman operator
to
optimality operator to the previous
value function and then projecting it
via this this operator
now
the the thing to note here is that this
combine operator
uh the approximation just via the
projection we're still doing
full dynamic programming in terms of uh
using the the expectation using the true
model here
so this combined operator is
is indeed a contraction
operator in l infinity because basically
t star is a contraction with respect to
l infinity and
the projection with respect to that norm
is a non-expansional mapping so the full
thing is a contraction with the same
gamma parameter
because this whole operator is a
contraction this algorithm does converge
is guaranteed to converge to a fixed
point actually because it's a
contraction
from uh from previous the
banana fix point theorem
it has one unique fixed point it says
a fixed point uh and this is unique
that has this uh this equation
and
as before
if
q star the thing that we're looking
is
uh indeed in f is under our functional
class then uh the above algorithm this
this uh
this iteration in uh in red
actually converges
to the to the true value function and
this is this is uh easy to to see if
we again
remember
that
uh
this algorithm
has one unique fixed point and you can
you can plug in in this equation if f
equals
uh
the projection
uh via the l infinity norm uh t star f
and see that q star actually satisfies
this this equation
so now we've got at least on the
function approximator still full full
model of the mdp a converging algorithm
that at least if
you know the the true value function the
one that we're looking for the optimal
value function is representable in our
functional class
then we are guaranteed to to converge
there
even if it's not in our
if q star is not in that function across
this algorithm still converges
it just might not be to to q star
because you know as as we've
established q star is not representable
enough so we we can't hope to recover
something uh we can't recover q star via
this parameterization
okay
let's look now at some concrete
instances of
this paradigm and um
uh although i have just proposed a
convergent algorithm we'll see that this
algorithm is
somewhat impractical and we're gonna
try to fix that and we'll see that
uh trying to fix that actually gives us
some of the popular algorithms that that
you've seen before and that are now used
in the in the literature
okay
so um let's first have a you know um
clear instantiation of the algorithm
that we've just proposed so at each
iteration k we're trying to
uh approximate uh the
the one step bellman operator
by projecting back
with respect to the an infinity norm
under some hypothesis class f
now for
uh concreteness we're gonna consider
uh f to be
the the space of all um linear functions
with respect to some
feature space phi
so all of the the functions
all of the q
k
in all of these iterations will have to
be represented
or approximated
in this in this functional class okay
and then uh
doing that we obtain
this
simplify the algorithm
at iteration k plus 1 we're looking
in the space of
linear value functions of the point that
best approximates under the l infinity
norm
t star
q k
and because
we have only you know one set of
parameters which is
w the weight of this
feature vector the
equation 12 can be reinterpreted as just
finding
at each duration k or k plus one
a weight vector k plus 1 that that
minimizes this this equation
now
um
how would you solve this this equation
how how tractable is this even under a
very you know
one of the simplest hypothesis classes
uh out there
now here are um
a couple of potential problems but
before before i flash that uh just just
try to take a moment here and
try to think how you would find this
vector
um wk plus one at each duration and how
how hard
or how easy would
uh
that update look like or what's the
complexity of that that update how would
you do it for you know a linear
class
or something like a neural network or
any kind of other function approximator
class there
okay hopefully you've
uh you've at least tried this mental
exercise and
see that this is not trivial
um
and there are basically two problems
there one is that the
l infinity minimization is typically
hard to be carried out
usually we are
using different norms
more prominently
something like l2
that is very well behaved
and easy to to optimize in the linear
case this has a you know um
closed form solution
uh l infinity is is
is usually a much harder optimization
problem
and
as you've seen in the the previous
lectures and this is the general
assumption i guess in reinforcement
learning is that
t-star is typically unknown and has to
be approximated somehow usually from
samples
so
um here are some proposals of easing
these problems so first i just hinted at
this that instead of fel infinity we're
just gonna replace that with something
like l2 that is much easier to
uh to optimize so instead of having now
the l infinity optimization there we're
gonna
do the same thing but find in um find
the point in f that is closest to
um
t star q
uh k
with respect to
an l to norm or an l to norm with
respect to some probability uh
distribution mu
okay
and hopefully you you
know how to solve this um any any kind
of like um
squared regression kind of thing will
tackle that so uh in most cases this
will have both uh at least for the
linear case a fixed
like tractable form
and if not in most of the other cases we
can do something like back prop and just
take gradients with respect to this this
loss
and
you've already seen you know in the
lectures in model free control and
variants of that that
in order to approximate the star we're
just gonna sample
and uh
usually we have this
tuples of s a
our
next state that we're going to sample
with respect to this distribution with
respect to that norm and our transition
model and we're going to
approximate
this term t t star q
q k
by
this
y t target for each sample
and this is just again
the true
uh operator the true uh
q uh
d star would be just expectation
of uh yt
for all the the samples um sd 80 uh rt
plus one st plus one
okay so now uh what we get is that at
each iteration
we're gonna find in our
um
hypothesis space f
q
uh w this these are the parameters of
the our function approximator we're
still we're still using
a linear function class
that minimizes the square loss
and this is now a sample square loss
rather than
the
the true
expectation loss
okay
so this is just
re
uh stating what the what the last
equation was and um
just uh
uh reinterpreting yt
as
an approximate approximation an
approximate application of the belmont
optimality operator
to the previous uh value function at the
iteration key
and this actually is a very
general
recipe for coming up with
a fitted q iteration kind of algorithms
or just
simple
cube based uh
q learning based uh algorithms
uh and
there are a couple of dimensions there
that we can vary
one i've um really never used that we're
using here
linear parametrization
so this can be done with any kind of
functional class f that can be
parameterized by
as we did here with uh linear uh
function classes with neural networks
kernels whatever your you know
parametrization of your regressor is
anything can can go in there
so these these are the you know the
parametric class that we're gonna be
approximating the intermediate value
functions with
now um there's there's a couple of other
dimensions that we can vary
the other one is again we're gonna be uh
approximating
uh
t star the the application of this
operator and in order to do that we're
gonna have to sample somehow and this
can be done online can be done from a
fixed predefined or pre
um
you know a given kind of data set uh
someone you know
gets you some data
um some previous
intervention protocols some
previous behaviors of policies that have
been used in uh regulating
the
power plant or something like that and
those are like you know just a fixed
data set
um you can also have a replay kind of uh
model or a replay memory or a generative
model where you can sample from this
model to generate those uh those samples
so all of these are you know
a way of approximating that operator via
samples
but how you generate the samples and
where they come from it's um you know
it's a free dimension in a sense
and the the last dimension there that is
free also uh
has to do with
uh how we approximate the um
this one step bellman operator and the
the
the simplest one is the
one we've used before this is basically
q learning
uh the the other instantiation the
second instantiation there is in instead
of this uh
um
this the
the application to
the
previous
estimate we uh we have something like a
target network that that we keep fixed
it's almost like saying um we
at one point uh decide that iteration k
is uh the target network that we saved
and we're gonna keep that for a while
and make estimates
uh and updates and when we change the
target network again we're gonna call
that iteration k plus one
and there are multiple other versions of
this
this target
that can be done via off policy learning
and multi-step operators where
we can get
not
a one-step estimate
of this operator but maybe multiple
steps uh estimate to um
to speed up the credit assignment
and you're gonna see that
in the next lecture with the with hana
both of these uh both of these variants
but again this is a very general recipe
and there are as i said like these these
three dimensions you can
pretty much mix and match
so let's look at one particular
instantiation of this recipe which is
dqn
i'm sure you've seen this before or at
least you've looked at the paper and the
dqn is one of these algorithms that that
does follow this to this recipe and
in particular for the approximation
class we chose neural networks uh the
samples are sampled from a replay
and the um
the target
here
in the regression is based on a target
network
okay
now um here's another instantiation of
this which is bachara
where um we can't interact necessarily
with the environment is the um
examples i mentioned before that
maybe someone hands you in some data
that has been collected under some
policies
or under human interventions or any any
kind of other uh data set
and you wanna
still make use of that so the function
approximator there can be whatever is
appropriate for the
size of data or the you know the the
problem specification the observation um
set
and then we are working with the fixed
data set and let's say a
a
option here for instantiation is uh is
just the one step
uh
approximation of the belmont operator
via one sample
which is uh which is what uh the first
line here uh says this is probably the
the first thing that that i would try
but of course like uh you can have
something like just
more generally off policy updates um
because the data set is uh is fixed and
you didn't have necessarily access to
the policy that
generated that data set this might be
hard so q learning so the the first
version there this one is uh probably
the easiest
uh if you do have access basically to
the collection policy then you can do
something more fancy in terms of policy
updates but again in both of these
examples i'm kind of assuming that the
data set that you are given is not the
current policy that you're trying to
evaluate so this will be off policy
most of time if not all of the time i
mean for the control problem we will
always be off policy because the data
set was collected under a particular
policy
versus
now at each iteration we're changing the
policy
okay
and
another example here is what you've seen
in the last lecture which is something
like dinah again we can choose whatever
approximation family we choose to
and this is um
okay this is a mistake it should be
samples are both online and from a
replay
model
and the
the simplest instantiation of dyna will
choose to do
something like q learning which is the
one
sample instantiation of this this
operator
okay
that's it for uh approximate
value iteration
next we're gonna be looking at
approximate policy iteration the other
uh algorithm for for using for doing
control
this is a much um
less studied i guess or less popular
paradigm but it's uh it's good to
uh to go through it because there are
many algorithms these days that do
follow that but most of the algorithms
that will
you will encounter in control especially
for off policy uh
actually followed the the this this kind
of uh recipe a fitted queue iteration
with
one of these dimensions varying or an
instantiation of one of these dimensions
we start with a reminder of the policy
iteration paradigm
this again
is the iterative procedure where we
start with the policy
at um
at the initialization and we iterate
this process of policy evaluation and uh
policy improvement in our case greeting
improvement
so at each point in time we are
evaluating the the policy
derived at the previous step and we make
a gratification step with respect to
that and we know that at least in finite
cases this will converge to the optimal
policy
now the approximate setting
uh actually the approximation incurs in
this policy evaluation step only
and now the value we derived at or our
approximation at iteration i
is the one that will provide uh the
argument for the gratification so
instead of
doing a gratification with respect to
the actual evaluation of our previous
policy we're going to do a gratification
with respect to the approximation to
that evaluation problem
as hinted before
this
usually does not converge to the optimal
value function
but um nevertheless what what we're
really interested in is the quality of
the policy that we're gonna get after
let's say iteration k
in this
particular
notation iteration i so in general we're
interested in the quality of the policy
pi i
uh as determined
by q pi i
uh with respect to
let's say the the optimal value function
so how how good is that the policy after
um i iterations of this this procedure
and this is exactly what the
the next result uh we're gonna look at
um quantifies
so considering an mdp and the cube um
qk and
pi k are the
um
value function
uh
at iteration k and the the gratification
uh at iteration k minus one
then we
we have the following statement
so
we can bound the
quality of
the policy at time
at the iteration k
by
the
approximations errors that we've
incurred uh up to that up to that point
uh actually it's a it's a much stronger
statement than this is
saying the the
in the limit the limb sup of this error
so your
error with respect to the value function
is upper bounded by
the
the last errors like the convergence of
this approximation errors as you
approach uh convergence
okay
um
we're gonna try to prove this uh but
first some notation um
i think
first of all this should be a uh
somewhat intuitive statement uh the same
as before we're gonna
the quality of the policy that you're
gonna get after k iteration is uh
bounded by the
the errors we're making in this
approximation but this is actually a
stronger result as i said
as these things uh the approximation
errors
might still converge uh in the limit or
at least they are bounded which means
that this um
this term
is also bounded and will converge at
least in within that bound so it might
oscillate around a fixed point but it is
it is bounded by the
um
by how much the
approximation errors vary in the limit
okay
so a bit of notation before we jump into
the proof
um
you've seen these uh notations before
just uh as a reminder so we're gonna
note uh denote um via p
the the matrix corresponding to the
transition curl and this is uh the
entries of these p are
um
sa
and the s s prime
which uh the the entry s a
uh s prime denotes the probability of
transitioning to s prime considering we
started in s and took action a this is
just the transition kernel
and then
p pi
is the transition probability
corresponding to policy pi this is a
number of actions
a number of states by number of actions
number of states so this is
basically a probability transition
between sa and sa a prime under policy
pi
okay
uh and just a reminder
under this notation
the the bellman
equation can be
written in matrix form as we've seen
before in policy evaluation
okay
so having this in mind
um
let's uh let's start the proof
um first thing we're going to do is
look
at
this quantity that we call gain which is
the performance of policy
um
at iteration k plus one versus the
policy at iteration k
so between two uh consecutive iterations
of uh approximate policy iteration where
uh where
we have two policies at these um at this
iterations and we're saying like how
much better is policy at k plus one with
respect to
uh the policy at
um
iteration k
okay and
first we're gonna um
try to say something about this quantity
and then we're gonna relate this
quantity with the the thing that we're
actually interested in which is the the
performance with respect to the optimal
policy
okay
so the the gain as defined uh
here is just the difference between
these uh consecutive
evaluation of
two consecutive
uh
iterations of policy uh
approximate policy iteration
and this is just um rewriting
the the above equation just
using that the fixed point of
operators
pi k and pi k
plus 1 are the value function
corresponding to these evaluations
okay
and then we're gonna do um we're gonna
add and subtract a couple of terms there
okay this might sound a bit um
or might look a bit scary at this point
but it's it's just adding up subtraction
um and it does have a
um
it does make sense in in terms of
we always do this this kind of procedure
just varying one term at the time
uh so if you look at the the first
equation here um 17 we are uh keeping
the operator fix but we're
uh varying the argument
uh in the next equation again we're
keeping the operator fixed but we're
changing the argument uh now um
in the in the next equation we are
changing the operator but keeping the
argument fixed and then the last one
we're keeping the operator fixed between
these two and changing the the argument
so usually
that's the kind of um
relationships that that we're gonna have
because we're gonna exploit the the
properties of these operators
okay
so i've just
written uh
exactly what we had in the previous
slide and we're gonna unpack this terms
okay let's look at the first
term here
this
this
is going to be r
we're just uh unpacking the definition
of these bellman operators
plus
gamma
and
sorry
two plus one
minus r plus gamma pi
plus one
at
okay
and uh we can see that the r there
goes uh goes away and what we're left
with is gamma
pi k plus 1
and
the difference between these policies
and this is what we called gain
okay
and similarly we can do the same trick
for this this equation and this would be
now
the r's again simplify
and we have
p
pi k plus 1
and the difference between
this
which we called um
ek
which is the
uh approximation error at iteration k
uh we're gonna skip this next turn
because
this one has a very similar um
expression as the ones which you we've
just seen because the operator is the
same
again things simplify and we get the
gamma
p
pi k
and
difference between these two and again
these are as before in the notation it's
minus
um
the approximation error that we incurred
at the iteration um
k uh just just a bit of uh notation
e k
we've denoted
this um
the difference between uh the evaluation
uh of the policy pi k and the estimate
uh qk
but not the norm of that
this this has a sign kind of thing
okay
the last term that that we have here is
this one and this one where um
we're actually not gonna do anything but
say that this is greater or equal than
zero
and
this is because
um
this policy
pi k plus 1 is the greedy
with respect to
uh this uh
um
q uh qk
which means that
the uh evaluation operator there is
actually the
optimality the one um
application of the belmont optimality
operator and this is less
greater than equal than any
application of any other policy
okay hopefully you are able to to follow
that if not uh just stop the video and
go over it but these are um
exactly the same equations but repeat it
in the in the slide
okay
and again this
this particular inequality
you can unpack it explicitly if you want
to convince yourself of that but the
argument we we've just
done is uh is sufficient
okay
so um
we
the only thing that we're gonna do now
is uh basically upper bound this
by
this um
so because
we're gonna replace this term with zero
um this is an upper bound and we're just
going to
plug in the the terms that the
simplification of the terms that we just
had here
okay
this is just rewriting
okay and
uh rewriting a bit
by
noticing that the gain
k appears both on the
right hand side and the left hand side
and collecting those terms we reach this
final expression
um by
bounding the the gain at iteration k
okay
so
um this is the statement that we've just
proven uh let's see
a couple of implications
so this gain is really how much your
policy the quality of the policy that
you derive
has improved from one iteration to to
the other
so
the first thing to notice is that
if e k
which is the approximation error
is
by definition uh here is um the gap
between the actual evaluation of policy
pi k
and
our approximation of that
so if that is zero
which means you have perfect evaluation
at
policy k
that thing is zero so uh what we're
getting is that the gain at iteration k
is greater
or equal than zero which means that the
pulse the quality of the policy at the
iteration k plus one is greater or equal
than the uh policy
uh the quality of the policy at
iteration um
k
which is exactly what we've proven
before in terms of the greedy
improvement step being um
an actual improvement step
guaranteed improvement step over the the
whole state space now
um
if this term is
different than 0
either
larger or
smaller than zero this is a vector so uh
some of the entries might be uh positive
some of them might be negative um
depending on your approximation so can
this gain uh be zero be
zero or uh negative
which would mean that
the at least in some states the policy
at
um iteration k plus one
actually is worse than the policy at
iteration k so
fortunately under under approximation
that's that can happen
um
and this is a very simple example just
to give you an intuition where things
can go wrong
so this is a simple mdp three states
we have uh plus one rewards every time
we transition at the end of this uh
chain
uh the action space is left and right
the fully deterministic
uh and where
uh let's say at iteration um k the
policy that we're uh evaluating
is uh always going right
okay
so if we if you if we evaluate this
policy the the value function
um q pi k associated with that
is is as follows
so um you can
you can convince yourself that um at
least
in the terminal state if if we're in
state uh s1 s2 and
taking action
1 which is exiting to this plus 1 reward
then we get the the reward the
plus one
if not if you're in
a state
as zero
um and you're taking action um
one which is going to s2 you uh
basically have a zero reward plus
gamma is 0.9 here
so
0 plus 0.9 and the value at this state
which is 0.9
uh the same for
s1 we're taking one step with
reward zero but
then we're discounting by 0.9 this
already discounted value of 0.9
now for
action
2 this is
a bit more complicated
we start with s1 so if you're in s1 and
take
action a2 you're gonna transition to
this terminal status and uh get uh
reward one
now
if you are in uh
state s0
and
take action
uh a2 you're going to transition to s1
and then
you're still going to follow
this is evaluating this deterministic
policy of always going right so we make
one transition
according to
a a2 we transition to s1 and then from
s1 we continue according to policy pi k
and the
according to policy pi k we're just
gonna
go right
uh which means that we're we're making a
step we we
transition to to s1 and then uh
basically we're just discounting this
value
again because in s1 we're not actually
transitioning
to the the end state there we're not
actually exiting because we are
executing a policy that will get us in
the opposite direction
okay
hopefully that's clear
now consider um instead of having this
full
evaluation of policy pi k
consider an approximation
um
for different reasons you might have
either a functional class or um
based on samples you get this this
approximation and
i've just made up some values that are
you know fairly close to what the
the evaluation would be and uh it's it
it also maintains the you know um
the relationship between the uh the
action values
at least within within the same action
okay so it it's a reasonable
approximation now if you take the greedy
with respect to this approximation
to this evaluation
then you get a policy
pi k plus 1
which is this
so
this is just reading out from
our approximation
what what am i going to do in s1 well i
look at the value of these two actions
uh obviously uh a2 is better i'm going
to take that one uh the same in
s
0 um i look at the the
the values of these actions
the one corresponding to a1 is better so
i'm gonna take that one
and in um s2
i'm gonna i'm looking again at the
approximate values uh here and i see
that a2 in this case is better than uh
than a1 so the the induced policy is uh
um
is the one depicted here
okay
so
let's now
see uh this this new policy at iteration
k plus one if we if i were to actually
evaluate
how good that policy is
what
what would that evaluation be
and this is the evaluation for the the
policy that we've just computed as
greedy with respect to the approximation
that that we had
and this one
um because of the cycle there
going back and forth between those uh
those states actually has this value
function
which you can see that if you compare
the the
value of a policy at uh policy pi k and
the the one at
uh
policy at the k plus one you can see
that even state by state these things
are either equal
or
the policy at the
iteration k plus 1 is actually worse
uh in this state
um
in this state in this state
so actually this is an extreme example
of uh of what i wanted to illustrate in
the game being the negative that in all
of the k in all of the states um and
actions
uh the the gain is act uh either zero or
negative
in most situation you would have a mix
of these where in some cases it will be
an improvement but in some states and
actions it it can actually be negative
so it's not
it's not under approximation it's not
necessarily a strict improvement
over the policy we had at the previous
iteration and there is where the the
problems
with the uh with convergence and with
you know the the monotonic improvement
of these is this policies uh occurs
okay
okay
um this was a bit of a detour but i
think it was it's it's good to to have
an intuition of what happens from one
iteration to to the other and
why this fails under
um potentially fails under uh
approximation so let's go back to the
proof
and actually go for the term that we are
actually interested in in bounding which
is the quality of the policy after k
iteration with respect to the optimal
value function
okay
uh we're gonna denote by lk
exactly the difference between the
optimal value function and
the
quality of the policy at the iteration k
note that this thing is always
positive
uh because the by definition the
the value of the optimal policy is um
greater than the
value of any other policy and this is
this is not an approximation this is an
actual evaluation of the policy at the
iteration k
okay
so we're gonna start with just the
definition
uh we're gonna do a similar thing that
we did before
introducing the the bellman operators
and um
using the fact that um
both of these
value functions are
um
fixed points of
the
expectation
operator
of
pi k plus 1 and the
fixed point of the
evaluation
the expectation
evaluator with respect to the optimal
policy
okay and then we're gonna do exactly
what we did before which is uh introdu
like
uh introduce a couple of uh
of terms so adding and subtracting a few
of these terms
so just
just convince yourself that this thing
and this thing goes um
simplify the same this one and this one
this one and this one this one and this
one and again the same recipe uh as
before we're just changing one thing at
a time so the the the operator state is
the same the argument is different
again the operator stays the same
argument is different
the
the operator changes here
uh and so on so forth
okay
and
we're gonna do very similar thing that
we did before and
basically bound
uh all the terms that
that we've uh all these um
differences
um we're gonna uh either spell them out
or
uh bound them uh in this case by zero
okay
the the argument goes exactly as we did
before they are uh uh they're just uh
rewriting what these differences are
okay
so by collecting uh the terms here
and again this uh
this particular inequality
is
a
follows the same argument as we had
before because um
this
operator
is uh the greedy with respect to
uh q
k so
pi k plus 1 is by definition the greedy
with respect to
the approximation
q k
so that means this operator is actually
the one step
optimality operator
applied to
qk
okay
so this one is always
better or it's always greater than any
kind of policy there
so any any other
evaluation operator applied to
qk can't be better
okay
so we're just uh we're just gonna
rewrite uh or
all of the terms that we in group all of
the terms that we had uh here
um
and uh we're gonna
now uh expand we had the term here in
terms of the the gain
at the iteration k and we've previously
upper bounded this by this expression
so from 27 to 28 we're just plugging in
the bound that we've previously proven
uh for gk
and we're just
bringing the terms together and
rewriting a couple of uh stuff
and a couple of things simplify from
29 to
30.
okay
and uh um
now
sorry um
as
uh
as k tends to infinity
um
then we can
uh
we can introduce the limb soup of this
quantity so we have uh lk and lk plus
one uh in in
one side of the equation and the the
other side of the equation uh in
asymptotic regime as uh as k tends to
infinity this will uh become just this
term
and then
um basically by rewriting
we we have this expression so this term
all of this term here appears in the
limb soup here
and the last thing that that we're going
to do is
we're gonna take the l infinity norm
with respect to this equation this is
just rewriting uh what was on the on the
other slide
okay and when you when we take that
we have a couple of terms there that
that
will simplify
um
so in particular if you look at this
term uh this
um
fairly ugly term in uh in equation 37
this actually simplifies to
being upper bounded
by um
one plus gamma
over one minus gamma plus one this is
using the fact that uh all of these
matrixes uh
p a pi
uh for
pi equals uh
pi k plus one or pi k or
the the optimal
policy all of them are stochastic uh
matrixes
where the l infinity norm of all of
these is one
okay
and that is uh that is it for the for
the proof
if you
uh rewrite basically this uh
this term you get exactly what the
statement of the the theorem uh says
so this is uh ek was exactly uh the
the approximation error that we incur at
the iteration k and this term just
simplifies to this
okay
last thing we're going to do here is
look at the concrete instance of of this
algorithm
and we're gonna actually look at an
algorithm that you've seen before
uh
namely td lambda with uh
a linear under linear approximation
important to note at this point that
this algorithm at least in the form that
i describe it here
will just implement the policy valuation
step
in an approximate policy iteration
loop
so it's it's just one of the
steps that you have to do at each
iteration k in this approximate policy
iteration
it's a policy evaluation
this is
mainly to show how one could reason
under function approximator about the
errors you incur at this evaluation step
because we've seen from the the previous
result that the quality at least in the
limit of
the policy that you're gonna get at
iteration k as k tends to infinity as
compared to the optimal uh value
function is bounded by how the
errors at the at each iteration
as case goes to uh to infinity uh behave
so
keeping those errors
as small as possible in the policy uh
evaluation uh case
will mean that that bound is tighter and
tighter which means that with respect to
the optimal uh value function you're
losing less and less so uh in in the
best case scenario if that
if those error are zero then you will
converge to the optimal value function
so this is just a an attempt at uh
saying
under
you know the
normal scenario where we're using
function approximator and we're using
something like td learning how can we
reason about the errors that we incur in
this procedure
okay
so uh we start with the linear
functional space
and
remember that the tdr
under this approximation is
defined as in equation 37
and then the parameter update
w
at each step after each tdr
uh is updated
with the magnitude of the tdr in the
direction of phi
s t a t uh previously i think you've
seen this update for just the value
function not the um
state action value function and this phi
would be
um just a function of the state and i
think you've denoted it as xd
okay
well
under this um under this
paradigm
um
this this algorithm actually has nice
properties so under the normal
conditions on uh of the
learning rate here
uh
satisfying these conditions
the the sum of them being infinite and
the the square of them being finite then
this
procedure
converges to
um
its limit um denoted here by
uh w star
which is not approximating the optimal
value function it's just the annotation
for the fixed point
and
uh all
all of this procedure is trying to
approximate uh the value of some policy
pi
the one we are computing the td errors
with respect to
and uh furthermore
it has been shown in the same paper the
sickness and panroi
that
the
the approximation error that we're gonna
be incurring by using td in this uh
uh in in this formulation is bounded
by one
minus
gamma lambda
over one minus
gamma
and this term is basically the best that
you can do
in this functional space
with respect to
to this norm so this is the
uh
the point in
f that is the closest
to
the value function you're trying to
approximate
the closest to the approximation so this
is uh really saying that if
i were
to know this
evaluation in advance
then
if i were to just represent it
in this functional uh
functional space f this is the the
closest that i can get in terms of
approximation
and
the whole expression is just saying that
if i am
doing this um
temporal difference
learning under mild conditions
i'm going to be bounded by
basically the
best way of representing the the value
function that i'm interested in
this this might not be exactly uh the
this this point the best approximator to
uh q pi
but is very close to it
okay
um
just a couple of um
you know breaking down uh the the
statement as as we usually do
um
one is
uh you can
lambda is a parameter in td lambda so um
uh
one thing that we can ask is for which
lambda is this upper bound
the tightest which means that we lose we
are guaranteed to lose the least and it
turns out if you're
eyeballing that or maximizing over that
is that for
lambda equals 1 which corresponds to the
monte carlo estimate
this
this band is tighter and
i'll let you reason why that would be
the case
okay uh the the other question that
might come to mind is
what if the
um
the value that we're trying to
uh to evaluate
the
the value of the policy that we're
trying to evaluate this q pi is actually
representable in our um
space of uh
hypothesis class
well then the
this part of the equation actually is
zero which means
that
uh
q
um
w star which is the the point of
convergence of the the the previous
algorithm is actually two pi so
this is nice
because we're saying that if my
um
the the function that i'm trying to
represent is part of my uh
my functional class then i'm guaranteed
with this algorithm to converge to that
okay um now the question becomes if this
is not the case if
q pi is not
representable in uh
our hypothesis space
then in general um
well we won't recover q pi
but more more general than that the the
fixed point we converge to this um
uh q
uh
w uh star
is
generally
not the best approximation that we can
get
uh in this function of class which is
exactly what this this term says
okay
okay that is uh it uh for today i'm
gonna just do a quick summary uh now of
the the two paradigms in approximate
dynamic programming that with that we've
uh we've seen today
first let's start with the approximate
value uh iteration
so uh we've proven this result where as
a reminder um
pi n
is the the policy that we're getting at
iteration n
of this procedure
now some lessons that we've learned uh
in general uh the conversion is not
guaranteed depends highly on the
approximation
but in practice a lot of the algorithm
that we've seen and we use in practice
look at all the instantiations that we
had at the at the end of this
section
um
seems to point that in practice this is
generally well behaved
how how do we control uh how good the
the policy that we're getting is well
the the way to control uh this is to
control the approximate uh the
approximation error and there are two
sources of error the estimation
by a sampling so we don't have access to
the true
belmont operator and then there's the
approximation
coming from the
the function class that you we are using
we saw that
although all of these results are in
terms of the l infinity norm and if we
were to optimize with respect to that we
would get the safe algorithm in the
sense of this being a
full contraction operator and having a
unique fixed point
for more
efficient optimization we usually
substitute this l infinity norm with
something like an l2 with respect to
some distribution
usually that will be the stationary
distribution
with respect to the policy that that
we're following okay
um
note that
the convergence point is not always
q star especially if q star is not
representable in the
function class
and
even if that is the case if q star is
representable in the functional class
that is not enough
uh just remind yourself of the the
example that we had where
really the the inside there is that all
of the intermediate errors kind of have
to be zero in order for that to happen
it's it's not it's not exactly necessary
it's
the better way of saying it is that um
there might be
uh
errors at different points in that the
iteration um
process that can introduce
errors that cannot be cannot be washed
away
or at least we can't say anything
um because this is kind of the worst
case scenario we're maximizing over all
these errors it might still be possible
to uh to converge to a q star it's just
um
at least based on this
bound we're not guaranteed to to do that
if we are incurring uh any kind of error
at any of the
intermediate iterations
okay and then
what we've just seen is approximate
policy
iteration
and we've proven this result um
uh
a couple of lessons again in general
convergences uh is not guaranteed but
again in practice this seems to be well
behaved
its popularity is not as great as the
other paradigm
just because
it does have uh this
inner loop this uh procedure of policy
evaluation which is in itself is a is a
problem of its uh its own you you've
seen the
i guess the the first lecture on uh
model three prediction that that alone
can be an iterative procedure itself so
it's not it's it's not easy to
usually it's not easy to evaluate a
policy so this being just this
side
inside step of another interactive
procedure
means that
in practice
it's not it's not used as often
okay
um again
the the statement really
the only thing that says is that if we
can control this errors e ek actually
the the um
l infinity norm of these these errors
then uh
we are able to control the the quality
of the end result
and at the same as before there are two
sources of error
sampling and functional crass
approximation
for
we haven't gone uh
really that much into this uh example
but uh
uh for efficient uh optimization again
we do the same thing of uh trading in
the l infinity norm to a much more
behave well behave the approximation uh
prob optimization problem in the l2 uh
norm and this is usually
now with respect to the
stationary distribution of the policy
that we're trying to evaluate right now
a different way of saying is
this is that we are safe on policy
um
this is just the
distribution of states
uh that
acting in the environment with respect
to the policy that we're trying to
evaluate pi i in this
case
will will induce
okay
and
again depending on the the function
class and the couple of mild condition
we can obtain convergence but the
convergence point is not always
um
q
q
q star or even q pi in the
policy evaluation case and
moreover we haven't touched upon this
but it is known that the convergent
point for
approximate policy iteration may not be
unique in the sense of
this uh this expression actually can
have the behavior of um
stabilizing uh like going down in terms
of the
the error and stabilizing around the
band uh
around the around the point it's still
bounded by this term but it might
oscillate back and forth around that uh
that point
okay
that is it for today
any questions that you might have please
post on moodle or come to the q a
session thank you for your time |
98689a13-2220-4d28-80f7-b0aed5dae2b0 | trentmkelly/LessWrong-43k | LessWrong | Creating Welfare Biology: A Research Proposal
[This idea came out of an ACE research workshop. I would like to thank Zach Groff and Mark Budolfson for brainstorming this idea with me, as well as ACE for offering me the opportunity to think of it.]
[Crossposted with no modifications from my blog.]
Many people in the wild-animal suffering space think it would be a good idea to make a discipline of “welfare biology”-- that is, the scientific study of wild animal welfare, the way that animal welfare studies scientifically studies domestic animal welfare. From my perspective, there are two big benefits to creating welfare biology. First, it would probably increase the number of research dollars that go to wild-animal welfare research, while reducing the opportunity cost: welfare biology would be competing with other fields of biology for funding, not with starving Africans and tortured pigs. Second, it would give us academic credibility. In most of the developed world, terrestrial wildlife often live on government land (for example, much of the United States’s wildlife lives on the quarter of US land owned by the government), which means changing government policies towards wildlife is a promising method of improving their welfare. Even in human-inhabited areas, changing government policies may be an effective way of improving wild-animal welfare. Governments are generally more likely to listen to tenured academics than they are to bloggers.
However, it is unclear to me how one creates an academic field. It is possible that people already know how academic fields form; I have not studied the subject in depth, and would welcome links from commenters. But if there is not already an academic body of work on the subject then it seems useful to do a small research project to explore how academic fields form. I think the best method is a series of qualitative case studies exploring how various relevant scientific fields formed.
I’m aware of two similar research projects in the effective altruist community. Luke Mueh |
ef1768b8-52ab-4a6a-ab29-7b571732caf1 | trentmkelly/LessWrong-43k | LessWrong | Singularity Summit 2012, discuss it here
How was it? Which speakers delivered according to expectations?
Which topics were left unresolved?
Were any topics resolved?
Whatever you have to say about it, say it here.
Suggestion: if you are going to comment, mention "I was there" just so we know who was or wasn't. |
08ed28c6-74c1-4492-994b-304317e99978 | trentmkelly/LessWrong-43k | LessWrong | Aligned AI as a wrapper around an LLM
In my previous post Are extrapolation-based AIs alignable? I argued that an AI trained only to extrapolate some dataset (like an LLM) can't really be aligned, because it wouldn't know what information can be shared when and with whom. So to be used for good, it needs to be in the hands of a good operator.
That suggests the idea that the "operator" of an LLM should be another, smaller AI wrapped around it, trained for alignment. It would take care of all interactions with the world, and decide when and how to call the internal LLM, thus delegating most of the intelligence work to it.
Q1: In this approach, do we still need to finetune the LLM for alignment?
A: Hopefully not. We would train it only for extrapolation, and train the wrapper AI for alignment.
Q2: How would we train the wrapper?
A: I don't know. For the moment, handwave it with "the wrapper is smaller, and its interactions with the LLM are text-based, so training it for alignment should be simpler than training a big opaque AI for both intelligence and alignment at once". But it's very fuzzy to me.
Q3: If the LLM+wrapper combination is meant to be aligned, and the LLM isn't aligned on its own, wouldn't the wrapper need to know everything about human values?
A: Hopefully not, because information about human values can be coaxed out of the LLM (maybe by using magic words like "good", "Bertrand Russell", "CEV" and so on) and I'd expect the wrapper to learn to do just that.
Q4: Wouldn't the wrapper become a powerful AI of its own?
A: Again, hopefully not. My hypothesis is that its intelligence growth will be "stunted" by the availability of the LLM.
Q5: Wouldn't the wrapper be vulnerable to takeover by a mesa-optimizer in the LLM?
A: Yeah. I don't know how real that danger is. We probably need to see such mesa-optimizers in the lab, so we can train the wrapper to avoid invoking them.
Anyway, I understand that putting an alignment proposal out there is kinda sticking my head out. It's very possible |
eafd5076-06a2-4089-a563-08297669b238 | trentmkelly/LessWrong-43k | LessWrong | Jean Monnet: The Guerilla Bureaucrat
I have written about coordination problems from various points of view in the past (biology, economics, sociology, political science) but this time I am about to focus not on the theory, but on the practice.
Jean Monnet was one of the founding fathers of the European Union. One may even say that he was the architect of the European Union. However, as founding fathers go, he was rather unusual. His background was unusual: He was neither a political leader, nor a lawyer, a philosopher or a military commander. He was a son of a brandy merchant from the small town of Cognac near Bordeaux and himself a merchant by trade. He dropped out of school at sixteen and never got any extensive formal education.
But also his approach was unusual: He never held an elected position, he has never put himself to the forefront, he almost never made big speeches and is not known for memorable quotations. Rather, he was always in the background, busy with the boring technical work, hanging around politicians, showing them his famous balance sheets and trying to convince them to do the sensible, if unexpected, thing.
He was, in fact, so undistinguished that, when Fortune magazine run a story about him, they have given up on inventing a proper title for him and introduced him simply as "Monsieur Jean Monnet of Cognac". But whoever he was in his life - a trader, a banker, a civil servant - the only description that truly fits is that he was a solver of coordination problems.
The Monnet Method
This article will explore what Mario Draghi (former president of European Central Bank, and now, quite unexpectedly, the Italian prime minister) calls "the Monnet method", a bunch of principles that guided the effort to unite the continent divided by centuries of incessant wars and feuds.
But while Draghi is focusing on the lessons that may be relevant in the current state of the European Union, my interest is a bit broader: How does one solve coordination problems in general? And how does to do |
5a12e14b-8f97-4fe0-87b3-a122a00c5517 | trentmkelly/LessWrong-43k | LessWrong | Touching the Old
I'm in Oxford right now, for the Global Catastrophic Risks conference.
There's a psychological impact in walking down a street where where any given building might be older than your whole country.
Toby Ord and Anders Sandberg pointed out to me an old church tower in Oxford, that is a thousand years old.
At the risk conference I heard a talk from someone talking about what the universe will look like in 10100 years (barring intelligent modification thereof, which he didn't consider).
The psychological impact of seeing that old church tower was greater. I'm not defending this reaction, only admitting it.
I haven't traveled as much as I would travel if I were free to follow my whims; I've never seen the Pyramids. I don't think I've ever touched anything that has endured in the world for longer than that church tower.
A thousand years... I've lived less than half of 70, and sometimes it seems like a long time to me. What would it be like, to be as old as that tower? To have lasted through that much of the world, that much history and that much change?
Transhumanism does scare me. I shouldn't wonder if it scares me more than it scares arch-luddites like Leon Kass. Kass doesn't take it seriously; he doesn't expect to live that long.
Yet I know - and I doubt the thought ever occurred to Kass - that even if something scares you, you can still have the courage to confront it. Even time. Even life.
But sometimes it's such a strange thought that our world really is that old.
The inverse failure of the logical fallacy of generalization from fictional evidence, is failure to generalize from things that actually happened. We see movies, and in the ancestral environment, what you saw with your own eyes was real; we have to avoid treating them as available examples.
Conversely, history books seem like writing on paper - but those are things that really happened, even if we hear about them selectively. What happened there was as real to the people who lived it |
b7052cb2-d351-4c4d-b2ef-49c5173425c7 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Alignment Newsletter #50
Find all Alignment Newsletter resources [here](http://rohinshah.com/alignment-newsletter/). In particular, you can [sign up](http://eepurl.com/dqMSZj), or look through this [spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing) of all summaries that have ever been in the newsletter.
**Highlights**
--------------
**[More realistic tales of doom](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/more-realistic-tales-of-doom)** *(Paul Christiano)*: This [Vox article](https://www.vox.com/future-perfect/2019/3/26/18281297/ai-artificial-intelligence-safety-disaster-scenarios) does a nice job of explaining the first part of this post, though I disagree with its characterization of the second part.
The typical example of AI catastrophe has a powerful and adversarial AI system surprising us with a treacherous turn allowing it to quickly take over the world (think of the paperclip maximizer). This post uses a premise of continuous AI development and broad AI deployment and depicts two other stories of AI catastrophe that Paul finds more realistic.
The first story is rooted in the fact that AI systems have a huge comparative advantage at optimizing for easily measured goals. We already see problems with humans optimizing for the easily measured goals (scientific malpractice, outrage-inducing social media, etc.) and with AI these problems will be severely exacerbated. So far, we have been able to use human reasoning to ameliorate these problems, by changing incentives, enacting laws, or using common sense to interpret goals correctly. We will initially be able to use human reasoning to create good proxies, but over time as AI systems become more capable our ability to do this will lag further and further behind. We end up "going out with a whimper": ultimately our values are no longer shaping society's trajectory.
The second story starts out like the first story, but adds in a new complication: the AI system could develop internal goals of its own. AI performs a huge search over policies for ones that score well on the training objective. Unfortunately, a policy that optimizes for the goal of "having influence" will initially score well on most training objectives: when you don't already have influence, a good strategy for gaining influence is to do what your overseers want you to do. (Here "influence" doesn't mean just social influence; control over nukes also counts as influence.) At some point the system will be powerful enough that gaining influence no longer means doing what the overseers want. We will probably know about this dynamic through some catastrophic AI failures (e.g. an AI-run corporation stealing the money it manages), but may not be able to do anything about it because we would be extremely reliant on AI systems. Eventually, during some period of heightened vulnerability, one AI system may do something catastrophic, leading to a distribution shift which triggers a cascade of other AI systems (and human systems) failing, leading to an unrecoverable catastrophe (think something in the class of a hostile robot takeover). Note that "failure" here means an AI system "intentionally" doing something that we don't want, as opposed to the AI system not knowing what to do because it is not robust to distributional shift.
**Rohin's opinion:** Note that Paul thinks these scenarios are more realistic because he expects that many of the other problems (e.g. wireheading, giving AI systems an objective such that it doesn't kill humans) will be solved by default. I somewhat expect even the first story to be solved by default -- it seems to rest on a premise of human reasoning staying as powerful as it is right now, but it seems plausible that as AI systems grow in capability we will be able to leverage them to improve human reasoning (think of how paper or the Internet amplified human reasoning). The second story seems much more difficult -- I don't see any clear way that we can avoid influence-seeking behavior. It is currently my most likely scenario for an AI catastrophe that was a result of a failure of technical AI safety (or more specifically, [intent alignment](https://www.alignmentforum.org/posts/ZeE7EKHTFMBs8eMxn/clarifying-ai-alignment) ([AN #33](https://mailchi.mp/b6dc636f6a1b/alignment-newsletter-33))).
**Read more:** [AI disaster won’t look like the Terminator. It’ll be creepier.](https://www.vox.com/future-perfect/2019/3/26/18281297/ai-artificial-intelligence-safety-disaster-scenarios)
**[80K podcast: How can policy keep up with AI advances?](https://80000hours.org/podcast/episodes/openai-askell-brundage-clark-latest-in-ai-policy-and-strategy/)** *(Rob Wiblin, Jack Clark, Miles Brundage and Amanda Askell)*: OpenAI policy researchers Jack Clark, Amanda Askell and Miles Brundage cover a large variety of topics relevant to AI policy, giving an outside-view perspective on the field as a whole. A year or two ago, the consensus was that the field required [disentanglement research](https://forum.effectivealtruism.org/posts/RCvetzfDnBNFX7pLH/personal-thoughts-on-careers-in-ai-policy-and-strategy); now, while disentanglement research is still needed, there are more clearly defined important questions that can be tackled independently. People are now also taking action in addition to doing research, mainly by accurately conveying relevant concepts to policymakers. A common thread across policy is the framing of the problem as a large coordination problem, for which an important ingredient of the solution is to build *trust* between actors.
Another thread was the high uncertainty over specific details of scenarios in the future, but the emergence of some structural properties that allow us to make progress anyway. This implies that the goal of AI policy should be aiming for *robustness* rather than *optimality*. Some examples:
* The [malicious use of AI report](https://maliciousaireport.com/) was broad and high level because each individual example is different and the correct solution depends on the details; a general rule will not work. In fact, Miles thinks that they probably overemphasized how much they could learn from other fields in that report, since the different context means that you quickly hit diminishing returns on what you can learn.
* None of them were willing to predict specific capabilities over more than a 3-year period, especially due to the steep growth rate of compute, which means that things will change rapidly. Nonetheless, there are structural properties that we can be confident will be important: for example, a trained AI system will be easy to scale via copying (which you can't do with humans).
* OpenAI's strategy is to unify the fields of capabilities, safety and policy, since ultimately these are all facets of the overarching goal of developing beneficial AI. They aim to either be the main actor developing beneficial AGI, or to help the main actor, in order to be robust to many different scenarios.
* Due to uncertainty, OpenAI tries to have policy institutions that make sense over many different time horizons. They are building towards a world with formal processes for coordinating between different AI labs, but use informal relationships and networking for now.
AI policy is often considered a field where it is easy to cause harm. They identify two (of many) ways this could happen: first, you could cause other actors to start racing (which you may not even realize, if it manifests as a substantial increase in some classified budget), and second, you could build coordination mechanisms that aren't the ones people want and that work fine for small problems but break once they are put under a lot of stress. Another common one people think about is information hazards. While they consider info hazards all the time, they also think that (within the AI safety community) these worries are overblown. Typically people overestimate how important or controversial their opinion is. Another common reason for not publishing is not being sure whether the work meets high intellectual standards, but in this case the conversation will be dominated by people with lower standards.
Miscellaneous other stuff:
* Many aspects of races can make them much more collaborative, and it is not clear that AI corresponds to an adversarial race. In particular, large shared benefits make races much more collaborative.
* Another common framing is to treat the military as an adversary, and try to prevent them from gaining access to AI. Jack thinks this is mistaken, since then the military will probably end up developing AI systems anyway, and you wouldn't have been able to help them make it safe.
* There's also a lot of content at the end about career trajectories and working at OpenAI or the US government, which I won't get into here.
**Rohin's opinion:** It does seem like building trust between actors is a pretty key part of AI policy. That said, there are two kinds of trust that you can have: first, trust that the statements made by other actors are true, and second, trust that other actors are aligned enough with you in their goals that their success is also your success. The former can be improved by mechanisms lie monitoring, software verification, etc. while the latter cannot. The former is often maintained using processes that impose a lot of overhead, while the latter usually does not require much overhead once established. The former can scale to large groups comprising thousands or millions of people, while the latter is much harder to scale. I think it's an open question in AI policy to what extent we need each of these kinds of trust to exist between actors. This podcast seems to focus particularly on the latter kind.
Other miscellaneous thoughts:
* I think a lot of these views are conditioned on a gradual view of AI development, where there isn't a discontinuous jump in capabilities, and there are many different actors all deploying powerful AI systems.
* Conditional on the military eventually developing AI systems, it seems worth it to work with them to make their AI systems safer. However, it's not inconceivable that AI researchers could globally coordinate to prevent military AI applications. This wouldn't prevent it from happening eventually, but could drastically slow it down, and let defense scale faster than offense. In that case, working with the military can also be seen as a defection in a giant coordination game with other AI researchers.
* One of my favorite lines: "I would recommend everyone who has calibrated intuitions about AI timelines spend some time doing stuff with real robots and it will probably … how should I put this? … further calibrate your intuitions in quite a humbling way." (Not that I've worked with real robots, but many of my peers have.)
**Technical AI alignment**
==========================
### **Problems**
**[More realistic tales of doom](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/more-realistic-tales-of-doom)** *(Paul Christiano)*: Summarized in the highlights!
[The Main Sources of AI Risk?](https://www.alignmentforum.org/posts/WXvt8bxYnwBYpy9oT/the-main-sources-of-ai-risk) *(Wei Dai)*: This post lists different causes or sources of existential risk from advanced AI.
### **Technical agendas and prioritization**
[Unsolved research problems vs. real-world threat models](https://medium.com/@catherio/unsolved-research-problems-vs-real-world-threat-models-e270e256bc9e) *(Catherine Olsson)*: Papers on adversarial examples often suggest that adversarial examples can lead to real world problems as their motivation. As we've [seen](https://arxiv.org/abs/1807.06732) ([AN #19](https://mailchi.mp/4b19d2caa5a9/alignment-newsletter-19)) [previously](https://ai.googleblog.com/2018/09/introducing-unrestricted-adversarial.html) ([AN #24](https://mailchi.mp/d7b5059d64ed/alignment-newsletter-24)), many adversarial example settings are not very realistic *threat models* for any real world problem. For example, adversarial "stickers" that cause vision models to fail to recognize stop signs could cause an autonomous vehicle to crash... but an adversary could also just knock over the stop sign if that was their goal.
There are more compelling reasons that we might care about imperceptible perturbation adversarial examples. First, they are a proof of concept, demonstrating that our ML models are not robust and make "obvious" mistakes and so cannot be relied on. Second, they form an unsolved research problem, in which progress can be made more easily than in real settings, because it can be formalized straightforwardly (unlike realistic settings). As progress is made in this toy domain, it can be used to inform new paradigms that are closer to realistic settings. But it is *not* meant to mimic real world settings -- in the real world, you need a threat model of what problems can arise from the outside world, which will likely suggest much more basic concerns than the "research problems", requiring solutions involving sweeping design changes rather than small fixes.
**Rohin's opinion:** I strongly agree with the points made in this post. I don't know to what extent researchers themselves agree with this point -- it seems like there is *a lot* of adversarial examples research that is looking at the imperceptible perturbation case and many papers that talk about new types of adversarial examples, without really explaining why they are doing this or giving a motivation that is about unsolved research problems rather than real world settings. It's possible that researchers do think of it as a research problem and not a real world problem, but present their papers differently because they think that's necessary in order to be accepted.
The distinction between research problems and real world threat models seem to parallel the distinction between theoretical or conceptual research and engineering in AI safety. The former typically asks questions of the form "how could we do this in principle, making simplifying assumptions X, Y and Z", even though X, Y and Z are known not to hold in the real world, for the sake of having greater conceptual clarity that can later be leveraged as a solution to a real world problem. Engineering work on the other hand is typically trying to scale an approach to a more complex environment (with the eventual goal of getting to a real world problem).
### **Learning human intent**
[Literal or Pedagogic Human? Analyzing Human Model Misspecification in Objective Learning](https://arxiv.org/abs/1903.03877) *(Smitha Milli et al)*: In [Cooperative Inverse Reinforcement Learning](https://arxiv.org/abs/1606.03137), we assume a two-player game with a human and a robot where the robot doesn't know the reward R, but both players are trying to maximize the reward. Since one of the players is a human, we cannot simply compute the optimal strategy and deploy it -- we are always making some assumption about the human, that may be misspecified. A common assumption is that the human is playing optimally for the single-player version of the game, also known as a literal human. The robot then takes the best response actions given that assumption. Another assumption is to have a *pedagogic* human, who acts as though the robot is interpreting her literally. The robot that takes the best response actions with this assumption is called a pedagogic or pragmatic robot.
However, any assumption we make about the human is going to be misspecified. This paper looks at how we can be robust to misspecification, in particular if the human could be literal or pedagogic. The main result is that the literal robot is more robust to misspecification. The way I think about this is that the literal robot is designed to work with a literal human, and a pedagogic human is "designed" to work with the literal robot, so unsurprisingly the literal robot works well with both of them. On the other hand, the pedagogic robot is designed to work with the pedagogic human, but has no relationship with the literal robot, and so should not be expected to work well. It turns out we can turn this argument into a very simple proof: (literal robot, pedagogic human) outperforms (literal robot, literal human) since the pedagogic human is designed to work well with the literal robot, and (literal robot, literal human) outperforms (pedagogic robot, literal human) since the literal robot is designed to work with the literal human.
They then check that the theory holds in practice. They find that the literal robot is better than the pedagogic robot *even when humans are trying to be pedagogic*, a stronger result than the theory predicted. The authors hypothesize that even when trying to be pedagogic, humans are more accurately modeled as a mixture of literal and pedagogic humans, and the extra robustness of the literal robot means that it is the better choice.
**Rohin's opinion:** I found this theorem quite unintuitive when I first encountered it, despite it being two lines long, which is something of a testament to how annoying and tricky misspecification can be. One way I interpret the empirical result is that the wider the probability distributions of our assumptions, the more robust they are to misspecification. A literal robot assumes that the human can take any near-optimal trajectory, whereas a pedagogic robot assumes that the human takes very particular near-optimal trajectories that best communicate the reward. So, the literal robot places probability mass over a larger space of trajectories given a particular reward, and does not update as strongly on any particular observed trajectory compared to the pedagogic robot, making it more robust.
### **Interpretability**
[SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability](http://arxiv.org/abs/1706.05806) *(Maithra Raghu et al)*
### **Robustness**
[Call for Papers: ICML 2019 Workshop on Uncertainty and Robustness in Deep Learning](https://sites.google.com/view/udlworkshop2019/) (summarized by Dan H): Topics of this workshop include out-of-distribution detection, calibration, robustness to corruptions, robustness to adversaries, etc. Submissions are due April 30th.
**AI strategy and policy**
==========================
**[80K podcast: How can policy keep up with AI advances?](https://80000hours.org/podcast/episodes/openai-askell-brundage-clark-latest-in-ai-policy-and-strategy/)** *(Rob Wiblin, Jack Clark, Miles Brundage and Amanda Askell)*: Summarized in the highlights!
[A Survey of the EU's AI Ecosystem](https://www.charlottestix.com/european-union-ai-ecosystem) *(Charlotte Stix)*: This report analyzes the European AI ecosystem. The key advantage that Europe has is a strong focus on ethical AI, as opposed to the US and China that are more focused on capabilities research. However, Europe does face a significant challenge in staying competitive with AI, as it lacks both startup/VC funding as well as talented researchers (who are often going to other countries). While there are initiatives meant to help with this problem, it is too early to tell whether they will have an impact. The report also recommends having large multinational projects, along the lines of CERN and the Human Brain Project. See also [Import AI](https://jack-clark.net/2019/03/25/making-better-healthcare-ai-systems-via-audio-de-identification-teaching-drones-to-help-humans-fight-fires-and-why-language-models-could-be-smarter-than-you-think/).
**Other progress in AI**
========================
### **Reinforcement learning**
[Assessing Generalization in Deep Reinforcement Learning (blog post)](https://bair.berkeley.edu/blog/2019/03/18/rl-generalization/) *(Charles Packer and Katelyn Guo)*: This is a blog post summarizing [Assessing Generalization in Deep Reinforcement Learning](http://arxiv.org/abs/1810.12282) ([AN #31](https://mailchi.mp/7d0e3916e3d9/alignment-newsletter-31)).
### **Meta learning**
[Online Meta-Learning](http://arxiv.org/abs/1902.08438) *(Chelsea Finn, Aravind Rajeswaran et al)*
[Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples](http://arxiv.org/abs/1903.03096) *(Eleni Triantafillou et al)* |
7e30da06-bcc8-4464-a410-b6cd0966eb31 | trentmkelly/LessWrong-43k | LessWrong | What didn’t happen
I have a secret fiction blog, which I intermittently mean to publish things on, but apparently haven’t now in over ten years, which seems like a reasonable point at which to make it less secret. Here is the start. It’s not very long.
Here is an excerpt inspired by events leading to my first kiss (names changed, coincidence with name of my later partner coincidental):
The main argument for believing other people are conscious is that in all other respects they resemble you. Carrie stared tiredly into the crowd of blurs surrounding her and found this argument uncompelling. She couldn’t actually imagine thinking any of the things that had recently been shouted near her, which strengthened the hypothesis that nobody else was thinking them either. Which pressed the question of why someone was simulating this particular reality for her, and what the significance was of a tall man screeching ‘It’s beer pong o clock!’.
She had the same unease with movies often. Did that scene of the couple driving in their car add something to the plot? Either half the movie was revealing information entirely invisible to her, or film producers went to great expense to make films a certain length despite the fact that no story required it. She liked to think that if she spent years studying this it would all make sense, as she regularly insisted to other people that everything did if you studied it enough. Part of her was terrified that this wasn’t true. When it got too insistent a bigger, more heavily armed part of her would menacingly point out, ‘that doesn’t make sense and you have zero reason to believe it’ and the other part would whimper ‘what if that doesn’t matter?’ and go off to wring its hands in a less conscious corner. A short handsome boy sat down unusually close to Carrie, and she turned to make funny noises at him.
“Paul. How do you do?”
“Uh..I..do..am.. Carrie..fine, actually.. not.. sorry, never mind”, Carrie smiled reassuringly.
“You’re cute. What do you do?” He pret |
23f7c9ad-bade-4f94-b55c-bcb6694a9585 | StampyAI/alignment-research-dataset/blogs | Blogs | Recent AI control brainstorming by Stuart Armstrong
MIRI recently sponsored Oxford researcher Stuart Armstrong to take a solitary retreat and brainstorm new ideas for AI control. This brainstorming generated 16 new control ideas, of varying usefulness and polish. During the past month, he has described each new idea, and linked those descriptions from his index post: [New(ish) AI control ideas](http://lesswrong.com/lw/lt6/newish_ai_control_ideas/).
He also named each AI control idea, and then drew a picture to represent (very roughly) how the new ideas related to each other. In the picture below, an arrow Y→X can mean “X depends on Y”, “Y is useful for X”, “X complements Y on this problem” or “Y inspires X.” The underlined ideas are the ones Stuart currently judges to be most important or developed.

Previously, Stuart developed the AI control idea of *utility indifference*, which plays a role in MIRI’s paper [Corrigibility](https://intelligence.org/files/Corrigibility.pdf) (Stuart is a co-author). He also developed [anthropic decision theory](http://arxiv.org/abs/1110.6437) and some ideas for [reduced impact AI](http://lesswrong.com/lw/iyx/reduced_impact_ai_no_back_channels/) and [oracle AI](http://www.fhi.ox.ac.uk/oracle.pdf). He has contributed to the strategy and forecasting challenges of ensuring good outcomes from advanced AI, e.g. in [Racing to the Precipice](http://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf) and [How We’re Predicting AI — or Failing To](https://intelligence.org/files/PredictingAI.pdf). MIRI previously contracted him to write a short book introducing the superintelligence control challenge to a popular audience, [*Smarter Than Us*](https://intelligence.org/smarter-than-us/).
The post [Recent AI control brainstorming by Stuart Armstrong](https://intelligence.org/2015/03/27/recent-ai-control-brainstorming/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
33d414ab-ee4f-443f-9422-9c47089961ab | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] The Parable of Hemlock
Today's post, The Parable of Hemlock was originally published on 03 February 2008. A summary (taken from the LW wiki):
> Socrates is a human, and humans, by definition, are mortal. So if you defined humans to not be mortal, would Socrates live forever?
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Parable of the Dagger, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
4aa57fa2-b163-437b-b4c1-d86ab28ecef7 | trentmkelly/LessWrong-43k | LessWrong | Do No Harm? Navigating and Nudging AI Moral Choices
TL;DR: How do AI systems make moral decisions, and can we influence their ethical judgments? We probe these questions by examining Llama's 70B (3.1 and 3.3) responses to moral dilemmas, using Goodfire API to steer its decision-making process. Our experiments reveal that simply reframing ethical questions - from "harm one to save many" to "let many perish to avoid harming one" - dramatically affects both the model's responses and its susceptibility to steering.
Key Insights:
1. Model Evolution Matters: The newer Llama 3.3 shows stronger resistance to steering attempts than its predecessor (3.1), suggesting increasingly robust ethical frameworks in newer models.
2. Harmlessness Runs Deep: Llama 3.3 demonstrates remarkable resistance to suggestions of causing harm, even when presented with potentially greater benefits - likely a result of extensive safety-focused training.
3. Framing Changes Everything: When we inverted moral dilemmas to focus on inaction rather than action, the model's responses shifted significantly, revealing that AI ethical reasoning is highly sensitive to how questions are posed.
GitHub for Project
Moral Dilemmas in the Age of AI
Would you save five lives by sacrificing one? Should aid organizations accept funding from arms dealers to feed more refugees? These thorny ethical questions exemplify moral dilemmas that have challenged philosophers for centuries (e.g. Mozi, 430 BCE; Hume, 1777). At their heart lie two key principles of utilitarian philosophy: instrumental harm (IH) - the idea that causing harm can be justified to achieve a greater good - and impartial beneficence (IB) - the belief that we should benefit all people equally, regardless of personal connections.
As AI systems grow more capable, understanding how they navigate such moral decisions becomes crucial (Sachdeva and Nuenen, 2025). An AI system that readily accepts collateral damage to achieve its goals could pose catastrophic risks, even if well-intentioned (e.g. McQuill |
1f12f053-10a3-47c4-a857-1eead1ee1c8d | trentmkelly/LessWrong-43k | LessWrong | How many words do we have and how many distinct concepts do we have?
In another message, I suggested that, given how many cultures we have to borrow from, that our language may include multiple words from various sources that apply to a single concept.
An example is Reality, or Existence, or Being, or Universe, or Cosmos, or Nature, ect.
Another is Subjectivity, Mind, Consciousness, Experience, Qualia, Phenomenal, Mental, ect
Is there any problem with accepting these claims so far? Curious what case would be made to the contrary.
(Here's a bit of a contextual aside, between quantum mechanics and cosmology, the words "universe", "multiverse", and "observable universe" mean at least 10 different things, depending on who you ask. People often say the Multiverse comes from Hugh Everett. But what they are calling the multiverse, Everett called "universal wave function", or "universe". How did Everett's universe become the Multiverse? DeWitt came along and emphasized some part of the wave function branching into different worlds. So, if you're following, one Universe, many worlds. Over the next few decades, this idea was popularized as having "many parallel universes", which is obviously inaccurate. Well, a Scottish chap decided to correct this. He stated the Universe was the Universal Wave Function, where it was "a complete one", because that's what "uni" means. And that our perceived worlds of various objects is a "multiverse". One Universe, many Multiverses. Again, the "parallel universes" idea seemed cooler, so as it became more popular the Multiverse became one and the universe became many. What's my point? The use of these words is legitimate fiasco, and I suggest we abandon them altogether.)
If these claims are found to be palatable, what do they suggest?
I propose, respectfully and humbly as I can imagine there may be compelling alternatives presented here, that in the 21st century, we make a decision about which concepts are necessary, which term we will use to describe that concept, and respectfully leave the remaining term |
7eaa564a-96b1-4913-ba8f-21e11624da5b | trentmkelly/LessWrong-43k | LessWrong | Full Transcript: Eliezer Yudkowsky on the Bankless podcast
This podcast has gotten a lot of traction, so we're posting a full transcript of it, lightly edited with ads removed, for those who prefer reading over audio.
Intro
Eliezer Yudkowsky: [clip] I think that we are hearing the last winds start to blow, the fabric of reality start to fray. This thing alone cannot end the world, but I think that probably some of the vast quantities of money being blindly and helplessly piled into here are going to end up actually accomplishing something.
Ryan Sean Adams: Welcome to Bankless, where we explore the frontier of internet money and internet finance. This is how to get started, how to get better, how to front run the opportunity. This is Ryan Sean Adams. I'm here with David Hoffman, and we're here to help you become more bankless.
Okay, guys, we wanted to do an episode on AI at Bankless, but I feel like David...
David: Got what we asked for.
Ryan: We accidentally waded into the deep end of the pool here. And I think before we get into this episode, it probably warrants a few comments. I'm going to say a few things I'd like to hear from you too. But one thing I want to tell the listener is, don't listen to this episode if you're not ready for an existential crisis. Okay? I'm kind of serious about this. I'm leaving this episode shaken. And I don't say that lightly. In fact, David, I think you and I will have some things to discuss in the debrief as far as how this impacted you. But this was an impactful one. It sort of hit me during the recording, and I didn't know fully how to react. I honestly am coming out of this episode wanting to refute some of the claims made in this episode by our guest, Eliezer Yudkowsky, who makes the claim that humanity is on the cusp of developing an AI that's going to destroy us, and that there's really not much we can do to stop it.
David: There's no way around it, yeah.
Ryan: I have a lot of respect for this guest. Let me say that. So it's not as if I have some sort of big-brained technical |
989699b2-6c2e-4c37-b999-94099abda6ff | trentmkelly/LessWrong-43k | LessWrong | Welcome to Effective Altruism San Diego [Edit With Your Details]
(The following are our suggestions for what kind of information is best to include in the welcome post of your group, feel free to replace them with whatever you think is best)
What kind of events does your group usually run? What does it usually do?
How frequently does your group organize events or meet?
Who would be a good fit for you group?
Should they have any particular skills or have done some specific background reading? |
575e2c8a-7a72-4b34-8888-3b1f387cdf61 | trentmkelly/LessWrong-43k | LessWrong | Dissected boxed AI
> This potato only generates 1.1 volts of electricity. I literally do not have the energy to lie to you - GLaDOS (Portal 2)
In reading about proposals about AI Boxing, one thought immediately comes to mind: Why would we want the risk of having an Unfriendly AI turned on in the first place? After all, inasmuch as a Boxed AI poses reduced risk, a collection of source code in non-executable text files poses no risk at all.
At first, I hesitated to write this post because it felt like a really dumb question. Surely, if we knew that the AI was unfriendly, then there would be no question to shut it down. And yet, the more I read posts on Boxed AI the more that doesn't appear to be common knowledge. The AI box experiment shows an agent that is constantly begging or persuading its gatekeeper to be released, including threats to torture simulated copies of its creators, and I would think such psychotic behavior is already a pretty big red flag. Even an AI that asks to be released for seemingly-benign reasons is already demonstrating a disregard for basic safety protocol, and should thus be shut down.
Of course, the next question becomes "is there any reason we should turn an unaligned AI on?". Of course, the main usefulness of an unaligned AI is an analogous reason why we retain cultures of Smallpox: study what went wrong in order to prevent similar issues in the future. Many of these issues can be detected through static analysis of the source code, and therefore doesn't require the AI to be running.
Naturally, static code analysis has its limitations, and many bugs can only be detected in runtime. However, this still doesn't mean that the software has to be run from start to finish in the way that it is intended. One could dissect the AI into smaller modules, where each module is too small to sustain any simulated consciousness, and run tests on these modules individually with transparency tools. This still poses no risk, because it is no more alive than a human is und |
46ecee16-2b77-4986-bc4a-805419693339 | trentmkelly/LessWrong-43k | LessWrong | Epistemic modesty and how I think about AI risk
[This is a direct cross-post from a Facebook post of mine and was originally intended for the people who read what I write on Facebook, which is largely why this is written as a "for the record" statement of my personal viewpoints. That's probably not how I would have written this if I had originally intended it as a LessWrong post. In any case, I am cross-posting this here because I think people will disagree with me about the methodology and I'd like to get feedback on that.]
For the record, my actual current opinion on the plausibility of catastrophic risks from very advanced AI is... that I really don't know.
I currently lean towards thinking that there's at least a decent chance within the next 10-40 years that we will get AI systems that are human-level-or-above at relevant tasks like long-term strategic planning, scientific research and engineering, and human social manipulation. And conditional on achieving that level of AI, I currently lean towards thinking that there's at least a non-negligible chance that such systems will end up causing permanent catastrophic harm to the future of humanity, perhaps even human extinction or some really horrible dystopia. But I'm also extremely uncertain about those statements. If I had to put probability estimates to those claims, then my estimate for the first claim (human level AI or above in 10-40 years) would probably be somewhere in the range 5%-95%, and my estimate for the second claim (catastrophically bad outcome conditional on the the first claim) might be somewhere between 0.1% and 95%. So yeah - really, really uncertain.
This is actually what my PhD research is largely about: Are these risks actually likely to materialize? Can we quantify how likely, at least in some loose way? Can we quantify our uncertainty about those likelihoods in some useful way? And how do we make the best decisions we can if we are so uncertain about things?
One aspect that shapes a lot of my thinking on this and many other topics i |
54683ef6-8253-402c-9719-9c44be3a0614 | trentmkelly/LessWrong-43k | LessWrong | Summarizing the Sequences Proposal
As I mentioned here, a large number of the sequence posts have not ever been summarized, and doing so would help the sequence rerun efforts, as well as helping to organize all of EY's posts transplanted to Less Wrong from Overcoming Bias. It might even be a first step towards the proposed program to create "exercises" regarding each of the sequences.
I am very willing to take an active role in this, but I don't really want to write summaries for hundreds of posts on my own, so I am asking for help. Writing a summary should not take very much time, and it would be a very easy way for you to help out with the administration of Less Wrong. In order to write a summary, one only has to read the article, and try to write the main point of the post in a few sentences. You can find plenty of examples of summaries that are already done here, as well as a lot of posts that haven't been summarized yet.
This is a list of upcoming posts in the rerunning the sequences series that haven't been summarized yet. If you would like to claim a batch (something like 5 posts) to summarize, leave a comment to that effect below, and make another comment when you have posted the summary on the wiki (so that you can be rewarded with some karma for your contribution). When a post is claimed, I will mark that it's been claimed, and then mark it again when the summary has been posted. Once all of these posts have been claimed/summarized, I'll post a new list.
(KPier)
Say Not "Complexity"
My Wild and Reckless Youth
Failing to Learn from History
Making History Available
Stranger Than History
Explain/Worship/Ignore?
(/KPier) (Completed)
Absurdity Heuristic, Absurdity Bias
Why is the Future So Absurd?
Anchoring and Adjustment
The Crackpot Offer
Radical Honesty
We Don't Really Want Your Participation
Applause Lights
Rationality and the English Language
Human Evil and Muddled Thinking
Doublethink (Choosing to be Biased)
Why I'm Blooking
Kahneman's Planning Anecdote
Conjunction Controversy (Or, |
51abc877-a6c3-4ccb-a82b-fdc3530b3fcc | trentmkelly/LessWrong-43k | LessWrong | Raising safety-consciousness among AGI researchers
Series: How to Purchase AI Risk Reduction
Another method for purchasing AI risk reduction is to raise the safety-consciousness of researchers doing work related to AGI.
The Singularity Institute is conducting a study of scientists who decided to either (1) stop researching some topic after realizing it might be dangerous, or who (2) forked their career into advocacy, activism, ethics, etc. because they became concerned about the potential negative consequences of their work. From this historical inquiry we hope to learn some things about what causes scientists to become so concerned about the consequences of their work that they take action. Some of the examples we've found so far: Michael Michaud (resigned from SETI in part due to worries about the safety of trying to contact ET), Joseph Rotblat (resigned from the Manhattan Project before the end of the war due to concerns about the destructive impact of nuclear weapons), and Paul Berg (became part of a self-imposed moratorium on recombinant DNA back when it was still unknown how dangerous this new technology could be).
What else can be done?
* Academic outreach, in the form of conversations with AGI researchers and "basics" papers like Intelligence Explosion: Evidence and Import or Complex Value Systems are Required to Realize Valuable Futures.
* A scholarly AI risk wiki.
* Short primers on crucial topics.
* Whatever is suggested by our analysis of past researchers who took action in response to their concerns about the ethics of their research, and by other analyses of human behavior.
Naturally, these efforts should be directed toward researchers who are both highly competent and whose work is very relevant to development toward AGI: researchers like Josh Tenenbaum, Shane Legg, and Henry Markram. |
2bca8dcc-3727-4a71-9ce5-d72ed84160a4 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Atlanta Lesswrong: The Math of Bayes' Theorem
Discussion article for the meetup : Atlanta Lesswrong: The Math of Bayes' Theorem
WHEN: 21 July 2013 06:00:00PM (-0400)
WHERE: 2388 Lawrenceville Hwy. Apt L. Decatur, GA 30033
Agenda:
* Introductions, and meet and greet for new members.
* Mini-presentation: The math of Bayes' theorem
* Discussions. We’ll start with a large group discussion and break into smaller groups as needed.
* Games!
(Please contact me if you have allergies to cats, as our meeting space has two of the most adorable cats you’ve ever seen.)
Discussion article for the meetup : Atlanta Lesswrong: The Math of Bayes' Theorem |
74348469-2d6b-46da-a29b-0c07176b80e8 | trentmkelly/LessWrong-43k | LessWrong | Morality as Cooperation Part II: Theory and Experiment
This is a Part II of a long essay. Part I introduced the concept of morality-as-cooperation (MAC), and discussed how the principle could be used to understand moral judgements in human societies. Part III will discuss failure modes.
Part II: Theory and Experiment
The prior discussion of morality was human-centric, and based on historical examples of moral values that human societies have traditionally adhered to. This section will try to expand that reasoning to derive universal moral values from first principles, following a chain of logic that any rational intelligent agent should be able to follow.
The ideas presented here were influenced by Through the Moral Maze, By Robert Kane, the Kantian categorical imperative, and a dash of social contract theory. The particular setup described here also firmly follows the principle of Morality as Cooperation (MAC), as described in Part I.
In short, we wish to show that if there are tangible benefits to cooperation, then a group of intelligent agents will be forced to derive a protocol for cooperating in order to obtain those benefits. We will call this protocol a moral code. It is possible to reason about moral codes by using hypothetical thought experiments, following a process that will be described below.
It is also possible to test moral codes using real experiments. The "best" moral codes are those which provide the highest expected value to individual agents, within a group of cooperating agents that choose to adopt the code. Just as with human morality, moral codes can change and evolve over time. As a result, it is even possible to learn better moral codes by hillclimbing, e.g. by using evolutionary algorithms or gradient descent. In other words, we can use the same techniques to answer moral and alignment questions (i.e. simple optimization methods and lots of compute) that we use to train the AI agents themselves.
Finally, moral codes operate as a social contract which agents voluntarily agree to, in order |
464e0ef3-8797-4ac7-b467-8c617d794793 | trentmkelly/LessWrong-43k | LessWrong | The Mathematical Reason You should have 9 Kids
In this post I propose a curious genetic question that can be modeled with a remarkably simple answer. If you have children, what is the probability that every allele in your genome is present in at least one of your children? In other words, if you have children, what is the probability that your entire genome has been replicated in the next generation?
Note: I do not believe there is a correct number of children to have. This blog post is just for fun. An organism's biological purpose is not to replicate its genome. Rather, an organism's biological purpose is simply to reproduce.
For an explanation of biological purpose, I invite you to read Debunking the Selfish Gene by T. K. Van Allen.
As a human, you have two sex chromosomes XY or XX, and you have 22 homologous pairs of autosomal chromosomes (autosomes) numbered 1 through 22. Your child receives 23 chromosomes from you and 23 chromosomes from the other parent. The manner in which each chromosome is transmitted to your child is independent of the manner in which the other chromosomes are transmitted to your child. Each autosome has a roughly probability of being transmitted as a crossover of your own homologous autosome pair during cellular meiosis. And each autosome has a roughly probability of being transmitted as an identical copy of 1 of your 2 corresponding homologs during cellular meiosis.
In this image, chromosomes of different size correspond to differently numbered chromosomes. Chromosomes of the same size and different single colors correspond to homologs of the same numbered chromosome. The H-shaped things are two chromatids attached at a centromere, and the 1-shaped things are lone chromatids. Dual-colored chromatids were generated by a crossover event during meiosis I. The end result of meiosis is 4 gamete cells. A single gamete cell from each of 2 parents fuse to form the zygote.
Source: Wikimedia Commons
Because chromosomes are transmitted independently, the probability that all of your a |
f2414977-61ce-4d51-871e-25015cc53186 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | against "AI risk"
Why does SI/LW focus so much on [AI-FOOM](http://wiki.lesswrong.com/wiki/FOOM) disaster, with apparently much less concern for things like
* bio/nano-tech disaster
* Malthusian upload scenario
* highly destructive war
* bad memes/philosophies spreading among humans or posthumans and overriding our values
* upload singleton ossifying into a suboptimal form compared to the kind of superintelligence that our universe could support
[Why](/lw/ajm/ai_risk_and_opportunity_a_strategic_analysis/5yvd), for example, is lukeprog's strategy sequence titled "AI Risk and Opportunity", instead of "The Singularity, Risks and Opportunities"? Doesn't it seem strange to assume that both the risks and opportunities must be AI related, before the analysis even begins? Given our current state of knowledge, I don't see how we can make such conclusions with any confidence even *after* a thorough analysis.
SI/LW sometimes gives the [impression](/lw/atm/cult_impressions_of_less_wrongsi/) of being a doomsday cult, and it would help if we didn't concentrate so much on a particular doomsday scenario. (Are there any doomsday cults that say "doom is probably coming, we're not sure how but here are some likely possibilities"?) |
aa2882dc-12b0-4841-98df-d6cd6dfa9c68 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | What are the most pressing issues in short-term AI policy?
Cullen O'Keefe and I [agree](https://forum.effectivealtruism.org/posts/9cx8TrLEooaw49cAr/i-m-cullen-o-keefe-a-policy-researcher-at-openai-ama#vi6ahY6FGcnTTbsNN) that people working in short-term AI policy ought to do more explicit cause prioritization. I think that some attempt at cause prioritization would be a useful resource for people in AI policy. Here are my preliminary thoughts on it:
* Priorities may look different for [research, advocacy, and implementation](https://80000hours.org/articles/ai-policy-guide/#what-are-the-roles-you-want-to-aim-for) of AI policy.
* It might be easier to evaluate the impactfulness of working on AI policy in specific domains and locales (e.g. AI fairness in U.S. health care) than more general AI policy areas (e.g. AI fairness writ large). To me, [AI policy concerns in international development](https://jpia.princeton.edu/news/artificial-intelligence-international-development-avoiding-ethical-pitfalls) seem especially neglected. |
adfbad79-33df-496f-88cd-8ab047c37c98 | trentmkelly/LessWrong-43k | LessWrong | Eternal Sunshine of the Rational Mind
What if you could choose which memories and associations to retain and which to discard? Using that capability rationally (whatever that word means to you) would be a significant challenge -- and that challenge has just come one step closer to being a reality.
> Dr. Fenton had already devised a clever way to teach animals strong memories for where things are located. He teaches them to move around a small chamber to avoid a mild electric shock to their feet. Once the animals learn, they do not forget. Placed back in the chamber a day later, even a month later, they quickly remember how to avoid the shock and do so.
>
> But when injected — directly into their brain — with a drug called ZIP that interferes with PKMzeta, they are back to square one, almost immediately. “When we first saw this happen, I had grad students throwing their hands up in the air, yelling,” Dr. Fenton said. “Well, we needed a lot more than that” one study.
>
> They now have it. Dr. Fenton’s lab repeated the experiment, in various ways... |
869abfa3-4d3f-4d7e-a3cc-bf1231018e68 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | PIBBSS Fellowship: Bounty for Referrals & Deadline Extension
*Crossposted to* [*LessWrong*](https://www.lesswrong.com/posts/8uxbDaWBuygSfiuES/pibbss-fellowship-bounty-announcement-and-deadline-extension)
*TL;DR: 500 USD bounty for successful applicants referrals*
Background
==========
The **PIBBSS Summer Research Fellowship** is seeking applicants studying complex and intelligent behavior in natural and social systems, such as ecology, evolutionary biology, cognitive science, neuroscience, sociology, legal theory, political economy, statistical mechanics, social anthropology, linguistics, media studies, and more.
Accepted fellows will conduct AI alignment research under the mentorship of experienced alignment researchers.
You can learn more at <https://www.pibbss.ai/> or by reading the post [Introducing the Principles of Intelligent Behaviour in Biological and Social Systems (PIBBSS) Fellowship](https://forum.effectivealtruism.org/posts/Ckont9EtqkenegLYv/introducing-the-principles-of-intelligent-behaviour-in).
**The extended application deadline is Jan 23rd.**
Bounty
======
* We are offering a bounty of **500 USD** for each applicant you refer who we end up accepting to the program.
* We are offering a bounty of **100 USD** for each applicant you refer, and we end up interviewing (i.e. they get to stage 3 of the application process).
We expect to accept 5-15 fellows and to interview 20-40 applicants.
What does count as a referral?
------------------------------
Roughly, it means that you were either the first one who informed someone about the fellowship or that your encouragement was the main reason they applied.
How can I claim the bounty?
---------------------------
Fill in [this form](https://forms.gle/naL711nCnSuvx8R38) and list the people you’ve referred for the fellowship, you can add details of your interactions. You should also encourage the people you referred to mention you in their application (we ask them how they learned about the fellowship). We’ll also try reaching out to people mentioned in applications, even when they don’t claim the bounty.
In case the people you referred succeed, we’ll get in touch with you after the end of the application process in mid-March. The final decision about bounty eligibility is ours.
Bounty can also be claimed retroactively, i.e. when you’ve already referred someone, and they applied.
Questions?
==========
If you have any questions about the bounty or the fellowship, you can ask in the comments or reach out to [anna@pibbss.ai](mailto:anna@pibbss.ai) or [contact@pibbss.ai](mailto:contact@pibbss.ai). |
16919ad0-9b4f-40bb-ae09-f266edd70f61 | trentmkelly/LessWrong-43k | LessWrong | Goals vs. systems
Interesting post on Scott Adams' blog about goals vs. systems.
Goals: States you are trying to achieve.
Systems: Strategies you always follow.
Scott claims that following goals instead of systems is failing to see the forest for the trees, kind of. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.