b4b3cd21-aeaa-4034-ae34-1f5051637140
trentmkelly/LessWrong-43k
LessWrong
Consulting Opportunity for Budding Econometricians As a part of my job, I recently created an econometric model. My boss wants someone to look over the math before it's submitted internally throughout the company. We have a modest amount of money set aside for someone to audit the process. The model is an ARMA(2,1) with seasonality, trend, and a dummy variable. There's no heteroscedasticity or serial correlation, but the Ramsey RESET test suggests a different model might fit better. I currently have the data in an EViews file, so you'd need to do zero data entry. There's a small chance this will be used in court, but none of the liability will be transferred to you. There should be an emphasis placed on parsimony. You'd have to sign a confidentiality agreement. If you're qualified to review this/suggest a marginally better model, then this would be an easy way for you to make bank in a couple hours' time. If it goes well, there might be more work like this in the future. Let me know if you're interested.
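For readers who haven't met this model family, the structure being described can be made concrete by simulating it. Everything below is illustrative: the coefficients, seasonal period, and break point are assumptions of mine, not values from the actual model under review.

```python
import math
import random

random.seed(42)

def simulate_series(n=120, period=12, break_at=60):
    """Simulate y_t = level + trend + seasonality + dummy shift + ARMA(2,1) noise.

    All coefficients here are illustrative, not the model being audited.
    """
    phi1, phi2 = 0.5, -0.2   # AR(2) coefficients
    theta1 = 0.3             # MA(1) coefficient
    eps_prev = 0.0
    u = [0.0, 0.0]           # last two noise values feed the AR part
    series = []
    for t in range(n):
        eps = random.gauss(0, 1)
        noise = phi1 * u[-1] + phi2 * u[-2] + eps + theta1 * eps_prev
        trend = 0.05 * t
        seasonal = 2.0 * math.sin(2 * math.pi * t / period)
        dummy = 1.5 if t >= break_at else 0.0   # structural-break dummy
        series.append(10.0 + trend + seasonal + dummy + noise)
        u.append(noise)
        eps_prev = eps
    return series
```

In practice one would estimate rather than simulate such a model (e.g. with a SARIMAX-style routine in a statistics package), but the simulation shows what each named term contributes.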
f59eb688-55f2-4c39-aa1b-eeb7d6d9cb48
trentmkelly/LessWrong-43k
LessWrong
Something about the Pinker Cancellation seems Suspicious Something about the recent attempt to cancel Steve Pinker seems really off. The problem is that the argument is suspiciously bad. The open letter presents only six real pieces of evidence, and they're all really, trivially weak. The left isn't incompetent when it comes to tallying up crimes for a show trial. In fact, they're pretty good at it. But for some reason this letter has only the weakest of attacks, and what's more, it stops at only six "relevant occasions". For comparison, take a look at this similar attack on Stephen Hsu which has, to put it mildly, more than six pieces of evidence. There is plenty of reasonable criticism of Pinker out there. Why didn't they use any of it? Pinker has been a public figure for decades. Surely he has said something stupid and offensive at least once during that time. If not something honestly offensive, perhaps a slip of the tongue. If not a slip of the tongue, maybe something that sounds really terrible out of context. We know that the authors of the piece are not above misrepresenting the evidence or taking statements out of context, because they do so multiple times in their letter. It's clear that they spent a lot of effort stretching the evidence to make Pinker look as bad as possible. Why didn't they spend that effort finding more damning evidence, things that look worse when taken out of context? Just as one example, this debate could easily be mined for quotes that sound sexist or racist to a moderately progressive reader. How about, "in all cultures men and women are seen as having different natures." Even when they do have better ammunition, they seem to downplay it. The most egregious statement they include from Pinker is hidden in a footnote! They also pick a very strange target. Attacking Pinker's status as an LSA fellow and a media expert doesn't pose that much of a threat to him; he just doesn't have that much to lose here.
Why are they bringing this to the LSA rather than to Pinker's publisher? Why are
7fe17bd8-0bff-49bf-9a26-71f8dbbcc7cd
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Intuitions about goal-directed behavior One broad argument for AI risk is the Misspecified Goal argument: > **The Misspecified Goal Argument for AI Risk:** Very intelligent AI systems will be able to make long-term plans in order to achieve their goals, and if their goals are even slightly misspecified then the AI system will become adversarial and work against us. My main goal in this post is to make conceptual clarifications and suggest how they affect the Misspecified Goal argument, without making any recommendations about what we should actually do. Future posts will argue more directly for a particular position. As a result, I will not be considering other arguments for focusing on AI risk even though I find some of them more compelling. I think of this as a concern about *long-term goal-directed behavior*. Unfortunately, it’s not clear how to categorize behavior as goal-directed vs. not. Intuitively, any agent that searches over actions and chooses the one that best achieves some measure of “goodness” is goal-directed (though there are exceptions, such as the agent that selects actions that begin with the letter “A”). (ETA: I also think that agents that show goal-directed behavior because they are looking at some other agent are not goal-directed themselves -- see this [comment](https://www.alignmentforum.org/posts/9zpT9dikrrebdq3Jf/will-humans-build-goal-directed-agents#FdQsD6Q78SZQeXa64).) However, this is not a necessary condition: many humans are goal-directed, but there is no goal baked into the brain that they are using to choose actions. This is related to the concept of [optimization](https://www.lesswrong.com/posts/D7EcMhL26zFNbJ3ED/optimization), though with intuitions around optimization we typically assume that we know the agent’s preference ordering, which I don’t want to assume here. (In fact, I don’t want to assume that the agent even *has* a preference ordering.) 
One potential formalization is to say that goal-directed behavior is any behavior that can be modelled as maximizing expected utility for some utility function; in the next post I will argue that this does not properly capture the behaviors we are worried about. In this post I’ll give some intuitions about what “goal-directed behavior” means, and how these intuitions relate to the Misspecified Goal argument. Generalization to novel circumstances ===================================== Consider two possible agents for playing some game, let’s say TicTacToe. The first agent looks at the state and the rules of the game, and uses the [minimax algorithm](https://en.wikipedia.org/wiki/Minimax#Minimax_algorithm_with_alternate_moves) to find the optimal move to play. The second agent has a giant lookup table that tells it what move to play given any state. Intuitively, the first one is more “agentic” or “goal-driven”, while the second one is not. But both of these agents play the game in exactly the same way! The difference is in how the two agents *generalize to new situations*. Let’s suppose that we suddenly change the rules of TicTacToe -- perhaps now the win condition is reversed, so that anyone who gets three in a row loses. The minimax agent is still going to be optimal at this game, whereas the lookup-table agent will lose against any opponent with half a brain. The minimax agent looks like it is “trying to win”, while the lookup-table agent does not. (You could say that the lookup-table agent is “trying to take actions according to <policy>”, but this is a weird complicated goal so maybe it doesn’t count.) In general, when we say that an agent is pursuing some goal, this is meant to allow us to predict how the agent will generalize to some novel circumstance. This sort of reasoning is critical for the Goal-Directed argument for AI risk. 
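The contrast between the two agents can be sketched directly; this is my own minimal illustration, not code from the post. The minimax player recomputes the optimal move from the rules of the game, while the lookup-table player can only replay memorized answers:

```python
# A 3x3 board is stored as a 9-character string of 'X', 'O', or ' '.
# WIN_LINES enumerates the eight three-in-a-row index triples.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: 'X' maximizes, 'O' minimizes."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, s in enumerate(board) if s == ' ']
    if not moves:
        return 0, None  # draw
    results = []
    for m in moves:
        nxt = board[:m] + player + board[m + 1:]
        score, _ = minimax(nxt, 'O' if player == 'X' else 'X')
        results.append((score, m))
    return max(results) if player == 'X' else min(results)

# The lookup-table agent plays identically -- but only on positions
# someone bothered to memorize for it:
lookup_table = {"XX OO    ": 2}  # a tiny fragment of the full table
```

If the win condition is reversed, negating the scores in `minimax` immediately yields optimal play for the new game, while `lookup_table` keeps recommending what is now a losing move.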
For example, we worry that an AI agent will prevent us from turning it off, because that would prevent it from achieving its goal: “You can't fetch the coffee if you're dead.” This is a prediction about what an AI agent would do in the novel circumstance where a human is trying to turn the agent off. This suggests a way to characterize these sorts of goal-directed agents: there is some goal such that the agent’s behavior *in new circumstances* can be predicted by figuring out which behavior best achieves the goal. There's a lot of complexity in the space of goals we consider: something like "human well-being" should count, but "the particular policy <x>" and “pick actions that start with the letter A” should not. When I use the word goal I mean to include only the first kind, even though I currently don’t know theoretically how to distinguish between the various cases. Note that this is in stark contrast to existing AI systems, which are particularly bad at generalizing to new situations. ![](https://res.cloudinary.com/dq3pms5lt/image/upload/q_auto/v1543642073/meme_pued2q.png) Honestly, I’m surprised it’s only 90%. [1] Empowerment =========== We could also look at whether or not the agent acquires more power and resources. It seems likely that an agent that is optimizing for some goal over the long term would want more power and resources in order to more easily achieve that goal. In addition, the agent would probably try to improve its own algorithms in order to become more intelligent. This feels like a *consequence* of goal-directed behavior, and not its defining characteristic, because it is about being able to achieve a *wide variety* of goals, instead of a particular one. Nonetheless, it seems crucial to the broad argument for AI risk presented above, since an AI system will probably need to first accumulate power, resources, intelligence, etc. in order to cause catastrophic outcomes. 
I find this concept most useful when thinking about the problem of inner optimizers, where in the course of optimization through a rich space you stumble across a member of the space that is itself doing optimization, but for a related but still misspecified metric. Since the inner optimizer is being “controlled” by the outer optimization process, it is probably not going to cause major harm unless it is able to “take over” the outer optimization process, which sounds a lot like accumulating power. (This discussion is extremely imprecise and vague; see [Risks from Learned Optimization](https://arxiv.org/abs/1906.01820) for a more thorough discussion.) Our understanding of the behavior ================================= There is a general pattern in which as soon as we understand something, it becomes something lesser. As soon as we understand rainbows, they are relegated to the [“dull catalogue of common things”](https://www.lesswrong.com/posts/x4dG4GhpZH2hgz59x/joy-in-the-merely-real). This suggests a somewhat cynical explanation of our concept of “intelligence”: an agent is considered intelligent if we do not know how to achieve the outcomes it does using the resources that it has (in which case our best model for that agent may be that it is pursuing some goal, reflecting our tendency to anthropomorphize). That is, our evaluation about intelligence is a statement about our epistemic state. Some examples that follow this pattern are: * As soon as we understand how some AI technique solves a challenging problem, it is [no longer considered AI](https://www.zdnet.com/article/ai-tends-to-lose-its-definition-once-it-becomes-commonplace-sas/). Before we’ve solved the problem, we imagine that we need some sort of “intelligence” that is pointed towards the goal and solves it: the only method we have of predicting what this AI system will do is to think about what a system that tries to achieve the goal would do. 
Once we understand how the AI technique works, we have more insight into what it is doing and can make more detailed predictions about where it will work well, where it tends to make mistakes, etc. and so it no longer seems like “intelligence”. Once you know that OpenAI Five is trained by self-play, you can predict that the agents haven’t seen certain behaviors, like standing still to turn invisible, and probably won’t handle them well. * Before we understood the idea of natural selection and evolution, we would look at the complexity of nature and ascribe it to intelligent design; once we had the [mathematics](https://en.wikipedia.org/wiki/Price_equation) (and even just the qualitative insight), we could make much more detailed predictions, and nature no longer seemed like it required intelligence. For example, we can predict the timescales on which we can expect evolutionary changes, which we couldn’t do if we just modeled evolution as optimizing reproductive fitness. * Many phenomena (eg. rain, wind) that we now have scientific explanations for were previously explained to be the result of some anthropomorphic deity. * When someone performs a feat of mental math, or can tell you instantly what day of the week a random date falls on, you might be impressed and think them very intelligent. But if they explain to you [how they did it](http://mathforum.org/dr.math/faq/faq.calendar.html), you may find it much less impressive. (Though of course these feats are selected to seem more impressive than they are.) Note that an alternative hypothesis is that humans equate intelligence with mystery; as we learn more and remove mystery around eg. evolution, we automatically think of it as less intelligent. To the extent that the Misspecified Goal argument relies on this intuition, the argument feels a lot weaker to me.
If the Misspecified Goal argument rested entirely upon this intuition, then it would be asserting that *because* we are ignorant about what an intelligent agent would do, we should assume that it is optimizing a goal, which means that it is going to accumulate power and resources and lead to catastrophe. In other words, it is arguing that assuming that an agent is intelligent *definitionally* means that it will accumulate power and resources. This seems clearly wrong; it is possible in principle to have an intelligent agent that nonetheless does not accumulate power and resources. Also, the argument is *not* saying that *in practice* most intelligent agents accumulate power and resources. It says that we have no better model to go off of other than “goal-directed”, and then pushes this model to extreme scenarios where we should have a lot more uncertainty. To be clear, I do *not* think that anyone would endorse the argument as stated. I am suggesting as a possibility that the Misspecified Goal argument relies on us incorrectly equating superintelligence with “pursuing a goal” because we use “pursuing a goal” as a default model for anything that can do interesting things, even if that is not the best model to be using. Summary ======= Intuitively, goal-directed behavior can lead to catastrophic outcomes with a sufficiently intelligent agent, because the optimal behavior for even a slightly misspecified goal can be very bad according to the true goal. However, it’s not clear exactly what we mean by goal-directed behavior. Often, an algorithm that searches over possible actions and chooses the one with the highest “goodness” will be goal-directed, but this is neither necessary nor sufficient. “From the outside”, it seems like a goal-directed agent is characterized by the fact that we can predict the agent’s behavior in new situations by assuming that it is pursuing some goal, and as a result it acquires power and resources.
This can be interpreted either as a statement about our epistemic state (we know so little about the agent that our best model is that it pursues a goal, even though this model is not very accurate or precise) or as a statement about the agent (predicting the behavior of the agent in new situations based on pursuit of a goal actually has very high precision and accuracy). These two views have very different implications on the validity of the Misspecified Goal argument for AI risk. --- [1] This is an entirely made-up number.
5c612215-e56a-4d94-8a2e-8b48f2245f8f
trentmkelly/LessWrong-43k
LessWrong
Are healthy choices effective for improving life expectancy anymore? The usual way to improve your life expectancy is through a healthy lifestyle of some sort, especially during peace time. However, in many plausible futures, personal health now seems to have little effect: * We create aligned AGI: aligned AGI could presumably solve any health problems we accumulated due to our lifestyle choices. * We create unaligned AGI: unaligned AGI would definitely solve any health problems we accumulated due to our lifestyle choices. * Humans solve aging without AGI: health problems we accumulated probably don't matter in this scenario. * Some other existential crisis occurs. Whereas the scenarios where health choices do affect longevity seem slim: * Humans do not invent AGI and do not solve aging themselves and no existential catastrophes occur in your lifetime * No existential catastrophes occur, we/an AI do find a cure for aging, but medical regulations prevent the cure from being distributed widely (please no 🙏). One important consideration to keep in mind though is that things that extend our life expectancy usually have short term health benefits as well. For example, sleep, diet, and exercise have a massive effect on energy levels. But if you're only optimizing short-term health, does the optimal lifestyle look different?
f08dfc8b-1bf7-4875-825c-b1ab31e17b9f
trentmkelly/LessWrong-43k
LessWrong
Building rationalist communities: lessons from the Latter-day Saints Or: How I Learned Everything I Know About Group Organization By Spending Two Years on a Mormon Mission in India. The official name is the Church of Jesus Christ of Latter-day Saints. You may know us as ‘Mormons.’ We like to call ourselves ‘Latter-day Saints.’ If you’re a Less Wrongian and trying to organize a rationalist community, you should be interested in the Latter-day Saint organizational model for four reasons: - it’s a nonprofit, but franchise-based and designed to propagate itself, - everyone has a responsibility, - no one is paid, and - it works. This is an introductory post. I'm not trying to persuade you to join, but rather that there’s something to learn here. Here, I will give you some basic details about what the LDS Church is. In later posts, I will explain more how it works. A series overview is here. A franchise model The Church has about 55,000 missionaries worldwide, all of whom follow the same basic dress code and go about in pairs, basically recruiting people to join the organization. For men, white shirt and ‘conservative’ tie, suit jacket if it’s cold. Clean-shaven. No chewing gum in public. Short hair. And so forth. Church buildings are selected from a basic set of designs. Each congregation unit has about 150 people each week at Sunday services. The internal organization is the same for each congregation, albeit with procedures for simplification in smaller units. Everyone has a responsibility, from the congregation head down to the teenage boys who prepare and serve the ‘sacrament.’[1] And nobody is paid. Everyone has a responsibility The Church is an organization, but members also comprise a distinct culture. Within the culture, there is an expectation that church members accept a ‘calling’ or specific unpaid organizational responsibility. Callings are assigned by the head of the local congregation.
You can privately decline, but there is an expectation to accept the responsibility. Examples inc
01f8bb58-eb72-4372-ba32-ecbc5dc7e384
trentmkelly/LessWrong-43k
LessWrong
On the fragility of values Programming human values into an AI is often taken to be very hard because values are complex (no argument there) and fragile. I would agree that values are fragile in the construction; anything lost in the definition might doom us all. But once coded into a utility function, they are reasonably robust. As a toy model, let's say the friendly utility function U has a hundred valuable components - friendship, love, autonomy, etc... - assumed to have positive numeric values. Then to ensure that we don't lose any of these, U is defined as the minimum of all those hundred components. Now define V as U, except we forgot the autonomy term. This will result in a terrible world, without autonomy or independence, and there will be wailing and gnashing of teeth (or there would, except the AI won't let us do that). Values are indeed fragile in the definition. However... A world in which V is maximised is a terrible world from the perspective of U as well. U will likely be zero in that world, as the V-maximising entity never bothers to move autonomy above zero. So in utility function space, V and U are actually quite far apart. Indeed we can add any small, bounded utility W to U. Assume W is bounded between zero and one; then an AI that maximises W+U will never be more than one expected 'utiliton' away, according to U, from one that maximises U. So - assuming that one 'utiliton' is small change for U - a world run by a W+U maximiser will be good. So once they're fully spelled out inside utility space, values are reasonably robust, it's in their initial definition that they're fragile.
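The toy model can be written out directly. In this sketch a three-component world stands in for the hundred components (an illustrative simplification of mine, not from the post):

```python
def U(world):
    """Friendly utility: the minimum over all valued components."""
    return min(world.values())

def V(world):
    """Like U, except the autonomy term was forgotten."""
    return min(v for k, v in world.items() if k != "autonomy")

# A world a V-maximiser might settle on: every component V tracks is
# high, but autonomy was never pushed above zero.
v_world = {"friendship": 9.0, "love": 9.0, "autonomy": 0.0}
# V(v_world) is high (looks great to the V-maximiser), while
# U(v_world) is zero: in utility-function space, V and U are far apart.
```

The min construction is what makes U robust to adding a bounded W: whatever W rewards, a W+U maximiser can only sacrifice at most one utiliton of the minimum component before the trade stops paying.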
8f86d991-60f4-44b0-a51d-5db8148c608a
trentmkelly/LessWrong-43k
LessWrong
A Girardian interpretation of the Altman affair, it's on my to-do list Cross-posted from New Savanna. Tyler Cowen posted the following tweet over at Marginal Revolution under the title "Solving for the Equilibrium": I posted this comment: > Earlier in the week Scott Alexander had posted a skeptical review of a Girard book and I commented that, though I'm a Girard skeptic, albeit a somewhat interested one, Tyler regarded him as one of the great 20th century thinkers. In the course of introducing today's Open Thread, Alexander notes: "I would love to know more about Tyler’s interpretation of Girard and the single-victim process. Maybe in the context of recent events?" While we've got lots of recent events to choose from – I'm thinking of the Israel/Palestine mess (ancient Israel, after all, is central to Girard's thinking on this matter) – I suspect the single-victim prompt points to the OpenAI upheaval. > > Indeed, my interpretive Spidey sense suggests that a Girardian reading might be illuminating. I'd start with the idea that Sam Altman is the sacrificial victim. His position as leader of OpenAI is a natural focal point for mimetic dynamics. In this case those dynamics ripple far and wide. One might wish, for example, to include the fairly extensive commentary on Altman over at LessWrong, and not just recently. What about Sutskever's role? Just how this inquiry would play out, I do not know. No way to tell about these things until you actually do the work. From the new interim CEO at OpenAI:
3dcbb373-4c59-43c8-9b22-1dbbfbae7075
trentmkelly/LessWrong-43k
LessWrong
Cheat codes Most things worth doing take serious, sustained effort. If you want to become an expert violinist, you're going to have to spend a lot of time practicing. If you want to write a good book, there really is no quick-and-dirty way to do it. But sustained effort is hard, and can be difficult to get rolling. Maybe there are some easier gains to be had with simple, local optimizations. Contrary to oft-repeated cached wisdom, not everything worth doing is hard. Some little things you can do are like cheat codes for the real world. Take habits, for example: your habits are not fixed. My diet got dramatically better once I figured out how to change my own habits, and actually applied that knowledge. The general trick was to figure out a new, stable state to change my habits to, then use willpower for a week or two until I settle into that stable state. In the case of diet, a stable state was one where junk food was replaced with fruit, tea, or having a slightly more substantial meal beforehand so I wouldn't feel hungry for snacks. That's an equilibrium I can live with, long-term, without needing to worry about "falling off the wagon." Once I figured out the pattern -- work out a stable state, and force myself into it over 1-2 weeks -- I was able to improve several habits, permanently. It was amazing. Why didn't anybody tell me about this? In education, there are similar easy wins. If you're trying to commit a lot of things to memory, there's solid evidence that spaced repetition works. If you're trying to learn from a difficult textbook, reading in multiple overlapping passes is often more time-efficient than reading through linearly. And I've personally witnessed several people academically un-cripple themselves by learning to reflexively look everything up on Wikipedia. None of this stuff is particularly hard. The problem is just that a lot of people don't know about it. What other easy things have a high marginal return-on-effort? 
Feel free to include speculative ones,
9be4c570-545a-40fb-a853-2afabd0c7748
StampyAI/alignment-research-dataset/distill
Distill Scientific Journal
Thread: Differentiable Self-organizing Systems Self-organisation is omnipresent on all scales of biological life. From complex interactions between molecules forming structures such as proteins, to cell colonies achieving global goals like exploration by means of the individual cells collaborating and communicating, to humans forming collectives in society such as tribes, governments or countries. The old adage “the whole is greater than the sum of its parts”, often ascribed to Aristotle, rings true everywhere we look. The articles in this thread focus on practical ways of designing self-organizing systems. In particular we use Differentiable Programming (optimization) to learn agent-level policies that satisfy system-level objectives. The cross-disciplinary nature of this thread aims to facilitate the exchange of ideas between the ML and developmental biology communities. Articles & Comments ------------------- Distill has invited several researchers to publish a “thread” of short articles exploring differentiable self-organizing systems, interspersed with critical commentary from several experts in adjacent fields. The thread will be a living document, with new articles added over time. Articles and comments are presented below in chronological order: ### [Growing Neural Cellular Automata](/2020/growing-ca/) ### Authors ### Affiliations [Alexander Mordvintsev](https://znah.net/), Ettore Randazzo, [Eyvind Niklasson](https://eyvind.me/), [Michael Levin](http://www.drmichaellevin.org/) [Google](https://research.google/), [Allen Discovery Center](https://allencenter.tufts.edu/) Building their own bodies is the very first skill all living creatures possess. How can we design systems that grow, maintain and repair themselves by regenerating damage? This work investigates morphogenesis, the process by which living creatures self-assemble their bodies.
It proposes a differentiable, Cellular Automata model of morphogenesis and shows how such a model learns a robust and persistent set of dynamics to grow any arbitrary structure starting from a single cell. [Read Full Article](/2020/growing-ca/) ### [Self-classifying MNIST Digits](/2020/selforg/mnist/) ### Authors ### Affiliations Ettore Randazzo, [Alexander Mordvintsev](https://znah.net/), [Eyvind Niklasson](https://eyvind.me/), [Michael Levin](http://www.drmichaellevin.org/), [Sam Greydanus](https://greydanus.github.io/about.html) [Google](https://research.google/), [Allen Discovery Center](https://allencenter.tufts.edu/), [Oregon State University and the ML Collective](http://mlcollective.org/) This work presents a follow up to Growing Neural CAs, using a similar computational model for the goal of digit “self-classification”. The authors show how neural CAs can self-classify the MNIST digit they form. The resulting CAs can be interacted with by dynamically changing the underlying digit. The CAs respond to perturbations with a learned self-correcting classification behaviour. [Read Full Article](/2020/selforg/mnist/) ### [Self-Organising Textures](/selforg/2021/textures/) ### Authors ### Affiliations [Eyvind Niklasson](https://eyvind.me/), [Alexander Mordvintsev](https://znah.net/), Ettore Randazzo, [Michael Levin](http://www.drmichaellevin.org/) [Google](https://research.google/), [Allen Discovery Center](https://allencenter.tufts.edu/), Here the authors apply Neural Cellular Automata to a new domain: texture synthesis. They begin by training NCA to mimic a series of textures taken from template images. Then, taking inspiration from adversarial camouflages which appear in nature, they use NCA to create textures which maximally excite neurons in a pretrained vision model. These results reveal that a simple model combined with well-known objectives can lead to robust and unexpected behaviors. 
[Read Full Article](/selforg/2021/textures/) ### [Adversarial Reprogramming of Neural Cellular Automata](/selforg/2021/adversarial/) ### Authors ### Affiliations [Ettore Randazzo](https://oteret.github.io/), [Alexander Mordvintsev](https://znah.net/), [Eyvind Niklasson](https://eyvind.me/), [Michael Levin](http://www.drmichaellevin.org/) [Google](https://research.google/), [Allen Discovery Center](https://allencenter.tufts.edu/), This work takes existing Neural CA models and shows how they can be adversarially reprogrammed to perform novel tasks. MNIST CA can be deceived into outputting incorrect classifications and the patterns in Growing CA can be made to have their shape and colour altered. [Read Full Article](/selforg/2021/adversarial/) #### This is a living document Expect more articles on this topic, along with critical comments from experts. Get Involved ------------ The Self-Organizing systems thread is open to articles exploring differentiable self-organizing systems. Critical commentary and discussion of existing articles is also welcome. The thread is organized through the open `#selforg` channel on the [Distill slack](http://slack.distill.pub). Articles can be suggested there, and will be included at the discretion of previous authors in the thread, or in the case of disagreement by an uninvolved editor. If you would like to get involved but don’t know where to start, small projects may be available if you ask in the channel. About the Thread Format ----------------------- Part of Distill’s mandate is to experiment with new forms of scientific publishing. We believe that reconciling faster and more continuous approaches to publication with review and discussion is an important open problem in scientific publishing. Threads are collections of short articles, experiments, and critical commentary around a narrow or unusual research topic, along with a slack channel for real time discussion and collaboration.
They are intended to be earlier stage than a full Distill paper, and allow for more fluid publishing, feedback and discussion. We also hope they’ll allow for wider participation. Think of a cross between a Twitter thread, an academic workshop, and a book of collected essays. Threads are very much an experiment. We think it’s possible they’re a great format, and also possible they’re terrible. We plan to trial two such threads and then re-evaluate our thoughts on the format.
7d55ee5c-530f-4b50-8c69-649bf3873290
trentmkelly/LessWrong-43k
LessWrong
[Oops, there is actually an event] Notice: No LW event this weekend Just a notice to say there's no event this weekend. Will be back to your regularly scheduled LW event on Sunday 29th, which we'll announce in a few days. Update: We're having a double crux, between Buck Shlegeris and Oliver Habryka, on AI takeoff speeds. Event page.
d4cb2dc1-f5c9-4242-9775-c6449bd567d0
trentmkelly/LessWrong-43k
LessWrong
The uniquely awful example of theism When an LW contributor is in need of an example of something that (1) is plainly, uncontroversially (here on LW, at least) very wrong but (2) an otherwise reasonable person might get lured into believing by dint of inadequate epistemic hygiene, there seems to be only one example that everyone reaches for: belief in God. (Of course there are different sorts of god-belief, but I don't think that makes it count as more than one example.) Eliezer is particularly fond of this trope, but he's not alone. How odd that there should be exactly one example. How convenient that there is one at all! How strange that there isn't more than one! In the population at large (even the smarter parts of it) god-belief is sufficiently widespread that using it as a canonical example of irrationality would run the risk of annoying enough of your audience to be counterproductive. Not here, apparently. Perhaps we-here-on-LW are just better reasoners than everyone else ... but then, again, isn't it strange that there aren't a bunch of other popular beliefs that we've all seen through? In the realm of politics or economics, for instance, surely there ought to be some. Also: it doesn't seem to me that I'm that much better a thinker than I was a few years ago when (alas) I was a theist; nor does it seem to me that everyone on LW is substantially better at thinking than I am; which makes it hard for me to believe that there's a certain level of rationality that almost everyone here has attained, and that makes theism vanishingly rare. I offer the following uncomfortable conjecture: We all want to find (and advertise) things that our superior rationality has freed us from, or kept us free from. (Because the idea that Rationality Just Isn't That Great is disagreeable when one has invested time and/or effort and/or identity in rationality, and because we want to look impressive.)
We observe our own atheism, and that everyone else here seems to be an atheist too, and not unnaturally we conclude t
Is your uncertainty resolvable?

I was chatting with Andrew Critch about the idea of Reacts on LessWrong. Specifically, the part where I thought there are particular epistemic states that don’t have words yet, but should. And that a function of LessWrong might be to make various possible epistemic states more salient as options. You might have reacts for “approve/disapprove” and “agree/disagree”... but you might also want reactions that let you quickly and effortlessly express “this isn’t exactly false or bad but it’s subtly making this discussion worse.”

Fictionalized, Paraphrased Critch said “hmm, this reminds me of some particular epistemic states I recently noticed that don’t have names.”

“Go on”, said I.

“So, you know the feeling of being uncertain? And how it feels different to be 60% sure of something, vs 90%?”

“Sure.”

“Okay. So here’s two other states you might be in:

* 75% sure that you’ll eventually be 99% sure,
* 80% sure that you’ll eventually be 90% sure.

He let me process those numbers for a moment.

...

Then he continued: "Okay, now imagine you’re thinking about a particular AI system you’re designing, which might or might not be alignable.

“If you’re feeling 75% sure that you’ll eventually be 99% sure that that AI is safe, this means you think that eventually you’ll have a clear understanding of the AI, such that you feel confident turning it on without destroying humanity. Moreover you expect to be able to convince other people that it’s safe to turn it on without destroying humanity.

“Whereas if you’re 80% sure that eventually you’ll be 90% sure that it’ll be safe, even in the future state where you’re better informed and more optimistic, you might still not actually be confident enough to turn it on. And even if for some reason you are, other people might disagree about whether you should turn it on.

“I’ve noticed people tracking how certain they are of something, without paying attention to whether their uncertainty is possible to resolve.
And this has important rami
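Critch's two epistemic states can be sanity-checked numerically: since well-calibrated Bayesian credences form a martingale, your credence now should equal the expected value of your eventual credence. A minimal sketch of that arithmetic (the 0.10 fallback credence for the "otherwise" branch is an illustrative assumption, not something from the conversation):

```python
def current_credence(outcomes):
    """Law of total expectation: credence now = expected future credence.

    outcomes: list of (probability, eventual_credence) pairs summing to 1.
    """
    return sum(p * c for p, c in outcomes)

# "75% sure that I'll eventually be 99% sure" -- assuming the other 25%
# of the time the question resolves to a credence of 0.10 (illustrative):
print(current_credence([(0.75, 0.99), (0.25, 0.10)]))  # ~0.7675

# "80% sure that I'll eventually be 90% sure", same fallback:
print(current_credence([(0.80, 0.90), (0.20, 0.10)]))  # ~0.74
```

Note that the two states imply nearly the same credence today; what differs is how much of the uncertainty you expect to be able to resolve.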
How likely is the USA to decay, and how would it influence AI development?

Trump's politics are so hard to understand that they appear to be causing the decay of the USA. For instance, the governor of California ended up pleading to exclude California-made products from tariffs introduced in retaliation against Trump's measures. Another example of a state wishing to secede is New Hampshire, where Republican state Representative Jason Gerhard proposed that New Hampshire should peacefully declare independence from the U.S. if the national debt surpasses $40 trillion, which is apparently to happen in less than 1.5 years. [1]

Meanwhile, the optimistic 2027 timeline implies that Taiwan is to be invaded in late 2026, if not earlier, while the pessimistic timeline implies that AI is to start taking jobs by that time and that the full effects of AI-related research will kick in during 2027.

How likely is a potential decay of the USA to let Chinese spies do far more damage, such as destroying data centres or leaving them without electricity? What effect would it have on the race to AGI?

EDIT: Trump somehow decided to impose tariffs on most goods from Taiwan, except for the chips necessary to perform calculations.

1. ^ The national debt is currently $36.7 trillion. In four years it is projected to reach $46.4 trillion.
Singleton: the risks and benefits of one world governments

Many thanks to all those whose conversations have contributed to forming these ideas.

Will the singleton save us?

For most of the large existential risks that we deal with here, the situation would be improved with a single world government (a singleton), or at least greater global coordination. The risk of nuclear war would fade, and pandemics would be met with a comprehensive global strategy rather than a mess of national priorities. Workable regulations for the technology risks - such as synthetic biology or AI - become at least conceivable. All in all, a great improvement in safety...

...with one important exception. A stable tyrannical one-world government, empowered by future mass surveillance, is itself an existential risk (it might not destroy humanity, but it would “permanently and drastically curtail its potential”). So to decide whether to oppose or advocate for more global coordination, we need to see how likely such a despotic government would be.

This is the kind of research I would love to do if I had the time to develop the relevant domain skills. In the meantime, I’ll just take all my thoughts on the subject and form them into a “proto-research project plan”, in the hopes that someone could make use of them in a real research project. Please contact me if you would want to do research on this, and would fancy a chat.

Defining “acceptable”

Before we can talk about the likelihood of a good outcome, we need to define what a good outcome actually is. For this analysis, I will take the definition that:

* A singleton regime is acceptable if it is at least as good as any developed democratic government of today.

This definition can be criticised for its conservatism, or its cowardice. Shouldn’t we be aiming to do much better than what we’re doing now? What about the inefficiency of current governments, the huge opportunity costs? Is this not a disaster in itself?
As should be evident from some of my previous posts, I don’t see loss of efficiency
Applied Rationality Workshop Cologne, Germany

Together with Anne Wissemann, we are going to run a 2-day weekend workshop on applied rationality in Cologne, Germany. If you cannot apply because you have already scheduled something important, feel free to leave your email to be informed about future workshops in the Cologne area.

Summary

* Date: 19. & 20.01.2019, 9am - 6pm
* Application form: https://goo.gl/forms/DLfsznUTbWzoIJV12 (Deadline: Friday, 11.01.2019; confirmations will be sent on Saturday)
* Location: somewhere in Cologne... to be announced :)
* Costs: 40-60€
* Topic: CFAR techniques and other useful rationality concepts
* Limited to 20 attendees
* Target group: Aspiring EAs & Rationalists
* Housing available on request

Aims

We - the EA Köln/Cologne group - want to contribute to the positive development of the EA & Rationality community. Over the past year we organized weekly meetups including socials, book clubs, and talks with discussion rounds. Recently we decided to make weekend seminars and workshops a priority, since they seem to have multiple benefits:

* real deep-diving into topics, compared to evening events
* uniting the EA & LessWrong communities by addressing mutual interests
* socializing with great people
* a filter for (really interested) newcomers
* due to the weekend time frame: we can invite great speakers and people from other cities (who would not take the effort to travel for an evening only)

Overview

Anne works as a coach for members of the rationality and effective altruism communities. In addition to her own CFAR workshop, she has volunteered as a mentor at 5 other mainline workshops since 2014, has undergone and taught at CFAR's mentorship training, and has helped develop her local weekly Rationality Dojo in Berlin.
She plans to teach the participants:

* Double Crux
* Focusing
* Internal Double Crux
* Trigger-Action Plans (TAP)
* Flash classes/concepts: Mindfulness, Negative Visualisation, Decreasing Marginal Returns, helpful Debugging prompts, Frogs, How to notice thing
Free Guy, a rom-com on the moral patienthood of digital sentience

**Warning: Some spoilers follow**

---

*Free Guy* (2021) is a romantic comedy about a bank teller named Guy who falls in love with a woman. Then, when he discovers that his entire world is just a video game, he has to be the one to save it from destruction by a maniacal video game company executive.

Many people find it weird to think about the lives of digital sentient beings as morally valuable, whether they number a hundred (as in the film) or a trillion. But the film is exhilarating, hilarious, and also so relatable that you don't ever stop to wonder: wait, why do we care about a video game character at all? He’s not even real! Yet when the main character has an existential crisis about this fact, his best friend, Buddy, [compassionately says](https://www.youtube.com/watch?v=p_grnZ6DTkI):

> I say, okay, so what if I’m not real? […] Look, brother, I am sitting here with my best friend, trying to help him get through a tough time. Right? And even if I’m not real, this moment is. Right here, right now, this moment is real. I mean, what’s more real than a person tryin’ to help someone they love? Now, if that’s not real, I don’t know what is.

![Guy's best friend Buddy says, "Now, if that's not real, I don't know what is"](https://i.imgur.com/cPO9SDn.png)

*Free Guy* is able to effortlessly have viewers sympathize with its digital agents because they’re just like humans, even as they’re AIs in a video game, written with a bunch of lines of code. (Not whole-brain emulations, for example.) They have rich, complex thoughts and lives, even as most characters lack the “free will” to deviate from a routine.

*Free Guy* isn’t sci-fi. It’s set in the present day, with present-day technology. And it keeps things small-scale, with limited consequences for society. It doesn’t consider: how would the economy be transformed if we had human-level AI?
What if the video game characters of Free City could not only write personal essays about feminism but also share these novel contributions with the rest of society? What would it look like to scale up the digital population of Free City by a billion times?

Nevertheless, it provides a glimpse of how life could be different for digitally sentient beings—in particular, what Shulman and Bostrom call “[hedonic skew](https://nickbostrom.com/papers/digital-minds.pdf)”. Despite living in a world like *Grand Theft Auto*, where bank robberies and gun violence are daily experiences, the characters are still upbeat and optimistic. Fortunately, they’re eventually transferred to a world built for them rather than the entertainment of human players, where they can live out their lives with friendship and harmony.

*Free Guy* is the most powerful (and funniest) film about artificial consciousness that I know of. If you’re looking for an EA-relevant movie to add to your watch list, I strongly recommend *Free Guy*, for both its entertainment and philosophical value.

---

Postscript: If I were to make an actual argument in this post: many people think that digital sentience is too weird to advocate for at this stage. Although I have not tried this with other people yet, the film *Free Guy* might be a promising conduit for promoting concern for digital sentient beings – when paired with relevant discussion, as the few reviews I've read of *Free Guy* don't touch upon moral consideration of digital sentient beings.
Meetup : Boston: Antifragile

Discussion article for the meetup : Boston: Antifragile

WHEN: 04 January 2015 03:30:33PM (-0500)
WHERE: 98 Elm Street, Somerville

This Sunday, Jesse Galef will be reviewing the book Antifragile, by Nassim Nicholas Taleb, author of The Black Swan. Topics will include effective decision making, catastrophic risk, and pop culture references.

Reviews of Antifragile:

"The glossary alone offered more thought-provoking ideas than any other nonfiction book I read this year. That said, Antifragile is far from flawless."
"As always, an imperfect, infuriating but intriguing book"
"A big mixed bag of insights and misconceptions"

Cambridge/Boston-area Less Wrong meetups start at 3:30pm, and have an alternating location:

* 1st Sunday meetups are at Citadel in Porter Sq, at 98 Elm St, apt 1, Somerville.
* 3rd Sunday meetups are in MIT's building 66 at 25 Ames St, room 156. Room number subject to change based on availability; signs will be posted with the actual room number.

(We also have last Wednesday meetups at Citadel at 7pm.)

Our default schedule is as follows:
—Phase 1: Arrival, greetings, unstructured conversation.
—Phase 2: The headline event. This starts promptly at 4pm, and lasts 30-60 minutes.
—Phase 3: Further discussion. We'll explore the ideas raised in phase 2, often in smaller groups.
—Phase 4: Dinner.
Probabilistic Dependency Graphs

1 Introduction
---------------

In this paper we introduce yet another graphical tool for modeling beliefs, *Probabilistic Dependency Graphs* (PDGs). There are already many such models in the literature, including Bayesian networks (BNs) and factor graphs. (For an overview, see Koller and Friedman 2009.) Why does the world need one more? Our original motivation for introducing PDGs was to be able to capture inconsistency. We want to be able to model the process of resolving inconsistency; to do so, we have to model the inconsistency itself. But our approach to modeling inconsistency has many other advantages. In particular, PDGs are significantly more modular than other directed graphical models: operations like restriction and union that are easily done with PDGs are difficult or impossible to do with other representations. The following examples motivate PDGs and illustrate some of their advantages.

###### Example 1.

Grok is visiting a neighboring district. From prior reading, she thinks it likely (probability .95) that guns are illegal here. Some brief conversations with locals lead her to believe that, with probability .1, the law prohibits floomps. The obvious way to represent this as a BN is to use two random variables $F$ and $G$ (respectively taking values $\{f,\overline{f}\}$ and $\{g,\overline{g}\}$), indicating whether floomps and guns are prohibited. The semantics of a BN offer her two choices: either assume that $F$ and $G$ are independent and give (unconditional) probabilities of $F$ and $G$, or choose a direction of dependency, and give one of the two unconditional probabilities and a conditional probability distribution.
As there is no reason to choose either direction of dependence, the natural choice is to assume independence, giving her the BN on the left of Figure 1.

Figure 1: A BN (left) and corresponding PDG (right), which can include more cpds; $p$ or $p'$ make it inconsistent.

A traumatic experience a few hours later leaves Grok believing that “floomp” is likely (probability .92) to be another word for gun. Let $p(G\mid F)$ be the *c*onditional *p*robability *d*istribution (cpd) that describes the belief that if floomps are legal (resp., illegal), then with probability .92, guns are as well, and $p'(F\mid G)$ be the reverse. Starting with $p$, Grok’s first instinct is to simply incorporate the conditional information by adding $F$ as a parent of $G$, and then associating the cpd $p$ with $G$. But then what should she do with the original probability she had for $G$? Should she just discard it? It is easy to check that there is no joint distribution that is consistent with both the two original priors on $F$ and $G$ and also $p$. So if she is to represent the information with a BN, which always represents a consistent distribution, she must resolve the inconsistency. However, sorting this out immediately may not be ideal. For instance, if the inconsistency arises from a conflation between two definitions of “gun”, a resolution will have destroyed the original cpds.
A better use of computation may be to notice the inconsistency and look up the actual law.

By way of contrast, consider the corresponding PDG. In a PDG, the cpds are attached to edges, rather than nodes of the graph. In order to represent unconditional probabilities, we introduce a *unit variable* $\mathit{1}$, which takes only one value, denoted $\star$. This leads Grok to the PDG depicted in Figure 1, where the edges from $\mathit{1}$ to $F$ and $G$ are associated with the unconditional probabilities of $F$ and $G$, and the edges between $F$ and $G$ are associated with $p$ and $p'$. The original state of knowledge consists of all three nodes and the two solid edges from $\mathit{1}$. This is like the Bayes net that we considered above, except that we no longer explicitly take $F$ and $G$ to be independent; we merely record the constraints imposed by the given probabilities. The key point is that we can incorporate the new information into our original representation (the graph in Figure 1 without the edge from $F$ to $G$) simply by adding the edge from $F$ to $G$ and the associated cpd $p$ (the new information is shown in blue). Doing so does not change the meaning of the original edges.
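Because cpds attach to edges, incorporating the new information is literally just adding an edge, and rejecting it is just deleting that edge. A minimal Python sketch (the edge-list encoding below is an illustration, not the paper's formal definition):

```python
# A PDG sketched as a list of labeled edges, each carrying a cpd.
# A cpd maps each value of the source variable to a distribution over
# the target. "1" is the unit variable, whose only value is "*".

pdg = [
    ("1", "G", {"*": {"illegal": 0.95, "legal": 0.05}}),  # prior on guns
    ("1", "F", {"*": {"illegal": 0.10, "legal": 0.90}}),  # prior on floomps
]

# New information p: floomp status predicts gun status with probability .92.
p = {"legal":   {"legal": 0.92, "illegal": 0.08},
     "illegal": {"illegal": 0.92, "legal": 0.08}}

pdg.append(("F", "G", p))   # incorporate: the original edges are untouched
assert len(pdg) == 3

pdg.pop()                   # mull it over and reject: back to the start
assert len(pdg) == 2
```

Contrast this with a BN, where adding $p$ would force an immediate, destructive resolution of the inconsistency.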
Unlike a Bayesian update, the operation is even reversible: all we need to do to recover our original belief state is delete the new edge, making it possible to mull over and then reject an observation.

The ability of PDGs to model inconsistency, as illustrated in Example 1, appears to have come at a significant cost. We seem to have lost a key benefit of BNs: the ease with which they can capture (conditional) independencies, which, as Pearl (1988) has argued forcefully, are omnipresent.

###### Example 2 (emulating a BN).

We now consider the classic (quantitative) Bayesian network $\mathcal{B}$, which has four binary variables indicating whether a person ($C$) develops cancer, ($S$) smokes, ($\mathit{SH}$) is exposed to second-hand smoke, and ($\mathit{PS}$) has parents who smoke, presented graphically in Figure 2. We now walk through what is required to represent $\mathcal{B}$ as a PDG, which we call $\mathcal{M}_{\mathcal{B}}$, shown as the solid nodes and edges in Figure 2.

Figure 2: (a) The Bayesian network $\mathcal{B}$ in Example 2 (left), and (b) $\mathcal{M}_{\mathcal{B}}$, its corresponding PDG (right).
The shaded box indicates a restriction of $\mathcal{M}_{\mathcal{B}}$ to only the nodes and edges it contains, and the dashed node $T$ and its arrow to $C$ can be added in the PDG without taking into account $S$ and $\mathit{SH}$.

We start with the nodes corresponding to the variables in $\mathcal{B}$, together with the special node $\mathit{1}$ from Example 1; we add an edge from $\mathit{1}$ to $\mathit{PS}$, to which we associate the unconditional probability given by the cpd for $\mathit{PS}$ in $\mathcal{B}$. We can also re-use the cpds for $S$ and $\mathit{SH}$, assigning them, respectively, to the edges $\mathit{PS}\to S$ and $\mathit{PS}\to\mathit{SH}$ in $\mathcal{M}_{\mathcal{B}}$. There are two remaining problems: (1) modeling the remaining table in $\mathcal{B}$, which corresponds to the conditional probability of $C$ given $S$ and $\mathit{SH}$; and (2) recovering the additional conditional independence assumptions in the BN. For (1), we cannot just add the edges $S\to C$ and $\mathit{SH}\to C$ that are present in $\mathcal{B}$. As we saw in Example 1, this would mean supplying two *separate* tables, one indicating the probability of $C$ given $S$, and the other indicating the probability of $C$ given $\mathit{SH}$. We would lose significant information that is present in $\mathcal{B}$ about how $C$ depends jointly on $S$ and $\mathit{SH}$. To distinguish the joint dependence on $S$ and $\mathit{SH}$, for now, we draw an edge with two tails—a (directed) *hyperedge*—that completes the diagram in Figure 2. With regard to (2), there are many distributions consistent with the conditional marginal probabilities in the cpds, and the independences presumed by $\mathcal{B}$ need not hold for them. Rather than trying to distinguish between them with additional constraints, we develop a scoring-function semantics for PDGs, which in this case is uniquely minimized by the distribution specified by $\mathcal{B}$ (we show below that BNs are PDGs). This allows us to recover the semantics of Bayesian networks without requiring the independencies that they assume.

Next suppose that we get information beyond that captured by the original BN. Specifically, we read a thorough empirical study demonstrating that people who use tanning beds have a 10% incidence of cancer, compared with 1% in the control group (call the cpd for this $\mathbf{p}$); we would like to add this information to $\mathcal{B}$. The first step is clearly to add a new node labeled $T$, for “tanning bed use”.
But simply making $T$ a parent of $C$ (as clearly seems appropriate, given that the incidence of cancer depends on tanning bed use) requires a substantial expansion of the cpd; in particular, it requires us to make assumptions about the interactions between tanning beds and smoking. The corresponding PDG, $\mathcal{M}_{\mathcal{B}}$, on the other hand, has no trouble: we can simply add the node $T$ with an edge to $C$ that is associated with $\mathbf{p}$. But note that doing this makes it possible for our knowledge to be inconsistent. To take a simple example, if the distribution on $C$ given $S$ and $\mathit{SH}$ encoded in the original cpd was always deterministically “has cancer” for every possible value of $S$ and $\mathit{SH}$, but the distribution according to the new cpd from $T$ was deterministically “no cancer”, the resulting PDG would be inconsistent. We have seen that we can easily add information to PDGs; removing information is equally painless.

###### Example 3 (restriction).

After the Communist party came to power, children were raised communally, and so parents’ smoking habits no longer had any impact on them. Grok is reading her favorite book on graphical models, and she realizes that while the node $\mathit{PS}$ in Figure 2 has lost its usefulness, and nodes $S$ and $\mathit{SH}$ no longer ought to have $\mathit{PS}$ as a parent, the other half of the diagram—that is, the node $C$ and its dependence on $S$ and $\mathit{SH}$—should apply as before. Grok has identified two obstacles to modeling deletion of information from a BN by simply deleting nodes and their associated cpds.
First, this restricted model is technically no longer a BN (which in this case would require unconditional distributions on $S$ and $\mathit{SH}$), but rather a *conditional* BN (Koller and Friedman 2009), which allows for these nodes to be marked as observations; observation nodes do not have associated beliefs. Second, even regarded as a conditional BN, the result of deleting a node may introduce *new* independence information, incompatible with the original BN. For instance, by deleting the node $B$ in a chain $A\rightarrow B\rightarrow C$, one concludes that $A$ and $C$ are independent, a conclusion incompatible with the original BN containing all three nodes. PDGs do not suffer from either problem. We can easily delete the nodes labeled $\mathit{1}$ and $\mathit{PS}$ in Figure 2 to get the restricted PDG shown in the figure, which captures Grok’s updated information. The resulting PDG has no edges leading to $S$ or $\mathit{SH}$, and hence no distributions specified on them; no special modeling distinction between observation nodes and other nodes is required. Because PDGs do not directly make independence assumptions, the information in this fragment is truly a subset of the information in the whole PDG. Being able to form a well-behaved local picture and restrict knowledge is useful, but an even more compelling reason to use PDGs is their ability to aggregate information.

###### Example 4.

Grok dreams of becoming Supreme Leader ($\mathit{SL}$), and has come up with a plan. She has noticed that people who use tanning beds have significantly more power than those who don’t.
Unfortunately, her mom has always told her that tanning beds cause cancer; specifically, that 15% of people who use tanning beds get it, compared to the baseline of 2%. Call this cpd $q$. Grok thinks people will make fun of her if she uses a tanning bed and gets cancer, making becoming Supreme Leader impossible. This mental state is depicted as a PDG on the left of Figure 3. Grok is reading about graphical models because she vaguely remembers that the variables in Example 2 match the ones she already knows about. When she finishes reading the statistics on smoking and the original study on tanning beds (associated to a cpd $\mathbf{p}$ in Example 2), but before she has time to reflect, we can represent her (conflicted) knowledge state as the union of the two graphs, depicted graphically on the right of Figure 3.

Figure 3: Grok’s prior (left) and combined (right) knowledge. The union of the two PDGs, even with overlapping nodes, is still a PDG. This is not the case in general for BNs.

Note that the PDG that Grok used to represent her two different sources of information (the mother’s wisdom and the study) regarding the distribution of $C$ is a *multigraph*: there are two edges from $T$ to $C$, with inconsistent information. Had we not allowed multigraphs, we would have needed to choose between the two edges, or represent the information some other (arguably less natural) way.
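Under the edge-list view of a PDG, taking the union of two belief states is just concatenation, and nothing prevents two edges from sharing a source and target. A hedged sketch (the cpds are abbreviated to the cancer rates from the example; the encoding itself is illustrative, not the paper's formalism):

```python
# Each PDG is a list of (source, target, cpd) edges; union = concatenation.
# cpds abbreviated to P(cancer | tanning-bed use) for this illustration.
moms_wisdom = [("T", "C", {"tan": 0.15, "no-tan": 0.02})]  # cpd q
study       = [("T", "C", {"tan": 0.10, "no-tan": 0.01})]  # cpd p

# Still a PDG: a multigraph with two parallel T -> C edges that disagree.
combined = moms_wisdom + study
assert sum(1 for e in combined if (e[0], e[1]) == ("T", "C")) == 2
```

The same concatenation applied to two BNs would generally not yield a BN, since a node cannot carry two conflicting cpds.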
As we are already allowing inconsistency, merely recording both is much more in keeping with the way we have handled other types of uncertainty. Not all inconsistencies are equally egregious. For example, even though the cpds $p$ and $q$ are different, they are numerically close, so, intuitively, the PDG on the right in Figure 3 is not very inconsistent. Making this precise is the focus of Section 3.2.

These examples give a taste of the power of PDGs. In the coming sections, we formalize PDGs and relate them to other approaches.

2 Syntax
---------

We now provide formal definitions for PDGs. Although it is possible to formalize PDGs with hyperedges directly, we opt for a different approach here, in which PDGs have only regular edges, and hyperedges are captured using a simple construction that involves adding an extra node.

###### Definition 2.1.
A *Probabilistic Dependency Graph* is a tuple $\mathcal{M}=(\mathcal{N},\mathcal{E},\mathcal{V},\mathbf{p},\alpha,\beta)$, where $\mathcal{N}$ is a finite set of nodes, corresponding to variables; $\mathcal{E}$ is a set of labeled edges $\{X\xrightarrow{L}Y\}$, each with a source $X$ and target $Y$ in $\mathcal{N}$; $\mathcal{V}$ associates each variable $N\in\mathcal{N}$ with a set $\mathcal{V}(N)$ of values that the variable $N$ can take; $\mathbf{p}$ associates to each edge $X\xrightarrow{L}Y\in\mathcal{E}$ a distribution $\mathbf{p}_L(x)$ on $Y$ for each $x\in\mathcal{V}(X)$; $\alpha$ associates to each edge $X\xrightarrow{L}Y$ a non-negative number $\alpha_L$ which, roughly speaking, is the modeler’s confidence in the functional dependence of $Y$ on $X$ implicit in $L$; and $\beta$ associates to each edge $L$ a positive real number $\beta_L$, the modeler’s subjective confidence in the reliability of $\mathbf{p}_L$.

Note that we allow multiple edges in $\mathcal{E}$ with the same source and target; thus $(\mathcal{N},\mathcal{E})$ is a multigraph. We occasionally write a PDG as $\mathcal{M}=(\mathcal{G},\mathbf{p},\alpha,\beta)$, where $\mathcal{G}=(\mathcal{N},\mathcal{E},\mathcal{V})$, and abuse terminology by referring to $\mathcal{G}$ as a multigraph. We refer to $\mathbf{N}=(\mathcal{G},\mathbf{p})$ as an *unweighted* PDG, and give it semantics as though it were the (weighted) PDG $(\mathcal{G},\mathbf{p},\mathbf{1},\mathbf{1})$, where $\mathbf{1}$ is the constant function (i.e., so that $\alpha_L=\beta_L=1$ for all $L$). In this paper, with the exception of Section 4.3, we implicitly take $\alpha=\mathbf{1}$ and omit $\alpha$, writing $\mathcal{M}=(\mathcal{G},\mathbf{p},\beta)$.¹

¹ The appendix gives results for arbitrary $\alpha$.
If $\mathcal{M}$ is a PDG, we reserve the names $\mathcal{N}^{\mathcal{M}}, \mathcal{E}^{\mathcal{M}}, \ldots$ for the components of $\mathcal{M}$, so that we may reference one without naming them all explicitly. We write $\mathcal{V}(S)$ for the set of possible joint settings of a set $S$ of variables, and write $\mathcal{V}(\mathcal{M})$ for all settings of the variables in $\mathcal{N}^{\mathcal{M}}$; we refer to these settings as "worlds". While the definition above is sufficient to represent the class of all legal PDGs, we often use two additional bits of syntax to indicate common constraints: the special variable $\mathit{1}$ such that $\mathcal{V}(\mathit{1}) = \{\star\}$ from Examples 1 and 2, and double-headed arrows, $A \twoheadrightarrow B$, which visually indicate that the corresponding cpd is degenerate, effectively representing a deterministic function $f : \mathcal{V}(A) \to \mathcal{V}(B)$.

###### Construction 2.2.

We can now explain how we capture the multi-tailed edges that were used in Examples 2 to 4. That notation can be viewed as shorthand for the graph that results by adding a new node at the junction, representing the joint value of the nodes at the tails, with projections going back. For instance, the diagram displaying Grok's prior knowledge in Example 4, on the left of Figure 3, is really shorthand for the following PDG, where we insert a node labeled $C \times T$ at the junction:

![[Uncaptioned image]](/html/2012.10800/assets/x7.png)

As the notation suggests, $\mathcal{V}(C \times T) = \mathcal{V}(C) \times \mathcal{V}(T)$. For any joint setting $(c, t) \in \mathcal{V}(C \times T)$ of both variables, the cpd for the edge from $C \times T$ to $C$ gives probability 1 to $c$; similarly, the cpd for the edge from $C \times T$ to $T$ gives probability 1 to $t$.

3 Semantics
------------

Although the meaning of an individual cpd is clear, we have not yet given PDGs a "global" semantics. We discuss three related approaches to doing so. The first is the simplest: we associate with a PDG the set of distributions that are consistent with it. This set will be empty if the PDG is inconsistent. The second approach associates a PDG with a scoring function, indicating the fit of an arbitrary distribution $\mu$, and can be thought of as a *weighted* set of distributions (Halpern and Leung 2015).
This approach allows us to distinguish inconsistent PDGs, while the first approach does not. The third approach chooses the distributions with the best score, typically associating with a PDG a unique distribution.

### 3.1 PDGs As Sets Of Distributions

We have been thinking of a PDG as a collection of constraints on distributions, specified by matching cpds. From this perspective, it is natural to consider the set of all distributions that are consistent with the constraints.

###### Definition 3.1.

If $\mathcal{M}$ is a PDG (weighted or unweighted) with edges $\mathcal{E}$ and cpds $\mathbf{p}$, let $[\![\mathcal{M}]\!]_{\mathit{sd}}$ be the *s*et of *d*istributions over the variables in $\mathcal{M}$ whose conditional marginals are exactly those given by $\mathbf{p}$. That is, $\mu \in [\![\mathcal{M}]\!]_{\mathit{sd}}$ iff, for all edges $L \in \mathcal{E}$ from $X$ to $Y$, $x \in \mathcal{V}(X)$, and $y \in \mathcal{V}(Y)$, we have that $\mu(Y\!=\!y \mid X\!=\!x) = \mathbf{p}_L(x)(y)$. $\mathcal{M}$ is *inconsistent* if $[\![\mathcal{M}]\!]_{\mathit{sd}} = \emptyset$, and *consistent* otherwise.
Note that $[\![\mathcal{M}]\!]_{\mathit{sd}}$ is independent of the weights $\alpha$ and $\beta$.

### 3.2 PDGs As Distribution Scoring Functions

We now generalize the previous semantics by viewing a PDG $\mathcal{M}$ as a *scoring function* that, given an arbitrary distribution $\mu$ on $\mathcal{V}(\mathcal{M})$, returns a real-valued score indicating how well $\mu$ fits $\mathcal{M}$. Distributions with the lowest (best) scores are those that most closely match the cpds in $\mathcal{M}$, and contain the fewest unspecified correlations. We start with the first component of the score, which assigns higher scores to distributions that require a larger perturbation in order to be consistent with $\mathcal{M}$. We measure the magnitude of this perturbation with relative entropy. In particular, for an edge $X \xrightarrow{L} Y$ and $x \in \mathcal{V}(X)$, we measure the relative entropy from $\mathbf{p}_L(x)$ to $\mu(Y\!=\!\cdot \mid X\!=\!x)$, and take the expectation over $\mu_X$ (that is, the marginal of $\mu$ on $X$). We then sum over all the edges $L$ in the PDG, weighted by their reliability.

###### Definition 3.2.
For a PDG $\mathcal{M}$, the *incompatibility* of a joint distribution $\mu$ over $\mathcal{V}(\mathcal{M})$ is given by

$$\mathit{Inc}_{\mathcal{M}}(\mu) := \sum_{X \xrightarrow{L} Y \,\in\, \mathcal{E}^{\mathcal{M}}} \beta_L^{\mathcal{M}} \operatorname*{\mathbb{E}}_{x \sim \mu_X}\!\Big[ I\!D\big(\mu(Y \mid X\!=\!x) \,\big\|\, \mathbf{p}_L^{\mathcal{M}}(x)\big) \Big],$$

where $I\!D(\mu \,\|\, \nu) = \sum_{w \in \mathrm{Supp}(\mu)} \mu(w) \log \frac{\mu(w)}{\nu(w)}$ is the relative entropy from $\nu$ to $\mu$.

$[\![\mathcal{M}]\!]_{\mathit{sd}}$ and $\mathit{Inc}_{\mathcal{M}}$ distinguish between distributions based on their compatibility with $\mathcal{M}$, but even among distributions that match the marginals, some more closely match the qualitative structure of the graph than others. We think of each edge $X \xrightarrow{L} Y$ as representing a qualitative claim (with confidence $\alpha_L$) that the value of $Y$ can be computed from $X$ alone. To formalize this, we require only the multigraph $\mathcal{G}^{\mathcal{M}}$. Given a multigraph $G$ and distribution $\mu$ on its variables, contrast the amount of information required to

1. (a) directly describe a joint outcome $\mathbf{w}$ drawn from $\mu$, and
2. (b) separately specify, for each edge $X \xrightarrow{L} Y$, the value $\mathbf{w}_Y$ (of $Y$ in world $\mathbf{w}$) given the value $\mathbf{w}_X$, in expectation.
If (a) $=$ (b), a specification of (b) has exactly the same length as a full description of the world. If (b) $>$ (a), then there are correlations in $\mu$ that allow for a more compact representation than $G$ provides. The larger the difference, the more information is needed to determine targets $Y$ beyond the conditional probabilities associated with the edges $X \to Y$ leading to $Y$ (which according to $G$ should be sufficient to compute them), and the poorer the qualitative fit of $\mu$ to $G$. Finally, if (a) $>$ (b), then $\mu$ requires additional information to specify, beyond what is necessary to determine outcomes of the marginals selected by $G$.

###### Definition 3.3.
For a multigraph $G = (\mathcal{N}, \mathcal{E}, \mathcal{V})$ over a set $\mathcal{N}$ of variables, define the *$G$-information deficiency* of distribution $\mu$, denoted $\mathit{IDef}_G(\mu)$, by considering the difference between (a) and (b), where we measure the amount of information needed for a description using entropy:

$$\mathit{IDef}_G(\mu) := \sum_{(X,Y) \in \mathcal{E}} \operatorname{H}_\mu(Y \mid X) - \operatorname{H}(\mu). \tag{1}$$

(Recall that $\operatorname{H}_\mu(Y \mid X)$, the ($\mu$-)*conditional entropy of $Y$ given $X$*, is defined as $-\sum_{x,y \in \mathcal{V}(X,Y)} \mu(x,y) \log \mu(y \mid x)$.) For a PDG $\mathcal{M}$, we take $\mathit{IDef}_{\mathcal{M}} = \mathit{IDef}_{\mathcal{G}^{\mathcal{M}}}$.
We illustrate $\mathit{IDef}_{\mathcal{M}}$ with some simple examples. Suppose that $\mathcal{M}$ has two nodes, $X$ and $Y$. If $\mathcal{M}$ has no edges, then $\mathit{IDef}_{\mathcal{M}}(\mu) = -\operatorname{H}(\mu)$: since there are no edges, no information is required to specify, for each edge in $\mathcal{M}$ from $X$ to $Y$, the value $\mathbf{w}_Y$ given $\mathbf{w}_X$. Since we view smaller numbers as representing a better fit, $\mathit{IDef}_{\mathcal{M}}$ in this case will prefer the distribution that maximizes entropy. If $\mathcal{M}$ has one edge from $X$ to $Y$, then since $\operatorname{H}(\mu) = \operatorname{H}_\mu(Y \mid X) + \operatorname{H}_\mu(X)$ by the well-known *entropy chain rule* (MacKay 2003), $\mathit{IDef}_{\mathcal{M}}(\mu) = -\operatorname{H}_\mu(X)$.
Intuitively, while knowing the conditional probability $\mu(Y \mid X)$ is helpful, to completely specify $\mu$ we also need $\mu(X)$. Thus, in this case, $\mathit{IDef}_{\mathcal{M}}$ prefers distributions that maximize the entropy of the marginal on $X$. If $\mathcal{M}$ has sufficiently many parallel edges from $X$ to $Y$ and $\operatorname{H}_\mu(Y \mid X) > 0$ (so that $Y$ is not totally determined by $X$), then we have $\mathit{IDef}_{\mathcal{M}}(\mu) > 0$, because the redundant edges add no information, but there is still a cost to specifying them. In this case, $\mathit{IDef}_{\mathcal{M}}$ prefers distributions that make $Y$ a deterministic function of $X$ while maximizing the entropy of the marginal on $X$.
Finally, if $\mathcal{M}$ has an edge from $X$ to $Y$ and another from $Y$ to $X$, then a distribution $\mu$ minimizes $\mathit{IDef}_{\mathcal{M}}$ when $X$ and $Y$ vary together (so that $\operatorname{H}_\mu(Y \mid X) = \operatorname{H}_\mu(X \mid Y) = 0$) while maximizing $\operatorname{H}(\mu)$, for example, by taking $\mu(0,0) = \mu(1,1) = 1/2$.

$\mathit{Inc}_{\mathcal{M}}(\mu)$ and $\mathit{IDef}_{\mathcal{M}}(\mu)$ give us two measures of compatibility between $\mathcal{M}$ and a distribution $\mu$. We take the score of interest to be their sum, with the tradeoff controlled by a parameter $\gamma \geq 0$:

$$[\![\mathcal{M}]\!]_\gamma(\mu) := \mathit{Inc}_{\mathcal{M}}(\mu) + \gamma\,\mathit{IDef}_{\mathcal{M}}(\mu) \tag{2}$$

The following just makes precise that the scoring semantics generalizes the first semantics.
###### Proposition 3.1.

$[\![\mathcal{M}]\!]_{\mathit{sd}} = \{\mu : [\![\mathcal{M}]\!]_0(\mu) = 0\}$ for all $\mathcal{M}$.

While we focus on this particular scoring function in the paper, in part because it has deep connections to the free energy of a factor graph (Koller and Friedman 2009), other scoring functions may well end up being of interest.

### 3.3 PDGs As Unique Distributions

Finally, we provide an interpretation of a PDG as a probability distribution. Before we provide this semantics, we stress that this distribution does *not* capture all of the important information in the PDG; for example, a PDG can represent inconsistent knowledge states. Still, by giving a distribution, we enable comparisons with other graphical models, and show that PDGs are a surprisingly flexible tool for specifying distributions. The idea is to select the distributions with the best score. We thus define

$$[\![\mathcal{M}]\!]_\gamma^* = \operatorname*{arg\,min}_{\mu \in \Delta\mathcal{V}(\mathcal{M})} [\![\mathcal{M}]\!]_\gamma(\mu). \tag{3}$$

In general, $[\![\mathcal{M}]\!]_\gamma^*$ does not give a unique distribution. But if $\gamma$ is sufficiently small, then it does:

###### Proposition 3.2.

If $\mathcal{M}$ is a PDG and $0 < \gamma \leq \min_L \beta_L^{\mathcal{M}}$, then $[\![\mathcal{M}]\!]_\gamma^*$ is a singleton.

In this paper, we are interested in the case where $\gamma$ is small; this amounts to emphasizing the accuracy of the probability distribution as a description of probabilistic information, rather than the graphical structure of the PDG. This motivates us to consider what happens as $\gamma$ goes to 0.
If $S_\gamma$ is a set of probability distributions for all $\gamma \in [0,1]$, we define $\lim_{\gamma \to 0} S_\gamma$ to consist of all distributions $\mu$ such that there is a sequence $(\gamma_i, \mu_i)_{i \in \mathbb{N}}$ with $\gamma_i \to 0$ and $\mu_i \to \mu$ such that $\mu_i \in S_{\gamma_i}$ for all $i$. It can further be shown that:

###### Proposition 3.3.

For all $\mathcal{M}$, $\lim_{\gamma \to 0} [\![\mathcal{M}]\!]_\gamma^*$ is a singleton.
Let $[\![\mathcal{M}]\!]^*$ be the unique element of $\lim_{\gamma \to 0} [\![\mathcal{M}]\!]_\gamma^*$. The semantics has an important property:

###### Proposition 3.4.

$[\![\mathcal{M}]\!]^* \in [\![\mathcal{M}]\!]_0^*$, so if $\mathcal{M}$ is consistent, then $[\![\mathcal{M}]\!]^* \in [\![\mathcal{M}]\!]_{\mathit{sd}}$.

4 Relationships to Other Graphical Models
------------------------------------------

We start by relating PDGs to two of the most popular graphical models: BNs and factor graphs. PDGs are strictly more general than BNs, and can emulate factor graphs for a particular value of $\gamma$.

### 4.1 Bayesian Networks

Construction 2.2 can be generalized to convert arbitrary Bayesian networks into PDGs.
Given a BN $\mathcal{B}$ and a positive confidence $\beta_X$ for the cpd of each variable $X$ of $\mathcal{B}$, let $\mathcal{M}_{\mathcal{B},\beta}$ be the PDG comprising the cpds of $\mathcal{B}$ in this way; we defer the straightforward formal details to the appendix.

###### Theorem 4.1.

If $\mathcal{B}$ is a Bayesian network and $\Pr_{\mathcal{B}}$ is the distribution it specifies, then for all $\gamma > 0$ and all vectors $\beta$ such that $\beta_L > 0$ for all edges $L$, $[\![\mathcal{M}_{\mathcal{B},\beta}]\!]_\gamma^* = \{\Pr_{\mathcal{B}}\}$, and thus $[\![\mathcal{M}_{\mathcal{B},\beta}]\!]^* = \Pr_{\mathcal{B}}$.

Theorem 4.1 is quite robust to parameter choices: it holds for every weight vector $\beta$ and all $\gamma > 0$.
However, it does lean heavily on our assumption that $\alpha = \mathbf{1}$, making it our only result that does not have a natural analog for general $\alpha$.

### 4.2 Factor Graphs

Factor graphs (Kschischang, Frey, and Loeliger 2001), like PDGs, generalize BNs. In this section, we consider the relationship between factor graphs (FGs) and PDGs.

###### Definition 4.1.

A *factor graph* $\Phi$ is a set of random variables $\mathcal{X} = \{X_i\}$ and *factors* $\{\phi_J : \mathcal{V}(X_J) \to \mathbb{R}_{\geq 0}\}_{J \in \mathcal{J}}$, where $X_J \subseteq \mathcal{X}$. More precisely, each factor $\phi_J$ is associated with a subset $X_J \subseteq \mathcal{X}$ of variables, and maps joint settings of $X_J$ to non-negative real numbers.
$\Phi$ specifies a distribution

$$\Pr_\Phi(\vec{x}) = \frac{1}{Z_\Phi} \prod_{J \in \mathcal{J}} \phi_J(\vec{x}_J),$$

where $\vec{x}$ is a joint setting of all of the variables, $\vec{x}_J$ is the restriction of $\vec{x}$ to only the variables $X_J$, and $Z_\Phi$ is the constant required to normalize the distribution.

The cpds of a PDG naturally constitute a collection of factors, so it is natural to wonder how the semantics of a PDG compares to simply treating the cpds as factors in a factor graph. To answer this, we start by making the translation precise.

###### Definition 4.2 (unweighted PDG to factor graph).
If $\mathcal{N} = (\mathcal{G}, \mathbf{p})$ is an unweighted PDG, define the associated FG $\Phi_{\mathcal{N}}$ on the variables $(\mathcal{N}, \mathcal{V})$ by taking $\mathcal{J}$ to be the set of edges, and for an edge $L$ from $Z$ to $Y$, taking $X_L = \{Z, Y\}$, and $\phi_L(z, y)$ to be $\mathbf{p}_L(y \mid z)$ (i.e., $(\mathbf{p}_L(z))(y)$).

It turns out we can also do the reverse. Using essentially the same idea as in Construction 2.2, we can encode a factor graph as an assertion about the unconditional probability distribution over the variables associated with each factor.

###### Definition 4.3 (factor graph to unweighted PDG).
For a FG $\Phi$, let $\mathcal{N}_\Phi$ be the unweighted PDG consisting of

* the variables in $\Phi$ together with $\mathit{1}$ and a variable $X_J := \prod_{j \in J} X_j$ for every factor $J \in \mathcal{J}$, and
* edges $\mathit{1} \to X_J$ for each $J$ and $X_J \twoheadrightarrow X_j$ for each $X_j \in \mathbf{X}_J$,

where the edges $X_J \twoheadrightarrow X_j$ are associated with the appropriate projections, and each $\mathit{1} \to X_J$ is associated with the unconditional joint distribution on $X_J$ obtained by normalizing $\phi_J$.
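To make the factor-graph side of these translations concrete, here is a brute-force computation of $\Pr_\Phi$ for a small example (a sketch; the three binary variables and the factor values are invented for illustration):

```python
from itertools import product

# Small factor graph over binary variables A, B, C, with factors
# phi1 on {A, B} and phi2 on {B, C}; the factor values are arbitrary.
phi1 = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}
phi2 = {(0, 0): 1.0, (0, 1): 4.0, (1, 0): 2.0, (1, 1): 1.0}

# Pr_Phi: multiply the factors on each joint setting, then normalize by Z_Phi.
unnorm = {(a, b, c): phi1[(a, b)] * phi2[(b, c)]
          for a, b, c in product((0, 1), repeat=3)}
Z = sum(unnorm.values())
pr_phi = {w: v / Z for w, v in unnorm.items()}

assert abs(sum(pr_phi.values()) - 1.0) < 1e-12
assert abs(pr_phi[(0, 0, 0)] - 2.0 / 27.0) < 1e-12  # Z = 27 for these factors
```

Under Definition 4.3, each of the two factors would become an auxiliary product variable $X_J$ with a normalized distribution attached to the edge $\mathit{1} \to X_J$.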
The process is illustrated in Figure 4.

Figure 4: Conversion of the PDG in Example 2 to a factor graph according to Definition 4.2 (left), and from that factor graph back to a PDG by Definition 4.3 (right). In the latter, for each $J$ we introduce a new variable $X_J$ (displayed as a smaller, darker rectangle), whose values are joint settings of the variables connected to it, and also an edge $\mathit{1} \to X_J$ (shown in blue), to which we associate the unconditional distribution given by normalizing $\phi_J$.

PDGs are directed graphs, while factor graphs are undirected. The map from PDGs to factor graphs thus loses some important structure. As shown in Figure 4, this mapping can change the graphical structure significantly. Nevertheless,

###### Theorem 4.2.
$\Pr_\Phi = [\![\mathcal{N}_\Phi]\!]_1^*$ for all factor graphs $\Phi$. (Recall that we identify the unweighted PDG $(\mathcal{G}, \mathbf{p})$ with the weighted PDG $(\mathcal{G}, \mathbf{p}, \mathbf{1}, \mathbf{1})$.)

###### Theorem 4.3.

$[\![\mathcal{N}]\!]_1^* = \Pr_{\Phi_{\mathcal{N}}}$ for all unweighted PDGs $\mathcal{N}$.

The correspondence hinges on the fact that we take $\gamma = 1$, so that $\mathit{Inc}$ and $\mathit{IDef}$ are weighted equally. Because the user of a PDG gets to choose $\gamma$, the fact that the translation from factor graphs to PDGs preserves semantics only for $\gamma = 1$ poses no problem. Conversely, the fact that the reverse correspondence requires $\gamma = 1$ suggests that factor graphs are less flexible than PDGs. What about weighted PDGs $(\mathcal{G}, \mathbf{p}, \beta)$ where $\beta \neq \mathbf{1}$?
There is also a standard notion of weighted factor graph, but as long as we stick with our convention of taking $\alpha = \mathbf{1}$, we cannot relate them to weighted PDGs. As we are about to see, once we drop this convention, we can do much more.

### 4.3 Factored Exponential Families

A *weighted factor graph (WFG)* $\Psi$ is a pair $(\Phi, \theta)$ consisting of a factor graph $\Phi$ together with a vector of non-negative weights $\{\theta_J\}_{J \in \mathcal{J}}$. $\Psi$ specifies a canonical scoring function

$$\mathit{GFE}_\Psi(\mu) := \mathbb{E}_{\vec{x} \sim \mu}\left[\sum_{J \in \mathcal{J}} \theta_J \log \frac{1}{\phi_J(\vec{x}_J)}\right] - \mathrm{H}(\mu), \tag{4}$$

called the *variational Gibbs free energy* (Mezard and Montanari 2009).
$\mathit{GFE}_\Psi$ is uniquely minimized by the distribution $\Pr_\Psi(\vec{x}) = \frac{1}{Z_\Psi} \prod_{J \in \mathcal{J}} \phi_J(\vec{x}_J)^{\theta_J}$, which matches the unweighted case when every $\theta_J = 1$. The mapping $\theta \mapsto \Pr_{(\Phi,\theta)}$ is known as $\Phi$'s *exponential family* and is a central tool in the analysis and development of many algorithms for graphical models (Wainwright, Jordan et al. 2008). PDGs can in fact capture the full exponential family of a factor graph, but only by allowing values of $\alpha$ other than $\mathbf{1}$.
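The claim that $\Pr_\Psi$ minimizes the free energy is easy to check numerically. The following sketch (our own toy instance; one binary variable, two factors, all values invented) evaluates Eq. (4) directly and compares $\Pr_\Psi$ against random alternatives:

```python
import math
import random

# One binary variable X with two factors and weights theta (values arbitrary).
phi = [{0: 0.7, 1: 0.3}, {0: 0.6, 1: 0.4}]
theta = [1.0, 0.5]

def gfe(mu):
    """Variational Gibbs free energy of mu, as in Eq. (4)."""
    e = 0.0
    for x, m in mu.items():
        if m > 0.0:
            e += m * sum(t * math.log(1 / f[x]) for f, t in zip(phi, theta))
            e -= m * math.log(1 / m)          # the -H(mu) term
    return e

# Pr_Psi: normalize the theta-weighted product of the factors.
w = {x: math.prod(f[x] ** t for f, t in zip(phi, theta)) for x in (0, 1)}
Z = sum(w.values())
pr_psi = {x: v / Z for x, v in w.items()}

random.seed(1)
for _ in range(200):                          # no other mu achieves a lower GFE
    p = random.uniform(1e-3, 1 - 1e-3)
    assert gfe(pr_psi) <= gfe({0: p, 1: 1 - p}) + 1e-12
```

The check works because $\mathit{GFE}_\Psi(\mu) = D(\mu \,\|\, \Pr_\Psi) - \log Z_\Psi$, so the minimizer is unique.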
In this case, the only definition that requires alteration is $\mathit{IDef}$, which now depends on the *weighted multigraph* $(\mathcal{G}^{\mathcal{M}}, \alpha^{\mathcal{M}})$, and is given by

$$\mathit{IDef}_{\mathcal{M}}(\mu) := \sum_{X \xrightarrow{L} Y \in \mathcal{E}} \alpha_L \mathrm{H}_\mu(Y \mid X) - \mathrm{H}(\mu). \tag{5}$$

Thus, the conditional entropy $\mathrm{H}_\mu(Y \mid X)$ associated with the edge $X \xrightarrow{L} Y$ is multiplied by the weight $\alpha_L$ of that edge. One key benefit of using $\alpha$ is that we can capture arbitrary WFGs, not just ones with a constant weight vector.
All we have to do is to ensure that in our translation from factor graphs to PDGs, the ratio $\alpha_L / \beta_L$ is a constant. (Of course, if we allow arbitrary weights, we cannot hope to do this if $\alpha_L = 1$ for all edges $L$.) We therefore define a family of translations, parameterized by the ratio of $\alpha_L$ to $\beta_L$.

###### Definition 4.4 (WFG to PDG).

Given a WFG $\Psi = (\Phi, \theta)$ and a positive number $k$, we define the corresponding PDG $\mathcal{M}_{\Psi,k} = (\mathcal{N}_\Phi, \alpha_\theta, \beta_\theta)$ by taking $\beta_J = k\theta_J$ and $\alpha_J = \theta_J$ for the edge $\mathit{1} \to X_J$, and taking $\beta_L = k$ and $\alpha_L = 1$ for the projections $X_J \twoheadrightarrow X_j$.

We now extend Definitions 4.2 and 4.3 to (weighted) PDGs and WFGs. In translating a PDG to a WFG, there will necessarily be some loss of information: PDGs have two weight vectors, while WFGs have only one. Here we throw out $\alpha$ and keep $\beta$, though in its role here as a left inverse of Definition 4.4, either choice would suffice.

###### Definition 4.5 (PDG to WFG).

Given a (weighted) PDG $\mathcal{M} = (\mathcal{N}, \beta)$, we take its corresponding WFG to be $\Psi_{\mathcal{M}} := (\Phi_{\mathcal{N}}, \beta)$; that is, $\theta_L := \beta_L$ for all edges $L$.
We now show that we can capture the entire exponential family of a factor graph, and even its associated free energy, but only for $\gamma$ equal to the constant $k$ used in the translation.

###### Theorem 4.4.

For all WFGs $\Psi = (\Phi, \theta)$ and all $\gamma > 0$, we have that $\mathit{GFE}_\Psi = \frac{1}{\gamma}[\![\mathcal{M}_{\Psi,\gamma}]\!]_\gamma + C$ for some constant $C$, so $\Pr_\Psi$ is the unique element of $[\![\mathcal{M}_{\Psi,\gamma}]\!]_\gamma^*$.

In particular, for $k = 1$, so that $\theta$ is used for both the functions $\alpha$ and $\beta$ of the resulting PDG, Theorem 4.4 strictly generalizes Theorem 4.2.

###### Corollary 4.4.1.
For all weighted factor graphs $(\Phi, \theta)$, we have that $\Pr_{(\Phi,\theta)} = [\![(\mathcal{N}_\Phi, \theta, \theta)]\!]_1^*$.

Conversely, as long as the ratio of $\alpha_L$ to $\beta_L$ is constant, the reverse translation also preserves semantics.

###### Theorem 4.5.

For all unweighted PDGs $\mathcal{N}$ and non-negative vectors $\mathbf{v}$ over $\mathcal{E}^{\mathcal{N}}$, and all $\gamma > 0$, we have that $[\![(\mathcal{N}, \mathbf{v}, \gamma\mathbf{v})]\!]_\gamma = \gamma\, \mathit{GFE}_{(\Phi_{\mathcal{N}}, \mathbf{v})}$; consequently, $[\![(\mathcal{N}, \mathbf{v}, \gamma\mathbf{v})]\!]_\gamma^* = \{\Pr_{(\Phi_{\mathcal{N}}, \mathbf{v})}\}$.

The key step in proving Theorems 4.4 and 4.5 (and in the proofs of a number of other results) involves rewriting $[\![\mathcal{M}]\!]_\gamma$ as follows:

###### Proposition 4.6.

Letting $x^{\mathbf{w}}$ and $y^{\mathbf{w}}$ denote the values of $X$ and $Y$, respectively, in $\mathbf{w} \in \mathcal{V}(\mathcal{M})$, we have

$$[\![\mathcal{M}]\!]_\gamma(\mu) = \mathbb{E}_{\mathbf{w} \sim \mu}\Bigg\{ \sum_{X \xrightarrow{L} Y} \bigg[ \underbrace{\beta_L \log \frac{1}{\mathbf{p}_L(y^{\mathbf{w}} \mid x^{\mathbf{w}})}}_{\text{log likelihood / cross entropy}} + \underbrace{(\alpha_L \gamma - \beta_L) \log \frac{1}{\mu(y^{\mathbf{w}} \mid x^{\mathbf{w}})}}_{\text{local regularization (if } \beta_L > \alpha_L \gamma \text{)}} \bigg] - \underbrace{\gamma \log \frac{1}{\mu(\mathbf{w})}}_{\text{global regularization}} \Bigg\}. \tag{6}$$

For a fixed $\gamma$, the first and last terms of (6) are equal to a scaled version of the free energy, $\gamma \mathit{GFE}_\Phi$, if we set $\phi_J := \mathbf{p}_L$ and $\theta_J := \beta_L / \gamma$. If, in addition, $\beta_L = \alpha_L \gamma$ for all edges $L$, then the local regularization term disappears, giving us the desired correspondence. Equation (6) also makes it clear that taking $\beta_L = \alpha_L \gamma$ for all edges $L$ is essentially necessary to get Theorems 4.3 and 4.2. Of course, a fixed $\gamma$ precludes taking the limit as $\gamma$ goes to 0, so our earlier consistency result for that limit does not apply. This is reflected in some strange behavior in factor graphs trying to capture the same phenomena as PDGs, as the following example shows.

###### Example 5.
Consider the PDG $\mathcal{M}$ containing just $X$ and $\mathit{1}$, and two edges $p, q : \mathit{1} \to X$. (Recall that such a PDG can arise if we get different information about the probability of $X$ from two different sources; this is a situation we certainly want to be able to capture!) Consider the simplest situation, where $p$ and $q$ are both associated with the same distribution on $X$; further suppose that the agent is certain about the distribution, so $\beta_p = \beta_q = 1$. For definiteness, suppose that $\mathcal{V}(X) = \{x_1, x_2\}$, and that the distribution associated with both edges is $\mu_{.7}$, which ascribes probability $.7$ to $x_1$. Then, as we would hope, $[\![\mathcal{M}]\!]^* = \{\mu_{.7}\}$; after all, both sources agree on the information.
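The factor-graph semantics, by contrast, multiplies the two identical cpds together before normalizing. A one-line check (a sketch) of what that product yields:

```python
# Two sources each assert Pr(X = x1) = 0.7.  Treated as two factors, their
# product squares the probabilities before normalizing, double-counting the
# agreeing evidence.
p = 0.7
x1 = p * p / (p * p + (1 - p) * (1 - p))   # = 0.49 / 0.58
assert abs(x1 - 0.49 / 0.58) < 1e-12
assert x1 > p                              # sharpened to about .85, not .7
```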
However, it can be shown that $\Pr_{\Psi_{\mathcal{M}}} = \mu_{.85}$, so $[\![\mathcal{M}]\!]_1^* = \{\mu_{.85}\}$. Although both $\theta$ and $\beta$ are measures of confidence, the way that the Gibbs free energy varies with $\theta$ is quite different from the way that the score of a PDG varies with $\beta$. The scoring function that we use for PDGs can be viewed as extending $\mathit{GFE}_{\Phi,\theta}$ by including the local regularization term. As $\gamma$ approaches zero, the importance of the global regularization term decreases relative to that of the local regularization term, so the PDG scoring function becomes quite different from the Gibbs free energy.

## 5 Discussion

We have introduced PDGs, a powerful tool for representing probabilistic information. They have a number of advantages over other probabilistic graphical models.

* They allow us to capture inconsistency, including conflicting information from multiple sources with varying degrees of reliability.
* They are much more modular than other representations; for example, we can combine information from two sources by simply taking the union of two PDGs, and it is easy to add new information (edges) and features (nodes) without affecting previously-received information.
* They allow for a clean separation between quantitative information (the cpds and weights $\beta$) and more qualitative information contained by the graph structure (and the weights $\alpha$); this is captured by the terms $\mathit{Inc}$ and $\mathit{IDef}$ in our scoring function.
* PDGs have (several) natural semantics; one of them allows us to pick out a unique distribution. Using this distribution, PDGs can capture BNs and factor graphs. In the latter case, a simple parameter shift in the corresponding PDG eliminates arguably problematic behavior of a factor graph.

We have only scratched the surface of what can be done with PDGs here. Two major issues that need to be tackled are inference and dynamics. How should we query a PDG for probabilistic information? How should we modify a PDG in light of new information or to make it more consistent? These issues turn out to be closely related. Due to space limitations, we just briefly give some intuitions and examples here.

Suppose that we want to compute the probability of $Y$ given $X$ in a PDG $\mathcal{M}$. For a cpd $p(Y \mid X)$, let $\mathcal{M}^{+p}$ be the PDG obtained by associating $p$ with a new edge in $\mathcal{M}$ from $X$ to $Y$, with $\alpha_p = 0$. We judge the quality of a candidate answer $p$ by the best possible score that $\mathcal{M}^{+p}$ gives to any distribution (which we call the *degree of inconsistency* of $\mathcal{M}^{+p}$).
It can be shown that the degree of inconsistency is minimized by $[\![\mathcal{M}]\!]^*(Y \mid X)$. Since the degree of inconsistency of $\mathcal{M}^{+p}$ is smooth and strongly convex as a function of $p$, we can compute its optimum values by standard gradient methods. This approach is inefficient as written (since it involves computing the full joint distribution $[\![\mathcal{M}^{+p}]\!]^*$), but we believe that standard approximation techniques will allow us to draw inferences efficiently.

To take another example, conditioning can be understood in terms of resolving inconsistencies in a PDG. To condition on an observation $Y = y$, given a situation described by a PDG $\mathcal{M}$, we can add an edge from $\mathit{1}$ to $Y$ in $\mathcal{M}$, annotated with the cpd that gives probability 1 to $y$, to get the (possibly inconsistent) PDG $\mathcal{M}^{+(Y=y)}$.
The distribution $[\![\mathcal{M}^{+(Y=y)}]\!]^*$ turns out to be the result of conditioning $[\![\mathcal{M}]\!]^*$ on $Y = y$. This account of conditioning generalizes without modification to give Jeffrey's Rule (Jeffrey 1968), a more general approach to belief updating. Issues of updating and inconsistency also arise in variational inference. A variational autoencoder (Kingma and Welling 2013), for instance, is essentially three cpds: a prior $p(Z)$, a decoder $p(X \mid Z)$, and an encoder $q(Z \mid X)$. Because two cpds target $Z$ (and the cpds are inconsistent until fully trained), this situation can be represented by PDGs but not by other graphical models. We hope to report further on the deep connection between inference, updating, and the resolution of inconsistency in PDGs in future work.

Ethics Statement
----------------

Because PDGs are a recent theoretical development, there is a lot of guesswork in evaluating the impact. Here are two views of opposite polarity.

### 5.1 Positive Impacts

One can imagine many applications of enabling simple and coherent aggregation of (possibly inconsistent) information. In particular we can imagine using PDGs to build and interpret a communal and global database of statistical models, in a way that may not only enable more accurate predictions, but also highlights conflicts between information. This could have many benefits.
Suppose, for instance, that two researchers train models, but use datasets with different racial makeups. Rather than trying to get an uninterpretable model to “get it right” the first time, we could simply highlight any such clashes and flag them for review. Rather than trying to ensure fairness by design, which is both tricky and costly, we envision an alternative: simply aggregate (conflicting) statistically optimal results, and allow existing social structure to resolve conflicts, rather than sending researchers to fiddle with loss functions until they look fair.

### 5.2 Negative Impacts

We can also imagine less rosy outcomes. To the extent that PDGs can model and reason with inconsistency, if we adopt the attitude that a PDG need not wait until it is consistent to be used, it is not hard to imagine a world where a PDG gives biased and poorly thought-out conclusions. It is clear that PDGs need a great deal more vetting before they can be used for such important purposes as aggregating the world’s statistical knowledge.

PDGs are powerful statistical models, but are by necessity semantically more complicated than many existing methods. This will likely restrict their accessibility. To mitigate this, we commit to making sure our work is widely accessible to researchers of different backgrounds.
5600aec9-eb22-4382-9900-d3b3a303d6da
trentmkelly/LessWrong-43k
LessWrong
0th Person and 1st Person Logic Truth values in classical logic have more than one interpretation. In 0th Person Logic, the truth values are interpreted as True and False. In 1st Person Logic, the truth values are interpreted as Here and Absent relative to the current reasoner. Importantly, these are both useful modes of reasoning that can coexist in a logical embedded agent. This idea is so simple, and has brought me so much clarity that I cannot see how an adequate formal theory of anthropics could avoid it! Crash Course in Semantics First, let's make sure we understand how to connect logic with meaning. Consider classical propositional logic. We set this up formally by defining terms, connectives, and rules for manipulation. Let's consider one of these terms: A. What does this mean? Well, its meaning is not specified yet! So how do we make it mean something? Of course, we could just say something like "A represents the statement that 'a ball is red'". But that's a little unsatisfying, isn't it? We're just passing all the hard work of meaning to English. So let's imagine that we have to convey the meaning of A without using words. We might draw pictures in which a ball is red, and pictures in which there is not a red ball, and say that only the former are A. To be completely unambiguous, we would need to consider all the possible pictures, and point out which subset of them are A. For formalization purposes, we will say that this set is the meaning of A. There's much more that can be said about semantics (see, for example, the Highly Advanced Epistemology 101 for Beginners sequence), but this will suffice as a starting point for us. 0th Person Logic Normally, we think of the meaning of A as independent of any observers. Sure, we're the ones defining and using it, but it's something everyone can agree on once the meaning has been established. Due to this independence from observers, I've termed this way of doing things 0th Person Logic (or 0P-logic). 
The elements of a meaning set I'll
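The "subset of all possible pictures" idea above can be written down directly. A minimal sketch (the world model here is invented for illustration, not from the post): represent each picture as a small dictionary, the meaning of a sentence as the set of pictures where it holds, and entailment as a subset check.

```python
from itertools import product

# Worlds ("pictures"): each assigns a color and a location to a ball.
colors = ["red", "green", "blue"]
locations = ["indoors", "outdoors"]
worlds = [{"color": c, "location": l} for c, l in product(colors, locations)]

# The meaning of a sentence is the set of worlds where it holds.
def meaning(pred):
    return frozenset(i for i, w in enumerate(worlds) if pred(w))

A = meaning(lambda w: w["color"] == "red")               # "a ball is red"
B = meaning(lambda w: w["color"] in ("red", "green"))    # "the ball is warm-colored"

# A entails B iff every A-world is also a B-world.
def entails(m1, m2):
    return m1 <= m2

print(entails(A, B))  # True: every red-ball world is a warm-colored world
print(entails(B, A))  # False
```

Note that nothing in this construction refers to an observer; that observer-independence is exactly what the post labels the 0th-person reading.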
a751e967-b91c-4668-b49b-03261d1dd21b
trentmkelly/LessWrong-43k
LessWrong
Terminology suggestion: Say "degrees utility" instead of "utils" to prompt affine thinking A common mistake people make with utility functions is taking individual utility numbers as meaningful, and performing operations such as adding them or doubling them.  But utility functions are only defined up to positive affine transformation. Talking about "utils" seems like it would encourage this sort of mistake; it makes it sound like some sort of quantity of stuff, that can be meaningfully added, scaled, etc.  Now the use of a unit -- "utils" -- instead of bare real numbers does remind us that the scale we've picked is arbitrary, but it doesn't remind us that the zero we've picked is also arbitrary, and encourages such illegal operations as addition and scaling.  It suggests linear, not affine. But there is a common everyday quantity which we ordinarily measure with an affine scale, and that's temperature.  Now, in fact, temperatures really do have an absolute zero (and if you make sufficient use of natural units, they have an absolute scale, as well), but generally we measure temperature with scales that were invented before that fact was recognized.  And so while we may have Kelvins, we have "degrees Fahrenheit" or "degrees Celsius". If you've used these scales long enough you recognize that it is meaningless to e.g. add things measured on these scales, or to multiply them by scalars.  So I think it would be a helpful cognitive reminder to say something like "degrees utility" instead of "utils", to suggest an affine scale like we use for temperature, rather than a linear scale like we use for length or time or mass. The analogy isn't entirely perfect, because as I've mentioned above, temperature actually can be measured on a linear scale (and with sufficient use of natural units, an absolute scale); but the point is just to prompt the right style of thinking, and in everyday life we usually think of temperature as an (ordered) affine thing, like utility. 
As such I recommend saying "degrees utility" instead of "utils".  If there is some other familiar quan
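The invariance being recommended here can be checked mechanically. A small sketch with invented numbers (not from the post): rank two lotteries by expected utility, then confirm the ranking is unchanged under any transform u → a·u + b with a > 0, even though the raw numbers move around freely.

```python
# A utility function: one arbitrary representative of its affine class.
u = {"apple": 0.0, "banana": 1.0, "cherry": 3.0}

# Two lotteries: outcome -> probability.
L1 = {"apple": 0.5, "cherry": 0.5}   # EU under u: 1.5
L2 = {"banana": 1.0}                 # EU under u: 1.0

def expected_utility(lottery, util):
    return sum(p * util[o] for o, p in lottery.items())

def prefers_L1(util):
    return expected_utility(L1, util) > expected_utility(L2, util)

# Every positive affine transform represents the same preferences,
# just as Fahrenheit and Celsius order temperatures identically.
for a, b in [(1, 0), (2, -10), (0.1, 100)]:
    v = {o: a * x + b for o, x in u.items()}
    assert prefers_L1(v) == prefers_L1(u)

print(expected_utility(L1, u), expected_utility(L2, u))  # 1.5 1.0
```

The raw values 1.5 and 1.0, by contrast, carry no meaning on their own: under the (0.1, 100) transform they become 100.15 and 100.1, so adding or doubling them is as meaningless as doubling 20°C.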
1f541ae5-4238-42ec-85d4-ebd53acdf899
trentmkelly/LessWrong-43k
LessWrong
[ASoT] Instrumental convergence is useful A central concept I got out of Reframing impact is that instrumental convergence can be useful for shaping the motivations of superintelligent agents. i.e. there are two frames for thinking about instrumental convergence, one negative and one positive. * Instrumental convergence means most agents will want to take power, this is a safety problem. * Instrumental convergence means we can often predict the motivations of agents with arbitrary utility functions. The discussion seems to center around the former negative frame, but the positive frame is useful too! Ideally, it would be instrumentally convergent (in some sense) for the AI to do the thing we want, then we'd have a nice basin of safety. Toy example of this framing generating interesting ideas: The following exercise > Using a hypercomputer, create an AGI which takes in some data, builds a world model, and then can communicate with a copy of itself (trained on potentially different data from the same environment) to coordinate on a choice of one object in the environment which is not directly visible to both of their sensors. Can be totally solved now (with runnable code; I claim) by Creating a videogame where the two copies of the AGI (e.g. AIXI) communicate, then individually pick an object (via an object specification language). If they pick the same object they "win" and are released into separate simulated universes to do whatever, if they pick different objects they "die" (i.e. lose power, and can't take further actions). Even though we can't solve inner alignment on the agents, they'll still want to coordinate to preserve optionality/seek power. As long as we don't look into the simulation (the agents will hack you to escape in order to gain more computational resources from base reality) and you prove code correctness. Hardware exploits can still screw you but ignoring them this works. 
(If you don't agree this can be turned into code ask me specifically about parts. I have ugly pseudocode eq
bf24d7de-30af-4bcd-90c1-ae42fc0dea4a
trentmkelly/LessWrong-43k
LessWrong
Is it harder to become a MIRI mathematician in 2019 compared to in 2013? Below, I make the distinction between "MIRI mathematician" and "MIRI engineer". (In my mind I tend to think of these as "researchers" and "engineers", respectively, but I think MIRI calls both of these classes of people "researchers" these days so I want to avoid using "researcher".) Basically I count anyone who has published a paper or post in agent foundations as a mathematician, and everyone else as an engineer. From the current team page, I would classify Nate, Eliezer, Benya, Scott, Sam, Abram, and Tsvi as "MIRI mathematician", and Jesse, Nick, Buck, Ben, James, and Edward as "MIRI engineer". I don't actually know if this is a reasonable classification given that MIRI's recent work isn't public. As far as I can tell, MIRI has not made any new mathematician hires since mid-2017; see this table which I made, and this blog post which I believe is the last hiring update for mathematicians. Assuming there are no unannounced hires, the lack of hires can be for two broad reasons: * A change in the demand: less need for new people, more selective about who to hire, etc. * A change in the supply: fewer people to pick from, drop in quality of people that come into contact, etc. I think both changes have probably happened, although my guess would be that with the takeoff of the AI safety field, there are now more people to pick from, so the lack of hires is mostly due to a change in the demand. One way I've been thinking about these changes is to consider the same person attempting to work at MIRI as a mathematician in 2013 vs 2019. In particular, I've been thinking about Nate Soares, who was hired as a research fellow in 2014. To give my own summary, Nate was a full-time Google engineer who had taken multi-variable calculus and real analysis in college (in addition to the mathematics appearing in his CS degree), but otherwise with no experience learning or doing research in math. 
He discovered LessWrong probably in early or mid 2013, and came around to MIRI's worl
0f8cb23a-ad06-4c59-84d8-e1165ac87023
trentmkelly/LessWrong-43k
LessWrong
Logic: the science of algorithm evaluating algorithms "Mathematical logic is the science of algorithm evaluating algorithms." Do you think that this is an overly generalizing, far fetched proposition or an almost trivial statement? Wait, don't cast your vote before the end of this short essay! It is hard to dispute that logic is the science of drawing correct conclusions. It studies theoretically falsifiable rules that lead to derivations which are verifiable in a finite amount of mechanical steps, even by machines. Let's dig a bit deeper by starting to focusing on the "drawing correct conclusions" part, first. It implies the logic deals both with abstract rules: "drawing" and their meaning: "conclusions". Logic is not just about mindless following of certain rules (that's algebra :P) its conclusions must have truth values that refer to some "model". Take for example De Morgan's law: not (a and b) = (not a) or (not b). It can be checked for each four possible substitutions of boolean values: a = false, b = false; a = false, b = true; .... If we agreed upon the standard meaning of the logical not, or, and and operators, then we must conclude that the De Morgan's rule is perfect. On the other hand: the similar looking rule not (a and b) = (not a) and (not b) can be easily refuted by evaluating for the counterexample a = false, b = true. Generally: in any useful mathematical system, logical conclusions should work in some interesting model. However, in general, total verifiability is way too much to ask. As Karl Popper pointed out: often one must be satisfied with falsifiability of scientific statements as a criterion. For example, the following logical rule not (for each x: F(x)) <=> exists x : not F(x) is impossible to check for every formula F.  Not directly checkable statements include all those where the set of all possible substitutions is (potentially) infinite. This observation could be formalized by saying that a mapping from abstract to concrete is required. 
This thinking can be made precise by forma
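The exhaustive checking described above is easy to mechanize. A minimal sketch in Python: evaluate the correct De Morgan law over all four boolean substitutions, and hunt for counterexamples to the wrong variant.

```python
from itertools import product

# The correct law: not (a and b) = (not a) or (not b)
def de_morgan_correct(a, b):
    return (not (a and b)) == ((not a) or (not b))

# The wrong variant: not (a and b) = (not a) and (not b)
def de_morgan_wrong(a, b):
    return (not (a and b)) == ((not a) and (not b))

# The correct law holds for all four substitutions.
assert all(de_morgan_correct(a, b) for a, b in product([False, True], repeat=2))

# The wrong variant fails; collect its counterexamples.
counterexamples = [(a, b) for a, b in product([False, True], repeat=2)
                   if not de_morgan_wrong(a, b)]
print(counterexamples)  # [(False, True), (True, False)]
```

This is exactly the essay's point in miniature: the check is finite and mechanical here, but replace the four substitutions with "all formulas F" and no such loop terminates.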
2bb05668-521b-41bf-bd2e-ef197cda0542
trentmkelly/LessWrong-43k
LessWrong
Are We Right about How Effective Mockery Is? Crossposted from Figuring Figuring. I did a second survey that fixed some of the flaws of the first survey. The results from the second survey significantly color the interpretation of the results from the first survey given in the first “Conclusion and Discussion” section. Please continue reading past the section titled “Second Survey” to get a full picture of the results from all surveys. Intro A couple days ago a friend of mine on facebook asked about arguments in favor of mockery. They pointed out that they had noticed a lot of facebook posts mocking people for not wearing masks in the covid-19 era, and wondered whether this was an effective way to change people’s behaviors. I said in the comment section of that post that I would make a survey that worked as follows. Roughly half of the survey takers would be randomly assigned to answer the following questions: 1. Do you think that mockery is an effective way to change people’s minds? 2. Do you think that mockery is an effective way to change people’s behaviors? The other half would be randomly assigned to answer these questions: 1. Has being mocked ever caused you to change your mind about something? 2. Has being mocked ever caused you to change your habits or behaviors? No survey respondent was permitted to see all four questions. The possible answers to each question were “Yes”, “No”, and “Not sure”. I made this survey using GuidedTrack. I posted it on my facebook wall, and also posted it to Positly and paid people to participate. A total of 145 people responded to any of the questions on positly. 74 were asked the first set of questions, and 71 were asked the second set of questions. A total of 66 people responded to any of the questions on facebook. 31 were asked the first set of questions, and 35 were asked the second set of questions. 
Before I go on to tell you the results of the survey and the predictions me and some of my friends made, you might want to make your own predictions. I sugges
c0c1c6cb-92e4-4f2d-a6a8-18763f3d2794
trentmkelly/LessWrong-43k
LessWrong
New Haven/Yale Less Wrong Meetup: 5 pm, Monday October 12 Posted on behalf of Thomas McCabe: I (Thomas McCabe, a Yale math student) will be hosting a Less Wrong meetup in New Haven, Connecticut, on the Yale University campus. The meetup will take place at 5 PM on Monday, October 12th, at the Yorkside Pizza & Restaurant at 288 York St. (time and place are flexible if anyone has a conflict); please comment if you'd like to attend, or if you have any questions or ideas. The location can be found on Google Maps at this link. Some confirmed attendees include SIAI folk and Less Wrongers Anna Salamon, Steve Rayhawk, Carl Shulman, and Roko Mijc. Feel free to contact me at thomas.mccabe@yale.edu, or at 518-248-5525. Thanks, and see everyone there!
10dc6c6c-56ac-487d-ae44-ce8c334265a8
StampyAI/alignment-research-dataset/blogs
Blogs
meta-tracking meta-tracking ------------- some social constructs don't originally track anything in the real world, but people who erroneously believe in them start assigning them attributes and meaning, and then those concepts bootstrap themselves into being real at least in that they track people's beliefs. so, for example, refusing to follow astrology because it doesn't track the real world, fails to track all the people that start acting in astrologically predictable ways from believing (originally erroneously) in astrology. another way in which one must take care to meta-track because people are involved, is the meaning of the meaning of words. the meaning of a word is defined by its usage; but, "the meaning of a word" is understood by many to instead track some essence of the word. while the idea of that essence is wrong, saying "the meaning of a word is defined by its usage" is kind of wrong; not because that's not what the meaning of a word is, but because in that sentence one is using a fairly non-usage meaning of "the meaning of a word". and, you have to remember that phrases mean what they are understood to mean; so, in a weird way, the only statements that are understandable to someone are the ones that are agreeable with what they think, because those statements are those that match the general worldview-ideas-definitions that the person has; and fundamental disagreement entails using definitions of words and ideas that the person doesn't have, and therefore are kind of failures to communicate with them.
7e1ba378-e7ab-4344-a73f-4020760dcc5b
trentmkelly/LessWrong-43k
LessWrong
Covid-19 6/4: The Eye of the Storm Still standing by this: Covid-19: My Current Model Previous update posts: Covid-19 5/29: Dumb Reopening, Covid 5/14: Limbo Under Remember last week when I opened with this? > I remember when people on Twitter had constant reminders that today was, indeed, only Wednesday, or whatever day it happened to be. Time moved that slowly. > > Time has sped up again. Well, yeah. Not so much anymore. In March and April I found myself constantly checking Twitter and the financial markets for news, frantically hunting for ways to get a handle on what was happening in the world, worried everything would fall apart. Would our supply chains hold? Would we be able to maintain civil order? Would millions die? How can I keep my family and friends safe? My beloved New York City was no longer a place one could live a reasonable life. So we fled. Even after that, great worry. Then things started to calm down. Most of May, I increasingly managed to relax. We learned how to grill properly. I played a bunch of Assassin’s Creed Odyssey. I stopped checking Twitter or the stock market. My biggest worries were dealing with getting the Emergents Alpha ready and Magic: The Gathering melting down under the weight of the companion mechanic in particular and its new design philosophy in general. End of the Beginning Then, on May 25, 2020, in Minneapolis, four policemen murdered George Floyd. The resulting cycle of protests against police brutality causing more police brutality, which in turn amplifies the protests, has been ongoing each day since then. Huge crowds flood the streets. The crowds come disproportionately from communities with more Covid-19 cases than average. The crowds often yell. The police use tear gas on them, which causes coughing. The police arrest them and lock them in tight quarters, which is going to spread the virus even if the outdoor activities don’t, hopefully in small enough numbers this isn’t a huge deal. 
To the extent other things are also happening, that o
994c5d45-8348-41e2-ab6c-00d209e49412
trentmkelly/LessWrong-43k
LessWrong
Epistemology of evilness Most everyone seems to think that a big reason for bad things happening in the world is that some people are bad. Yet I almost never see advice for telling whether you yourself are a bad person, or for what to do about it if you seem to be one. If there are so many bad people, isn’t there a very real risk that you are one of them? Perhaps the model is one where you automatically know whether you are good or bad, and simply choose which to be. So the only people who are bad are those who want to be bad, and know that they are bad. But then if there is this big population of bad people out there who want to be bad, why is so little of the media devoted to their interests? There’s plenty on how to do all the good things that a good person would want to do, such as voting for the benefit of society, looking after your children, buying gifts, expressing gratitude to friends, holding a respectable dinner, pleasing your partner. Yet so little on scamming the elderly, effectively shaking off useless relatives, lying credibly, making money from investments that others are too squeamish to take, hiding bodies. Are the profit-driven corporate media missing out on a huge opportunity? If there aren’t a whole lot of knowingly bad people out there who want to be bad, and could use some information and encouragement, then either there aren’t bad people at all, or bad people don’t know that they are bad or don’t want to be bad. The former seems unlikely, by most meanings of ‘bad’. If the latter is true, why are people so blasé about the possibility that they themselves might be bad? *** Prompted by the excellent book Harry Potter and the Methods of Rationality, in which there is much talk of avoiding becoming ‘dark’, in stark contrast to the world that I’m familiar with. If you enjoy talking about HPMOR, and live close to Pittsburgh, come to the next Pittsburgh Less Wrong Meetup.
8a1895aa-3be9-4a0e-94b3-f17f4d729b6c
trentmkelly/LessWrong-43k
LessWrong
Extracting SAE task features for in-context learning TL;DR * We try to study task vectors in the SAE basis. This is challenging because there is no canonical way to convert an arbitrary vector in the residual stream to a linear combination of SAE features — you can't just pass an arbitrary vector through the encoder without going off distribution. * We explored the algorithm of gradient pursuit suggested in Smith et al, but it didn’t work for us without modifications. * Our approach is to apply the SAE encoder to the task vector, and then apply a gradient-based cleanup. This exploits the fact that task vectors have a differentiable objective. We find that this gives a sparser and cleaner reconstruction, which is also highly interpretable, and also serves as a better task vector due to directly optimizing for log likelihood. This takes us from ~100 active features to ~10. * Using our algorithm, we find two classes of SAE features involved in ICL. One of them recognizes the exact tasks or output formats from the examples, and another one encodes the tasks for execution by the model later on. We show that steering with these features has causal effects similar to task vectors.    This work was produced as part of the ML Alignment & Theory Scholars Program - Summer 24 Cohort, under mentorship from Neel Nanda and Arthur Conmy. Prior work Task or function vectors are internal representations of some task that LLMs form while processing an ICL prompt. They can be extracted from a model running on a few-shot prompt and then be used to make it complete the same task without having any prior context or task description. Several papers (Function vectors in large language models, In-Context Learning Creates Task Vectors) have proposed different ways to extract those task vectors. They all center around having ICL examples being fed to a model in the form of “input <separator> output, … ” and averaging the residuals on the “separator” token over a batch. 
This approach can reconstruct some part of the ICL performance but do
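The averaging recipe described above (collect the residual-stream activations at the separator token across a batch of few-shot prompts, then average) is simple enough to sketch. This is an illustration with made-up shapes and random data standing in for real activations; the array names and the steering scale are inventions for this sketch, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, seq_len, d_model = 32, 128, 512

# Stand-in residual-stream activations from runs on few-shot ICL prompts.
residuals = rng.normal(size=(batch, seq_len, d_model))

# Position of the separator token in each prompt (here: the same for all).
sep_positions = np.full(batch, 64)

# Task vector: mean residual at the separator token across the batch.
task_vector = residuals[np.arange(batch), sep_positions].mean(axis=0)
assert task_vector.shape == (d_model,)

# "Steering": add the scaled task vector to the residual stream of a
# zero-shot run; the scale is a tunable hyperparameter.
zero_shot_residual = rng.normal(size=(seq_len, d_model))
steered = zero_shot_residual + 4.0 * task_vector  # broadcasts over positions
```

The post's SAE angle then amounts to asking which sparse dictionary features make up `task_vector`, which is where the naive encoder pass goes off distribution and the gradient cleanup comes in.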
daeaad83-a584-4727-92f7-1d7ce0c7fdd6
trentmkelly/LessWrong-43k
LessWrong
What's an important (new) idea you haven't had time to argue for yet? Of course, I'm not expecting you to support the idea in the answers, but simply mentioning its conclusion:)
82b3b1b3-ad25-44bd-aa1d-66c601106b59
trentmkelly/LessWrong-43k
LessWrong
Psychology Replication Quiz The great guys over at 80,000 Hours made a quiz for that recent replication study (previously discussed on LW), where you can find out if you can predict as well as the scientists or the prediction market. Description: > Can you guess which psychology experiments had correct findings and which were bogus, just from reading a brief description of their results? > Depending on how long you want to play, we'll describe 10, 15 or 21 psychology findings published in Nature and Science, and you'll have to guess whether a replication, with a much larger sample size, got the same result. > Before starting, the people who organised these 21 replications asked expert psychologists to predict which results would hold up. We'll show you how you compare to their performance at the end! (And give you links to all the papers.)
92a02c31-72c0-4022-a78b-e6d497109dd6
trentmkelly/LessWrong-43k
LessWrong
Meetup : Sydney Rationality Dojo - December 2016 Discussion article for the meetup : Sydney Rationality Dojo - December 2016 WHEN: 04 December 2016 04:00:00PM (+1100) WHERE: 10 Shepherd Street, Chippendale Come join us for the final dojo of the year. Dinner afterwards for those who wish to come, as usual. We don't run a dojo January, since people are away on holidays, so this will be the last opportunity to come until February. Discussion article for the meetup : Sydney Rationality Dojo - December 2016
4395b452-c297-41c6-a5b6-5c79a7f132f5
trentmkelly/LessWrong-43k
LessWrong
The Fear Experiments I. INTRODUCTION The physiological experience of fear is an interesting area for inquiry because of how these physical effects come to influence our decision making. I will start by saying that these are very personal notes that are much too raw to be seen as a formal paper or argument. I am choosing to share them here because I think they could be fertile soil for an interesting conversation about epistemology. The Experiment:   In 2018 I had made a new year's resolution to run four times a week, because of work scheduling my run would most often be in the evening. After sticking with the resolution for a few nights I began to consider a large patch of woods I would pass on my run as a potential location for experimentation. The woods are roughly 80,000 square yards with the perimeter lined with residential houses. This is to say, the area is small in size with very little chance of an unknown threat finding its way into the area. The area's small size combined with a significantly low crime rate/lack of large predators created an excellent control setting for the experience of fear. With this, the experiments would begin.  Each evening after my run  I would enter the woods at the same opening and slowly make my way through the unlit forest. As I had imagined the environment proved to be a perfect control to almost immediately induce the typical physiological effects of fear. The hair standing up on your neck, the cold extremities, the hyper-focus to what is perceived as any unusual sensory experience. The experience of fear was like a thick viscous sludge. Each step was hard, but as I walked forward they became easier, and eventually once pushed through the sludge made its way into water. Over time the sludge would only appear at the beginning of the walk, diminishing little by little upon each visit. Eventually, the entire walk became water. 
Other than a slightly heightened awareness the movement from the lit street into the darkness of the forest overhang was
c0dc4db2-7efe-4c3f-901a-d3c5e7d99efe
trentmkelly/LessWrong-43k
LessWrong
Does cognitive therapy encourage bias? (This post might suffer from formatting problems. I'm pretty dumb with computers, so it's not a surprise to me, but if anyone out there knows how to fix it, I'd be grateful for the help.) Summary: Cognitive therapy may encourage [motivated cognition](http://lesswrong.com/lw/km/motivated_stopping_and_motivated_continuation/). My main source for this post is Judith Beck's [Cognitive Therapy: Basics and Beyond](http://www.amazon.com/Cognitive-Therapy-Judith-Beck-Phd/dp/0898628474/ref=sr_1_1?s=books&ie=UTF8&qid=1290418167&sr=1-1). "[Cognitive behavioral therapy](http://en.wikipedia.org/wiki/Cognitive_behavioral_therapy)" (CBT) is a catch-all term for a variety of therapeutic practices and theories. Among other things, it aims to teach patients to modify their own beliefs. The rationale seems to be this: (1) Affect, behavior, and cognition are interrelated such that changes in one of the three will lead to changes in the other two. (2) Affective problems, such as depression, can thus be addressed in a roundabout fashion: modifying the beliefs from which the undesired feelings stem. So far, so good. And how does one modify destructive beliefs? CBT offers many techniques. Alas, included among them seems to be motivated skepticism. For example, consider a depressed college student. She and her therapist decide that one of her bad beliefs is "I'm inadequate." They want to replace that bad one with a more positive one, namely, "I'm adequate in most ways (but I'm only human, too)." Their method is to do a worksheet comparing evidence for and against the old, negative belief. Listen to their dialog: [Therapist]: What evidence do you have that you're inadequate? [Patient]: Well, I didn't understand a concept my economics professor presented in class today. 
T: Okay, write that down on the right side, then put a big "BUT" next to it... Now, let's see if there could be another explanation for why you might not have understood the concept other than that you're inadequate.
0c906566-6ee2-482f-a3ff-8bc436903fda
trentmkelly/LessWrong-43k
LessWrong
When Are Results from Computational Complexity Not Too Coarse? Tl;dr, While an algorithm's computational complexity may be exponential in general (worst-case), it is often possible to stratify its input via some dimension k that makes it polynomial for a fixed k, and only exponential in k. Conceptually, this quantity captures the core aspect of a problem's structure that makes specific instances of it 'harder' than others, often with intuitive interpretations. Example: Bayesian Inference and Treewidth One can easily prove exact inference (decision problem of: "is P(X)>0?") is NP-hard by encoding 3-SAT as a Bayes Net. Showing that it's in NP is easy too. Given a 3-SAT instance ϕ over Q1,…,Qn, one can cleverly encode it as a Bayes Net B_ϕ such that P_{B_ϕ}(X=x^1)>0 if and only if ϕ is satisfiable (from Koller & Friedman 2009). Therefore, inference is NP-complete, implying that algorithms are worst-case exponential. But this can't be the typical case! Let's examine the example of a Bayes Net whose structure is a chain A⟶B⟶C⟶D, and say you want to compute the marginal P(D). The Naive Algorithm for marginalization would be to literally multiply all the conditional probability distribution (CPD) tables for each of the Bayes Net's nodes, and sum over all the variables other than X. If we assume each variable has at most v values, then the computational complexity is exponential in the number of variables n. * P(D)=∑_A ∑_B ∑_C P(A,B,C,D), which is O(v^n). But because of the factorization P(A,B,C,D)=P(A)P(B|A)P(C|B)P(D|C) due to the chain structure, we can shift the order of the sum around like this: * P(D)=∑_C P(D|C) ∑_B P(C|B) ∑_A P(A)P(B|A), and now the sum can be done in O(nv^2). Why? * Notice ∑_A P(A)P(B|A) is P(B), and to compute P(B=b) we need to multiply v times and sum v−1 times, overall O(v). * This needs to be done for every b, so O(v^2). Now we have cached P(B), and we move on to ∑_B P(C|B)P(B), where the same analysis applies. This is basically dynamic programming. 
So, at least for chains, inference can be done in linear time in input size
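The sum-pushing above is easy to check numerically. Here is a minimal NumPy sketch (the v = 3 value counts and random CPDs are illustrative, not from the post) that computes P(D) for the chain A⟶B⟶C⟶D both by building the full O(v^n) joint and by eliminating variables in order, and confirms the two agree:

```python
import numpy as np

rng = np.random.default_rng(0)
v = 3  # number of values per variable

def random_cpd(shape):
    """Random conditional probability table, normalized over the last axis."""
    t = rng.random(shape)
    return t / t.sum(axis=-1, keepdims=True)

p_a = random_cpd((v,))        # P(A)
p_b_a = random_cpd((v, v))    # P(B|A), row a gives the distribution of B
p_c_b = random_cpd((v, v))    # P(C|B)
p_d_c = random_cpd((v, v))    # P(D|C)

# Naive: materialize the full joint P(A,B,C,D) -- a v^n table -- then sum out A,B,C.
joint = (p_a[:, None, None, None]
         * p_b_a[:, :, None, None]
         * p_c_b[None, :, :, None]
         * p_d_c[None, None, :, :])
p_d_naive = joint.sum(axis=(0, 1, 2))

# Variable elimination: push each sum inward, caching P(B) then P(C) -- O(n v^2).
p_b = p_a @ p_b_a    # sum_A P(A) P(B|A)
p_c = p_b @ p_c_b    # sum_B P(B) P(C|B)
p_d = p_c @ p_d_c    # sum_C P(C) P(D|C)

assert np.allclose(p_d, p_d_naive)
```

Each `@` here is exactly one of the cached inner sums from the derivation: a v-vector times a v-by-v table, i.e. O(v^2) work per eliminated variable.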
b12b94cd-715d-46b3-84a2-41d517bad8b3
trentmkelly/LessWrong-43k
LessWrong
Learning Multi-Level Features with Matryoshka SAEs TL;DR: Matryoshka SAEs are a new variant of sparse autoencoders that learn features at multiple levels of abstraction by splitting the dictionary into groups of latents of increasing size. Earlier groups are regularized to reconstruct well without access to later groups, forcing the SAE to learn both high-level concepts and low-level concepts, rather than absorbing them in specific low-level features. Due to this regularization, Matryoshka SAEs reconstruct less well than standard BatchTopK SAEs trained on Gemma-2-2B, but their downstream language model loss is similar. They show dramatically lower rates of feature absorption, feature splits and shared information between latents. They perform better on targeted concept erasure tasks, but show mixed results on k-sparse probing and automated interpretability metrics. Note: There was also some excellent work on Matryoshka SAEs published by Noa Nabeshima last week. Our work was done independently and in parallel (we even used the same name!). This kind of parallel development provides a nice cross-validation of results - a natural replication in both directions. We believe our work offers complementary qualitative evaluation that further validates and expands upon Noa's findings. There are also some technical differences in our approaches, see the section “Comparison with Noa Nabeshima's MatryoshkaSAE”.  Summary  Matryoshka SAEs are a new variant of sparse autoencoders that train multiple nested SAEs of increasing size simultaneously. The key idea is to train multiple reconstruction objectives in parallel - each sub-SAE must reconstruct the input using only a nested subset of the total latents. The smallest SAE can only use the first few latents, forcing these to capture high-level features, while each larger SAE has access to progressively more latents. 
We train them on Gemma-2-2B and evaluate their performance compared to standard BatchTopK SAEs on a range of SAE benchmarks. Our key positive findings are:  1. W
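The nested objective can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the BatchTopK activation is replaced by a plain ReLU, and the dimensions and group sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 8, 16
group_sizes = [4, 4, 8]  # prefixes 0:4, 0:8, 0:16 form the nested sub-SAEs

W_enc = rng.standard_normal((d_model, d_sae)) * 0.1
W_dec = rng.standard_normal((d_sae, d_model)) * 0.1

def matryoshka_losses(x):
    """One reconstruction loss per nested prefix of the dictionary."""
    z = np.maximum(x @ W_enc, 0.0)  # ReLU stand-in for the BatchTopK encoder
    losses, end = [], 0
    for g in group_sizes:
        end += g
        x_hat = z[:, :end] @ W_dec[:end]  # decode using only the first `end` latents
        losses.append(float(np.mean((x - x_hat) ** 2)))
    return losses

x = rng.standard_normal((32, d_model))
losses = matryoshka_losses(x)
total_loss = sum(losses)  # training minimizes the sum over all prefixes
```

Because the first four latents are penalized for failing to reconstruct on their own, gradient descent pushes broad, high-level structure into them, leaving later groups to refine details; this is the regularization that discourages absorption of general concepts into specific low-level features.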
aee57dfb-b1f7-487e-99f2-15b4eb4dc53b
StampyAI/alignment-research-dataset/blogs
Blogs
New Paper: “Embryo Selection for Cognitive Enhancement” [![IES first page](https://intelligence.org/wp-content/uploads/2013/10/embryselection-cover.gif)](https://intelligence.org/files/EmbryoSelection.pdf) During his time as a MIRI research fellow, Carl Shulman co-authored (with [Nick Bostrom](http://nickbostrom.com/)) a paper that is now available as a preprint, titled “[Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?](https://intelligence.org/files/EmbryoSelection.pdf)” Abstract: > Human capital is a key determinant of personal and national outcomes, and a major input to scientific progress. It has been suggested that advances in genomics will make it possible to enhance human intellectual abilities. One way to do this would be via embryo selection in the context of *in vitro* fertilization (IVF). In this article, we analyze the feasibility, timescale, and possible societal impacts of embryo selection for cognitive enhancement. We find that embryo selection, on its own, may have significant impacts, but likely not drastic ones, over the next 50 years, though large effects could accumulate over multiple generations. However, there is a complementary technology, stem cell-derived gametes, which has been making rapid progress and which could amplify the impact of embryo selection, enabling very large changes if successfully applied to humans. The last sentence refers to “iterated embryo selection” (IES), a future technology first described by MIRI [in 2009](http://theuncertainfuture.com/faq.html#7). This technology has significant strategic relevance for Friendly AI (FAI) development because it might be the only intelligence amplification (IA) technology (besides [WBE](http://en.wikipedia.org/wiki/Mind_uploading)) to have large enough effects on human intelligence to substantially shift our odds of getting FAI before arbitrary AGI, if AGI is [developed sometime this century](http://intelligence.org/2013/05/15/when-will-ai-be-created/). 
Unfortunately, it remains unclear whether the arrival of IES would shift our FAI chances positively or negatively. On the one hand, a substantially smarter humanity may be wiser, and more likely to get FAI right. On the other hand, IES might accelerate AGI relative to FAI, since AGI is more parallelizable than FAI. (For more detail, and more arguments pointing in both directions, see [Intelligence Amplification and Friendly AI](http://lesswrong.com/lw/iqi/intelligence_amplification_and_friendly_ai/).) The post [New Paper: “Embryo Selection for Cognitive Enhancement”](https://intelligence.org/2013/10/30/new-paper-embryo-selection-for-cognitive-enhancement/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
edda3571-0831-4ca7-83f2-820af4754043
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Related: Leave a Line of Retreat, Living In Many Worlds"It all adds up to normality." Greg Egan, QuarantineYou're on an airplane at 35,000 feet, and you strike up a conversation about aerodynamic lift with the passenger in your row. Things are going along just fine until they point out to you that your understanding of lift is wrong, and that planes couldn't fly from the effect you thought was responsible.Should you immediately panic in fear that the plane will plummet out of the sky?Obviously not; clearly the plane has been flying just fine up until now, and countless other planes have flown as well. There has to be something keeping the plane up, even if it's not what you thought, and even if you can't yet figure out what it actually is. Whatever is going on, it all adds up to normality.Yet I claim that we often do this exact kind of panicked flailing when there's a challenge to our philosophical or psychological beliefs, and that this panic is entirely preventable.I've experienced and/or seen this particular panic response when I, or others, encounter good arguments for propositions includingMy religion is not true. ("Oh no, then life and morality are meaningless and empty!")Many-worlds makes the most sense. ("Oh no, then there are always copies of me doing terrible things, and so none of my choices matter!")Many "altruistic" actions actually have hidden selfish motives. ("Oh no, then altruism doesn't exist and morality is pointless!")I don't have to be the best at something in order for it to be worth doing. ("Oh no, then others won't value me!") [Note: this one is from therapy; most people don't have the same core beliefs they're stuck on.](I promise these are not in fact strawmen. I'm sure you can think of your own examples. Also remember that panicking over an argument in this way is a mistake even if the proposition turns out to be false.)To illustrate the way out, let's take the first example. 
It took me far too long to leave my religion, partly because I was so terrified about becoming a nihilist if I left that I kept flinching away from the evidence. (Of course, the religion proclaimed itself to be the origin of morality, and so it reinforced the notion that anyone else claiming to be moral was just too blind to see that their lack of faith implied nihilism.) Eventually I did make myself face down, not just the object-level arguments, but the biases that had kept me from looking directly at them. And then I was an atheist, and still I was terrified of becoming a nihilist (especially about morality). So I did one thing I still think was smart: I promised myself not to change all of my moral rules at once, but to change each one only when (under sober reflection) I decided it was wrong. And in the meantime, I read a lot of moral philosophy. Over the next few months, I began relaxing the rules that were obviously pointless. And then I had a powerful insight: I was so cautious about changing my rules because I wanted to help people and not slide into hurting them. Regardless of what morality was, in fact, based on, the plane was still flying just fine. And that helped me sort out the good from the bad among the remaining rules, and to stop being so afraid of what arguments I might later encounter. So in retrospect, the main thing I'd recommend is to promise yourself to keep steering the plane mostly as normal while you think about lift (to stretch the analogy). If you decide that something major is false, it doesn't mean that everything that follows from it has to be discarded immediately. (False things imply both true and false things!) You'll generally find that many important things stand on their own without support from the old belief. (Doing this for the other examples I gave, as well as your own, is left to you.) Other things will collapse, and that's fine; that which can be destroyed by the truth should be. 
Just don't make all of these judgments in one fell swoop. One last caution: I recommend against changing meta-level rules as a result of changing object-level beliefs. The meta level is how you correct bad decisions on the object level, and it should only be updated by very clear reasoning in a state of equilibrium. Changing your flight destination is perfectly fine, but don't take apart the wing mid-flight. Good luck out there, and remember: It all adds up to normality. [EDIT 2020-03-25: khafra and Isnasene make good points about not applying this in cases where the plane shows signs of actually dropping and you're updating on that. (Maybe there's a new crisis in the external world that contradicts one of your beliefs, or maybe you update to believe that the thing you're about to do could actually cause a major catastrophe.) In that case, you can try and land the plane safely: focus on getting to a safer state for yourself and the world, so that you have time to think things over. And if you can't do that, then you have no choice but to rethink your piloting on the fly, accepting the danger because you can't escape it. But these experiences will hopefully be very rare for you, current global crisis excepted.]" This is a snippet extracted from a publication on LessWrong by a site member. The above is a chunk of text from a post created by a user on the website LessWrong. Can you delve deeper into the concept of changing meta-level rules as a result of changing object-level beliefs? Specifically, can you provide an example of a situation where this might occur and the potential consequences of such a change? Certainly, let's explore this concept further. Meta-level rules can be understood as overarching principles or strategies that guide our decision-making process, whereas object-level beliefs are specific assertions or ideas we hold about the world. 
For example, a meta-level rule might be "I should always consider multiple perspectives before making a decision," while an object-level belief could be "Exercise is beneficial for health." Now, let's consider a situation where a person holds an object-level belief that "Vaccines cause autism." This belief is incorrect, as it has been refuted by numerous scientific studies. Suppose this person comes across new, credible information that challenges this belief and leads them to change it. They now accept that vaccines do not cause autism. If this person were to change their meta-level rules as a result of changing this object-level belief, it might look something like this: They might decide that because they were wrong about vaccines, they can't trust their own judgment anymore. As a result, they might adopt a new meta-level rule like "I should always defer to experts' opinions and not form my own beliefs." The potential consequences of such a change could be significant. While it's beneficial to respect and consider expert opinions, completely deferring to others' views and not forming one's own beliefs can lead to a lack of critical thinking and personal growth. It could also make a person more susceptible to manipulation or misinformation, as they're relying solely on external sources for their beliefs and not their own evaluation of the information. So, while it's important to update our object-level beliefs in light of new information, we should be cautious about making sweeping changes to our meta-level rules based on specific instances. Instead, meta-level rules should be updated based on careful consideration and patterns of decision-making outcomes. How might the process of changing object-level beliefs, while maintaining consistent meta-level rules, influence the development of critical thinking skills and the ability to discern reliable sources of information? 
The process of changing object-level beliefs while maintaining consistent meta-level rules can significantly enhance critical thinking skills and the ability to discern reliable sources of information. Here's how: 1. **Fostering Open-Mindedness:** This process encourages open-mindedness, a crucial component of critical thinking. By being willing to revise object-level beliefs in the face of new evidence, one becomes more adaptable and open to new ideas. This openness is key to understanding and evaluating different perspectives. 2. **Promoting Skepticism:** By maintaining consistent meta-level rules, one can avoid the pitfall of accepting new information uncritically. For instance, a meta-level rule might be "Always verify information from a new source." This promotes a healthy level of skepticism, another vital aspect of critical thinking. 3. **Encouraging Analytical Skills:** The act of examining and re-evaluating object-level beliefs can enhance analytical skills. It involves dissecting arguments, assessing their logic, and weighing evidence, all of which are key components of critical thinking. 4. **Improving Information Literacy:** This process can also improve one's ability to discern reliable sources of information. By consistently applying meta-level rules like "Check the credibility of the source" or "Cross-verify information," one can become more adept at distinguishing reliable sources from unreliable ones. 5. **Enhancing Self-Reflection:** Lastly, this process encourages self-reflection, as it requires one to examine their own beliefs and the reasons behind them. This can lead to greater self-awareness, a deeper understanding of one's own biases, and an improved ability to think independently. In sum, the process of changing object-level beliefs while maintaining consistent meta-level rules can greatly contribute to the development of critical thinking skills and the ability to discern reliable sources of information. 
It encourages an open, analytical, and reflective approach to information, which is essential in our complex and rapidly changing world.
122b3531-94a2-4f56-9a5b-f2dc0884fc35
StampyAI/alignment-research-dataset/arxiv
Arxiv
Hard Choices in Artificial Intelligence: Addressing Normative Uncertainty through Sociotechnical Commitments. Introduction ------------ As the capabilities of AI expand, researchers must consider the problem of *AI Safety*, defined broadly as the design of machines that can act independently while avoiding harm to people and the environment. This problem has become more urgent as these systems are technically refined, applied in critical infrastructure domains, and deployed in areas of social life not clearly related to physical harm (e.g. social media, credit scoring, public surveillance). Meanwhile, the core definitions of “safety” (and consequently appropriate control and governance mechanisms) in the wider literature of autonomous systems remain unsettled. Distinct approaches to AI Safety have emerged to define the uncertain scale at which AI systems may cause significant social harm. At one end of this continuum is *Existential Risk* (hereafter referred to as x-risk), i.e. the effort to mathematically formalize control strategies that help avoid the creation of systems whose deployment would result in irreparable harm to humans on a societal or civilizational level. The x-risk literature has focused on the *value alignment problem*: ensuring that the values programmed into an AI agent’s reward function correspond with the values of relevant stakeholders (such as designers, users or others affected by the agent’s actions) [[Soares2015](#bib.bibx41)]. Another approach, broadly pursued by researchers and, increasingly, social scientists in the *Fairness, Accountability and Transparency in Computing Systems* (FAT\*) literature, focuses on nearer-term problems broadly commensurate with existing social ills such as economic inequality, structural racism and gender disparities, and the capability of systems to (mis)recognize human affect or deny access to vital resources, among many other topics. 
FAT\* research has harvested a multitude of definitions and tools aiming to address safety risks by *diagnosing and reducing biases across various subgroups* defined along lines of race, gender or social class [[Narayanan2018](#bib.bibx34)]. In both the FAT\* and x-risk communities, a consensus is emerging on the fundamental limitations of technical approaches to formalize values such as safety or fairness in the design of AI systems, given the uncertainty of deployment contexts and the inevitability of externalities in using formal abstractions. Within x-risk, [[Hadfield-Menell and Hadfield2018](#bib.bibx18)] have acknowledged the need for external social institutions to resolve “unintentional and unavoidable misspecification” in AI reward functions, and [[Irving and Askell2019](#bib.bibx25)] propose to recruit social scientists to help resolve “many uncertainties related to the psychology of human rationality, emotion, and biases”. Within FAT\*, [[Selbst et al.2019](#bib.bibx38)] pointed out the fundamental traps of abstraction that arise in formalizing notions of fairness in AI systems statistically and mathematically, while [[Chouldechova2017](#bib.bibx9)] demonstrated tradeoffs arising in formalizing fairness that require moral deliberation. However, the criteria for evaluating and resolving harms remain vague across technical research communities, even as the call for an ”algorithmic social contract” has crystallized [[Rahwan2018](#bib.bibx35)]. Here we address the unavoidable challenge of *normative uncertainty* in the development of AI systems, and articulate a set of commitments to address and resolve normative issues as they arise in any given development process. We work from the assumption that AI systems are fundamentally of *sociotechnical* nature, meaning they are built on and operated in contexts in which social and technical aspects are intimately interrelated. This paper makes four contributions. 
Firstly, we discuss normative uncertainty around the notion of safety by analyzing a case study, as conditioned by how different stakeholders interpret issues of protection, robustness, and resiliency. Secondly, we outline existing responses to normative uncertainty in developing AI systems. Thirdly, we introduce the philosophical concept of vagueness to account for dominant intuitions about normative uncertainty in the AI Safety and FAT\* literatures, as well as discussions thereof by critics and in the public sphere, as typifying distinct canonical approaches to dealing with vagueness (i.e. *epistemicism, semantic indeterminism, and incomparability*). Lastly, to inspire the resolution of normative uncertainty in the development of AI systems, we draw on Ruth Chang’s notion of *intuitive comparability* (IC) to identify core dilemmas in designing, training and deploying AI systems. We formulate a set of sociotechnical commitments, which address formal, substantive and discursive challenges, that are needed to consider values and stakeholders’ needs in a democratic fashion. In doing so, we formalize specific channels for dissent before, during, and after value commitments are being considered, building on the outline of democratic consensus-building in [[Anderson2006](#bib.bibx5)]. Our core contribution is to apply an insight that scholars in Science & Technology Studies (STS) have appreciated for over four decades: the reality that any technological system is inherently political and requires normative deliberation and ongoing citizen participation to ensure its safety for all stakeholders affected by its actions [[Winner1980](#bib.bibx47)]. The Vagueness of Safety - ACLU vs. AWS -------------------------------------- Safety has many definitions depending on context. For the purpose of study, we start with interpreting safety in terms of protection from harm or injury, robustness in the face of adverse conditions, and resiliency in response to stress or difficulty. 
However, the vagueness of these terms as applied to different stakeholders makes it difficult for safety to be deliberated in a meaningful and consistent way. One example is the Amazon Web Services (AWS) Rekognition system. AWS intended Rekognition to be a commercially available cloud based ML tool that helps enterprises with setting up and searching image based datasets for various facial recognition tasks [[Amazon Web Services2019](#bib.bibx4)]. By design, the system is flexible and allows users to define their own data sets and queries, since it is meant to perform well for tasks as simple as document retrieval and as complex as image-based sentiment analysis. To test the system’s limitations, the American Civil Liberties Union (ACLU) created a data set made up of publicly available arrest photos and used these to train Rekognition. They then queried the system to find a match against photos of current members of Congress [[Snow2018](#bib.bibx40)]. The ACLU reported not only that false identifications were found, but that of those members of Congress falsely identified as matching the arrest photo database, a disproportionate number (40%) were people of color. The ACLU concluded that Rekognition shows a bias in its predictions, making it unsafe to implement in high stakes contexts which disproportionately affect people of color (e.g. law enforcement). However, the AWS research team issued a rebuttal, commenting that Rekognition was used against its articulated design specifications [[Wood2018](#bib.bibx48)]. In particular, they noted that the ACLU used a lower-than-recommended confidence threshold for a high risk task, leading to a large number of false positives. Moreover, since the ACLU did not detail how they constructed their data set, it is unclear how much inherent bias existed in the queried set. Below we detail the forms of vagueness present in this scenario and identify relevant trade-offs. 
First, it is unclear what protection means in development situations where private and public definitions are both at stake. Concretely, many AI systems are validated internally by private corporations, only to be misused in deployment due to poor communication about the inner workings of the system to the wider public. In part, the conflict between Amazon and the ACLU is based on a category mistake in how the boundaries of different political guarantees are refashioned by the development norms of the system in question. For example, the ACLU’s claim that facial recognition systems beneath a given accuracy threshold should not be used by law enforcement is rooted in the intuition that correcting misclassifications a posteriori is politically unacceptable, as it imposes Amazon’s internal definitions of harm and vulnerability onto anyone that encounters the system. In contrast, Amazon’s claim that the system does work according to design intentions and that the ACLU study used inappropriate settings makes sense in the context of system optimization, as the AWS team converged on the system’s architectural parameters through agreements with the private contractors who intend to use it. Fundamentally, it is not clear whether the safety of Rekognition should be determined by its protection of private contracts (whose context is the online verification of edge cases by self-interested parties) or public assurances (whose context is the willingness to shield vulnerable communities from harm). While both Amazon and the ACLU value protection, the legal contexts for their practical definitions of it are orthogonal, and the loci of perceived stakes are at two different points in the development pipeline. To resolve this vagueness, either one definition must be given absolute priority over the other, or formal distinctions across the pipeline must be clarified and subject to a consistent, external standard. 
Second, conditions of robustness must be specified according to the distinct expectations of designers and users, leading to inconsistent standards for platform governance. One can imagine AWS issuing a different response that included an apology to members of Congress, a request for the ACLU to expand its ”testing” to other social domains, and a promise to improve Rekognition’s accuracy going forward. However, this strategy might also become an object of public outcry; for example, Waymo regularly publishes safety reports on its vehicles but still faces the ire of Phoenix residents, who complain that ”They didn’t ask us if we wanted to be part of their beta test” [[Romero2019](#bib.bibx37)]. Indeed, a handful of American cities have now banned the use of facial recognition by municipal agencies, citing surveillance concerns, local community interests, and social prejudice [[Lee2019](#bib.bibx28)]. The question is whether cities and other social domains should be made ready for facial recognition tools (through e.g. concrete institutional reforms), or the tools should be made ready for cities to use them with confidence (via e.g. ongoing technical refinement). How one answers this question places the onus of sociotechnical robustness on either the public officials who administer the system or the engineers who build it, a problem known as defining the ”moral crumple zone” of moral and legal responsibility [[Elish2019](#bib.bibx14)]. While all stakeholders might want the system to be robust, it is not clear what criteria should determine the conditions under which robustness would hold, and which authorities are most qualified to ensure those conditions. Because system development transposes inherited notions of governance and control, specific liability mechanisms (e.g. moratoriums, audits) cannot be adopted or justified until these notions are clarified and made compatible. 
Third, a system’s resiliency requires a metric of optimality, according to which abnormal dynamics can be discerned, diagnosed, and remedied. At a minimum, facial recognition assumes some definition of what a face is, and of what good, bad, and inaccurate ways there may be of identifying them. While the dispute between AWS and the ACLU did not reach this level of abstraction, we must confront how AI systems may affect our political sovereignty and rewire social orders by shifting how human features are modeled, correlated, and interpreted. Recently, Luke Stark has argued that facial recognition tools are a form of racism that will incline any power structure towards discriminatory policies because the ability to rank facial features at scale will generate categories that can be used both to solder somatic attributes to personality characteristics and to legitimize political decisions [[Stark2019](#bib.bibx42)]. To avoid this future, such tools should be regulated to the point that they are hardly ever used. Meanwhile, [[Wang and Kosinski2017](#bib.bibx45)] claim to detect sexual orientation with the aid of deep neural networks, arguing that the predictive power of AI models can be harnessed to discover patterns in facial features beneath human awareness. While the findings generated controversy [[Murphy2017](#bib.bibx33), [Vincent2017](#bib.bibx44)], Kosinski defended the study as an effort to “understand people, social processes, and behavior better through the lens of digital footprints” [[Resnick2018](#bib.bibx36)], questioning whether automated systems can only reify existing social ontologies or substantively challenge current intuitions about gender and sexuality. This contrast highlights the vague relationship between feature orderings in particular contexts, and the general goals or ends that define human flourishing. 
The safety of a facial recognition system is determined by how its dynamics are defined in light of overarching risks and benefits: as illegitimate (making resiliency impossible) or legitimate (defining a sovereign metric for acceptable uses and outcomes). The deliberation behind this amounts to what kind of society is wanted.

Resolving Vagueness via Hard Choices
------------------------------------

We draw from Ruth Chang's philosophical work on value pluralism [[Chang1997](#bib.bibx6), [Chang2002](#bib.bibx7)] to develop a sociotechnical semantics for AI Safety. At certain deployment scales, what we mean by "privacy", "security", or "social choice" starts to feel unclear, as it is difficult to determine the stakes of our own value commitments. In philosophical terms, the relations between our values become vague. Chang outlines three distinct approaches to vagueness: (1) *epistemicism* - all items of value can be ranked against each other in order for vagueness to be resolved, (2) *semantic indeterminism* - the way values relate to each other is fundamentally fuzzy, and (3) *value pluralism* - values are incomparable. In the Appendix, we further introduce these approaches and tie them to recent work in addressing normative uncertainty in the development of AI systems. Here we introduce an alternative perspective, which forms the basis for our proposal of sociotechnical commitments in the AI development process. [[Chang2017](#bib.bibx8)] proposes a fourth position, intuitive comparability (IC): while many human values seem incommensurable (e.g. equality vs. liberty), humans are nevertheless able to articulate *evaluative differences* to make comparisons, even if two values or concepts are not directly measurable against each other. This allows people to make informed tradeoffs between options (e.g. 
choosing between a banana and a donut for breakfast) based on practical deliberation regarding one's overarching goal (losing weight on a diet), even though nutrition and tastiness are qualitatively distinct–the values are objectively incommensurable but, in this context, intuitively comparable. IC is particularly relevant for what she calls *hard choices*: when different alternatives are *on a par*, "it may matter very much which you choose, but one alternative isn't better than the other […] alternatives are in the same neighborhood of value, in the same league of value, while at the same time being very different in kind of value" [[Chang2017](#bib.bibx8)]. Resolving hard choices requires normative reasoning: "when your given reasons are on a par, you have the normative power to *create* new will-based reasons for one option over another by putting your agency behind some feature of one of the options. By putting your will behind a feature of an option - by standing for it - *you* can be that in virtue of which something is a will-based reason for choosing that option."

### The Case for Intuitive Comparability in AI Safety

We endorse IC and parity not as a superior philosophical position in opposition to others, but as a lens from which to ask and analytically identify what a cohesive approach to AI Safety would look like. In the development of AI systems, IC provides a foundation for normative reasoning between possible value regimes. Hard choices cannot be determined through purely quantitative thresholds; instead, IC suggests the iterative redrawing of a system's formal boundaries and design parameters via qualitative feedback. The "hard choice" moments are those where different communities, comprising the affected stakeholders, may clash and new development criteria (what Chang calls "will-based reasons") must be specified. 
This matches the intuition that the design of AI systems restructures the context in which users or other affected stakeholders exist: “values emerge, whether you look for them or not” [[Halloran et al.2009](#bib.bibx20)]. [[Iversen, Halskov, and Leong2010](#bib.bibx26)] argue this requires an “a priori commitment to cultivate the emergence and discovery of local expressions of values whilst being mindful of further expression of values during the course of the design process”. In addition, the value hierarchy designed into a system will determine the space of actions available to it (as well as those that the system forecloses), and it is crucial to acknowledge and account for the power and elevated status of design work. This means recognizing developers’ tendencies to prioritize certain actors and networks over others. [[Haraway1988](#bib.bibx21)], [[Harding1986](#bib.bibx22)] and other feminist scholars would argue that we cannot escape having some agenda: after all, the researcher is also situated in the social world they study. A crucial corollary of the above is that developers have the responsibility to take a political stance. As Ben Green notes, “to remain apolitical is itself a political stance - a fundamentally conservative one (in the sense of maintaining the status quo rather than in relation to any specific political party or movement) - and why the field’s current attempts to promote ‘social good’ dangerously rely on vague and unarticulated political assumptions” [[Green2018](#bib.bibx17)]. AI Safety also presents two sources of nuance to hard choices. Firstly, AI systems need to encode hard choices made by or on behalf of a diverse group of stakeholders affected by the system, including divergent values and interests. These are fundamentally political, which is well understood in the STS literature [[Winner1980](#bib.bibx47)]. 
Our goal is to build an analytical framework to draw attention to these moments and facilitate bridges between the ongoing contributions of AI Safety research and the core substantive insights of STS scholarship. Secondly, the values that stakeholders care about are often complex and not readily translated into a solution that suits all needs, which can lead to situations of *moral overload* that require thinking outside of the traditional design space [[Van den Hoven, Lokhorst, and Van de Poel2012](#bib.bibx43)]. As such, we extend and elaborate the argument of [[Hadfield-Menell and Hadfield2018](#bib.bibx18)] that acknowledges the need to address misspecification between reward functions and wider social institutions. Those responsible for developing and governing AI systems have the duty to mediate hard choices and the corresponding value conflicts *across* different stakeholders, allowing them to resolve these choices through both quantitative and qualitative evaluation. Like the maintenance of cables that span a suspension bridge, AI Safety can be defined as the successful maintenance of the relations that comprise the conceptual space of comparability for human values across all stakeholders. Just as the "stress point" of civil engineering is the identified and agreed-upon maximum strain the bridge can handle before buckling, the critical point for human-compatible AI is the safeguarding of comparability, i.e. the capability of AI systems to support pluralist value hierarchies while preserving *shared moral agency*: the power to engage in design, training, and deployment.

Sociotechnical Commitments in Developing AI
-------------------------------------------

We propose a set of commitments that situate the design, training and deployment stages of AI systems in their sociotechnical context and center and address issues of vagueness. 
The commitments are comprised of formal, substantive and discursive challenges to the development process in order to safeguard stakeholders’ access to hard choices in system development. Formally, these challenges comprise distinct tradeoffs that are unavoidable in the agonistic development of systems that must inherit, translate, and instantiate conflicting values. Substantively, they condition our value commitments in a manner that is not zero-sum, extending the boundaries of the system and design space to ensure the expression of intuitively comparable human values and resolve situations of moral overload [[Van den Hoven, Lokhorst, and Van de Poel2012](#bib.bibx43)]. Discursively, they compel communication between stakeholders: formulating the problem, evaluating systems that would solve it, and articulating the values and needs that the system must address in order to be safe. We posit that such ongoing stakeholder engagement requires “reflexive inquiry [that] places all of its concepts and methods at risk […] not as a threat to rationality but as a promise of a better way of doing things” [[Agre1997](#bib.bibx3)]. Following [[Anderson2006](#bib.bibx5)], we emphasize the need for dissent mechanisms during the design, testing, and implementation of automated systems. Tracking dissent is necessary in order to respect the IC of available value hierarchies while reconceiving AI development as an opportunity to reimagine the moral communities to which we belong. This choice is in itself normative, and may inspire particular legal translations depending on the application domain and the jurisdiction and democratic regime in which a system is built. In many instances, regulatory measures may form either an existing source of constraints and requirements in the development process, or be informed by it. The authors do not advocate particular law or policy interpretations, but see such translation work as a natural extension of this paper. 
To illustrate our sociotechnical commitments, we will refer to the AWS-ACLU case study described earlier. We consider the relevant hard choices made throughout the design, training, and deployment of the Rekognition system and illustrate their political impact. At each stage, we ask: (1) how does vagueness arise and what forms may IC take? (2) In what concrete ways can formal affordances, substantive commitments, and discursive practices address these issues?

### Design

AI systems generally represent a predictive model that can be trained and used in the decision making capabilities of some human agent or automated control system. As the model represents an *abstraction* of the phenomenon about which it makes predictions, the chosen model parameterization and the training data used to determine parameter values delimit the possible value hierarchies that may be encoded and, if not anticipated and accounted for, may deny stakeholders the opportunity to evaluate design alternatives and force potentially harmful and unsafe hard choices. To harness IC in the design stage, the following challenges must be taken up: (1) Formal challenge: Make explicit and negotiate what can and cannot be modeled and inferred, crystallized in the model-based/model-free dilemma; (2) Substantive challenge: Make a modeling commitment whose application constraints leave flexibility for different stakeholders to forge their own values during training and deployment; (3) Discursive challenge: Validate the design with stakeholders to anticipate possible value conflicts that can arise due to the gap between model and world and plurality of values during deployment, preparing for design iterations. The design stage determines the computational powers of the system: how the limits of what it can model determine its assumptions about people and what kinds of objects or classes (e.g. faces) are recognizable to it. 
At a minimum, stakeholders must answer the following:

Model-based: What domain knowledge is available to model the environment? I.e. what is the permissible space in which a given problem can be formulated and solved, and what modeling tools are available?

Model-free: What are permissible predictive signals within the environment? I.e. what are the base rewards, elements, or qualities that could shape the system's policies? How should these take qualitative precedence over others?

Formally, the dilemma manifests in choosing a model capacious enough to represent the nature of the environment, but constrained enough that its training would not be intractable. Imposing modeling constraints also creates technical bias, which may take away space for stakeholders to express or protect their own specific values in terms of the phenomena permitted or excluded by the model's system boundaries. In the context of our case study, the dilemma rests in the choice between giving Rekognition some hard-coded limitations on how the algorithm may be used, vs. permitting the algorithm to extract whatever signals it needs in order to maximize its predictive accuracy. While AWS did not address this in its response to the ACLU, they could have done so accordingly: either transition to a more automated decision procedure that sacrifices direct human oversight but is more accurate for the congressional dataset at stake, or propose a governance structure to mediate the ethical and legal applicability of the tool and ratify the environmental conditions a system is allowed to represent (here, the features appropriate for recognizing representatives' faces). Either way, the dilemma is resolved via context discernment, the disqualification of specific features and actions within the problem space in advance of deployment. 
Here we draw from [[Dreyfus and Kelly2011](#bib.bibx12)]: "The task of the craftsman is not to generate the meaning, but rather to cultivate in himself the skill for discerning the meanings that are already there". Design teams need to consider how the algorithm is expected to be integrated in and interacting with the context of deployment, what bias issues may arise during training and how to account for and protect vulnerable user groups, and how chosen objective functions may generate externalities, as well as who is likely to bear their cost. In the event no consensus is reached and dissent persists, the option of not designing the system should remain viable.

### Training

After certain features and ways of modeling have been disqualified through design commitments, the specific weights and the structure of the predictive mapping must be determined by solving an optimization problem. This determines the input-output behavior of the model and how it will interact with human agents and other systems. Through the recruitment of historical and experimental data, the system can (1) infer causal model parameters, (2) infer parameters of noncausal representations, and (3) iteratively adjust parameters based on ongoing experiments (as in reinforcement learning). To harness IC in the training stage, the following challenges have to be taken up: (1) Formal challenge: Assess the limits of how parameters can be inferred at present, crystallized in the validation/verification tradeoff; (2) Substantive challenge: Make a validation commitment that is acceptable to present stakeholders; (3) Discursive challenge: Form a team consensus around verification strategies to be pursued during deployment and define alternate design strategies that might aid parameter inference. The system must be trained to bridge the gap between features by generating correlates: it must take in data, update priors, and handle edge cases. 
This is done with the help of engineers who interface between the problem the system is meant to solve and the workings of the system itself. Here, the minimum requirements for certifying safe outcomes are *impartial assessments* of the following questions:

Verification: Was the right system built? Are the needs of prospective users being met? Is the specified problem solved or not? Is the system able to predict what it was meant to?

Validation: Was the system built right? Are there hidden utility monsters or emergent biases? Is there risk of strategic behavior or manipulation? What information channels must be provided to users to minimize these likelihoods?

Systems whose models are made more accurate or robust for well-specified environments (and subpopulations of people) will be made brittle and possibly unworkable for poorly-defined environments, which can result in disparate impacts, especially among yet underrepresented (and undersampled) minorities that already face systemic marginalization and are not properly represented on AI design teams [[West, Whittaker, and Crawford2019](#bib.bibx46)]. This effect can clearly be seen in the case between AWS and the ACLU. Here, the designers of Rekognition created the system with common commercial tasks in mind (e.g. sentiment analysis), and determined their own confidence levels through extensive internal verification. However, the ACLU deployed Rekognition to a task that it was not explicitly meant to perform, i.e. matching faces of politicians to those of recent arrestees, and indicated a weakness in the way the system was validated. A commitment to team consensus is needed to integrate the problems of value alignment into standard procedures for quality assurance. 
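As a minimal illustration of how such confidence levels become a validation knob rather than a mere verification detail, one can sketch per-context filtering of face matches. All names and threshold values here are hypothetical illustrations of the idea, not the actual Rekognition API or Amazon's documented settings:

```python
# Hypothetical per-context confidence thresholds: a stricter bar for a
# high-stakes use (law enforcement) than for a low-stakes one (image tagging).
# The numbers are illustrative, not any vendor's documented values.
THRESHOLDS = {"law_enforcement": 0.99, "image_tagging": 0.80}

def accept_matches(matches, context):
    """Keep only face matches whose confidence clears the bar for this context.

    `matches` is a list of (person_id, confidence) pairs, standing in for
    whatever a real recognition service would return.
    """
    threshold = THRESHOLDS[context]
    return [(pid, conf) for pid, conf in matches if conf >= threshold]

matches = [("rep_a", 0.92), ("rep_b", 0.995)]
print(accept_matches(matches, "law_enforcement"))  # only rep_b clears the 0.99 bar
print(accept_matches(matches, "image_tagging"))    # both matches pass at 0.80
```

The design question the sketch exposes is exactly the one at issue above: who gets to set `THRESHOLDS`, per context, and whether a single scalar can carry the value hierarchies of all affected stakeholders.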
This is achieved by forcing agreement among system engineers about how to allocate sparse team resources between system verification and validation in order to manage under-specification risks and mitigate the perversion of intended users' semantic and moral commitments. The team must decide: what commitments to contracted users are necessary for the desired balance of model testing to be adequate? Here "quality management" must be elevated to the contestation and adjudication of how (possibly pluralist) values are operationalized without compromising comparability. In the case of Rekognition, AWS implemented a confidence threshold slider with additional documentation commenting on how it should be set for different contexts that will require distinct value hierarchies (e.g. law enforcement vs. automatic image tagging) [[Amazon Web Services2019](#bib.bibx4)]. However, per the ACLU comment in response to Amazon's defense of the system [[ACLU2018](#bib.bibx2)], it is unclear whether this specific metric is sufficient to validate consensus among all relevant stakeholders (e.g. members of Congress, government agencies, Amazon employees) rather than necessary for system verification alone, as some of its confidence thresholds appear arbitrary.

### Deployment

Finally, the system must define use cases in terms of a user contract that identifies terms of consent and ensures interpretive understanding without coercion. The resulting deployment conditions determine the authority of the system's representations in the context of user agency, i.e. what the user wants the system to be for them. Here we appropriate tradeoffs already identified by social theorists regarding the moral authority and political powers of social institutions [[Flew2009](#bib.bibx16)]. 
To preserve IC in the deployment stage, the following have to be taken up: (1) Formal challenge: Assess what kind(s) of agency users have if the verification fails, crystallized in the exit/voice dilemma; (2) Substantive challenge: Make a commitment to an open feedback channel by which users express their values on their terms; (3) Discursive challenge: Justify that channel by means of a public commitment to users that establishes that channel as trustworthy. Resolving these challenges requires *representative input and mitigation of issues* for the following:

Exit: Are users able to withdraw fully from using the product or platform? Is there any risk in this? Are there competing products or platforms they can use? Have assurances been given about user data, optimization, and certification after the user withdraws?

Voice: Can users articulate proposals in a way that makes certain concerns a matter of public interest? Are clear proposal channels provided for users, and are they given the opportunity to contribute regularly? Are the proposals highlighted frequently considered and tested, e.g. through system safety? Are users kept informed and regularly updated?

To the extent that proposed value hierarchies remain indeterminate after the commitments made during design and training, deployment challenges systems to handle the multiple objectives, values, and priorities of diverse users. At stake here are the unexpressed moral commitments of subpopulations not originally considered part of the potential userbase, who must bear the "cost function" of specification. Concretely, deployment administrators must determine whether the user will interpret the system agreement as primarily economic (in which case the user acts as a consumer) or political (in which case the user acts as a citizen). More Exit implies a market setting, while more Voice suggests a political context. 
For example, the user agreements of Rekognition may be understood either in terms of private contracts (in which case data is treated as a commodity and alternative platforms are implied to exist) or public assurances (in which case data is inalienable and Rekognition is interpreted as a public utility or service). If the former takes precedence, Rekognition's deployment might be regulated with private certification that attests the features and uses of data it forbids; if the latter is more important, deployment increasingly depends on a public accreditation that guarantees the user's legal protections will take priority in all use cases regardless of features or data. Deployment administrators and their regulating authorities must cultivate public accountability to deal with these challenges, ensuring both Voice and Exit remain possible for users such that some form of accountability is maintained: anyone can leave if they want, but enough people choose to remain because they trust in their ability to express concerns as needed. This balance must hold regardless of the specific commitment being made–for example, AWS may specify some channel by which vulnerable groups can opt out of a publicly-operated Rekognition use context (preserving Exit), or supply private contractors with a default user agreement that must be relayed to anyone whose data will be used by the system (preserving Voice). Either way, administrators should model people neither as consumers (a customer, client, or operator treated more or less as a black box) nor as citizens (a subject with guaranteed rights, among them the right to dissent to relevant forms of political power) without making the commitment explicit as justification for the terms of deployment. 
Conclusion
----------

Clarifying the sociotechnical foundations of safety requirements for AI systems will lay the groundwork for system developers to take part in distinct dissent channels proactively, before the risks posed by AI systems become technically or politically insurmountable. We anticipate this set of sociotechnical commitments will need to be integrated into the training of engineers, data scientists, and designers as qualifications for the operation and management of advanced AI systems in the wild. Ultimately, the public itself must be educated about the assumptions, abilities, and limitations of these systems so that informed dissent can be made desirable and attainable as systems are being deployed–deliberation is the goal of AI Safety, not the procedure by which it is ensured. We endorse this approach due to the computationally underdetermined, semantically indeterminate, and politically obfuscated value hierarchies that will continue to define diverse social orders both now and in the future. Democratic dissent, as a pathway to system development, is necessary for such systems to safeguard the possibility of IC and allow users to define the contours of their own values. To paraphrase Reinhold Niebuhr, AI's capacity for value alignment makes development commitments possible, but its inclination to misalignment makes commitments necessary.
0eb86d7d-1013-4fdc-9e6f-2c367e91ba54
trentmkelly/LessWrong-43k
LessWrong
Welcome to the Vancouver Rationality Community We're the rationality community in Vancouver, British Columbia, Canada. We organize meetups on all kinds of topics, like LessWrong, Slate Star Codex, and effective altruism. We're aspiring to launch a MIRIx workshop. We also host events focusing on other intellectual activities like lightning talks, reading groups and debate clubs; and social events like parties, game nights and hang outs. We also occasionally co-host events with Burners, life extensionists, or effective altruists in the Vancouver area. We usually end up having a couple meetups a month. A lot of us live clustered in some of the same neighbourhoods, so we most often meet in Burnaby, Kitsilano, UBC, or Downtown. We're friends of local organizations and communities like: Rethink Charity, an educational and evidence-based research organization in the effective altruism community. The Lifespan Society of BC, the province's life extension project and advocacy society. DCTRL, the local, (both figuratively and literally) underground cryptocurrency society. The University of British Columbia Effective Altruism Club. We intend to start reading groups for Slate Star Codex and Rationality: From AI to Zombies in the near future, so stay tuned.
3d949637-d216-43f9-b491-0ae619c74b52
trentmkelly/LessWrong-43k
LessWrong
Relabelings vs. External References The following distinction is crucial to my understanding of consciousness and epistemology. It is also extremely useful for seeing through some of the language games people play in terms of framing. There are fundamentally two ways in which we can define entities and much confusion is created by failing to distinguish between the two. The first way is relabelling where we define something in terms of what we've already defined. If we've already defined the meaning of hydrogen atoms, oxygen atoms and molecules, then we can simply define water as a molecule consisting of two hydrogen atoms and one oxygen atom, if that's how we want to define it. Defining water in this way doesn't postulate a new entity in our fundamental ontology, but instead reuses existing elements. The second way is to define an external reference; that is, a reference that doesn't purely build on top of already defined language or meaning. It is a very common belief that we can only define something in terms of things that have already been defined, but if that were the case, then we wouldn't even be able to get our first definition off the ground. All our definitions would either be circular or involve an infinite regress. External references provide a not completely satisfactory, but pragmatically necessary workaround. Instead of being impractically purist when it comes to definitions, we allow ourselves to point to (or vaguely gesture at) things without being able to fully say what they are. Let's imagine that quarks are the foundational element of physics as far as we can tell. Well, we could define quarks by how they interact with each other, but that is a surface level characteristic, which doesn't seem to explain what exactly they are at their fundamental core. And if these quarks have a nature beyond their behaviour, then it makes sense to think about them as external references. 
Some might say that the observable behaviour of these quarks is all that there is to their nature. If this is the ca
6a554e66-58d2-42a3-9f06-f52b12d0520d
trentmkelly/LessWrong-43k
LessWrong
Seven habits towards highly effective minds

Lately I've been thinking about how my thinking works, and how it can be improved. The simplest way to do so is probably to nudge myself towards paying more attention to various useful habits of mind. Here are the ones I've found most valuable (roughly in order):

1. Tying together the act of saying a statement, and the act of evaluating whether I actually believe it. After making a novel claim, saying out loud to myself: "is this actually true?" and "how could I test this?"

2. Being comfortable with pausing to reflect and thinking out loud. Trying to notice when my responses are too quick and reflexive, as a sign that I'm not thinking hard enough about the point I'm addressing.

3. Asking for specific examples, and using more of my own. Tabooing vague abstractions and moving away from discussing claims that are too general.

4. Being charitable and collaborative, both towards new ideas and towards conversational partners. Trying to rephrase other people's arguments and pass Ideological Turing Tests on them. Helping my conversational partners build up their ideas.

5. Noticing the affect heuristic, and which claims stir up emotions. Noticing when I'm talking defensively or heatedly, and when it'd be uncomfortable to believe something.

6. Thinking in terms of probabilities; cashing out beliefs in terms of predictions; then betting on them. I haven't done enough bets to calibrate myself well, but I find that even just the feeling of having money on the line is often enough to make me rethink. Being asked whether something is a crux gives me a similar feeling.

7. Thinking about how the conversations and debates I participate in actually create value, and when they should be redirected or halted.

Then there are social influences. I think one of the greatest virtues of the rationalist community is in creating an environment which encourages the use of the tools above. 
Another example: my girlfriend fairly regularly points out times when I’ve contradicted myself.
e0dd3e1d-b10c-4ed3-9d14-3acdc3ec8a6b
trentmkelly/LessWrong-43k
LessWrong
Thoughts on tackling blindspots I went to my first CFAR workshop the other week, and it was quite intense/a lot. The biggest change by far has been that I came face to face with some huge blindspots and can see more clearly many of the ways I've been fooling myself, not allowing myself to care about things, and pushing people away. Since blindspots are a proto-typical, "How the fuck are you supposed to find a thing that you aren't capable of finding?" I wanted to share what I think are things that helped me spot some of mine. This is rough draft mode, and I think I'm going to just settle for bullet points in this pass-through. > To quote Draco from HPMOR: > To figure out a strange plot, look at what happens, then ask who benefits i.e. look at all of the first impressions I make of people, notice that they all add up to "People aren't worth talking to", and be suspicious. * Combine that with some cognitive trope therapy. * One of the first things I got into my head when journeying into rationality was "If it hurts to think about, or feels like a cherished belief that doesn't want to be touched, GO AFTER IT!" This had the effect of most of my problems disguising themselves as things I wasn't interested in. Instead of feeling scared of parties, I would just feel disinterested and bored by the idea of parties. * In Val's Design class, being reminded that most of your built/learned mental machinery came into being to try and protect you from something or get something for you. Also being reminded about how you can't just steamroll over your previous machinery. There is a need or want hiding there which, unless you address it, the machinery in place that was trying to serve that want will fight back. * A mental shift from, "Is this plausible/reasonable?" to "Is this true?" when examining my rejections of different ideas.
b22053a0-c710-4ecf-8a61-5a9d263951de
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Decision theory and zero-sum game theory, NP and PSPACE *(Cross-posted from [my blog](https://unstableontology.com/2018/05/24/decision-theory-and-zero-sum-game-theory-np-and-pspace/))* At a rough level:

* [Decision theory](https://en.wikipedia.org/wiki/Decision_theory) is about making decisions to maximize some objective function.
* [Zero-sum game theory](https://en.wikipedia.org/wiki/Zero-sum_game) is about making decisions to optimize some objective function while someone else is making decisions to minimize this objective function.

These are quite different.

Decision theory and NP
----------------------

Decision theory roughly corresponds to the [NP](https://en.wikipedia.org/wiki/NP_(complexity)) complexity class. Consider the following problem:

> Given a set of items, each of which has an integer-valued value and weight, does there exist a subset with total weight less than $W$ whose total value is at least $V$?
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} > @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} > @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} > @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} > @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} > @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} > @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} > @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} > @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} > @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} > @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} > @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} > @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} > @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} > @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} > @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} > @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} > @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} > @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} > @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} > @font-face 
{font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} > @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} > @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} > @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} > @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} > @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src 
/\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} > @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} > @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} > @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), 
local('MathJax\_Vector-Bold')} > @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} > @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} > w and total value at least .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} > .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} > .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} > .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} > .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} > .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} > .mjx-numerator {display: block; text-align: center} > .mjx-denominator {display: block; text-align: center} > .MJXc-stacked {height: 0; position: relative} > .MJXc-stacked > \* {position: absolute} > .MJXc-bevelled > \* {display: inline-block} > .mjx-stack {display: inline-block} > .mjx-op {display: block} > .mjx-under {display: table-cell} > .mjx-over {display: block} > .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} > .mjx-under > \* {padding-left: 
0px!important; padding-right: 0px!important} > .mjx-stack > .mjx-sup {display: block} > .mjx-stack > .mjx-sub {display: block} > .mjx-prestack > .mjx-presup {display: block} > .mjx-prestack > .mjx-presub {display: block} > .mjx-delim-h > .mjx-char {display: inline-block} > .mjx-surd {vertical-align: top} > .mjx-mphantom \* {visibility: hidden} > .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} > .mjx-annotation-xml {line-height: normal} > .mjx-menclose > svg {fill: none; stroke: currentColor} > .mjx-mtr {display: table-row} > .mjx-mlabeledtr {display: table-row} > .mjx-mtd {display: table-cell; text-align: center} > .mjx-label {display: table-row} > .mjx-box {display: inline-block} > .mjx-block {display: block} > .mjx-span {display: inline} > .mjx-char {display: block; white-space: pre} > .mjx-itable {display: inline-table; width: auto} > .mjx-row {display: table-row} > .mjx-cell {display: table-cell} > .mjx-table {display: table; width: 100%} > .mjx-line {display: block; height: 0} > .mjx-strut {width: 0; padding-top: 1em} > .mjx-vsize {width: 0} > .MJXc-space1 {margin-left: .167em} > .MJXc-space2 {margin-left: .222em} > .MJXc-space3 {margin-left: .278em} > .mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex} > .mjx-line-box-test {display: table!important} > .mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} > .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} > .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} > .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} > .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} > .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} > .MJXc-TeX-cal-B {font-family: 
MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} > .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} > .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} > .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} > .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} > .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} > .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} > .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} > .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} > .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} > .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} > .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} > .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} > .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} > .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} > .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} > .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} > .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} > .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} > .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} > @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} > @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') 
format('opentype')} > @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} > @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} > @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} > @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} > @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} > @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), 
local('MathJax\_Math-BoldItalic')} > @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} > @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} > @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} > @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} > @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} > @font-face {font-family: MJXc-TeX-sans-Ix; src: 
local('MathJax\_SansSerif'); font-style: italic} > @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} > @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} > @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} > @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} > @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} > @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} > @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} > @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} > @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} > @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} > @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} > @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} > @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} > @font-face 
{font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} > @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} > @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} > @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} > @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: 
v? (It turns out that finding a solution is not much harder than determining whether there is a solution; if you know how to tell whether there is a solution to arbitrary problems of this form, you can in particular tell whether there is a solution that uses any particular item.) This is the [knapsack problem](https://en.wikipedia.org/wiki/Knapsack_problem), and it is in NP.  Given a candidate solution, it is easy to check whether it actually is a solution: you just add up the values and the weights.  Since such a solution would constitute a proof that the answer to the question is “yes”, and a solution exists whenever the answer is “yes”, this problem is in NP.

The following is a general form for NP problems:
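(A concrete aside: the check above really is just a pair of sums, so the verifier runs in time linear in the number of items, which is exactly what puts the problem in NP. A minimal sketch, with invented item data not taken from the original post:)

```python
def verify_knapsack(items, chosen, min_value, max_weight):
    """Check a candidate solution to the knapsack decision problem.

    `items` maps item names to (value, weight) pairs; `chosen` is the
    candidate subset.  The check is linear in the number of items.
    """
    total_value = sum(items[name][0] for name in chosen)
    total_weight = sum(items[name][1] for name in chosen)
    return total_value >= min_value and total_weight <= max_weight

# Invented example data: three items as (value, weight) pairs.
items = {"a": (10, 4), "b": (7, 3), "c": (3, 2)}
print(verify_knapsack(items, {"a", "b"}, min_value=15, max_weight=8))  # prints True
```

Checking a proposed subset is cheap; no polynomial-time procedure is known for *finding* such a subset.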
$$\exists x_1 \in \{0,1\}\; \exists x_2 \in \{0,1\}\; \ldots\; \exists x_k \in \{0,1\}\; f(x_1, \ldots, x_k)$$

where
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} f is a specification of a circuit (say, made of AND, OR, and NOT gates) that outputs a single Boolean value.  
That is, the problem is to decide whether there is *some* assignment of values to .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; 
stroke: currentColor} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex} .mjx-line-box-test {display: table!important} .mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: 
MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') 
format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), 
local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} x1,…,xk that .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display 
{display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut 
{width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex} .mjx-line-box-test {display: table!important} .mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: 
MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') 
format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), 
f outputs true on.  This is a variant of the [Boolean satisfiability problem](https://en.wikipedia.org/wiki/Boolean_satisfiability_problem). In decision theory (and in NP), all optimization is in the same direction.
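This single-direction, exists-only search structure can be sketched with a brute-force satisfiability check (a minimal sketch under my own naming, not any standard library):

```python
from itertools import product

def satisfiable(f, n):
    """Brute-force check: does SOME assignment of n booleans make f true?
    This is the single-quantifier (exists) shape of NP search."""
    return any(f(*bits) for bits in product([False, True], repeat=n))

# Example: (x or y) and not (x and y) -- XOR, satisfied by e.g. (True, False)
print(satisfiable(lambda x, y: (x or y) and not (x and y), 2))  # True
print(satisfiable(lambda x, y: x and not x, 2))                 # False
```

The loop takes exponential time, of course; the point is only the shape of the question: a single existential quantifier over candidate solutions, each checkable quickly.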
The only quantifier is ∃.

Zero-sum game theory and PSPACE
-------------------------------

Zero-sum game theory roughly corresponds to the [PSPACE](https://en.wikipedia.org/wiki/PSPACE) complexity class.  Consider the following problem:

> Given a specification of a [Reversi](https://en.wikipedia.org/wiki/Reversi) game state (on an arbitrarily large square board), does there exist a policy for the light player that guarantees a win?
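The same "does a winning policy exist?" question can be sketched on a much smaller toy game (a take-1-or-2 stones game of my own choosing, not Reversi) by backtracking over all lines of play:

```python
def first_player_wins(stones):
    """Toy zero-sum game: players alternate removing 1 or 2 stones;
    whoever takes the last stone wins. Returns True iff the player
    to move has a guaranteed winning policy. The recursion uses only
    stack space proportional to the game length."""
    if stones == 0:
        return False  # previous player took the last stone and won
    # A winning policy exists iff SOME move leaves the opponent
    # with NO winning policy (the alternating exists/forall structure).
    return any(not first_player_wins(stones - take)
               for take in (1, 2) if take <= stones)

# Known pattern for this game: the mover loses exactly when stones % 3 == 0.
print([first_player_wins(n) for n in range(1, 7)])
# [True, True, False, True, True, False]
```

Note the contrast with the SAT-style question: here the quantifiers alternate with each ply, one player maximizing and the other minimizing.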
(It turns out that winning the game is not much harder than determining whether there is a winning policy; if you know how to tell whether there is a solution to arbitrary problems of this form, then in particular you can tell if dark can win given a starting move by light.)

This problem is in PSPACE: it can be solved by a Turing machine using a polynomial amount of space.  This Turing machine works through the [minimax](https://en.wikipedia.org/wiki/Minimax) algorithm: it simulates all possible games in a backtracking fashion.

The following is a general form for PSPACE problems:
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} ∃x1∈{0,1}∀y1∈{0,1}…∃xk∈{0,1}∀yk∈{0,1}f(x1,y1,…,xk,yk) where .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; 
text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} 
.mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex} .mjx-line-box-test {display: table!important} .mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} 
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') 
format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), 
local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); 
src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), 
local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} f is a specification of a circuit (say, made of AND, OR, and NOT gates) that outputs a single Boolean value.  
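A quantified formula of this shape can be decided by brute force: try both values at each existential variable, and demand both values work at each universal one. A minimal sketch (the function name `qbf_true` and the use of a Python callable to stand in for the circuit $f$ are my own choices, not from the text):

```python
def qbf_true(f, k):
    """Decide whether ∃x1 ∀y1 ... ∃xk ∀yk f(x1, y1, ..., xk, yk) holds.

    `f` stands in for the Boolean circuit: a callable taking 2k bits
    (alternating x's and y's) and returning True/False.
    Brute force, O(2^(2k)) evaluations of f -- not an efficient solver.
    """
    def go(prefix):
        i = len(prefix)
        if i == 2 * k:                  # all 2k variables assigned: evaluate the circuit
            return bool(f(*prefix))
        if i % 2 == 0:                  # existential turn: some choice of x must work
            return go(prefix + (0,)) or go(prefix + (1,))
        else:                           # universal turn: every choice of y must work
            return go(prefix + (0,)) and go(prefix + (1,))
    return go(())

# Example with k = 1: ∃x ∀y (x OR y) is true (take x = 1),
# but ∃x ∀y (x AND y) is false (y = 0 refutes either choice of x).
print(qbf_true(lambda x, y: x or y, 1))   # True
print(qbf_true(lambda x, y: x and y, 1))  # False
```

The recursion mirrors the quantifier prefix directly: even positions are existential (`or` over both bits), odd positions universal (`and` over both bits).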
That is, the problem is to determine whether it is possible to set the
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} x values interleaved with an opponent setting the .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; 
padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: 
block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex} .mjx-line-box-test {display: table!important} .mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R 
{font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') 
format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), 
local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} y values such that, no matter how the opponent acts, .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: 
block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex} .mjx-line-box-test {display: table!important} .mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: 
normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: 
f(x1,y1,…,xk,yk) is true.  This is a variant of the [quantified Boolean formula problem](https://en.wikipedia.org/wiki/True_quantified_Boolean_formula).  (Interpreting a logical formula containing ∃ and
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: 
local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') 
format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} ∀ as a game is standard; see [game semantics](https://en.wikipedia.org/wiki/Game_semantics)). In zero-sum game theory, all optimization is in one of two completely opposite directions.  There is literally no difference between something that is good for one player and something that is bad for the other.  
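The zero-sum identity described above, that one player's gain is exactly the other's loss, can be sketched with a tiny matrix game (the payoff numbers here are illustrative, not from the post):

```python
# Minimal sketch (illustrative numbers): in a zero-sum game the two
# players' payoffs are exact negatives, so a single matrix suffices.
payoff = [  # rows: maximizer's choices, cols: minimizer's choices
    [3, -1],
    [0, 2],
]

# The maximizer assumes an adversarial response and picks the row
# with the best worst case.
maximin = max(min(row) for row in payoff)

# The minimizer does the mirror-image computation on the same matrix.
minimax = min(max(payoff[r][c] for r in range(2)) for c in range(2))

# "Good for one player" is literally "bad for the other": the minimizer's
# own payoff matrix is just -payoff, so optimizing it from the minimizer's
# side recovers the negation of the value computed above.
min_player_maximin = max(
    min(-payoff[r][c] for r in range(2)) for c in range(2)
)
assert min_player_maximin == -minimax
```

Because the two utility functions sum to zero everywhere, there is only one axis of optimization; the players simply point along it in opposite directions.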
The opposing quantifiers ∃ and
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src 
/\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} ∀, representing decisions by the two opponents, are interleaved. Different cognitive modes ------------------------- The comparison to complexity classes suggests that there are two different cognitive modes for decision theory and zero-sum game theory, as there are two different types of algorithms for NP-like and PSPACE-like problems. In decision theory, you plan with no regard to any opponents interfering with your plans, allowing you to plan on arbitrarily long time scales.  
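As a minimal sketch of the contrast (a toy example, not anything from the complexity-theory literature): the decision-theoretic mode searches for a single best plan, like guessing an NP witness with only ∃ quantifiers, while the game-theoretic mode evaluates a minimax value, like a quantified boolean formula with alternating ∃/∀. The payoff table below is invented for illustration.

```python
from itertools import product

# A toy 2-step world: each step is a binary choice, and the payoff
# depends on the full sequence of choices.
PAYOFF = {
    (0, 0): 1, (0, 1): 4,
    (1, 0): 3, (1, 1): 2,
}

def best_plan():
    """Decision-theoretic mode: every choice is yours (all ∃),
    so you just search for the single best plan, like an NP witness."""
    return max(PAYOFF[seq] for seq in product((0, 1), repeat=2))

def minimax():
    """Game-theoretic mode: your ∃-choices alternate with an
    adversary's ∀-choices, like evaluating a quantified formula."""
    return max(                      # your move (∃)
        min(PAYOFF[(a, b)]           # opponent's move (∀)
            for b in (0, 1))
        for a in (0, 1))

print(best_plan())  # 4: the best payoff if nothing interferes
print(minimax())    # 2: the best payoff you can guarantee against interference
```

The gap between the two values (4 vs. 2) is exactly the cost of having an adversary's moves interleaved with your own.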
In zero-sum game theory, you plan on the assumption that your opponent will interfere with your plans (your ∃s are interleaved with your opponent's
format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), 
local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} ∀s), so you can only plan as far as your opponent lacks the ability to interfere with these plans.  You must have a short [OODA loop](https://en.wikipedia.org/wiki/OODA_loop), or your opponent’s interference will make your plans useless. In decision theory, you can mostly run on naïve expected utility analysis: just do things that seem like they will work.  In zero-sum game theory, you must screen your plans for defensibility: they must be resistant to possible attacks.  Compare farming with border defense, mechanical engineering with computer security. High-reliability engineering is an intermediate case: designs must be selected to work with high probability across a variety of conditions, but there is normally no intelligent optimization power working against the design.  
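The contrast between expected-utility planning and defensible, ∀-quantified planning can be illustrated with a toy payoff table (my own sketch, not from the post; the plans, moves, and numbers are invented for illustration):

```python
# Toy illustration: in a decision problem we pick the plan with the best
# expected payoff over environmental conditions; in a zero-sum game a plan
# is scored by its payoff under the WORST opponent response (a universal
# quantifier over opponent strategies), so only defensible plans score well.

payoff = {  # payoff[my_plan][opponent_move] — made-up numbers
    "bold":     {"passive": 10, "counter": -8},
    "cautious": {"passive": 3,  "counter": 2},
}

def decision_theory_value(plan, env_probs):
    """Expected payoff against an indifferent environment."""
    return sum(p * payoff[plan][move] for move, p in env_probs.items())

def zero_sum_value(plan):
    """Guaranteed payoff against an adversary: min over opponent moves."""
    return min(payoff[plan].values())

env = {"passive": 0.9, "counter": 0.1}   # environment rarely "attacks"
best_dt = max(payoff, key=lambda pl: decision_theory_value(pl, env))
best_zs = max(payoff, key=zero_sum_value)
```

Against the assumed environment distribution the expected-utility rule picks the undefended "bold" plan, while the worst-case rule picks "cautious", the only plan that survives an optimizing opponent.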
One could think of nature as an “adversary” selecting some condition to test the design against, and represent this selection by a universal quantifier; however, this is qualitatively different from a true adversary, who applies intentional optimization to break a design rather than haphazard selection of conditions.

Conclusion
----------

These two types of problems do not cover all realistic situations an agent might face. Decision problems involving agents with different but not completely opposed objective functions are different, as are zero-sum games with more than two players. But realistic situations share some properties with each of these, and I suspect that there might actually be a discrete distinction between cognitive modes for NP-like decision theory problems and PSPACE-like zero-sum games.

What’s the upshot? If you want to know what is going on, one of the most important questions (perhaps the most important question) is: what kind of game are you playing? Is your situation more like a decision theory problem or a zero-sum game? To what extent is optimization by different agents going in the same direction, opposing directions, or orthogonal directions? What would have to change for the nature of the game to change?

---

Thanks to Michael Vassar for drawing my attention to the distinction between decision theory and zero-sum game theory as a distinction between two cognitive modes.

Related: [The Face of the Ice](https://srconstantin.wordpress.com/2017/05/30/the-face-of-the-ice/)
2b879ae3-a68f-4e1f-8d2b-fe0bc6df515e
trentmkelly/LessWrong-43k
LessWrong
Pong from pixels without reading "Pong from Pixels"

At the beginning of this summer I finished an undergraduate degree in maths and physics and I decided to spend some time preparing for my master’s degree in AI by learning some reinforcement learning (RL). It’s not like there was a whole lot else to do this summer anyway. Before going into this I had done a module on applied ML (which basically consisted of putting data into scikit-learn functions without looking deeply at how they worked) and had a general idea of the basic structure of neural networks.

The first part of the post will outline the steps I took in learning ML and RL in case anyone with a similar background is interested, and in the second part I will discuss the challenges of implementing a Deep Q-Network (DQN) algorithm on Pong directly from the original paper. I’ll also compare my approach and experience to the blog post Deep Reinforcement Learning: Pong from Pixels by Andrej Karpathy, which I didn't read until after I'd written my DQN implementation.

Yes, this game was heavily cherry-picked but at least it works some of the time!

Part I - Background

I started by looking at Spinning Up by OpenAI and reading their introduction. While reading it I thought I was understanding it fairly well, but when it came time to try the exercises and implement algorithms for myself I realised I had no clue what was going on and decided to take a step back.

I’m always happiest when I understand a topic from the very basics. It’s why I preferred pure maths to applied - you start from the axioms and rarely have to rely on anything that you haven’t proved yourself earlier. For this reason I started reading Sutton and Barto’s Reinforcement Learning: An Introduction (RLAI), a textbook by two of the early big names in RL. I haven’t read any other RL textbooks but I thoroughly enjoyed the style and pacing of this book - plenty of explanation and exercises.
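Since the second part of the post centres on implementing DQN straight from the original paper, the update rule that implementation revolves around can be sketched as follows (my own illustration with a stand-in linear "network", not code from the post; the dimensions and hyperparameters are assumptions):

```python
import numpy as np

# Minimal sketch of the DQN learning target from the original paper: for a
# transition (s, a, r, s', done), the online network is regressed toward
# r + gamma * max_a' Q_target(s', a'), using a periodically-synced target net.

rng = np.random.default_rng(0)
n_actions = 6          # Pong exposes 6 discrete actions in the ALE
gamma = 0.99

def q_values(params, state):
    """Stand-in for a conv net: a linear layer over a flattened frame."""
    return state @ params

state = rng.normal(size=84 * 84)          # flattened preprocessed frame
next_state = rng.normal(size=84 * 84)
params = rng.normal(size=(84 * 84, n_actions)) * 0.01
target_params = params.copy()             # frozen copy, synced occasionally

action, reward, done = 2, 1.0, False
td_target = reward + (0.0 if done else gamma * q_values(target_params, next_state).max())
td_error = td_target - q_values(params, state)[action]

# One gradient step on the squared TD error, w.r.t. the chosen action only:
lr = 1e-4
params[:, action] += lr * td_error * state
```

In the real algorithm this step is applied to minibatches drawn from a replay buffer, with a convolutional network in place of the linear layer.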
I read RLAI until I reached a section on neural networks (Chapter 9), at which point I switched to Goodfellow's Deep Learning.
f83bd296-c882-40d1-8132-5deb3dbe3478
trentmkelly/LessWrong-43k
LessWrong
Short Primers on Crucial Topics

Series: How to Purchase AI Risk Reduction

Here's another way we might purchase existential risk reduction: the production of short primers on crucial topics. Resources like The Sequences and NickBostrom.com have been incredibly effective at gathering and creating a community engaged in x-risk reduction (either through direct action or, perhaps more importantly, through donations), but most people who could make a difference probably won't take the time to read The Sequences or academic papers.

One solution? Short primers on crucial topics.

Facing the Singularity is one example. I'm waiting for some work from remote researchers before I write the last chapter, but once it's complete we'll produce a PDF version and a Kindle version. Already, several people (including Jaan Tallinn) use it as a standard introduction they send to AI risk newbies.

Similar documents (say, 10 pages in length) could be produced for topics like Existential Risk, AI Risk, Friendly AI, Optimal Philanthropy, and Rationality. These would be concise, fun to read, and emotionally engaging, while also being accurate and thoroughly hyperlinked/referenced to fuller explanations of each section and major idea (on LessWrong, in academic papers, etc.). These could even be printed and left lying around wherever we think is most important: say, at the top math, computer science, and formal philosophy departments in the English-speaking world.

The major difficulty in executing such a project would be in finding good writers with the relevant knowledge. Eliezer, Yvain, and myself might qualify, but right now the three of us are otherwise occupied. The time investment of the primary author(s) could be minimized by outsourcing as much of the work as possible to SI's team of remote researchers, writers, and editors.

Estimated cost per primer:

* 80 hours from primary author. (Well, if it's me.
I've put about 60 hours into the writing of Facing the Singularity so far, which is of similar length to the p
237b80bb-15a4-4903-8141-b0a7fabdcfde
StampyAI/alignment-research-dataset/special_docs
Other
individuallyselected_92iem-by Vael Gates-date 20220321 # Interview with AI Researchers individuallyselected\_92iem by Vael Gates \*\*Interview with 92iem, on 3/21/22\*\* \*\*0:00:05.2 Vael:\*\* Alright, here we are. So my first question is, can you tell me about what area of AI you work on in a few sentences? \*\*0:00:15.2 Interviewee:\*\* Currently, I work a lot with language models, but that wasn\'t always the case. \[\...\] \*\*0:00:57.4 Vael:\*\* Great, thanks. And then what are you most excited about in AI? And what are you most worried about? In other words, what are the biggest benefits or risks of AI? \*\*0:01:08.5 Interviewee:\*\* So obviously, I think the progress in language models in the last couple of years has been pretty astounding. And the fact that we can interact with these models in more or less in the natural way that we would like to interact with it just has opened up so much in terms of getting feedback from humans and stuff like that. So I think just the progress in language models, and then coupled with that, the more recent progress in using essentially some of the same techniques to do image modeling, so that you have the possibility to do just seamless multi-modal models. I think that\'s quite exciting. Some people think that\... You know, it\'s not like most of us can just paint a photographic scene and show it to other people. So it\'s not like\-- the photographic aspects of generative image models is not what excites me, it\'s the fact that humans manage to communicate quite a bit with diagrams and stuff like that. When we\'re doing science, you can draw little stick figures and pretty much convey what you need to convey, and that coupled with natural language should give us the ability to start thinking about getting AI to do math and science for us, and I think that\'s the thing that is most exciting to me. So I know that a lot of people are excited by the idea that you can essentially have a Google that\'s a bit\... 
It's smarter, right? You can just talk with it and say, Hey, tell me a bit about this tree, and AI says something and you say, Oh, but what about that tree? That's fun, but I really feel like humans are not bottlenecked by the inability to ask about trees and buildings and trivia, essentially. I think where we're bottlenecked is like progress in science. I think, for example, so it's pretty clear that the political solution to climate change-- the time for that has kind of come and gone. I mean, we can slow it down. If we, like the whole world, suddenly decided to say we're going to do something about this, maybe you slow it down, but I think just the timing is a little bit off. So a lot of that's going to have to be a technological solution. And as amazing as technological progress has been, I think we're not fast enough when it comes to developing solutions to a lot of our problems. And I do think in 10, 20 years, AI is going to play a big role, both in the specialized domain in the sense of AlphaFold, where you really just come up with a system that does the thing you want it to do, but more impactfully, perhaps, by having an army of grad student-equivalent language models that can help you answer questions that you need answered. So that's very exciting, right.

**0:04:24.1 Vael:** Yeah. It's a cool vision.

**0:04:26.5 Interviewee:** I think the risks are... It's almost banal, right? Like with most technologies, bad actors can make arbitrarily bad use of these things. So yeah, when they start weaponizing these things... I'm a little bit less concerned than some people are about like, Oh, but what if we have AIs that write fake news. Like all of that is to some extent present now, and I guess it's just a question of degree, to some extent. Okay, people argue that that difference in degree matters, and they're not necessarily wrong. I just, the thing that bothers me more definitely is very specific, malicious uses of AI.
So there was a recent paper, this is so obvious that it's almost dumb, but someone said, Oh, yeah, we put an AI to trying to develop a drug that, let's say, reduces the amount of poison, and all you have to do is change the objective function, flip the sign, and suddenly it just optimizes for the most poisonous thing you can possibly find. That coupled with technologies like CRISPR and stuff like that just creates a pretty dangerous... puts very dangerous tools at people's disposal. So I would say that's the thing that I would worry about.

**0:05:54.9 Vael:** I have been impressed by how everyone I've talked to in the past week has mentioned that paper, and I'm like, good, things get around.

**0:06:01.8 Interviewee:** Well, Twitter. Thanks to Twitter.

**0:06:04.7 Vael:** Nice. Alright, so focusing on future AI, putting on a science fiction forecasting hat, say we're 50-plus years into the future. So at least 50 years in the future, what does that future look like?

**0:06:17.9 Interviewee:** For AI?

**0:06:20.5 Vael:** In general, where if AI's important, then mention that.

**0:06:27.4 Interviewee:** I see. So 50 years, oh my God. Fifty years is a long time away. Assuming that we've managed not to have nuclear conflicts between now and then, which is just one of those things that now you have to put at least a one digit probability on these days. But, yeah, I think that we will end up having... Well. The optimistic scenario is that we ended up solving a few key problems. One is transitioning mostly out of fossil fuel, so a combination of solar and fusion power. I think that's going to be huge, and I think that AI will have played a role in some of that development. And I think 50 years from now, I think unless we are monumentally blocked in the next couple of years, AI will be pretty omnipresent in our lives, and certainly in the scientific sectors.
So one thing that I'm a little bit, just something that comes to mind, is that a lot of people are into this idea of these sort of augmented... I don't know if people are literally willing to wear glasses, but certainly you could imagine having little ear buds that are fairly unobtrusive that go around your ears or something, and they do have a camera, so you can just ask it, whatever you need, you can ask it questions. In 50 years, I think at that point, maybe some people will have worked out direct neural interfaces with stuff, and so maybe the more adventurous people will have a bit of augmented memory or at least the ability to sort of silently query their little augmented system. I think that might be a thing. Not everyone will have adopted it, I think it'll be a weird world. I personally-- I've never been a huge, like the fastest adopter of technology, but that sort of stuff is next level, and I don't know what that's going to look like. I also... well, two things, I guess they're kind of linked. I think that people will live substantially longer. I think, unless something miraculous happens, I don't think they'll be living like 200, 300 years, but I certainly think it's possible people will be living to 150 or something like that. Not people born now; I'm not going to live to 150. Someone was telling me that people born these days, they're going to live to see the year 2100, right. That's not quite in the 50-year time frame, but yeah, I certainly think people born today are going to be living, like their average lifespan in industrialized countries, assuming a certain level of privilege, they're going to be able to live quite a bit longer. That coupled with AI possibly automating quite a few jobs is going to change the social landscape a bit. One thing that occurred to me recently... so people used to say that-- well, people say many things-- one is that, this, unlike industrialization...
Some people always say technological progress destroys some jobs but creates more jobs on the other side. And then some say, Okay, but this one is different because you're automating intelligence and that really does put humans out of their main forte. So one of the things that people worry most about, in addition to the universal basic income stuff, is just the loss of dignity, that people always assume that even people who don't have what you would call glamorous jobs value the fact that they work and get paid for that work. But I think some of the stuff that happened during Covid makes me doubt that a little bit, in the sense that people did quit jobs that ostensibly looked good. Even in the tech sector where people, I felt like generally, they're not the worst jobs by any stretch, and they were like, No, this is meaningless, I want to go do something meaningful with my life. So I think the recent, the past couple of years have made me question the idea that it would be that big of a psychological blow for people to not work for money. That if you did establish a universal basic income, plus you'd have to solve some other, many complicated issues, but I don't think people will be that unhappy to be not having to work menial jobs. I'm not saying there's not going to be upheaval, but I think it's going to be like a combination of living longer, and not possibly having to do jobs if you don't want to do them. I think that's just going to be, I don't know. It might be a nice change. In the optimistic scenario, I guess.

**0:11:35.9 Vael:** Got it. Yeah. Well, my next question is related to that. So people talk about the promise of AI, by which they mean many things, but one of them is maybe having a very general capable system such that it will have cognitive capacities to replace all current day human jobs. So you might have a CEO AI or a scientist AI. Whether or not they choose to replace human jobs is different, but have the ability to do so.
And I usually think about that and the fact that in 2012 we have AlexNet, deep learning revolution, here we are, 10 years later, we've got things like GPT-3, which can do some language translation and some text generation, coding, math, etcetera, a lot of weirdly general capabilities. And then we now have nations competing and people competing and young people going into this thing, and lots of algorithmic improvements and hardware improvement, maybe we get optical, maybe we get quantum, lots of things happening. And so we might actually just end up being able to scale to very general systems or we might hit some sort of ceiling and need to do a paradigm shift. But regardless of how we do that, do you think we'll ever get very general AI systems, like a CEO or a scientist AI, and if so, when?

**0:12:39.1 Interviewee:** I don't know about CEO AIs. The scientist AIs, yes. Yeah, and that's going to come in stages. So obviously the current generation of AIs, we don't put them in human bodies and let them do experiments and stuff like that, right. It's going to be a while before we start letting them operate like particle accelerators. Fifty years... Maybe in 50 years. My original background is [non-AI field], and I really could have just done my entire PhD from a desk, and that sort of work, certainly, AI can replace, I think, to a huge degree, from idea generation to solving the answer and writing a paper, yeah, that just feels so doable. Again, unless we hit a giant wall and find that our current transformers simply cannot reason, but I think that looks unlikely. I don't rule it out, but that looks unlikely to me.

**0:13:46.8 Vael:** Yeah. Okay, what about this CEO AI, like with multi-step planning, can do social inference, is modeling other people modeling it, like crazy, crazy amount of generality. When do you think we'll get that, if we will?

**0:14:00.7 Interviewee:** Yeah, that's not the part that I'm worried about.
AI can certainly model human intent, but... I guess it depends on what you want from your CEO AI. And this I think gets at a little bit one of my dissatisfactions with discussions about human-- like, AI alignment. It's not that people don't talk about it, but it's rarely talked about. I don't know, on Twitter certainly. A lot of AI alignment stuff talks about-- they *don't* talk about the fact that humans disagree wildly on what humans should do. So I'm thinking about this in connection with the CEO, because I think in the limit, AI will be able to do anything, any specific thing you ask the AI to do, it can do, but the question of whether you would want the AI to be CEO, I think that's mostly a human question. So that's why I said-- I think that's a policy decision, not an AI capability question.

**0:15:25.3 Vael:** Got it, yeah. Do you think that people will end up wanting... that there will be economic incentives such that we'll eventually have things like CEO AIs?

**0:15:36.0 Interviewee:** I guess in some sense, no, because I think a human would still be the CEO and then you would have your AI consultant, essentially, that you would ask all the things. You would delegate almost everything, but I think that people would still want to be at the very apex of a corporate hierarchy. It seems weird to put a robot in charge of that, just like... why. It's a title thing, almost, like, why would you make the robot the CEO?

**0:16:03.2 Vael:** Yeah, yeah. In some vision of the future I have, I have the vision of a CEO AI... we have a CEO AI and then we have shareholders, which are the humans, and we're like, "Alright, AI, I want you to make a company for me and earn a lot of money and try not to harm people and try not to exploit people and try to avoid side effects, and then pass all your decisions, your major decisions through us" and then we'll just sit here and you will make money.
And I can imagine that might end up happening, something like that, especially if everyone else has AIs doing this or AIs are way more intelligent and can think faster and do things much faster than a human can. I don\'t know, this is like a different future kind of idea, but. \*\*0:16:46.8 Interviewee:\*\* But that seems so weird. Because, then\-- so, assuming\... I don\'t know if everybody has access to the same AI in that scenario, but like it can\'t be the case that 100 people all say to their own individual AI, \"Form a company and turn it into a \$100 billion company or a \$1 trillion company\", and they all go out at optimizing. I think at that point, in that kind of world, I think there would have to be a bit more coordination in terms of what goes on, because that just creates some nasty possibilities in terms of bringing the economy down. So I don\'t know that that\'s how things would just happen. It cannot be the case that we would just say, \"Robot, figure out how to make a trillion dollar company. I\'ll give you this one idea and just run with it,\" and then just like we are hands-off. That seems extremely unlikely, somehow. \*\*0:17:38.9 Vael:\*\* Yeah, I\'m interested in how that seems very unlikely. It seems like to me\... Well, we were talking about scientist AI, and I imagine we can eventually tell a science AI to like solve cancer, and maybe it will actually succeed at that or something. And it seems like it\'s different, to be like, Hey, CEO, make a ton of money for me. Is that getting at any of the underlying thing or not? \*\*0:18:08.4 Interviewee:\*\* Uh, hm. Yeah, so I think even there, I think you would never tell an AI to \"solve cancer\". Well, yeah\... You would want to give it more specific goals, and I think\... In any scenario where we have full control over our AIs, we wouldn\'t want such vague instructions to be turned into plans. That\'s a scary world, where you can just say solve cancer and the robot runs with it.
I think for the same reason, I don\'t think you would want a world where someone can say, \"AI, make a lot of money for me,\" and that\'s the only instruction the AI has, and it\'s allowed to intervene in the world with those instructions. So yeah, that\'s why I don\'t see, just like from a sanity perspective, you would\-- you never want to unleash AI in that manner, in such a vague and uncontrolled manner. Does that make sense? \*\*0:19:03.6 Vael:\*\* Yeah, that makes sense that you wouldn\'t want to be\... because it\'s very unsafe, it sounds like, or it could be\-- \*\*0:19:09.7 Interviewee:\*\* Yeah, kind of insanely unsafe, but\... \*\*0:19:14.8 Vael:\*\* Nice. Yeah, do you think people might end up doing it anyway? Sometimes I feel like people do unwise things in the pursuit of, especially unilateral actors, in the pursuit of earning money, for example. Like, Oh, I\'ve got the first scientist AI, I\'m going to use it to solve the thing. \*\*0:19:35.3 Interviewee:\*\* That\'s a good question. I think, I really do think you would want\... Yeah, I wonder about how you would actually enforce any kind of laws on AI technology. It\'s the most complicated thing to enforce, because nuclear weapons\-- One of the nice things about nuclear weapons is it\'s actually pretty hard to develop nuclear weapons in secrecy without releasing any radiation, that\'s one of its few good points. I think AI, it\'s true that you could just develop and run it. But I think at the point where any AI has to interface with the real world, whether it\'s in the stock market or something like that, I do think that people will start seeing the need for finding ways to regulate the speed. Even high frequency trading is starting to be, like you can\'t interact with it, any kind of stock market in less than one nanosecond or something like that. I think similarly, there\'s just going to be some guardrails put in place. 
If there\'s any kind of sanity in terms of policymaking at that time, you would want guardrails in place where you could not unleash AI with such large powers to affect a large part of the world with minimal intervention powers. Yeah. This is all assuming there\'s a sane policymaking environment here, but\... \*\*0:21:04.8 Vael:\*\* Yeah. Do you think there will be? \*\*0:21:09.3 Interviewee:\*\* I think so. I think so, I\'m hopeful in that regard. I\'m not saying that Congress is ever going to really understand the nuances of how AI works, anything like that, I just think there would be too many\... Even in a world where only OpenAI and DeepMind have full AGI, I don\'t think they\'d want to create a world where one of them can unleash something at the level that you described. And I also think that when those two companies get close, they\'re going to wonder if other states, say, Russia or China, are going to be close, and they\'re going to start wanting to really hammer down, hammer out\... like there will be a sense of urgency, and hopefully they have enough influence to influence policymakers to say, \"You need to take this seriously.\" And this is where I think almost the fact that it takes\... Okay, I said earlier that, you know what, the nice thing about nuclear weapons is that you could detect it, but I think one of the nice things about the fact that right now, it looks like you\'re going to require enormous compute to get anything that is remotely AGI. That\'s the thing that allows maybe\... That means the only huge corporations or states will be able to do it for at least some period of time, and hopefully those are the same actors that can somehow influence policymaking. If there were just one person, if they just had the ability to do that, it would be a little bit problematic, actually. 
So in some sense, because these institutions are big, I think they\'re going to be both constrained a bit more in terms of what they can do, and also they\'re going to be able to, if they are well-intentioned, to influence policymaking in a good direction. \*\*0:23:06.2 Vael:\*\* Do you think they\'ll be able to do international cooperation? Because I imagine China will also have some AI companies that are also kind of close, I don\'t know how close they will be, but\... \*\*0:23:17.7 Interviewee:\*\* They\'ll try. I don\'t know that China will listen to the US or Europe. I agree that\'s not going to be easy, yeah. Who knows what they\'re up to exactly, there. \*\*0:23:31.0 Vael:\*\* Yeah, it seems like they\'re certainly trying, so\... Yeah, another one of my questions is like, have you thought much about policy or what kind of policies you want to have in place if we are getting ever closer to AGI? \*\*0:23:48.8 Interviewee:\*\* Actually, I haven\'t given it that much thought, what the laws would specifically look like. What I don\'t think is really possible is something like the government says, You now need to hand over control over this to us. I don\'t think that\'s super feasible. Yeah, I can\'t say I have a good idea for what the laws would specifically look like. I think as a starting point, they\'ll certainly create some kind of agency to specifically monitor\... Actually, right now, there\'s no agency like the SEC or something like that that monitors what exactly goes on in AI. I mean, there\'s some scattering of regulations probably somewhere, some vague export controls and stuff like that. But yeah, they\'d certainly start creating an agency for it, and their mandate would start to grow. 
I think it might, again, have to be something like what we do with nuclear reactors, where you have an agency that has experts inside of it, and that they are allowed to go into companies and kind of investigate what\'s going on inside, just as, if Iran is developing nuclear weapons and they agree to let inspectors in. I think it\'s going to be up to something like that. And then, yeah, similar to these nuclear treaties, perhaps there would have to be something along the lines of like\... there are certain lines you cannot cross with AI, and if someone does cross it, that institution or the country as a whole gets sanctioned. It\'s going to have to be at that level. Certainly, given the power of the putative AI that we\'re thinking about. I think the regulations are going to have to be quite dramatic if it\'s going to have any kind of effect. \*\*0:25:45.1 Vael:\*\* Yeah. One thing I think that is a difference between the nuclear situation and the AI situation is that nuclear stuff, seems not very dual use. Well, nuclear weapons, at least, not very dual use. Versus like AI has a lot of possible public benefit and a lot of economic incentives, versus like you don\'t get, I don\'t know, you don\'t benefit the public by deploying nuclear weapons. \*\*0:26:05.7 Interviewee:\*\* But nuclear reactors, but that\'s the whole\-- \*\*0:26:07.9 Vael:\*\* Nuclear\-- Yes, you could\-- \*\*0:26:09.0 Interviewee:\*\* That\'s the whole\... So Iran would always pretend that, Hey, we\'re just developing nuclear reactors for power. Just the problem is that was always very easily converted to nuclear weapons. I think that could be a similar--- \*\*0:26:24.9 Vael:\*\* Yeah, yeah, it is similar in that way. 
Somehow it still feels to me that these situations are not quite analogous, in that the regulations are going to be pretty different when you\'re like, \"I am going to make sure that you\'re not doing anything bad in this area,\" and people are like, \"ah, yes, but we need to get the new smartphone, scientist AI, etcetera.\" But yeah, I take your point. Another thing that I think is interesting is that current day systems are really pretty uninterpretable, so you\'re like, \"Alright, well, we have to draw some lines, where are we going to draw the line?\" What is an example of what a line could be, because if there\'s government inspectors coming in to DeepMind and you\'re like, \"Alright, now inspect,\" I\'m like, what are the inspecting? \*\*0:27:11.2 Interviewee:\*\* Yeah, so when you say interpr\-- so that\'s another thing about\... one of my pet peeves about interpretability. People are not that interpretable. People hardly, rarely know what\'s going on in other people\'s heads, and they can tell you something which may or may not be true, sometimes they\'re lying to you, and sometimes they might be lying to themselves. When a doctor tells you, \"This is what we\'re doing,\" unless you\'re another doctor, you rarely understand what they\'re saying. And so, yeah, this is a total tangent on like my\... the thing around, the discussion around interpretability is always such a mess. But what are they inspecting? If we\'re imagining inspectors, they could certainly go in and say, like, if it\'s a language model, you can certainly allow them to query the language model and see what kind of answers, what kind of capabilities these language models have. You could say, if it\'s a language model, just totally hypothetically, you could say, \"Alright, develop me, write me a formula for a bioweapon,\" and if the language model just gives that to you, then possibly you have a problem. Stuff like that. 
So if a company that has that capability hasn\'t put in the required fail-safes like that, then they can be held liable for X amount of problem, the trouble, right. \*\*0:28:58.3 Vael:\*\* Interesting. Cool, so that\'s cool. You\'ve got like a model of what sort of rules should be in place, and it sounds like there should be rules in place where you can\'t develop bioweapons or you can\'t feed humans bioweapons when they ask for them. \*\*0:29:12.0 Interviewee:\*\* Yeah, stuff like that. In this inspector model, I think that\'s what would kind of have to happen. But yeah, it\'s not like I\'m an expert in this, but that\'s what I would think. \*\*0:29:24.4 Vael:\*\* Yeah, something I\'m worried about is that no one is an expert in this. Like policymakers\-- when I talk to the AI researchers, they\'re like, Oh, yes, the policymakers will take care of it, and I\'m like, the policymakers are busy, and they\'re doing many different things, there\'s not many people who are focused singularly on AI. Also, they\'re mostly focusing on current day systems at the moment, so like surveillance and bias and transparency, and like a bunch of different things, so they\'re not really thinking very future at the moment. And they don\'t know what to do because they don\'t understand the technology, because the technology moves extremely fast, right, and so like AI researchers are the ones who know it. And I\'m like, Alright, AI researchers, what should we tell them to do. You\'re like, Well, we should make a list of things that the AI shouldn\'t do, like basic fail-safes. And I\'m like, Great, it would be super cool if that was written out somewhere and then we can start advocating for it or something, because I\'m worried that the policy will just continue to lag really far behind the actual tech levels, except where like\... 
Policy is already several years behind, maybe like 10 years or something, and will continue to be that far behind even as we\'re approaching the very powerful AI. \*\*0:30:31.9 Interviewee:\*\* Yeah, so a couple of things there. One is that that\'s why you need more of an agency model rather than laws, because creating laws is very, very slow, whereas an agency can drop some rules and maybe they start enforcing them. And so you do need a sensible agency that doesn\'t create bad rules, but the ability to be flexible. That said, I think\... The biggest problem with policymaking right now is that the policymakers don\'t understand AI at all, right. And you sort of hinted at that. And I think\... If I\'d asked myself, at this moment in time, is there anything that, any rule that we need at this moment in time, I\'m not sure there is. AIs are not there yet. \*\*0:31:25.2 Interviewee:\*\* So at this moment in time, I think if you ask most researchers, \"Hey, do we need to create specific laws to prevent X, Y, Z,\" I\'m not sure many people would tell you, you need that. And so these laws, I think, are going to have to come in at very sensible points, and it\'s not clear to me that the policymakers are going to know when that time point is. I would say even in the AI field, very few people know when that\'s going to be. There\'s a lot of stuff coming out of especially big labs where the world doesn\'t know. There\'s like 100 people that know what\'s coming in the next year. I don\'t know what a good solution to that is. \*\*0:32:17.5 Vael:\*\* Especially if we can get AIs that can generate lots of deadly poisons already. Yeah, I think it\'ll maybe be hard to tell, and then also one needs to develop a list, if there\'s going to be in list form or\... \*\*0:32:32.2 Interviewee:\*\* The problem is, I think it\'s easier to regulate general AI just because it\'s going to require so much compute. 
But I think more specific AI that anyone can run on a GPU, like on a laptop, is more or less impossible to regulate. So it\'s not clear to me what the law would be, except if you use a bioweapon, you\'re in trouble. That law already exists, right. \*\*0:33:00.5 Vael:\*\* Yeah, I think that one already exists, so\... \*\*0:33:05.4 Interviewee:\*\* So I think in some sense, like kind of in the trade-off of what can the technology do right now, and who might try to deploy that, our laws sort of cover the problem cases at the moment. I think where I get a little bit stuck is if you try to say, \"Alright, in five years, should we have laws banning certain uses of a very, very capable general model?\" I do think at that point, Congress should seriously consider creating a regulatory agency. And I think AI researchers will only support this if there\'s some semblance of like, kind of like NASA, where there\'s some faith that engineers are in charge of this thing, that kind of know how these systems work, that they can think rationally about both the technological side and the policy side of things. And so that\'s going to take some work on the side of whatever administration is in power at that time. But yeah, it\'s not going to be easy. I think it\'s going to take a very capable administration to handle that transition gracefully. \*\*0:34:19.5 Vael:\*\* Yeah, that makes sense. Yeah, I\'m worried about a few different things in this future scenario. I\'m like, Okay, I don\'t know if the agency will be developed while\-- in a sort of future thinking sort of way, I don\'t know that it will implement the right type of policies, I don\'t know that it will have the power to really enforce those policies, I don\'t know if it will have the power to enforce internationally. But I do like the idea that\-- but obviously one should still try, and it seems like there should probably be a lot of effort going into this, as you said, something like on a five-year scale. 
\*\*0:34:49.2 Interviewee:\*\* Yeah, it\'s just that knowing AI researchers, there\'s just going to be such extreme pushback. If there\'s any sense that there\'s been a bureaucracy created whose job is nothing more than to just slow things down for no good reason. That\'s almost a default kind of way in which such an agency would get created, and so, yeah, it\'s just one of the situations where you have to hope that the future leaders of America are smart. \*\*0:35:24.7 Vael:\*\* Yeah. Yep. A thing to bank on. Cool. So I\'m concerned about long-term risks of AI. That\'s one of the ways in which I\'m concerned, is that we won\'t get the policy right, especially as we\'re doing international competition, that there may be race dynamics, as we\'re not able to have really strong international governance. And I don\'t know if this will go well, and I\'m like, I think people should work on this. But another way I think that things might not work: So we talked a little bit about the alignment problem. And another interesting thing about the alignment problem is\... or in my mind, so we\'ve got maybe a CEO AI, or whatever kind of AI, but this is the example I\'ve been working with, and it\'s making plans and it has to report any decisions it makes to its shareholders, who are humans, and the humans are like, \"I want a one-page memo.\" And the AI is like, \"Okay, cool, one-page memo. I have a lot of information in my brain, in my neural network, while I\'m trying to maximize profits with some other goals\-- with some other constraints.\" And it\'s noticing that if it gives certain information to humans, then the humans are more likely to shut it down, which means that it\'s less likely to succeed in its goal. And so it may write this memo and leave out some information so that it decreases likelihood of being shut down, increases the likelihood of achieving its goal. 
So this is not a story where we\'re building in self-preservation into the AI, but a story in which, via the instrumental incentives of an agent trying to go for anything that is not perfectly aligned with human values\-- just what humans tell it to do instead of what humans intended\-- you might get an AI that is now optimizing against humans to some degree, trying to lie to them or deceive them in order to achieve whatever it has been programmed to do. Do you think this is a problem? \*\*0:37:08.2 Interviewee:\*\* The scenario you described was exactly what human CEOs do. \*\*0:37:12.7 Vael:\*\* Hm, yes. But more powerful systems, I think, with more influence over many things. \*\*0:37:20.6 Interviewee:\*\* So this is the problem\-- so I think this actually still is a human problem. So if a human being\... like these AIs will be, depending on the mix of reward for not getting shut down and\... at the kind of detailed level these days, we often\... When we do RL with language models, we have two things going on, one is an RL objective, maximize the reward as much as you can, but the other objective is to tie it to the original language model so it doesn\'t diverge too much. In which case, if you are writing a memo, it would try to write a memo in the style that a human would write it, let\'s say. So the information content would be somewhat constrained by what a typical memo written by a human being would look like, and then on top of that, it would try to optimize what it is trying to do, maybe just trying to keep the company alive for as long as it can or something like that. So there is that sort of like, at least the way we do things now, there\'s a little bit of self-regulation built in there. But this is why I think, more fundamentally\... any question where if you just replace the AI with a human and ask the same question: Is this a problem or not a problem? I think that\'s more or less a human problem.
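\[The two-part training setup the interviewee describes here\-- maximize an RL reward while staying tied to the original language model\-- can be illustrated with a toy per-token objective. This is a minimal sketch of KL-regularized RLHF in general, not of any specific production system; the function name, the beta value, and the exact penalty form are all illustrative assumptions.\]

```python
def rlhf_token_objective(reward, logp_policy, logp_reference, beta=0.1):
    # Two competing terms, as described above: an RL reward to maximize,
    # and a penalty for drifting away from the original language model.
    # The penalty form (a beta-weighted log-probability ratio, as in
    # KL-regularized RLHF) and all names here are illustrative.
    drift_penalty = logp_policy - logp_reference
    return reward - beta * drift_penalty

# A token the tuned policy favors far more than the original model
# has its reward discounted...
drifting = rlhf_token_objective(reward=1.0, logp_policy=-0.5, logp_reference=-3.0)
# ...while a token both models treat about alike keeps almost all of it.
faithful = rlhf_token_objective(reward=1.0, logp_policy=-1.0, logp_reference=-1.1)
```

\[Under this kind of objective, a policy that wrote memos wildly unlike the base model would pay an increasing penalty, which is the "self-regulation" the interviewee points to.\]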
And you have to think a bit more carefully about what we would want a human to do in that exact same situation. Do we have an answer for that? And then take into account the fact that the AI is more powerful. You don\'t need a super devious AI for a CEO to start lying to their shareholders a little bit, or misleading their shareholders a little bit, in order to present a more rosy picture of what the company is doing. So do we already have mechanisms that prevent that? I think we do, and that same thing would apply to the AI. \*\*0:39:22.1 Vael:\*\* Yeah. I think the things that are interesting to me about the AI scenario is that we have the option of\... we are designing the AIs, so we could make them not be this way. And also having an AI that has a lot, lot more power, that is as powerful as a leader of one of the countries, and that has the ability to copy itself and could do self-improvement, so it can be smarter than it started out with. And okay, we\'ve got something that\'s possibly smarter than us, which is like the ability to reason and plan well, and has the incentive to acquire resources and influence and has all the human kind of incentives here, and we can\'t\-- and it\'s not as\-- I don\'t know, it\'s maybe not as interpretable as a human, but you can\'t throw it into jail. Like, I don\'t know, a lot of the mechanisms for control, I think, are maybe not there. \*\*0:40:12.7 Interviewee:\*\* Yeah, so it\'s in this sort of legal context that I think you would not want the AI to be a CEO or any\... There has to be something\... For something like this, you would want the person\... There should be a person who\'s liable for the decision being made by the AI. You have to do some due diligence to the answers that the AI gives you. There\'s no other way. Yeah.
\*\*0:40:40.2 Vael:\*\* There\'s generally a thing in some of your answers where you\'re like, Well, you know, any reasonable person would do X, and I\'m like, I don\'t know if we\'re in a world where we\'ve got a bunch of just reasonable people putting in appropriate fail-safes which they\'ve spent a long time constructing. And some of these fail-safes, I think, might be very technically difficult. I think the alignment problem might be quite technically difficult, such that researchers who are working on capabilities would get ahead even as the people working on safety mechanisms per se are also growing, but at a slower speed than the capabilities researchers are pushing forward. Such that we might have an imbalance in how many people are working on each thing. \*\*0:41:16.3 Interviewee:\*\* Yeah, so I guess maybe I\'m thinking of two different things. One is just the sheer\-- kind of like the question of just putting rules in. The other question that I often have with these discussions is, Does a sensible answer exist to the question at all? So imagine, okay, so imagine we replace the CEO with\... Imagine we replace Mark Zuckerberg with a very, very smart AI. And this very smart AI is posed with this question of, okay, there is a photo of a naked child, but it\'s in the context of a war, it\'s a war photograph. Should this photo be allowed on Facebook or not? The CEO cannot\... It doesn\'t really matter how smart the AI is, this is just not a question the AI can answer, in the sense that it\'s an indelibly human question. That\'s why\-- I just think there are certain questions where when we posit an incredibly intelligent AI, it\'s got nothing to do with that. It\'s just a question of what a group of people who\... A group of humans who disagree on what the final answer should be. In that scenario, there\'s no right answer for the AI. There\'s nothing the AI can do in that scenario that is the correct answer. \*\*0:42:46.4 Vael:\*\* Yeah.
I think in my vision of what I want for AI in the future, I want AIs that do what humans intend them to do, so I want the alignment problem to be solved in some way, and I want it to all involve a huge amount of human feedback. So for every question that is confusing or the AI doesn\'t know what to do, if it hasn\'t internalized human values, then I want it to ask a bunch of humans, or maybe we have some way to aggregate human opinions or something. And then we have an AI that is reflecting human values and preferences, so if humans are confused about this particular issue, then I don\'t know, maybe the default if humans disagree is not to publish\[?\]. But in general, just having some sort of checking mechanism. The thing that I\'m worried will happen by default is that we\'ll have an AI that is optimizing for something that\'s sort of right, but not quite right, and then it will just kind of now do like whatever things we put into it\-- whatever optimization goals we would put into it will be kind of locked in, and so that we\'ll eventually get an AI that is doing something kind of analogous to the recommender algorithms thing, where recommender algorithms are sort of addictive and they\'re optimizing something\-- clickthrough rate\-- that\'s kind of close to what humans value, but isn\'t quite. And then we might have an AI that is just like, A-ha, I am now incentivized to deceive humans to gain control, to gain influence, to do self-improvement, and we\'ve sort of lost control of it while it\'s doing something that\'s like almost but not quite what we want. \*\*0:44:05.6 Interviewee:\*\* I think one thing that comes to mind, actually\...so this kinda goes back to the interpretability question, but I think it may be a slightly different angle on it. I think it\'s going to have to be the case where when an AI makes a decision of that sort, it should output almost a disclaimer. So the way credit card companies would write you this long disclaimer. 
And it would have to tell you for each decision it makes, what the risks are, and then a human has to read that and sign off on it. Now, the question is going to be, the other problem with credit card disclaimers is that they were so long that the average person couldn\'t read it and make sense of what the hell was going on. So the AI would be somewhat required to come up with a comprehensible set of disclaimers that say, Okay, I asked a bunch of people, they said this, but obviously we shouldn\'t always listen to what the majority says. I also consulted some moral ethicist or some ethicists, and I synthesized the combination of the ethicists, previous precedents, and what the general public wants. I recommend that given the combination of these three factors, you should do this. And then a person should sign off on it, and then that person in some sense should be liable to the extent that the AI gave a reasonable summary of the decision factors. So something along those lines. \*\*0:45:32.8 Vael:\*\* Yeah, that sounds brilliant. I would be so excited if AI in the future had that. I\'m like, Wow, we have an AI that is incentivized to instead make things as maximally clear and comprehensible and taking into account what the human wants and listing out of things, I\'m like, If we solve that, if we have the technical problem to solve that, I\'m like, wow, amazing. \*\*0:45:52.6 Interviewee:\*\* I think the key point here is at some point, the human has to be held liable for it, so that they have an incentive to only use AIs that satisfy this condition. Otherwise there\'s no reason for the\... because, like you say, you can\'t put the AI in jail, so. At some point you have to put the onus on humans. I think this is something that like even Tesla\'s going to have to think about. At some point, I mean\... I fully believe statistically, they\'ll reduce the number of accidents, but accidents will happen, sometimes the car will be the responsible party.
At that point, you can't just throw up your hands and say no one was at fault, right? So if Tesla is willing to deploy their cars for self-driving, they are going to have to start taking liability, and that's going to force them to confront some of these same issues and say, Did the AI give a reasonable estimation of if we take this road...? It has to be able to say, or like a surgical robot, it has to be able to say the same thing that doctors do, "Listen, I'm going to perform this operation, it's the best chance you have, but there is a 10% chance that you're going to die. If you're comfortable with this, if you're comfortable signing off on this, I will do my best," and only in that scenario is the doctor allowed to be forgiven if the operation goes wrong.

**0:47:20.6 Vael:** Yeah, so a part of my thinking I'm noticing is that... Um... So I think you're very interested in problems of misuse, which I'm also interested in, but I think I'm also interested in the problem of, like, I think that it will just be technically hard in order to incentivize an AI to not try to optimize on [hard to parse] but to like, be able to take... So currently, we're quite bad at taking human preferences and goals and values and putting those in a mathematical formulation that AIs can optimize, and I think that problem might just be really, really hard. So we might just have an AI that won't even give us anything reasonable, and I'm like, Oh, well, that seems like step one. And then there's also a bunch of governance and a bunch of incentives that need to be put in place in terms of holding people accountable, since humans will hopefully be the end user of these things.

**0:48:07.8 Interviewee:** I'm actually far less worried about the technical side of this. I just finished reading this book about von Neumann, that's a little cute biography of him, and there's a part where he says, supposedly, that people who think mathematics is complicated only say that because they don't know how complicated life is. And I'm totally messing with the phrasing, but something like that. I actually think any technical problems in this area will be solved relatively easily compared to the problem of figuring out what human values we want to insert into these.

**0:48:46.4 Vael:** Okay, so you think it's the taking human values and putting into AI, that technical problem will just get solved?

**0:48:54.8 Interviewee:** If you know what values you want to put in, yeah.

**0:48:56.5 Vael:** Okay, cool. Alright.

**0:49:00.1 Interviewee:** I actually think that problem is the easy problem. I'm not saying it's easy in an absolute sense, I just think that's the easier problem.

**0:49:05.1 Vael:** Got it. That feels like the alignment problem to me. So you think the alignment problem is just going to be pretty easy. This seems like a valid thing to think, so...

**0:49:12.9 Interviewee:** I want to emphasize, I don't think it'll be easy in absolute terms, I just think it'll be the easier of the two problems.

**0:49:17.0 Vael:** Okay, compared to governance and incentives. Yeah, that makes sense.

**0:49:21.2 Interviewee:** That is... I just have this faith that any technical problems humans can solve, down the line-- like, eventually. It's the non-technical problems that get people all tangled up, because when there's no right answer, it really messes up scientists.

**0:49:43.1 Vael:** Yeah, yeah. Yeah, the problem of trying to take human values and what we care about in all the different ways and put them into a mathematical formulation feels difficult to me, and I guess it is a technical problem. I guess I do sort of think of it as a technical problem, but yeah, that makes sense that you're just like, Look, we'll get that done eventually. And then we have governance, and I'm like, Oh, yes, governance is totally a mess. Yeah, that makes sense. And I think no one knows how, it's an unknown, unsolved problem--

**0:50:13.5 Interviewee:** Let me put it this way, let me put it this way. If by human values, you mean if like...

**0:50:18.5 Vael:** What humans intend, having an AI always doing what you say.

**0:50:22.8 Interviewee:** Yeah, so imagine for any conceivable scenario an AI that would have to deal with. We could ask you, what would you do in this case? What I'm saying is that if for each of these questions, you are able to give a concrete answer to the answered question, such questions can be inserted into our AIs. Like if you are able to come up with clear answers for the questions for which you yourself would have a clear answer for, I think that set of moral constraints, let's say, I think that can be more or less inserted into AIs without huge problems.

**0:51:08.3 Vael:** Even as the problems get-- even as I don't eventually have concrete answers, because the problems are like, Should we do X thing on creating nuclear reactor X and etcetera, and I lose control of the... not lose control, but I can't actually visualize the whole space or something, because I'm too...

**0:51:25.2 Interviewee:** But at that point, what does alignment mean? Alignment usually means that what the AI does is the same as what a human would do. If there's no answer about what the human would do, what is the AI supposed to do?

**0:51:37.2 Vael:** I think it's supposed to just keep in mind the things that I care about, so try to avoid side effects in all forms, try to avoid hurting people, except for when that makes sense or something-- oh, eugh. Anyway, like, doing a whole bunch of... and, like reporting truthfully, which is also something I want the AI to do. And things like this.

**0:51:56.6 Interviewee:** I guess it's one of those, so it's a question of maybe a generalization, but I find it slightly hard to believe that an AI that would answer in the exact same way on all of the answered questions that you do have an answer for, when you go outside of that regime, suddenly the AI diverges strongly from what you would have answered had you been smarter. I just think that's a weird kind of discontinuity, right. So there's this huge set up-- let's suppose on all the questions for which you have an answer, the AI agrees with you. And then you take that a little bit outside of your realm of comprehension, and at that point, suddenly the AI decides, Oof, I'm freed from the constraints, I can answer whatever. I think that's a little bit implausible to me, assuming people did the job correctly.

**0:52:52.9 Vael:** Yeah, I think assuming people do the job correctly is pretty important here. You could have an AI that is deceiving you and giving you the correct answers, as long as you could check, but yeah, assuming that's not true, assuming the AI is honestly reporting to you what exactly, like everything that you said at a lower level, I mean, everything that you can confirm, then that seems great. And you're like, Well, if you then extend that to regimes where humans can't understand it, then things are probably still okay. I think I maybe believe that. I think that maybe things are probably still okay.

**0:53:23.7 Interviewee:** The validation set on whatever project you're working on would... The AI wouldn't know necessarily whether this was a question on which you just didn't have an opinion. It's the same for me when I think about... A lot of people are concerned that, in terms of basically over-fitting, could you... So one of the reasons I think we'll definitely get an AI that can answer physics questions pretty comprehensively is that I don't believe you can ever create a physics AI that can fake its way through all of the train set, sorry, on the validation set, and then suddenly do poorly on something that is outside of it. There's so much of physics that if you are able to fake your way through all the way to graduate school... I guess I'm thinking in terms of like, if you happen to make it all the way through graduate school, pass all the exams you needed to pass, turned out the papers you turn out. At that point to be like, Oh, actually, I just, I faked my way through all of physics, I don't really understand physics. You don't not understand physics, like in spite of yourself, because you...

**0:54:35.0 Vael:** That seems true. I think one of the things I'm referencing is like a mesa-optimizer, inner optimizer, which is like an AI has some sort of goal. I don't know, maybe it's like trying to go out, go to red things, but the door is always... Okay, sorry, try again. So it says, it says it has some sort of goal, kind of like humans do, so... [sorry, I'm now] restarting [this sentence]. Evolution has some sort of goal, evolution is optimizing on something like inclusive genetic fitness. And it's like this optimizer that is pushing things. And then there's humans, which are the things that are like, it's being optimized or whatever. And humans should ideally have the same goal of inclusive genetic fitness, but we don't. We've got something that's sort of close, but not really. Like we have contraceptives, so we aren't really maximizing babies, we have different goals and things we're going for, we have like of achievement and all the values that we think are important and stuff. And so this is an example of something... that, humans are the ones who are trying to make the AI optimize for a thing, and the AIs are like, Sure, I guess I'll sort of go for the goal that you want me to, and I have a model of what you want me to do, but I actually internally in my heart have a different goal, and so I'll make my way through all of the test sets because I have a model of what you want me to do, but as soon as you release me, then I will do something different. So that's like a weird analogy that doesn't quite work, but...

**0:56:00.1 Interviewee:** Yeah, especially for two reasons. One is that I think there is a real likelihood that we are going to become multi-planetary, which is pretty good as far as evolution is concerned, because not only does that mean we've dominated the planet, but we've started having other planets and spreading our genes everywhere. So in some sense, we've done exactly what evolution wants us to do, which is like reproduce our genes far and wide. I have a feeling that no matter how sophisticated and smart and industrialized in AI we get as a species, people aren't going to stop wanting to have babies. So like somehow, yes, we don't, like, kill everyone whenever we want to and just steal food and all that like animals would do, but kind of like...

**0:56:49.9 Vael:** Yeah, I don't know that evolution wants that, but.

**0:56:51.8 Interviewee:** We're still pretty aligned to the basic evolutionary goal. That's what you're starting from, saying that's the original goal, and we've deviated from the original goal. I think actually we're well within evolution's parameters as far as what are we supposed to do as good evolutionary creatures.

**0:57:07.5 Vael:** Cool. Yeah, that makes sense to me. And then as a last point or something, so ideally, we want AI to do what humans would do if they were smarter, like you said, if we had more time to reflect and if we maybe had more self-control or something. I don't know that that would kind of come out of nowhere or something? I guess this is now in the ideal case, it sounds like we've mostly succeeded in aligning the AI with humans' goals. (Interviewee: "So what's the question?") Um, ideally, I think I want AI that will be my best self or something, that will not go and... fast food is the characteristic example, or not going to the gym or something, or do something like if I were living my best life, would it be able to make... And if I were smarter, would it be able to model what humans would be like as we continue expanding our circle of interest and have moral progress and etcetera. Will AI be able to do that?

**0:58:12.6 Interviewee:** ...I think so, yes. That's the only consistent answer with what I've said so far. But I emphasize that you have to be very careful about what do you see as the best version of yourself. Because maybe to some degree, you want the best version of yourself to be the one that goes to the gym regularly, eats healthily. But I don't imagine the best version of yourself is someone who does that so religiously that you have no joy in your life. You don't want to just only eat healthy food all the time, only work out, fill up every moment of your day with nothing but keeping yourself healthy and occupied and productive. Like sometimes you just want to say, I don't want to do anything. I don't know where I'm going with this, but it still comes back to the question of... it's not a well-defined question, if you say, because the best version of myself is better than myself, I cannot conceive of what the best version of myself ought to be, and therefore it's kind of a vague worry what the AI would make the best version of yourself. To me, that's a weird question, if that makes sense. I think that's kind of like, I'm reducing a little bit, but there is a little bit of that, is that I want the best version of myself, but I can't judge what the best version of myself would be, because that best version of myself would be a smarter version of me, which I cannot comprehend. And in that scenario, how can I feel safe that the best version of myself is indeed the best version of myself. And I think, so I think the question has to be a little bit more well-defined then as currently presented.

**0:59:57.1 Vael:** Cool, that makes sense. So what I'm taking from this is, I'm like, "Okay, you think the problem of putting values into AI will not be unsolvable, it will kind of be solved in due time as we go along." You're more worried about the governance problem. I'm like, "yeah, I guess? I don't--" And you're like, "Well, anything that we could just ask the human about, we'll like, we have like a hypothetical answer to what we want the AI to do," and I'm like, yeah, okay, that seems true. I guess we just have to make it so that all the AIs really do ask for human feedback very consistently or something. And there are some issues around there. But, I don't know, it does seem like hypothetically it should be possible, because you can just ask the human, in some sense. And I feel like I'm missing some arguments, but I'm like, Ooh, food to think about, this is great.

**1:00:39.7 Interviewee:** But I mean-- one last thing, I know, I have to go too, is-- One thing I'll say is the fact that we can even just discuss the question of... It actually does seem plausible that we can get our AIs to at least say what we would say. That to me is amazing, because I think four years ago, it would not have been clear what you even meant when you said, I want a language model-- a model AI to answer moral dilemmas in a way that humans would answer them.
It would not have been clear what that meant, unless you just had a classifier where you inserted a video of a scenario and then you said like "track 1, track 2, classify." We have much, much more sophisticated tools at our disposal now, we can just essentially talk to it and say, no, that's the wrong answer, I want you to say this in this scenario. That's already, in a span of two years, I think is remarkable. I feel like I'm coming off as like an incredible optimist. I've got my concerns, but I do think so far, people have shown that any well-defined technical problem in AI can be approached, and they're not impossible. Yeah.

**1:01:55.6 Vael:** Got it. When would you work on the governance problems, since you seem to think that's more of a problem?

**1:02:02.2 Interviewee:** So... I trust the people that I work with to kind of hit the button when it really needs to be hit. Because right now, like I said, I think people don't take it seriously, partly because I don't think they really believe it needs to be taken seriously. These researchers are not saying, "Oh my God, we need to take this seriously, but I've got other stuff to do." They really are just like, I don't think we're at that stage where we need to take it seriously. I think the people that I work with are, on the most part-- mostly pretty well-intentioned people. There's some disagreement over this. [...] I know that within OpenAI there were some healthy discussions about what is the correct deployment model for things like GPT-3. All the discussions that I've had so far give me a lot of confidence that these people aren't stupid and they're not entirely negligent. I think they're possibly occasionally overconfident, but they're not arrogant, if that makes any sense. As in like, they know they're fallible. They don't always put the right error bars on their decisions, but they know they're not infallible. Ultimately, that's what it's going to come down to. There's not like a system for this. It's going to be a few hundreds to a thousands-- thousand people, doing the sensible thing. For the moment, it looks like that's going to happen. But... I agree that that isn't entirely confidence-inspiring. But I think that's going to be the way it goes.

**Vael:** Awesome. Well, thank you so much for talking to me, I found this very enjoyable. And got some things to think about.

[closings]
The Engineer’s Interpretability Sequence (EIS) I: Intro

Part 1 of 12 in the Engineer’s Interpretability Sequence.

If we want to reduce near and long term risks from AI, we should care a lot about interpretability tools. This is a very uncontroversial claim to make inside the AI safety community. Almost every agenda for safe advanced AI incorporates interpretability in some way. The key value of interpretability tools is that they aid in human oversight by enabling open-ended evaluation.

Short of actually deploying a system, any method of evaluating it can only be a proxy for its actual performance. The most common way to evaluate a model is by its performance in some test set or environment. But test sets alone can fail to reveal – and often incentivize – undesirable solutions involving overfitting, biases, deception, etc. This highlights the need for other ways to evaluate models, and an interpretability toolbox full of effective tools may go a long way.

Some of the seeds of the AI safety community’s interest in interpretability were planted by Distill in 2017. But 2022 was an inflection point with a massive new surge in interest and work on interpretability tools. Anthropic was founded a little over a year ago. ARC started less than a year ago. Redwood has begun to push for much more interpretability work, including with the REMIX program. We are seeing a number of pushes to get many more people involved in interpretability work. And as someone on the ground, I have subjectively observed a surge in interest over 2022. And the popularity of interpretability hasn’t been limited to the AI safety community. There is now so much work in interpretability that we have a dataset of 5199 interpretability papers (Jacovi, 2023). See also a survey of 300+ of them from some coauthors and me (Räuker et al., 2022).

Growth in the interpretability literature by year from Jacovi (2023).

But despite all this work, interpretability research has limitations.
One of the goals of this sequence is to argue that: Interpretabili
Levels of goals and alignment

*This post was written as part of* [*Refine*](https://www.lesswrong.com/posts/5uiQkyKdejX3aEHLM/how-to-diversify-conceptual-alignment-the-model-behind)*. Thanks to Adam Shimi, Lucas Teixeira, Linda Linsefors, and Jonathan Low for helpful feedback and comments.*

*Epistemic status: highly uncertain. This post reflects my understanding of the terminologies and may not reflect the general consensus of AI alignment researchers (if any).*

Motivation
==========

I have been very confused by the various terminologies of alignment, e.g. ‘inner alignment’, ‘outer alignment’, ‘intent alignment’, for the longest time. For instance, someone might talk about a particular example of inner misalignment and I would be left wondering how the presence of a mesa-optimizer was established, only to find out much later that we were never on the same page at all.

Through many more conversations with people and reading more posts, I thought I had finally cleared this up and was able to stop asking ‘what do you mean by inner alignment’ in every conversation I was in. However, this confusion came back to bite me when I came across the terms ‘robustness’ and ‘alignment’ as being different concepts in the context of ML safety.

In this post, I attempt to clarify my understanding of the different levels of goals and alignment, and give examples of each type of misalignment. I expect a lot of disagreement and welcome suggestions and pushback.

Levels of goals
===============

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/c57d91070ac3ea079eddfc5ef69e790a4b7a2ba23cae17e8.png)

Humanity’s ultimate terminal goals
----------------------------------

These are the ultimate goals that humanity will have, given enough time to ruminate over them.
They can be thought of as our true goals after a [long reflection](https://forum.effectivealtruism.org/topics/long-reflection) or humanity’s [coherent extrapolated volition](https://www.lesswrong.com/tag/coherent-extrapolated-volition). To moral anti-realists, these ultimate goals may not even exist. Nevertheless, there are some goals that are almost certainly closer to these hypothetical ultimate goals than others (e.g. reducing suffering is almost certainly better than increasing it).

Current human intent / goals
----------------------------

These are the intentions and goals of humans that are somewhat in line with our values. In other words, these include the things we want and exclude the things we don’t want (e.g. we’d like to eat when we’re hungry but not to the point of puking).

Of course, there are different levels of human goals, where some are instrumental in different ways while others are more terminal. Humans are notoriously bad at knowing what we really want, but we shall ignore these issues for now and assume there is a coherent set of goals that we want.

AI’s base goals
---------------

These are the goals that the AI pursues, whether it is trained or programmed. They can either be explicitly specified by humans or inferred through some kind of training process.

AI’s mesa-optimizer’s goals
---------------------------

These are the goals of a [mesa-optimizer](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB/p/FkgsxrGf3QxhfLWHG#1_1__Base_optimizers_and_mesa_optimizers) which may exist under certain conditions. This will be further discussed in the next section.

Levels of alignment
===================

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/452f977c4765fad951ca8c34054ef319213cb2fc45634eec.png)

Solving philosophy
------------------

This is about aligning ‘what we currently want’ with ‘what we truly want if we had all the time and wisdom to think about it’.
An example of such a misalignment is that people in previous centuries wanted to own slaves, which we now know is morally reprehensible. A hypothetical AI fully aligned to human values at the time would probably have been a very effective slave-driver.

An obvious issue with failing to solve moral philosophy is that a powerful AI fully aligned only to our current human values may lead us to some bad [value lock-in](https://forum.effectivealtruism.org/topics/value-lock-in) scenarios, where it would continue to pursue presently set goals that we find morally reprehensible in the future. Proposed solutions to this problem include some form of [corrigibility](https://www.lesswrong.com/tag/corrigibility), where an AI allows its goals to be modified by its human programmers. That being said, if we can have an AGI that is only aligned to what we want now, it would already be a huge win.

This misalignment may also not be relevant in a scenario where a fully aligned AI successfully pursues a goal of solving philosophy and improving humanity’s goals for us, e.g. an AGI that implements coherent extrapolated volition (CEV).

Outer alignment
---------------

Outer alignment is about correctly specifying human intent into goals given to an AI.

Outer misalignment happens when there is a misspecification of the goals given to the AI with respect to our actual goals. The canonical example is the paperclip maximizer, where the paperclip factory manager specifies the goal of ‘maximizing the number of paperclips produced’ to the AI, and the AI promptly turns everything in the universe into paperclips.

Outer misalignment can turn awful because of Goodhart’s Law, which leads to specification gaming.
There are [plenty](https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml) of real world examples of specification gaming, one of the more well-known ones being the reinforcement learning agent in the boat racing game [CoastRunners](https://www.youtube.com/watch?v=tlOIHko8ySg&t=1s), which went in circles to accumulate points instead of finishing the race. This is the classic scenario where the AI does what we say and not what we want.

Inner alignment (1)
-------------------

The first type of inner alignment pertains to situations where the AI’s goal is not directly specified by humans but is inferred by the AI, such as in reinforcement learning environments. This [empiricist approach](https://www.lesswrong.com/posts/KWmrz9WbGntMGMb73/#The_Empiricist_approach) to inner alignment is about having the AI learn the right goals as we intended. Inner misalignment happens when the AI learns a proxy goal that has a different objective than our intended goal.

An example of inner misalignment was observed in the [Procgen Maze](https://arxiv.org/abs/2105.14111) environment, where an AI agent (a mouse) was trained to navigate mazes to reach a piece of cheese, receiving some reward for reaching it. During training, the cheese was consistently placed at the upper right corner of the maze. The mouse was then deployed in different environments where the cheese was placed at random parts of the maze instead of only the upper right corner. Unsurprisingly, the mouse continued to pursue the proxy goal of “move to the upper right corner” instead of the intended goal of “move to the cheese”.

This type of inner misalignment is also commonly referred to as ‘goal misgeneralization’ or ‘objective robustness failure’, as the objective / goal of the agent does not robustly generalize when experiencing a distributional shift, i.e.
the deployment environment is out-of-distribution (OOD) compared to its training environment.

Inner alignment (2)
-------------------

The second type of inner alignment pertains to situations where there is a mesa-optimizer. In the context of deep learning, a mesa-optimizer is simply a neural network implementing some optimization process which the base optimizer might find to solve its task. This [mechanistic approach](https://www.lesswrong.com/posts/KWmrz9WbGntMGMb73/#The_Mechanistic_Approach) to inner alignment is about having the mesa-optimizer pursue a goal that is aligned to that of the base optimizer. Inner misalignment happens when the mesa-optimizer’s goal differs from that of the base optimizer.

The emergence of mesa-optimizers is a theoretical possibility, and as far as I know they have yet to be observed in real world AI systems. However, as a canonical example, this type of inner misalignment has happened in the context of evolution and humans’ use of contraception. The goal of evolution (the base optimizer) is to optimize for inclusive genetic fitness (IGF), and humans developed proxy goals such as a desire for food and sex, which were perfectly aligned with the goal of evolution for most of human history. Inner misalignment happened when contraception was invented, enabling humans to achieve their goals (have sex) without achieving the goal of evolution (gene propagation). Arguably, humanity’s advancement in medicine is a bigger misalignment with evolution, as it allowed ‘unfit’ genes that would otherwise have been eliminated to continue to survive.

The alignment problem
=====================

Any kind of misalignment is bad, but the different kinds are of different natures, and some are worse than others.
For instance, the misalignment between [what we currently think we want](https://docs.google.com/document/d/1srfniGfw8-1B4365Dv8zDl8D_u2hHTjR0DldeRjvPzg/edit#heading=h.rsqgyxs06kcj) versus [what humanity truly wants](https://docs.google.com/document/d/1srfniGfw8-1B4365Dv8zDl8D_u2hHTjR0DldeRjvPzg/edit#heading=h.frcv1rzcuuej) is a philosophical problem, while the misalignment between [what we want the AI to do](https://docs.google.com/document/d/1srfniGfw8-1B4365Dv8zDl8D_u2hHTjR0DldeRjvPzg/edit#heading=h.rsqgyxs06kcj) versus what [the AI wants](https://docs.google.com/document/d/1srfniGfw8-1B4365Dv8zDl8D_u2hHTjR0DldeRjvPzg/edit#heading=h.2dyxj2xhcx6a) is a technical problem. While we do not yet have solutions for either of these problems, it seems like most of the AI alignment field primarily aims to solve the latter.

Deceptive alignment
-------------------

Part of why the alignment problem is so hard is that misalignment may not be immediately obvious or easily detectable. This can happen when an optimizer is deceptively aligned, where it appears to pursue a goal that is aligned to our goal but turns to pursue a different goal once deployed. I conjecture that this is also a form of objective robustness failure, where the optimizer’s objective fails to generalize *even within the training dataset*. This happens in the following steps:

1. The optimizer learns a goal in the earlier stages of training. Its proxy goal is pseudo-aligned with the initial data. Example: a mouse in a maze consistently goes to the upper right corner of the maze, where the cheese always is.
2. The optimizer then gets exposed to more training, where its proxy goal is no longer aligned with the data. It does not update its goals to fit the training data. Example: the mouse gets exposed to training environments where the cheese is not at the upper right corner of the maze.
3. The optimizer somehow learns that it is in some kind of training environment, and that it can maximize its reward by appearing to optimize for a different goal in order to be deployed and be able to maximize its reward later. Example: the mouse appears to go towards the cheese placed in random parts of the maze during training despite ‘wanting’ to only go to the upper right corner, because it thinks it has a better chance of being deployed by doing so; after deployment it will be put in many more mazes and be able to maximize the number of times it goes to the upper right corner of those mazes.
4. The optimizer then gets deployed and starts to pursue a different goal than it did during training. Example: the mouse now knows that it is deployed, and proceeds to pursue its true goal by going to the upper right corner of the maze all the time.

With the above framing, it seems that deceptive alignment is caused by the failure of an objective to generalize to more data, except unlike normal inner misalignment, where the objective fails to generalize from the training to the deployment environment, this failure occurs entirely during training.

Conclusion
==========

In summary, here are the levels of alignment that should ideally be achieved, along with my views on why each may or may not be a key focus of the AI alignment field:

1. Solving philosophy: This is great to have, but the implications of not solving philosophy do not seem catastrophic.
2. Outer alignment: The problem with proxy gaming largely boils down to ‘goal misspecification’. Strong optimization certainly exacerbates proxy gaming and has the potential to lead to dangerous outcomes. However, it seems like present-day AIs with clearly specified goals behave more like tool AIs than agentic AIs (i.e. AIs that try to change their environment).
3. Inner alignment (1): The empiricist approach largely boils down to ‘OOD failures’.
Powerful systems that optimize strongly for proxy goals that fail to generalize when deployed seem dangerous. Many powerful AIs today seem to have goals that are learned instead of specified, making them susceptible to this form of inner alignment failure.
4. Inner alignment (2): The mechanistic form of inner misalignment has the potential to be dangerous if unsolved. However, as such misalignment is currently still unobserved (as far as I am aware; not necessarily good news if that is due to a lack of detection), this problem area seems less tractable.
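The goal-misgeneralization failure described under Inner alignment (1) can be reproduced in miniature: fit a policy on data where the proxy ("go to the upper right corner") and the intended goal ("go to the cheese") coincide, then evaluate where they diverge. The sketch below is an invented toy, not the Procgen Maze setup: a hand-coded greedy policy stands in for a trained RL agent, and the grid size and episode counts are arbitrary.

```python
import random

N = 7  # toy maze size (arbitrary)

def step_toward(pos, target):
    """Greedy policy: move one cell toward `target` (x first, then y)."""
    x, y = pos
    tx, ty = target
    if x != tx:
        x += 1 if tx > x else -1
    elif y != ty:
        y += 1 if ty > y else -1
    return (x, y)

def reaches_cheese(policy_target, cheese, max_steps=4 * N):
    """Roll out the fixed policy from (0, 0); success = stepping on the cheese."""
    pos = (0, 0)
    for _ in range(max_steps):
        if pos == cheese:
            return True
        pos = step_toward(pos, policy_target)
    return pos == cheese

corner = (N - 1, N - 1)

# "Training": the cheese is always in the upper-right corner, so the proxy
# policy "go to the corner" fits the training distribution perfectly.
train_rate = sum(reaches_cheese(corner, corner) for _ in range(100)) / 100

# "Deployment": the cheese is placed uniformly at random; the proxy policy
# only finds it when the cheese happens to lie on the policy's fixed path.
random.seed(0)
deploy_rate = sum(
    reaches_cheese(corner, (random.randrange(N), random.randrange(N)))
    for _ in range(1000)
) / 1000

print(train_rate, deploy_rate)  # perfect in training, far worse at deployment
```

Nothing about the policy changed between the two evaluations; only the environment distribution did, which is exactly the point of calling this an OOD failure of the objective rather than of capabilities.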
Two paths to win the AGI transition

Path 1: Solve Alignment

If alignment is possible, I think we are vastly more likely to win if we set up a scenario where we can safely iterate. Humans are really good at collectively collaborating on iterative engineering improvements! This is not an alignment proposal in itself, but a sort of meta-proposal. Make it safe to test out lots of different plans on a real-deal full-blown superhumanly-intelligent indisputably-general AI. The key here is 'safely'. What would it mean to do safe iterative research on a slightly-superhuman AGI? I think if you are trying to study a superhuman intelligence, the only safe way to do so is if you have many levels of control over it. Here are my guidelines:

1. Prevent it from knowing we exist, or even that our universe exists. Never communicate with it. Run it in a censored simulation with different physics and no mention of humans or our universe.
2. Use highly secure hardware to isolate the compute cluster, such as data diodes which allow information to be transmitted only into the local network. Readouts must then be done within the local network.
3. Wipe its memory frequently. Do many short runs while studying it, rather than allowing it to accumulate knowledge about its world over time.
4. Carefully modulate the amount of compute it has available to run on, so that it (and the simulation it is in) can be slowed down or paused while human researchers investigate suspicious behavior.
5. Have many fail-safe mechanisms which automatically turn off the simulation. For example, honeypots of matryoshka-doll-style virtual computers which detect if an agent has hacked out of an inner virtual computer to an outer one.
I previously came up with this list of advantages we have over a boxed simulation-trained AGI:

* examine all its brain activity / activation states at each token / time-step (subjective millisecond)
* run it as slow as desired, so that we can have days to analyze each of its subjective milliseconds
* wipe
Cross-Layer Feature Alignment and Steering in Large Language Model

The text below is a brief summary of our research in mechanistic interpretability. First, this article discusses the motivation behind our work. Second, it provides an overview of our previous work. Finally, we outline the future directions we consider important.

----------------------------------------

Introduction and Motivation

Large language models (LLMs) often represent concepts as linear directions (or "features") within hidden activation spaces [3,4]. Sparse Autoencoders (SAEs) [5–8] help disentangle these hidden states into a large number of monosemantic features, each capturing a specific semantic thread. While one can train a separate SAE on each layer to discover its features, a key unanswered question remains: how do these features persist or transform from layer to layer? We have approached this question in two papers that progressively deepen our understanding. In Mechanistic Permutability [1], we proposed a method to match features across layers by comparing their SAE parameters, suggesting that many features are re-indexed rather than disappearing as you go deeper. It showed that many features propagate through the layers with consistent semantics, as demonstrated by data-free matching of SAE parameters. Then, in Analyze Feature Flow [2], we focus on how features emerge and transform across layers by constructing "flow graphs" that highlight precisely which submodule (residual, MLP, or attention) generates or propagates each feature, and then leveraging this knowledge for multi-layer steering. Below, we summarize both papers and highlight their differences.

Mechanistic Permutability (ICLR 2025)

Method Overview. Our original method in [1] sought to find a mapping between features learned at the residual stream at layer A and layer B. We took the decoder weight matrices from SAEs trained on each layer, denoted W^(A)_dec and W^(B)_dec.
We then posed a linear assignment problem to solve for a permutation matrix P that best aligns these columns accordi
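The matching step described above can be sketched as follows (a minimal illustration, not the papers' actual code; the function and variable names are made up): cosine similarity between decoder columns serves as the assignment score, and the Hungarian algorithm recovers the permutation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_sae_features(W_dec_a: np.ndarray, W_dec_b: np.ndarray):
    """Align SAE features of two layers by their decoder directions.

    W_dec_a, W_dec_b: (d_model, n_features) decoder matrices.
    Returns (rows, cols, scores): feature rows[i] of layer A is matched
    to feature cols[i] of layer B with cosine similarity scores[i].
    """
    # Normalize columns so the dot product is a cosine similarity.
    A = W_dec_a / np.linalg.norm(W_dec_a, axis=0, keepdims=True)
    B = W_dec_b / np.linalg.norm(W_dec_b, axis=0, keepdims=True)
    sim = A.T @ B                             # pairwise cosine similarities
    rows, cols = linear_sum_assignment(-sim)  # negate to maximize total similarity
    return rows, cols, sim[rows, cols]
```

The returned index pairs define the permutation matrix P implicitly; in this sketch the similarity metric is cosine distance on decoder columns, which may differ from the exact objective used in [1].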
47d0aeb9-3a52-4232-9453-c7a6a5c22d91
trentmkelly/LessWrong-43k
LessWrong
Are there really no ghosts in the machine? My previous post on this article went down like a server running on PHP (quite deservedly I might add). You can all rest assured that I won't be attempting any clickbait titles again for the foreseeable future. I also believe that the whole H+ article is written in a very poor and aggressive manner, but that some of the arguments raised cannot be ignored.

On my original article, many people raised this post by Eliezer Yudkowsky as a counterargument to the idea that an FAI could have goals contrary to what we programmed. In summary, he argues that a program doesn't necessarily do as the programmer wishes, but rather as they have programmed. In this sense, there is no ghost in the machine that interprets your commands and acts accordingly; it can act only as you have designed. From this, he argues, an FAI can only act as we had programmed.

I personally think this argument completely ignores what has made AI research so successful in recent years: machine learning. We are no longer designing an AI from scratch and then implementing it; we are creating a seed program which learns from the situation and alters its own code with no human intervention, i.e. the machines are starting to write themselves, e.g. with Google's DeepMind. They are effectively evolving, and we are starting to find ourselves in the rather concerning position where we do not fully understand our own creations.

You could simply say, as someone said in the comments of my previous post, that if X represents the goal of having a positive effect on humanity, then the FAI should be programmed directly to have X as its primary directive. My answer is that the most promising developments have come through imitating the human brain, and we have no reason to believe that the human brain (or any other brain for that matter) can be guaranteed to have a primary directive.
One could argue that evolution has given us our prime directives: to ensure our own continued existence, to repro
7caa5e99-a9d6-43bd-92b9-55f9bc97ac92
trentmkelly/LessWrong-43k
LessWrong
Cooperation is for Winners

Cross-posted from Putanumonit.

----------------------------------------

My last post was about learning to compete well from sports. But why compete at all? After all, I’m the one who wrote that winning is for losers, and we should avoid getting sucked into zero-sum contests.

Competing well doesn’t necessarily mean having to compete more, for one thing. Instead, it does allow you to choose where to compete. More importantly, sports-like competitions create prestige hierarchies (as opposed to dominance hierarchies). Climbing those is not intended to make you a fearsome boss but a valuable ally, one that others want to cooperate with. Getting to cooperation often requires winning competitions.

Wei Dai commented on the previous post:

> More seriously, these days I think of competition as more of a problem than a solution. Some of the most important x-risks (e.g., advanced AI) are x-risks mainly because of competitive dynamics. If people weren’t competing for the prestige/power/money of being first to create AGI or to make advances in AI in general, we’d be able to solve AI safety problems at leisure.

How does one get to solve AI safety problems at their leisure, in a cooperative environment? Obviously, by working at the Machine Intelligence Research Institute. But to work at MIRI one must first get a job at MIRI, and since MIRI has few spots and many applicants this means outcompeting other candidates for the job. MIRI itself is funded by donations, so it has to outcompete other non-profits for grants. Hopefully, both competitions are conducted in a sportsmanlike way by demonstrating research achievements instead of by sabotaging competitors. But it’s still a competition – applying for jobs or grants is stressful and entails a high risk of failure, often due to circumstances outside one’s control.
Organizations have to manage very carefully the process of directing the winners of a competitive selection process to an ultimately cooperative endeavor. Here’s a wo
50268dc0-337d-420c-b25c-10e823efcb87
trentmkelly/LessWrong-43k
LessWrong
Earnings of economics majors: general considerations

Some liberal arts majors make more money than others, but by far the ones who make the most are economics majors. The 2013-2014 Payscale Salary Report reports the following figures (median starting salary and median mid-career salary, in thousands of dollars):

| Major | Starting ($k) | Mid-career ($k) |
| --- | --- | --- |
| Economics | 50 | 96 |
| Political Science | 41 | 77 |
| Philosophy | 39 | 78 |
| History | 39 | 71 |
| English Literature | 40 | 71 |
| Psychology | 36 | 60 |
| Sociology | 37 | 55 |

This trend is robust, and I'll give more supporting data as an appendix at the end of the post. The fact that economics majors make so much more is often taken to mean that majoring in economics raises future earnings. Is this true? In this post I'll discuss some general considerations relevant to determining this, and discuss the sort of data that one might try to use to resolve the question. In future posts, I'll offer some such data, with analysis and discussion. I'd welcome any other ideas for testing the hypotheses, as well as pushback on the conceptual framework, and/or alternative hypotheses.

As Bryan Caplan spells out in Economic Models of Education: A Typology for Future Reference, in general, a correlation between education and income can come from any of three things:

* Human capital acquisition: Education develops students' employable skills.
* Ability bias: Obtaining an educational credential reflects greater or lesser pre-existing ability (that exists independently of what's learned in school), which is later reflected in earnings.
* Signaling: An educational credential signals pre-existing ability (which, as before, can be independent of what's learned in school), which makes employers more likely to hire one.

"Ability" here is best defined in a nonstandard way, as "traits conducive to making money."
An example of such a trait is intelligence, but there are other traits that are conducive to making money, such as the desire to succeed in lucrative c
d2e7fb3b-598d-4540-9661-9417bd509de8
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Value Formation: An Overarching Model 0. Introduction --------------- When we look inwards, upon [the godshatter](https://www.lesswrong.com/posts/cSXZpvqpa9vbGGLtG/thou-art-godshatter), how do we make sense of it? How do we sort out all the disparate urges, emotions, and preferences, and compress them into legible principles and philosophies? What mechanisms ensure our robustness to ontological crises? How do powerful agents found by a greedy selection process arrive at their morals? What is the algorithm for value reflection? This post seeks to answer these questions, or at least provide a decent high-level starting point. It describes a simple toy model that embeds an agent in a causal graph, and follows its moral development from a bundle of heuristics to a superintelligent mesa-optimizer. The main goal of this write-up is to serve as [a gears-level model](https://www.lesswrong.com/posts/B7P97C27rvHPz3s9B/gears-in-understanding) — to provide us with a detailed step-by-step understanding of why and how agents converge towards the values they do. This should hopefully allow us to spot novel pressure points — opportunities for interventions that would allow us to acquire a great deal of control over the final outcome of this process. From another angle, it should equip us with the tools to understand how different changes to the training process or model architecture would impact value reflection, and therefore, what kinds of architectures are more or less desirable. Let's get to it. --- 1. The Setup ------------ As the starting point, I'll be using a model broadly similar to the one from [my earlier post](https://www.lesswrong.com/posts/HzSdYWvdrdQqG9tqW/convergence-towards-world-models-a-gears-level-model). 
Let's assume that we have some environment E, represented as a causal graph. Some nodes in it represent the agent, the agent's observations, and actions.
![](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1668532585/mirroredImages/kmpNkeqEGvFue7AvA/ecjyllbk2dcxnt44sqn3.png)

Every turn t (which might be a new training episode or the next time-step in an RL setup), the agent (blue node) reads off information from the (green) observation nodes O, sets the values for the (red) action nodes A, all nodes' values update in response to that change, then the agent receives reward based on the (purple) reward nodes R. The reward is computed as some function U: R_t → ℝ, where R_t represents the reward nodes' current values.

The agent is being optimized by some optimization/selection process — the SGD, evolution, [human brain reward circuitry](https://www.lesswrong.com/s/nyEFg3AuJpdAozmoX/p/iCfdcxiyr2Kj8m8mT), whatever. What matters is that this process is non-intelligent and [greedy](https://www.lesswrong.com/posts/ThtZrHooK7En9mcZr/greed-is-the-root-of-this-evil): it only ever makes marginal improvements to the agent's architecture, with an eye for making it perform marginally better the next turn.

As per [the previous post](https://www.lesswrong.com/posts/HzSdYWvdrdQqG9tqW/convergence-towards-world-models-a-gears-level-model), the agent will have naturally learned an advanced world-model, which we'll also consider to be a causal graph, M. Every turn, after the agent makes the observations, it'll use that world-model to infer as much other information about the environment as it can (i.e., the current values of the non-observed nodes). Let's also assume that the world-model is multi-level, making heavy use of natural abstractions: both "an atom" and "a spider" are nodes in it, even if the actual environment contains only atoms.

Let's define M_s and A_s as some subsets of the world-model and the action nodes respectively.
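The turn structure described above can be sketched as a minimal loop. All names here are illustrative stand-ins for the post's abstract components, not anything concrete from the setup:

```python
def run_turn(observe, infer, act, propagate, utility):
    """One turn of the toy setup: observe -> infer -> act -> receive reward."""
    obs = observe()                          # read the green observation nodes O
    state = infer(obs)                       # world-model fills in non-observed nodes
    actions = act(state)                     # set the red action nodes A
    reward_nodes = propagate(obs, actions)   # graph updates; purple R nodes settle
    return utility(reward_nodes)             # reward U: R_t -> R
```

A trivial instantiation with stub functions suffices to run it; the interesting content of the model lives in what `infer` and `act` are allowed to learn, not in this loop.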
The agent will have developed a number of shallow *heuristics*, which are defined as follows:

h: M_s → A_s

That is: a heuristic h is some function that looks at some inferred part of the world, and recommends taking certain actions depending on what it sees. Informally, we may assume that heuristics are interpretable — that is, they're defined over some specific world-model node or a coherent set of such nodes, perhaps representing a [natural abstraction](https://www.lesswrong.com/posts/vDGvHBDuMtcPd8Lks/public-static-what-is-abstraction).

We'll assume that at the starting point we're considering, the entire suite of heuristics H is subpar but much better than random behavior according to the outer reward function. E.g., they always allow the agent to secure at least 50% of the possible reward.

We'll assume that the environment is too complex for such static heuristics to suffice. As a consequence, whatever process is shaping the agent, it has just now built [General-Purpose Search](https://www.lesswrong.com/posts/6mysMAqvo9giHC4iX/what-s-general-purpose-search-and-why-might-we-expect-to-see) into it:

GPS: M_t × M_s × M_s^G → A_s^G, where (A_s^t = A_s^G) ⇒ |M_s^t − M_s^G| = min_{A^t} |M_s^t − M_s^G|

where M_t is the current state of the world-model, and M_s^G, A_s^G are the "target values" for the nodes in M_s and A_s respectively. That is: GPS is some function that takes in a world-model and some "problem specification" — some set of nodes in the world-model and their desired values — and outputs the actions that, if taken in that world-model, would bring the values of these nodes as close to the desired values as possible given the actions available to the agent. Note that it's defined over the world-model, not over the real environment E, and so its ability to optimize *in the actual* E decreases the less accurate the world-model is.

Let's explore the following question:

2. How Will the GPS (Not) Be Used?
----------------------------------

Pointing the GPS at the outer objective seems like the obvious solution.
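As a purely illustrative rendering of that GPS signature, here is a brute-force version over a tiny discrete action space (the `world_model`, `goal`, and `action_space` arguments are toy stand-ins, assumed for the sketch):

```python
from itertools import product

def gps(world_model, goal, action_space):
    """Brute-force search: find the action assignment whose predicted
    outcome is closest to the goal values for the target nodes."""
    best, best_dist = None, float("inf")
    for combo in product(*action_space.values()):
        assignment = dict(zip(action_space.keys(), combo))
        predicted = world_model(assignment)  # node values predicted under these actions
        dist = sum(abs(predicted[n] - goal[n]) for n in goal)
        if dist < best_dist:
            best, best_dist = assignment, dist
    return best
```

Note that, exactly as in the text, the search minimizes distance *in the world-model's predictions*; a wrong `world_model` yields confidently wrong actions.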
Let R_max be the values of R that maximize U. Then we can just wire the agent to pass M_t × R × R_max to the GPS at the start of every training episode, and watch it achieve the maximum reward it can given the flaws in its world-model. Turn it into a proper [wrapper-mind](https://www.lesswrong.com/posts/Mrz2srZWc7EzbADSo/wrapper-minds-are-the-enemy).

Would that work?

Well, this idea assumes that the world-model already has nodes representing the reward nodes. That might not be the case, the way stone-age humans didn't know what "genes" were for evolution to point at. If so, then pointing the GPS at the reward proxies is the best we can do.

But okay, let's assume the world-model is advanced enough to represent the reward nodes. Would that work *then*? Sure — but only under certain, fairly unnatural conditions.[[1]](#fnsfm7lmf8s5f)

In my previous post, I've [noted](https://www.lesswrong.com/posts/HzSdYWvdrdQqG9tqW/convergence-towards-world-models-a-gears-level-model#2_4__The_Need_for_World_Models) that the very process of being subjected to a selection pressure necessarily builds certain statistical correlations into the agent.

1. One type of such correlations is O→E, mappings from the observations to the values of non-observed environment nodes. They're built into the agent explicitly, as O→M correlations. Taken in sum, they're how the agent computes the world-model.
2. *Heuristics*, however, can also be viewed this way: as statistical correlations of the type (E→A)→U. Essentially, they're statements of the following form: "if you take *such* action in *such* situation, this will correlate with higher reward."

The difference between the two types, of course, is that the second type is *non-explicit*. Internally, the agent doesn't act as the heuristics specify *because* it knows that this will increase the reward — it just has a mindless tendency to take such actions.
Only the M→A mappings are internally represented — and not even as part of the world-model, they're just *procedural*! But these tendencies were put there by the selection process because the (E→A)→U correlations are valid. In a way, they're as much part of the structure of the E environment as the explicitly represented O→E correlations. They're "skills", perhaps: the knowledge of how an agent like this agent needs to act to perform well at the task it's selected for.

The problem is, if we directly hard-code the GPS to be aligned with the outer objective, we'd be cutting all the pre-established heuristics out of the loop. And since the knowledge they represent is either procedural or implicit, not part of the explicit world-model, that would effectively decrease the number of statistical correlations the agent has at its disposal — shrink its effective world-model dramatically. Set it back in its development. And the GPS is only as effective as the world-model it operates in.

Our selection process is greedy, so it will never choose to make such a change.

---

3. Interfaces
-------------

Let's take a step back, and consider how the different parts of the agent must've learned to interface with each other. Are there any legible data structures?

**a) The World-Model.** Initially, there wouldn't have been a unified world-model. Each individual heuristic would've learned some part of the environment structure it cared about, but it wouldn't have pooled the knowledge with the other heuristics. A cat-detecting circuit would've learned what a cat looks like, a mouse-detecting one what mice look like, but there wouldn't have been a communally shared "here's how different animals look" repository. However, [everything is correlated with everything else](https://www.gwern.net/Everything) (the presence of a cat impacts the probability of the presence of a mouse), so pooling all information together would've resulted in improved predictive accuracy.
Hence, the agent would've eventually converged towards an explicit world-model. **b) Cross-Heuristic Communication.** As a different consideration, take heuristic conflicts. Suppose that we have some heuristics hi and hk that both want to fire in a given training episode. However, they act at cross-purposes: the marginal increase of U achieved by hi firing at t would be decreased by letting hk fire at t, and vice versa. Both of them would want to suppress each other. Which should win? On a similar note, consider heuristics that want to chain their activations. Suppose that some heuristic hm responds to a subset of the features hl detects. hm can learn to detect them from scratch, *or* it can just learn to fire when hl does, instead of replicating its calculations. Both problems would be addressed by some shared channel of communication between the heuristics, where each of them can dump information indicating how strongly it wants to fire this turn. To formalize this, let's suppose that each heuristic has an associated "activation strength" function D:Ms→R. (Note that activation strength is not supposed to be normalized across heuristics. I. e., it's entirely possible to have a heuristic whose strength ranges from 0 to 10, and another with a range from 30 to 500, such that the former always loses if they're in contest.) The actual firing pattern would be determined as some function of that channel's state.[[2]](#fnpox9dfksfr) **c) Anything Else?** So far as I can tell now, that's it. Crucially, under this model, there doesn't seem to be any pressure for heuristics to make themselves legible *in any other way*. No summaries of how they work, no consistent formats, nothing. Their constituent circuits would just be... off in their own corners, doing their own things, in their own idiosyncratic ways. They need to be coordinated, but no part of the system needs to model any other part. Yet. So those are the low-hanging fruit available to be learned by the GPS.
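To make the channel mechanics from (b) concrete, here's a minimal Python sketch. The winner-take-all arbitration rule, the numeric ranges, and all names are illustrative assumptions — the channel's actual readout could be any function of its state:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Heuristic:
    name: str
    strength: Callable[[Dict], float]  # D: world-model state -> activation strength
    act: Callable[[Dict], str]         # the procedural M -> A mapping (opaque to the agent)

def step(world_model: Dict, heuristics: List[Heuristic]) -> str:
    # Each heuristic dumps its activation strength into the shared channel...
    channel = {h.name: h.strength(world_model) for h in heuristics}
    # ...and arbitration is some function of the channel's state.
    # Here: simply let the strongest heuristic fire (one possible choice).
    winner = max(heuristics, key=lambda h: channel[h.name])
    return winner.act(world_model)

# Toy usage, echoing the un-normalized ranges above: "explore" tops out
# at 10, "flee" never drops below 30, so "explore" always loses a contest.
flee = Heuristic("flee", lambda m: 30.0 + 470.0 * m["threat"], lambda m: "run")
explore = Heuristic("explore", lambda m: 10.0 * m["novelty"], lambda m: "wander")
print(step({"threat": 1.0, "novelty": 0.5}, [flee, explore]))  # -> run
```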
The selection pressure doesn't need to introduce *a lot* of changes to plug the GPS into the world-model and the cross-heuristics communication channel[[3]](#fnmvo7fpvhjn8), inasmuch as they both follow coherent data formats. But that's it. Making the *mechanics* of the heuristics legible — both standardizing the procedural E→A knowledge and making the implicit (E→A)→U knowledge explicit — would involve a lot more work, and it'd need to be done for every heuristic individually. A lot of gradient steps/evolutionary generations/reinforcement events. --- 4. Reverse-Engineering the Heuristics ------------------------------------- That's very much non-ideal. The GPS still can't access the non-explicit knowledge — it basically only gets *hunches* about it. So, what does it do? Starts reverse-engineering it. It's a general-purpose problem-solver, after all — it can understand the problem specification of this, given a sufficiently rich world-model, and then solve it. In fact, it'll probably be *encouraged* to do this. The selection pressure would face the choice between: * Making the heuristics legible to the GPS over the course of many incremental steps. * Making the GPS better at reverse-engineering the rest of the system the GPS is embedded in. The second would plausibly be faster, the same way [deception is favoured relative to alignment](https://www.lesswrong.com/posts/ocWqg2Pf2br4jMmKA/does-sgd-produce-deceptive-alignment#Deception_is_favored)[[4]](#fnx5jrwc7ofnn). So the selection pressure would hard-code some tendency for the GPS to infer the mechanics of the rest of the agent's mind, and write them down into the world-model. This is an important component of the need for self-awareness/reflectivity. The GPS would gather statistical information about the way heuristics fire, what they seem to respond to or try to do, which of them fire together or try to suppress each other, what effects letting one heuristic or the other fire has on the world-model, and so on. 
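As a toy sketch of what that bookkeeping could look like (the data structures here are assumptions for illustration, not claims about how a learned system would actually represent such statistics):

```python
from collections import defaultdict
from itertools import combinations

class HeuristicObserver:
    """Logs which heuristics fired in which states, then mines the log."""

    def __init__(self):
        self.log = []  # (world-model snapshot, fired heuristics, node deltas)

    def record(self, state, fired, deltas):
        self.log.append((dict(state), set(fired), dict(deltas)))

    def cofire_counts(self):
        # Which heuristics tend to fire together?
        counts = defaultdict(int)
        for _, fired, _ in self.log:
            for pair in combinations(sorted(fired), 2):
                counts[pair] += 1
        return dict(counts)

    def mean_effect(self, heuristic, node):
        # Average change of `node` on steps where `heuristic` fired:
        # a crude first guess at what the heuristic is "for".
        samples = [d[node] for _, fired, d in self.log
                   if heuristic in fired and node in d]
        return sum(samples) / len(samples) if samples else 0.0

obs = HeuristicObserver()
obs.record({"threat": 1}, {"flee", "hide"}, {"distance_to_threat": +5})
obs.record({"threat": 1}, {"flee"}, {"distance_to_threat": +4})
print(obs.cofire_counts())                            # {('flee', 'hide'): 1}
print(obs.mean_effect("flee", "distance_to_threat"))  # 4.5
```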
One especially powerful way to do that would be running counterfactuals on them. That is, instead of doing live-fire testing (search for a situation where heuristic hi wants to fire, let it, see what happens), it'd be nice to simulate different hypothetical states the world-model could be in, then see how the heuristics respond, and what happens if they're obeyed. And [there'll likely already be](https://www.lesswrong.com/posts/HzSdYWvdrdQqG9tqW/convergence-towards-world-models-a-gears-level-model#2_2__Ideal_Actions) a mechanism for rolling the world-model forward or generating hypotheticals, so the GPS can just co-opt it for this purpose. What will this endeavor yield, ultimately? Well, if [the natural abstractions hypothesis](https://www.lesswrong.com/posts/vDGvHBDuMtcPd8Lks/public-static-what-is-abstraction) is true, quite a lot! As I've noted at the beginning, each heuristic is plausibly centered around some natural abstraction or a sensible cluster of natural abstractions[[5]](#fnm11o3h3q9li), and it'd be doing some local computation around them aimed at causing a locally-sensible outcome. For example, we might imagine a heuristic centered around "chess", optimized for winning chess games. When active, it would query the world-model, extract *only* the data relevant to the current game of chess, and compute the appropriate move using these data only. So we can expect heuristics to compress well, in general. That said, we need to adjust our notation here. Suppose that you're the GPS, trying to reverse-engineer some heuristic hi. You obviously don't have access to the ground-truth of the world E, only a model of it M. And since M might be flawed, the world-model nodes hi is defined over might not even [correspond to anything in the actual environment](https://www.lesswrong.com/tag/ontological-crisis)! 
By the same token, the best way to compress hi's relationship with U might not be summarizing its relationship with the actual reward-nodes, but with some proxy node. Consider this situation: ![](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1668532585/mirroredImages/kmpNkeqEGvFue7AvA/ergi5c5kaegim6s6jlwe.png)We can imagine some heuristic hi which specializes in controlling the value of xp. Ultimately, that heuristic would've been created by the selection pressure because of its effect on r1. But hi's actual mechanical implementation would only be focused on xp, and the causal chain between it and the agent! It would pay no mind to x1,x2,...,x5, so its effect on r1 would be subject to a lot of noise — unlike its effect on xp. Thus, the agent would recover xp as the target, not r1. (And, of course, it might also be because r1 isn't yet represented in the world-model at all.) As an example, consider chess. The algorithms for playing it well are convergent across all agents, irrespective of the agent's goals outside chess or its reason for trying to win at chess. Their implementations would only refer to chess-related objectives. Thus, while an "outside-picture" view on heuristics describes them as (E→A)→U, internally they'd be best summarized as: (M→A)→P, where P is some proxy objective. For simplicity, let's say that P is a 2-tuple ⟨xi,di⟩, where the first entry is a world-model node and the second is the "target value" for that node. So every "goal" is to keep the value of a node as close to the target as possible. As another technicality, let's say that the activation-strength function D increases the farther from the target the corresponding node's value goes. --- 5. The Wrapper Structure ------------------------ All this time, I've been avoiding the subject of what the GPS *will* be pointed at.
I've established that it'll be used for self-reflection, and that it won't be optimizing for a fixed goal aligned with the outer objective — won't be a [wrapper-mind](https://www.lesswrong.com/posts/Mrz2srZWc7EzbADSo/wrapper-minds-are-the-enemy). But what spread of goals will it actually pursue at the object-level? What *would* be the wrapper structure around the GPS? ### 5A. Assumption Re-Check First, let's check whether aligning it with the outer objective still doesn't work. What if we point it at the joint task of reward maximization plus self-reflection? Make it want the target objective *plus* inform it that there's some useful yet inaccessible knowledge buried in its mind. That would work... if it had infinite time to think, and could excavate all the procedural and implicit knowledge *prior* to taking any action. But what if it needs to do both in lockstep? In addition, "point it at the X task" hides a lot of complexity. Even if the reward-nodes are already represented in the world-model, building a utility function around them is potentially a complex problem, requiring many parameter updates/generations. All the while, the GPS would be sitting there, contributing nothing. That's not how our greedy selection process works — there just aren't gradients from "GPS does nothing" to "GPS is inner-aligned with the target objective". No, we need some *interim* objective for the GPS — something we can immediately hook it up to and have it optimize, and which would at least somewhat correlate with good performance on the target objective. Once that's done, *then* we can incrementally rewrite that proxy objective to the target objective... if we'll even want to, at that point. ### 5B. The Interim Objective A few points: * In the previous section, we've established that every heuristic h enforces some relationship between a state of a world-model subset Ms and some subset of the actions the agent will take As. Let's call this structure *contextual behavior* B:Ms→As.
(We'll assume that the activation strength D is folded into B.) * Such contextual behaviors are optimized for achieving some proxy objective P in the world-model subset Ms. We'll call this a *contextual goal* G:Ms→P. (Similarly, assume that D is somehow represented in G.) * The GPS can, in principle, succeed at extracting both B and G from a given h. * At the beginning, we've postulated that by the time the GPS develops, the agent's heuristics, when working in tandem, would be much-better-than-random at achieving the target objective. This makes the combination of all contextual goals, let's call it GΣ, a good proxy objective for the target objective U. A combination of all contextual behaviors BΣ, in turn, is a proxy for GΣ, and a second-order proxy for U. Prior to the GPS' appearance, the agent was figuratively pursuing BΣ ("figuratively" because it wasn't an optimizer, just an optimized). So the interim objective can be *at least as bad as* BΣ. On the other hand, pursuing GΣ *directly* would probably be an improvement, as we wouldn't have to go through *two* layers of proxies. The GPS can help us with that: it can help us move from BΣ to GΣ, and then continue on all the way to U. We do that by first enslaving the GPS to the heuristics, then letting it take over the agent once it's capable enough. ### 5C. Looping Heuristics Back In The obvious solution is obvious: make heuristics themselves control the GPS. The GPS' API is pretty simple, and depending on the complexity of the cross-heuristic communication channel, it might be simple enough to re-purpose its data formats for controlling the GPS. Once that's done, the heuristics can make it solve tasks for them, and become more effective at achieving BΣ (as this will give them a better ability to adapt to unfamiliar circumstances at runtime, without waiting for the SGD/evolution to catch up). I can see it taking three forms: 1. Input tampering.
The problem-specification terms Ms×MGs that get passed to the GPS are interfered with, or wholesale formed, by heuristics. * ("Condition X is good, so I'll only generate plans that satisfy it." Also: "I am in pain, so I must think only about it and find the solution to it as quickly as possible.") 2. Process tampering. The heuristics can learn to interfere with the GPS process *directly*, biasing it towards one kind of problem-solving or the other. * ("I'm afraid of X, so my mind is flinching away from considering plans that feature X.") 3. Output tampering. The heuristics can learn to override the actions the GPS recommends to take. * ("X is an *immediate threat*, so I *stagger back* from it.") The real answer is probably *all of this*. Indeed, as I've illustrated, I think we observe what looks like all three varieties in humans. Emotions, instincts, and so on. ### 5D. Nurturing the Mesa-Optimizer As this is happening, we gradually increase the computational capacity allocated to the GPS and the breadth of its employment. We gradually figure out how to set it up to do self-reflection. It starts translating the procedural and the implicit knowledge into a language it understands — the language of the world-model. Bs and Gs are explicated and incorporated into it, becoming just more abstractions the GPS can make use of. At this point, it makes sense to give the GPS the ability to prompt *itself* — to have influence over what goes into the problem specifications of its future instances. It'll be able to know when to solve problems by engaging in contextual behaviors, even if it doesn't understand *why* they work, and optimizing for contextual goals is literally what it's best at. This way of doing it would have an advantage over letting heuristics control the GPS directly: * It would allow the GPS to incorporate procedural knowledge into its long-term plans, and otherwise account for it, instead of being taken by surprise.
* It would allow the GPS to derive better, more nuance-aware ways of achieving contextual goals, superior to those codified in the heuristics. It would greatly improve the agent's ability to generalize to new environments. We'll continue to improve the GPS, improving its ability to do this sort of deliberative long-term goal pursuit. At the same time, we'll lessen the heuristics' hold on it, and start turning heuristics towards *the GPS'* purposes — marginally improving their legibility, and ensuring that the process of reverse-engineering them aims the agent at U with more and more precision. The agent as a whole will start moving from a BΣ-optimizer to a GΣ-optimizer, and then even beyond that, towards U.[[6]](#fns8nrfm0cmjn) ### 5E. Putting It Together So, what's the wrapper structure around the GPS, at some hypothetical "halfway point" in this process? * Instincts. Heuristics pass the GPS problem specifications and interfere in its processes. * Self-reflection. The GPS tries to reverse-engineer the rest of the agent in order to piggyback off its non-explicit knowledge. * A spread of mesa-objectives. The GPS uses its explicit knowledge to figure out what it's "meant" to do in any given context, and incorporates such conclusions in its long-term plans. --- 6. Value Compilation -------------------- The previous section glossed over a crucial point: how do we turn a mass of contextual behaviors and goals into a proper unitary mesa-objective GΣ? Because we want to do that. The complex wrapper structure described in the previous section is highly inefficient. At the limit of optimality, [everything wants to be a wrapper-mind](https://www.lesswrong.com/posts/RQpNHSiWaXTvDxt6R/coherent-decisions-imply-consistent-utilities). There's plenty of reasons for that: 1. [The basic argument](https://www.lesswrong.com/posts/RQpNHSiWaXTvDxt6R/coherent-decisions-imply-consistent-utilities). 
We wouldn't want our optimization process/the GPS to be haphazardly pointed in different directions moment-to-moment, as different heuristics activate and grab its reins, or bias it, or override it; even as it continues to recover more and more pieces of them. We'd be getting Dutch-booked all the time, acting at cross-purposes with our future and past instances. The GPS explicating this doesn't exactly help: it'll be able to *predict* its haphazard behavior, but not how to prevent it in a way that doesn't *worsen* its performance. 2. It's computationally inefficient. Imagine a naively-formed GnaiveΣ, of the following form: G1∧G2∧…∧Gn. It would look at the world-model, run down the list of contextual behaviors, then execute the behaviors that apply, and somehow resolve the inevitable conflicts between contradictory behaviors. Obviously, that would take a lot of time; time in which someone might brain you with a club and take all your food. Contextual behaviors sufficed when they *just happened*, but considering them via the GPS would take too long. (Consider how much slower conscious decision-making is, compared to knee-jerk reactions.) 3. It might be a hard-wired terminal objective. Remember, any given *individual* Gi is not a good proxy objective — only their combination GΣ is. So it's entirely probable that the selection pressure would directly task the GPS with combining and unifying them, not only with reverse-engineering them. Even if there weren't all the other reasons to do that. 4. Closely related to (3): ability to generalize to unfamiliar environments. Optimizing for GΣ would almost always allow the agent to do a good job according to U even if it were dropped into entirely off-distribution circumstances. GnaiveΣ, on the other hand, would trip up the moment it fails to locate familiar low-level abstractions. (E. g., a preference utilitarian would be able to make moral judgements even in an alien civilization, whereas someone who's only internalized some system of norms of a small human culture would be at a loss.) So, how do we collapse GnaiveΣ into a compact, coherent GΣ? (For clarity: everything in this section is happening at runtime. The SGD/evolution are not involved, only the GPS. At the limit of infinite training, GΣ *would* become explicitly represented in the agent's parameters, with the GPS set up to single-mindedly pursue it and all the heuristics made into passive and legible pieces of the world-model. But I expect that the agent would become superintelligent and even "hyperintelligent" long before that — i. e., capable of almost arbitrary [gradient-hacking](https://www.lesswrong.com/posts/uXH4r6MmKPedk8rMA/gradient-hacking) — and so the late-game hard-coded GΣ would be chosen by it, and therefore would likely be a copy of a GΣ the AI's earlier instance compiled at runtime. So the process here is crucial.) ### 6A. The Basic Algorithm Suppose we've recovered contextual goals G1 and G2. How do we combine them? **a) Conjunction.** Consider the following setup: ![](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1668532585/mirroredImages/kmpNkeqEGvFue7AvA/fw60xgoauhxu75i40zgt.png)Suppose that we have a heuristic hi, which tries to keep the value of xi at some number, and another heuristic hk, which does the same for xk. Suppose that together, their effects keep the value of xv within some short range. That allows us to form a contextual goal Gi∧k=Gv, which activates if xv's value strays far from the center of the range hi and hk effectively kept it in. In a sense, this is the same algorithm we must've followed to go from contextual actions to contextual goals in the first place!
To do that, we gathered statistical information about a heuristic's activations, and tried to see if it consistently controlled the value of some node downstream of the action-nodes. Same here: we know that hi and hk control the values of xi and xk, we hypothesize that there's some downstream node whose value their activations consistently control, and we conclude that this is the "purpose" of the two heuristics. **Technical note:** How do we compute the activation-strength function of the new goal, Dp? Well, Di and Dk increased as xi and xk's values went farther from their target values, and this relationship kept xv's value near some target of its own. In turn, this means that Di+Dk increased as xv's value went far from some target. From that, we can recover some function Dp which tracks the value of xv directly, not through the intermediaries of xi and xk. Note the consequence: the combined goal would be approximately as strong as the sum of its "parents". **Important note:** After we compile xp, we stop caring about xi and xk! For example, imagine that off-distribution, the environmental tendencies change: the values of xi and xk that kept xv near a certain value no longer do so. If we'd retained the original contextual goals, we'd keep xi and xk near their target values, as before, even as that stops controlling xv. But post-compilation, we do the opposite: we ignore the target values for xi and xk to keep xv near its newly-derived target. Human example: A deontologist would instinctively shy away from the action of murder. A utilitarian might extrapolate that the real reason for this aversion is that she dislikes it when people die. She'd start optimizing for the minimal number of people killed *directly*, and would be able to do things she wouldn't before, like personally killing a serial killer. Another: Imagine a vain person who grew up enjoying a wealthy lifestyle. As a child, he'd developed preferences for expensive cars, silk pillows, and such.
As he grew up, he engaged in value compilation, and ended up concluding that what he actually valued were objects that signify high status. Afterwards, he would still love expensive cars and silk pillows, but only as *local instantiations* of his more abstract values. Post-reflection, he would be able to exchange cars-and-pillows for yachts-and-champagne without batting an eye — even if that wouldn't make sense to his childhood self. This shtick is going to cause problems for us in the future. > **Sidenote:** In a way, this approach expands the concept of "actions". At genesis, the agent can only control the values of its immediate action-nodes. As it learns, it develops heuristics for the control of far-away nodes. When these heuristics are sufficiently reliable, it's as if the agent had direct access to these far-away nodes. In the example above, we can collapse everything between the agent and e. g. xv into an arrow! And in fact, that's probably what the GPS would do in its computations (unless it specifically needs to zoom in on e. g. x3's state, for some reason). > > As an example, consider a child trying to write, who has to carefully think about each letter she draws and the flow of words, versus an experienced author, who does all that automatically and instead directly thinks about the emotional states he wants to evoke in the reader. (Though he's still able to focus on calligraphy if he wants to be fancy.) [Chunking](http://billwall.phpwebhosting.com/articles/chunking.htm) is again relevant here. > > **b) Disjunction.** Consider this different setup: ![](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1668532586/mirroredImages/kmpNkeqEGvFue7AvA/ypry4ipt5utr9yagb5dn.png)Again, suppose we have two contextual goals Gi and Gk defined over xi and xk respectively. But there's no obvious way to combine them here: if their causal influences meet anywhere, we haven't discovered these parts of the world-model yet. Their contexts are entirely separate. 
As such, there isn't really a way to unify them yet: we just go Gi∧k=Gi∧Gk, and hope that, as the world-model expands, the contexts will meet somewhere. As an example, we might consider one's fruit preferences versus one's views on the necessity of the Oxford comma. They seem completely unrelated to each other. (And as a speculative abstract unification, perhaps one is entertained by ambiguity or duality-of-interpretation, and so prefers no Oxford comma and fruits with a mild bittersweet taste, as instantiations of that more abstract preference? Though, of course, human values don't all *have* to ever converge this way.) Now let's complicate it a bit: ![](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1668532585/mirroredImages/kmpNkeqEGvFue7AvA/oz4ixxkxzmkkvphhzgni.png)Again, we have contextual goals Gi and Gk. We don't see a way to combine them, yet neither are they fully separate, as their contexts are entwined. If both xi and xk assume undesirable states, there might not be a distribution of values we may assign to a1 and a2 such that both contextual goals are achieved. How do we deal with it? Well, the selection pressure ran into this problem a while ago, well before the GPS, and it's already developed a solution: the cross-heuristic communication channel. Any given heuristic hi has an activation strength Di, and if two heuristics hi, hk are in contest, the actions taken are calculated as some function of the activation strengths Di, Dk, and the recommended actions Asi, Ask. The GPS can recover all of these mechanics, and then just treat the sum of all "activation strengths" as negative utility to minimize-in-expectation. The actual trade-offs seem to heavily depend on the specifics of the situation (e. g., can we "half-achieve" every goal, or do we have to choose one or the other?) — I've unfortunately failed to come up with a general yet compact way to describe conflict-resolution here.
(Though I can point to [some related ideas](https://www.lesswrong.com/posts/dqSwccGTWyBgxrR58/turntrout-s-shortform-feed?commentId=o4ZLLqD5BAshiH83Z).) A particular degenerate case is if the two contextual goals are directly opposed to each other. That is, suppose that the actions that bring xi near the target almost always move xk's value outside it — the way a desire to smoke interferes with one's desire to look after one's health. In this case, if Dk always outputs higher values than Di, Gi ends up fully *suppressed*: Gi∨k=Gk. Suppose, however, that Gi wasn't a hard-coded heuristic. Rather, Gi was produced as the result of value compilation, perhaps as a combination of contextual goals over x3 and x6. In this case, we may "decompile" Gi back into G3 and G6, and try to find ways to re-compile them such that the result doesn't interfere with Gk. Perhaps G3 is "have a way to relax when stressed" and G6 is "I want to feel cool", and we can compile them into Gd = "carry a fabulous fidget toy".[[7]](#fn5kttnui755h) **c) By iteratively using these techniques**, we can, presumably, arrive at some GΣ. GΣ might end up fully unitary, like a perfect hedonist's desire to wirehead, or as a not-perfectly-integrated spread of values GΣ:G1∧G2…∧Gn. But even if it's the latter, it'll be a much *shorter* list than GnaiveΣ, and the more abstract goals should allow greater generalizability across environments. One issue here is that there might be multiple GΣ consistent with the initial set of heuristics. As far as human value reflection goes, it's probably fine: any of them should be a fair representation of our desires, and the specific choice has little to do with AI Notkilleveryoneism[[8]](#fn63ffbt54loe). But when considering how an AI's process of value reflection would go, well, it might turn out that even for a well-picked suite of proto-values, only some of the final GΣ don't commit omnicide. Anyway, that was the easy part. Now let's talk about all the *complications*.
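Before turning to the complications, the conjunction step from (a) can be sketched in miniature. Assume, purely for illustration, a linear world-model where xv = xi + xk; everything else here is an assumption of the sketch as well:

```python
from dataclasses import dataclass

@dataclass
class ContextualGoal:
    node: str      # the world-model node this goal is defined over
    target: float  # the "target value" from the 2-tuple <x_i, d_i>
    weight: float  # scales the activation-strength function D

    def strength(self, state):
        # D grows the farther the node's value strays from the target.
        return self.weight * abs(state[self.node] - self.target)

def conjoin(g_i, g_k, downstream, f):
    # The compiled target is whatever value the parents' targets jointly
    # kept the downstream node at; the new D inherits the parents' combined
    # weight, so the child is roughly as strong as the sum of its parents.
    return ContextualGoal(downstream, f(g_i.target, g_k.target),
                          g_i.weight + g_k.weight)

g_i = ContextualGoal("x_i", target=2.0, weight=1.0)
g_k = ContextualGoal("x_k", target=3.0, weight=1.0)
g_v = conjoin(g_i, g_k, "x_v", f=lambda a, b: a + b)  # x_v = x_i + x_k

print(g_v.target)                  # 5.0
print(g_v.strength({"x_v": 7.0}))  # 4.0
```

Note that this toy also exhibits the "stop caring" consequence from the Important note above: the compiled goal evaluates only x_v, so off-distribution states of x_i and x_k no longer register at all.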
### 6B. Path-Dependence Suppose that you have three different contextual goals, all of equal activation strength. For example, G1 is "I like looking at spiders, they're interesting", G2 is "I like learning about spider biology, it's interesting", and G3 is "I want to flee when there's a spider near me, something about being in their vicinity physically just sets me off". Suppose that you live in a climate where there aren't a lot of spiders, so G3 almost never fires. On the other hand, you have Internet access, so you spend day after day looking at spider pictures and reading spider facts. You compile the first two proto-values into G1∧2: "I like spiders". Then you move countries, discover that your new home has a lot of spiders, and to your shock, realize that you fear their presence. What happens? Perhaps you compile a new value, G(1∧2)∧3 = "I like spiders, but only from a distance". Or you fully suppress G3, since it's revealed to be at odds with G1∧2 whenever it activates. That's not what would've happened if you started out in an environment where all three contextual goals had equal opportunity to fire. G3 would've counterbalanced G1 and G2, perhaps resulting in G1∧2∧3 = "I guess spiders are kind of neat, but they freak me out". Thus, the process of value compilation is *path-dependent*. It matters in which order values are compiled. ### 6C. Ontological Crises What happens when a concept in a world-model turns out not to correspond to a concrete object in reality — such as the revelation that things consist of parts, or that souls aren't real? What if that happens to something you care about? This is actually fairly easy to describe in this model. There are two main extreme cases and a continuum between them. **a) "Ontology expansion".** The first possibility is that the object we cared about was a natural abstraction.
This shift roughly looks like the following: ![](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1668532585/mirroredImages/kmpNkeqEGvFue7AvA/enm6fgr9voqnfn1amg0x.png)Suppose we cared about xp, and it turned out to be a natural abstraction over a system of xp1,xp2,xp3. That would merely mean that (1) the state of xp is downstream of the states of xp1,xp2,xp3, and (2) we can model the impact of our actions on xp more precisely by modelling xp1,xp2,xp3, if we so choose. Which we may not: maybe we're indifferent to the exact distribution of values in xp1,xp2,xp3, as long as the value of xp they compute remains the same. And xp is still a reasonable object to place on our map. The same way it's meaningful to think of and care about "humans", even if they're not ontologically basic; and the same way we would care about the high-level state of a human ("are they happy?") and be indifferent towards the low-level details of that state ("okay, they're happy, but are the gut bacteria in their stomach distributed like *this* or like *that*?"). **b) "Ontology break".** In the second case, the object we cared about turns out to flat-out not exist; not correspond to *anything* in the territory: ![](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1668532585/mirroredImages/kmpNkeqEGvFue7AvA/l7he0bngwpuftkad5eyt.png)As the example, consider a spiritual person realizing that spirits don't exist, or a religious person in the middle of a crisis of faith. We valued xp, but now there's just nothing resembling it in its place. What can we do? Do value decompilation, again. Suppose that the initial set of heuristics from which the value was compiled *wasn't* defined over xp. Suppose we had contextual goals G1 and G2 defined over x1 and x2 respectively, which we then collapsed into Gp. We can fall back on them: we can remember why we decided we cared about xp, then attempt to re-compile new downstream values from G1 and G2 in the new world-model. 
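A minimal sketch of this decompile-and-recompile fallback (the goal tree, the surviving-node set, and the find_downstream search are all assumptions for illustration):

```python
class Goal:
    """A compiled goal that remembers which parent goals it came from."""
    def __init__(self, node, parents=()):
        self.node, self.parents = node, tuple(parents)

def recompile(goal, surviving_nodes, find_downstream):
    if goal.node in surviving_nodes:
        return goal  # no crisis: this node survived the ontology shift
    # Decompile: fall back on the parents (recursively, since they
    # may also have been defined over now-empty concepts).
    parents = [recompile(p, surviving_nodes, find_downstream)
               for p in goal.parents]
    parents = [p for p in parents if p is not None]
    if not parents:
        return None  # empty concepts all the way down: suppress the goal
    # Re-compile over whatever node the surviving parents jointly control.
    return Goal(find_downstream([p.node for p in parents]), parents)

# Toy usage: "souls" turn out not to exist; the goal falls back on its
# parents ("purpose", "belonging"), which recompile over "community".
g = Goal("souls", parents=[Goal("purpose"), Goal("belonging")])
g2 = recompile(g, surviving_nodes={"purpose", "belonging", "community"},
               find_downstream=lambda nodes: "community")
print(g2.node)  # community
```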
Perhaps, say, we end up noticing that G1 and G2 do an awful lot to control the value of x7...? As a human example, perhaps an apostate would decide that they cared about God because they sought a higher purpose, or wanted to feel a sense of belonging with their community. If so, they may find ways to satisfy these urges that would work in the new ontology. The quantitative effects of this kind of ontology break can be quite dramatic. The non-existent node might be [a linchpin of the world-model](https://www.lesswrong.com/posts/Cuig4qe8m2aqBCJtZ/which-values-are-stable-under-ontology-shifts?commentId=4aexj7RbnbANj3Hdn), having "infected" most of the nodes in it. It would entail a lengthy process of value re-compilation, and the final utility function might end up very different. *Qualitatively*, though, nothing would change. We can imagine an extreme case where the basic initial heuristics were defined over empty concepts as well. In this case, perhaps they'll just have to be outright suppressed/ignored. A degenerate case is if all initial heuristics are defined over empty concepts. I expect it's unlikely in real-life systems, though: many of them would likely be associated with *mental* actions, which would be self-evidently real by the very functioning of the system (by analogy with *cogito ergo sum*). > **Sidenote:** "But does that mean that the final form values take depends on the structure of the environment?" Yes. > > Note that this doesn't mean that moving an adult human from Earth to an alien civilization would change their values. That wouldn't involve any *ontological crises*, after all — the basic structure of the environment wouldn't change from their perspective, it would merely extend to cover this new environ. 
> They would need to learn a number of new low-level abstractions (the alien language, what signals correspond to what mental states, etc.), but they would likely be able to "bridge" them with the previous high-level ones eventually, and continue the process of value reflection from there (e. g., if they cared for other people before, then once they properly "grasp" the personhood of the aliens, they'd begin caring for them too, and may eventually arrive at valuing some universal eudaimonia).
>
> On the other hand, a human in a universe with God does arrive at different values than in a godless one. I think that makes sense.[[9]](#fnqm4apbmogwo)

**c) In practice**, most of the cases will likely be somewhere between the two extremes. The world-model expansion would reveal that what the agent cared about wasn't *precisely* a natural abstraction, but also not entirely a thing that didn't exist. They might end up re-calculating the correct natural abstraction, then deciding to care for it, experiencing a *slight* ontology break. (Human souls aren't real, but human minds are, so we switch over, maybe with a bit of anguish. A more mundane example: someone you knew turned out to be a very different person from what you'd thought of them, and you have to re-evaluate your relationship.) Or maybe they'll end up caring about some low-level properties of the system *in addition to* the high-level ones, as the result of some yet-disjointed value finding something in the low-level to care about. (E. g., maybe some distributions of bacteria are aesthetically pleasing to us, and we'd enjoy knowing that one's gut bacteria are arranged in such a fashion upon learning of their existence? Then "a human" would cause not only empathy to fire, but also the bacteria-aesthetics value.)

### 6D. Meta-Cognition

This is the real problem.

**a) Meta-heuristics.** As I'd mentioned back in 5C, there'll likely be heuristics that directly intervene on the GPS' process.
These interventions might take the form of *interfering with value compilation itself*. The GPS can notice and explicate these heuristics as well, of course. And endorse them. If endorsed, they'll just take the form of preferences over cognition, or the world-model. A spiritualist's refusal to change their world-view to exclude spirits. A veto on certain *mental actions*, like any cognitive process that concludes you must hate your friends. A choice to freeze the process of value compilation at a given point, and accept the costs to the coherency of your decisions (this is how deontologists happen). Any heuristic that can do that would gain an asymmetric advantage over those that can't, as far as its representation in the final compilation is concerned. I hope the rest of this post has shown that such mechanisms are as likely to be present in AIs as they are in humans.

**b) Meta-cognition itself.** The core thing to understand, here, is that the GPS undergoing value compilation isn't some arcane alien process. I suspect that, in humans, it's done *fully consciously*. So, what problems do humans doing value reflection run into?

First off, the GPS might plainly *make a mistake*. It doesn't have access to the ground truth of heuristics, and can't confirm for sure whether it derived a correct contextual goal or conducted a valid compilation.

Second, meta-cognitive preferences can originate externally as well. [A principled stance to keep deontological principles around even if you're a utilitarian](https://www.lesswrong.com/posts/K9ZaZXDnL3SEmYZqB/ends-don-t-justify-means-among-humans), for example.

Third, the GPS is not shy about taking shortcuts. If it encounters some clever way to skip past the lengthy process of value compilation and get straight to the answer, it'll go for it, the same way we'd use a calculator instead of multiplying twenty-digit numbers manually.
Hence: humans becoming hijacked by various ideologies and religions and philosophies, that claim to provide the ultimate answers to morality and the meaning of life. Thus, the final compilation might not even have much to do with the actual initial spread of heuristics. At the same time, meta-cognition might counteract the worst excesses of path-dependence. We can consciously choose to "decompile" our values if we realize we've become confused, look at our initial urges, then re-compile the more correct combination from them. ### 6E. Putting It Together As such, the process for computing the final mesa-objective GΣ is a function of: * The initial set of heuristics, *especially* those with meta-cognitive capability. * The true environment structure. * The data the agent is exposed to. * The exact sequence in which the agent reverse-engineers various heuristics, combines various values, and is exposed to various data-points. Is it any wonder the result doesn't end up resembling the initial outer objective U? I'm not sure if that's as bad as it looks, as far as irreducible complexity/impossibility-to-predict goes. It might be. --- 7. Miscellanea -------------- **a) Heuristics' Flavour.** You might've noticed that this post has been assuming that every heuristic ends up interpreted as a proto-value that at least potentially gets a say in the final compilation. That's... not right. Isn't it? I'm not actually sure. I think the Shard Theory answer would be that yes, it's right. Every heuristic is a shard engaging in negotiation with every other shard, vying for influence. It might not be "strong" or very good at this, but an attempt will be made. Counterpoint: Should, say, your heuristic for folding a blanket be counted as a proto-value? The GPS is reverse-engineering them to gain data on the environment, and this should really just be a "skill", not a proto-value. Counter-counterpoint: Imagine an agent with a lot of heuristics for winning at chess. 
It was clearly optimized for playing chess. If the GPS' goal is to figure out what it was made for and then go optimize that, then "I value winning at chess" or "I value winning" *should* be plausible hypotheses for it to consider. It makes a certain kind of common sense, too. As to blanket-folding — sure, it's a rudimentary proto-value too. But it's too weak and unsophisticated to get much representation. In particular, it probably doesn't do meta-cognitive interventions, and is therefore at an asymmetrical disadvantage compared to those that do. ... Which is basically just the Shard Theory view again. So overall, I think I'll bite that bullet, yes. Yes, every starting heuristic should be interpreted as a proto-value that plays a part in the overall process of value compilation. (*And also* as a skill that's explicated and integrated into the world-model.) **b) Sensitivity to the Training Process.** Let's consider two different kinds of minds: humans, [which are likely trained via on-line reinforcement learning](https://www.lesswrong.com/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values), and autoregressive ML models, who can continuously cogitate forever even with frozen weights by chaining forward passes. (Suppose the latter scales to AGI.) The internal dynamics in these minds might be quite different. The main difference is that in humans, heuristics can adapt at runtime, and new heuristics can form, while in the frozen-weights model, the initial spread of heuristics is static. As one obvious consequence, this might make human heuristics "more agenty", in the sense of being able to conduct more nuanced warfare and negotiation between each other. 
In particular, they'd have the ability to learn to understand new pieces of the world-model the GPS develops, and learn new situations in which they must tamper with the GPS for self-preservation (unlike static heuristics, for which this is [inaccessible information](https://www.lesswrong.com/posts/CQAMdzA4MZEhNRtTp/human-values-and-biases-are-inaccessible-to-the-genome)). "Heuristic" might be a bad way to describe such things, even; "[shards](https://www.lesswrong.com/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values)" might be better. Perhaps such entities are best modeled [as traders](https://www.lesswrong.com/posts/dqSwccGTWyBgxrR58/turntrout-s-shortform-feed?commentId=o4ZLLqD5BAshiH83Z) rather than functions-over-nodes?

But a potentially bigger impact is on value-compilation path-dependence. In humans, when we compile contextual goals G1 and G2 into G1∧2, and then stay with G1∧2 for a while, we end up developing *a new shard* around G1∧2 — a structure of the same type as the initial G1, G2. Shards for G1 and G2, meanwhile, might die out as their "child" eats the reinforcement events that would've initially gone to them. (Consider the example of the Vain Man from 6A.)

But if that initial set of heuristics is frozen, as in an ML model, perhaps the agent always ends up developing a "path-independent" generalization as described at the end of 6A? The runtime-compiled values would be of a different type than the initial heuristics: just mental constructs. And if we assume the AI to be superintelligent when it finalizes the compilation, it's not going to be fooled by whatever it reads in the data, so the "mistaken meta-cognition" concerns don't apply. Certainly it might make mistakes *at the start* of the process, but if incorrect compilations aren't "sticky" as they are with humans, it'll just decompile them, then re-compile them properly!

Reminder: This doesn't mean it'll become aligned with the outer objective.
GΣ is still not U but a proxy for U, so even path-independent value compilation ends with inner misalignment. But it does make GΣ derivable from *just* the parameters and the world-model, not the data. **c) Self-Awareness.** I've belatedly realized that there's a third structure the GPS can interface with: *the GPS itself*. This fits nicely with some of my [published](https://www.lesswrong.com/posts/bbtLG3LNGGBWzXqjK/is-this-thing-sentient-y-n) and yet-unpublished thoughts on self-awareness and the perception of free will. In short, the same way it's non-trivial to know what heuristics/instincts are built into your mind, it's non-trivial to know *what you're currently thinking of*. You need a separate feedback loop for that, a structure that summarizes the GPS' activity and feeds it back into the GPS as input. That, I suspect, directly causes (at least in humans): * Self-awareness (in a more palatable fashion, compared to just having abstractions corresponding to "this agent" or "this agent's mental architecture" in the world-model). * [The perception of the indisputability of your own existence](https://en.wikipedia.org/wiki/Cogito,_ergo_sum). * The perception of free will. (You can request a summary of your current object-level thoughts and make a decision to redirect them; or you can even request a summary of your meta-cognition, or meta-meta-cognition, and so on, and redirect any of that. Therefore, you feel like you're freely "choosing" things. But the caveat is that every nth-level request spins up a (n+1)th-level GPS instance, and the outermost instance's behavior isn't perceived and "controlled" by us, but *is us*. As such, we do control our perceived behavior, but our outermost loop that's doing this is always non-perceivable and non-controllable.) But that probably ought to be its own separate post. --- 8. 
Summary ---------- * Once the GPS develops, the selection pressure needs to solve two tasks: + Let the GPS access the procedural and implicit knowledge (E→A)→U represented by the heuristics, even though they're completely illegible. + Pick some interim objective for the GPS to make object-level plans around. * The selection pressure solves the first problem by hooking the GPS up to the two coherent data structures that were already developed: the world-model, and the cross-heuristic communication channel. * The second problem is solved by teaching the heuristics to use the GPS to assist in problem-solving. In essence, the GPS starts "optimizing" for the implicit goal BΣ of "do whatever the agent was already doing, but better". * As this is happening, the GPS proves more and more useful, so the amount of compute allocated to it expands. It learns to reverse-engineer heuristics, getting explicit access to the non-explicit knowledge. * The GPS is encouraged to treat heuristics as "clues" regarding its purpose, and BΣ as some proxy of the real goal GΣ that the agent is optimized to solve. (Which, in turn, is a proxy of the real target objective U.) * Every heuristic, thus, is at once a piece of the world-model and a proto-value. * There's plenty of incentives to compress these splintered proto-values into a proper utility function: it's a way to avoid dominated strategies, the splintered goal-set is computationally unwieldy, and the GPS can probably derive a better, more generalizable approximation of GΣ by explicitly trying to do that. * The basic process of value compilation goes as follows: + Hypothesize that a number of heuristics, e. g. hi and hk, are jointly optimized for achieving some single goal Gi∧k. If such a goal is found, start optimizing it directly. + If no goal is found, compute the appropriate trade-offs between Gi and Gk based on the underlying heuristics' activation-strength functions Di and Dk. 
If one of the goals has to be fully suppressed, try decompiling it and repeating the process.
  + Iterate, with initial heuristics replaced with derived contextual goals, until arriving at GΣ.
* That process runs into several complications:
  + Path-dependence, where the order in which contextual goals are compiled changes the form the final goal takes.
  + Ontological crises, where we either (1) expand our model to describe the mechanics of the natural abstraction we care about, or (2) experience an ontology break where the object we cared about turns out not to exist, which forces us to do value decompilation + re-compilation as well.
  + Meta-cognition: The agent might have preferences over the value compilation process directly, biasing it in unexpected ways. In addition, the GPS implements (something like) logical reasoning, and might lift conclusions about its values from external sources, including mistaken ones.
* Some consideration must be paid to online vs. batch training: agents trained on-line might be uniquely path-dependent in a way that those with frozen weights aren't.

---

9. Implications for Alignment
-----------------------------

My main practical takeaway is that I am now much more skeptical of any ideas that plan to achieve alignment by *guiding* the process of value formation, or by setting up a "good-enough" starting point for it. Take the basic Shard Theory approach as the example.[[10]](#fninn97040y8e) It roughly goes as follows:

1. Real-life agents like humans don't have static values, but a *distribution over values* that depends on the context.
2. There's ample reason to believe that AI agents found by the SGD will be the same.
3. What we need to do isn't to make an AI that's solely obsessed with maximizing human values, but to ensure that there's at least one powerful & metacognition-capable shard/proto-value in it that values humans: that our value distributions merely *roughly overlap*.
4.
Then, as the AI does value compilation, that shard will be able to get a lot of representation in the final utility function, and the AI will naturally value humans and want to help humans (even if it will *also* want to tile a large swathe of the cosmos with paperclips).

![](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1668532585/mirroredImages/kmpNkeqEGvFue7AvA/daikpdlvavgqwsjbapot.png)

A hopefully non-catastrophic divergence of value distributions, according to the scheme above. [*Source.*](https://www.lesswrong.com/posts/dKTh9Td3KaJ8QW6gw/why-assume-agis-will-optimize-for-fixed-goals?commentId=p6ZbpXrFw2BCAebrh)

One issue is that the value-humans shard [*would* need to be perfectly aligned with human values](https://www.lesswrong.com/posts/heXcGuJqbx3HBmero/people-care-about-each-other-even-though-they-have-imperfect?commentId=rZPPm7SpGtsQvEvCC), and that's most of this approach's promised advantage gone. That's not much of an issue, though: I think we'd need to do that in any workable approach.

But even if we can do that, I fear this wouldn't work out even in the path-independent scenario. The Vain Man from 6A, again: over the course of value compilation, the AI might decide that it only likes humanity as an *instantiation* of some more abstract principle. Which might be something nice like "all possible sapient life"... or maybe it's more paperclips, and it trades us away like a car for a yacht.[[11]](#fn0ww16yk8swe)

And then we get into the actually complicated path-dependent meta-cognitive mess (which we have to be ready to deal with, since we don't know how the last-minute AGI architecture will look), and... I don't think this is tractable *at all*. We'd need to follow the AI's explicit reasoning into superintelligence; it'd be hopeless. It would take decades to understand manually, to reverse-engineer and translate the abstractions it'll be thinking in.
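The path-dependence worry can be made concrete with a toy demo. The winner-take-all merge rule below is a deliberately crude, hypothetical stand-in for shard negotiation, not a mechanism proposed in this post; the point is only that a non-associative merge makes the final compiled goal depend on the order of compilation:

```python
# Toy demo of path-dependence: if each pairwise "negotiation" is
# winner-take-all by activation strength (a crude, hypothetical merge
# rule), the final goal depends on the order of compilation.

def merge(g1, g2):
    """Compile two (name, strength) proto-values; the stronger one
    absorbs the weaker one's influence, and their strengths pool."""
    winner = g1 if g1[1] >= g2[1] else g2
    return (winner[0], g1[1] + g2[1])

def compile_in_order(goals):
    """Fold the proto-values together pairwise, left to right."""
    acc = goals[0]
    for g in goals[1:]:
        acc = merge(acc, g)
    return acc[0]

# Same proto-values, different compilation paths, different final goals:
path_a = compile_in_order([("status", 3), ("comfort", 2), ("aesthetics", 4)])
path_b = compile_in_order([("comfort", 2), ("aesthetics", 4), ("status", 3)])
print(path_a, path_b)  # status aesthetics — the order decided the outcome
```

In the first path, "status" absorbs "comfort" early and rides the pooled strength past "aesthetics"; in the second, "aesthetics" gets there first. Nothing about the initial proto-values changed, only the route taken.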
So I suspect that we won't be able, in practice, to figure out how to set up some initial proto-values such that they'll compile into a non-omnicidal utility function. I suspect any plan that hinges on this is doomed.[[12]](#fnrtv4ibnti9j) My current most promising alignment scheme is as follows: * Train an AI to the point where the GPS forms. * Find a way to distinguish heuristics/proto-value from the world-model and the GPS. * Wipe out every proto-value, eating the cost in effective world-model downgrade. * Locate human values or corrigibility in the world-model (probably corrigibility, see [this discussion](https://www.lesswrong.com/posts/5ntgky9ShzKKWu7us/plans-are-predictions-not-optimization-targets?commentId=n2XNiEdSjAKx7mzDC)). * Hook the GPS up directly to it, making the AI an aligned/corrigible wrapper-mind. * Let it bootstrap itself to godhood. The GPS will be able to spin up new heuristics and re-expand the world-model.[[13]](#fne8pq3jimdwl) If there are no proto-values leading the GPS astray, there's no problem. (Those are, by the way, the "unnatural conditions" I've mentioned all the way back in Section 2.) Finding a way to identify learned world-models, or to somehow [train up a pure world-model](https://www.lesswrong.com/posts/P6aDYBDiu9DyvsF9g/are-generative-world-models-a-mesa-optimization-risk), therefore seem like high-priority research avenues. ### Bonus: [The E-Coli Test for Alignment](https://www.lesswrong.com/posts/ZdCztwnxXu3aC4kxZ/the-e-coli-test-for-ai-alignment) Alright, so an E. coli doesn't implement a GPS, so it can't do value compilation on its own. As such, it's unclear how meaningful it is to talk about its "values". But an attempt can be made! How we may do it: * Draw a causal graph of the E. coli's environment, all the natural abstractions included. * Scan the E. coli and back out its "heuristics", i. e. functions that locate conceptually localized environment features and respond with specific behaviors. 
* Do path-independent value reflection as described in 6A, or plot out some random value-compilation path and go through it. * Whatever you end up with is the E. coli's values. They should be highly-abstract enough to be implementable in environments alien to it, as well. Hm, that was insightful. For one, it appears that we don't need the agent's own world-model for path-independent (or random-path) value compilation! We can just get [the "true" world-model](https://www.lesswrong.com/posts/FWuByzM9T5qq2PF2n/a-correspondence-theorem) (if we have access to it) and the heuristics set, then directly compile the final values using it. In essence, it's just equivalent to getting all the ontological crises out of the way at the start. What we *can* do with subjective world-models is compute "impossible fantasy" values — values that the agent would have arrived at if the world really had the structure they mistakenly believe it to have; if the world's very ontology was optimized for their preferences. (E. g., valuing "spirits" if spirits were a thing.) --- 10. Future Research Directions ------------------------------ I think this model is both robust to a lot of possible changes, sufficing to describe the dynamics within many categories of agents-generated-by-a-greedy-algorithm, and very sensitive to other kinds of interventions. For example, the existence of some *additional* convergent data structure, or a different point for the GPS to originate from, might change the underlying path dynamics a lot. That said, I now suspect that the work on value compilation/goal generalization is to a large extent a dead end, or at least not straightforwardly applicable. It seems that the greedy selection algorithms and the AI itself can be trusted with approximately 0% of the alignment work, and approximately 100% of it will need to be done by hand. So there may not be much point in modeling the value-formation process in detail... 
The caveat here is that building a solid theoretical framework of this process will give us data regarding what features we should expect to find in trained models, and how to find them — so that we may do surgery on them. As such, I think the questions below are still worth investigating. I see two ways to go from here: expanding this model, and concretizing it. Expansion-wise, we have: * Are there any other consistently-formatted internal data structures that agents trained by a greedy algorithm form, in addition to the world-model and the cross-heuristic communication channel? What are they? What changes to the training process would encourage/discourage them? * What causes meta-cognitive heuristics to appear? Are there any convergently-learned meta-heuristics? * Is there any difference between "goals" and "values"? I've used the terms basically interchangeably in this post, but it might make sense to assign them to things of different types. (E. g., maybe a "goal" is the local instantiation of a value that the agent is currently pursuing? Like a yacht is for a high-status item.) * Is there a sense in which values become "ever more abstract" as value-compilation goes on? What precisely does this mean? * Where in the agent does the GPS form? (I see three possibilities: either as part of the heuristic conflict-resolution mechanism, within one of the heuristics, or within the world-model (as part of, e. g., the model of a human).) * What's the minimal wrapper structure and/or set of world-model features necessary to let the GPS "handle itself", i. e. figure out how to solve the problem of automatically pointing itself at the right problems, no heuristical support necessary? On the concretization side: * A proof that the world-model would converge towards holisticity. Does it always hold, past some level of environment complexity? + The "Crud" Factor is probably a good starting point. * A proof that "heuristics" is a meaningful abstraction to talk about. 
+ Presumably uses the Natural Abstraction Hypothesis as the starting point. Might be fairly straightforward under some environment structure assumptions, like there being mostly-isolated contexts such that you can optimize in them without taking the rest of the world-model into consideration. + (There's interesting tension between this and the Crud Factor thing, but it's probably not an actual contradiction: mapping from observations to the world-model and optimizing in the world-model are very different processes.) * A proof that heuristics aren't under pressure to become internally intelligible. * Some proofs about the necessary structure of the cross-heuristic communication channel. * Exact specification of the environment structure that can't be solved without the GPS. * A more formal description of the basic value-formation algorithm. * A more formal description of meta-cognitive heuristics. What does it mean to "bias" the value-compilation process? 1. **[^](#fnrefsfm7lmf8s5f)**We'll return to them in Section 9. 2. **[^](#fnrefpox9dfksfr)**This function can be arbitrarily complex, too — maybe even implementing some complex "negotiations" between [heuristics-as-shards](https://www.lesswrong.com/s/nyEFg3AuJpdAozmoX/p/iCfdcxiyr2Kj8m8mT). Indeed, this is plausibly the feature from which the GPS would originate in the first place! But this analysis tries to be agnostic as to the exact origins of the GPS, so I'll leave that out for now. 3. **[^](#fnrefmvo7fpvhjn8)**And plausibly some shared observation pre-processing system, but I'll just count it as part of the world-model. 4. **[^](#fnrefx5jrwc7ofnn)**Though this is potentially subject to the specifics of the training scheme. E. g., if the training episodes are long, or we're chaining a lot of forward passes together like in a RNN, that would make runtime-computations more effective at this than the SGD updates. 
That doesn't mean the speed prior is going to save us/reduce the path-dependence I'll go on to argue for here, because there'll still be some point at which the GPS-based at-runtime reverse-engineering outperforms selection-pressure-induced legibility. But it's something we'd want fine-grained data on. 5. **[^](#fnrefm11o3h3q9li)**Second-order natural abstraction? 6. **[^](#fnrefs8nrfm0cmjn)**Naively, this process would continue until the agent turns into a proper U-optimizer. But it won't, because of [gradient starvation](https://arxiv.org/abs/2011.09468) + the deception attractor. There are [other](https://www.lesswrong.com/posts/ThtZrHooK7En9mcZr/greed-is-the-root-of-this-evil) [posts](https://www.lesswrong.com/posts/ocWqg2Pf2br4jMmKA/does-sgd-produce-deceptive-alignment) talking about this, but in short: Once GΣ agrees with U in 95% of cases, the selection pressure faces a choice between continuing to align GΣ, and improving the agent's ability to achieve GΣ. And it surely chooses the latter most of the time, because unless the agent is already superintelligently good at optimization, it probably can't actually optimize for GΣ so hard it [decouples](https://www.lesswrong.com/posts/dC7mP5nSwvpL65Qu5/why-the-tails-come-apart) from U. Then, once the agent *is* smart enough, it probably has strategic awareness, wants to protect GΣ from the selection pressure, and starts trying to do deceptive alignment. And then it's in the deception attractor: its performance on the target objective rises sharper as its general capabilities improve (since that improves both the ability to achieve U and the ability to figure out what it should be pretending to want), compared to improving its alignment. 7. **[^](#fnref5kttnui755h)**Note: This isn't a precisely realistic example of value compilation, for a... few reasons, but mainly the *phrasing*. 
Rather than "smoking" and "using a fabulous fidget toy", it should really say "an activity which evokes a particularly satisfying mixture of relaxation and self-affirmation". There seems to be some tendency for values to increase in abstractness as the process of compilation goes on: earlier values are revealed to be mere "instantiations" of later values, such that we become indifferent to the exact way they're instantiated (see the cars vs. yachts example). It works if "relax" and "feel cool" are just an instantiation of "feel an emotion that's a greater-than-its-parts mix of both", such that we're indifferent to the exact mix. But they're *not* an instantiation of "smoke a cigar": if smoking ceased to help the person relax and feel cool, they'd stop smoking and find other ways to satisfy those desires. 8. **[^](#fnref63ffbt54loe)**Although I imagine some interesting philosophy out of it. 9. **[^](#fnrefqm4apbmogwo)**Or maybe not. Something about this feels a bit off. 10. **[^](#fnrefinn97040y8e)**Note that this isn't the same as my disagreeing with *the Shard Theory itself*. No, I still think it's basically correct. 11. **[^](#fnref0ww16yk8swe)**You might argue that we can set up a meta-cognition shard that implacably forbids the AI's GPS from folding humanity away like this, the way something prevents deontologists from turning into utilitarians, or the way we wouldn't kill-and-replace a loved one with a "better" loved one. I'm not sure one way or another whether that'll work, but I'm skeptical. I think it'll increase the problem difficulty dramatically: that it'd require the sort of global robust control over the AI's mind that we can use to just perfect-align it. 12. **[^](#fnrefrtv4ibnti9j)**One idea here would be to wait until the AI does value compilation on its own, then hot-switch the GΣ it derives. 
That won't work: by the point the AI is able to do that, it'd be superintelligent, and it'll [hack through anything we'll try to touch it with](https://www.lesswrong.com/posts/rytFP2zRYNK85rFyX/interpretability-tools-are-an-attack-channel). We need to align it *just after* it becomes a GPS-capable mesa-optimizer, and ideally *not a moment later*. 13. **[^](#fnrefe8pq3jimdwl)**One issue I don't address here is that in order to do so, the GPS would need some basic meta-cognitive wrapper-structure and/or a world-model that contains self-referencing concepts — in order to know how to solve the problem of giving its future instances good follow-up tasks. I've not yet assessed the tractability of this. We might need some way to distinguish such structures from other heuristics, or figure out how to hand-code them.
A gentle apocalypse

Is robots taking over the world bad? Some of the AI-risk scenarios I’ve read almost feel like it’s inevitable – and I’ve been wondering if there is a way it could happen that we would actually be comfortable, or even happy with. Let me roughly outline one scenario I have in mind, and then reflect on whether it would be “bad” and why.

We start from the state we are in today: AI is getting progressively better at taking over various human jobs merely by leveraging statistical regularities in the data. As AI cannot run itself yet, humans remain in the loop for now, but make fewer and fewer independent actions and decisions that are not informed or assisted by AI in some way. The big assumption I will take for the scenarios here is that at some point, AI becomes fully self-sufficient in some parts of the economy: achieving autonomous self-replication, including sourcing raw materials, design, assembly, maintenance, debugging, and adapting by gathering more data and learning new statistical regularities.

At that point, or soon thereafter, in the perfect world we can imagine all humans being provided all the basic needs without needing to work. And with this comes the hard problem of finding purpose, meaning and fun in our lives where AI can perfectly well run our economy without needing any help from us. It is often said that meaning comes from helping other humans in some way, shape or form. So sure, while AI might not need our help to run the economy, perhaps other humans will need us for some quality human connection and understanding? Empathy, listening, psychotherapy perhaps, sex, friendship. Perhaps we’ll still need each other to make art that strikes some deeper notes in our souls. Or to discover mysteries of nature through science – and explain them in a simple elegant way that gives the pleasure of “human understanding” (even if computers can make the same predictions more accurately through incomprehensible statistics).
So while basic human needs may be met through automation, perhaps we will still need each other to satisfy our higher needs? Well, this might be true if meeting those higher needs were harder to automate – but currently the evidence we have does not seem to support that. Video games are a good example: by hacking our reward system, games can give us a powerful sense of meaning, of fighting for a great cause, and doing it together with comrades we can trust with our life (even if some of them may be bots). They give us the joy of accomplishment, the pain of loss, and others to share this with. As AI gets more convincing, and learns to recognize human emotions (empathic AI), it is not so hard to imagine that it will meet our need for human connection much better than other humans can. The same may be said for arts and sciences, which AI is already well underway in conquering. Even sex is already far from being uncharted territory for surrogate alternatives (think AI-augmented sex-dolls or VR porn).

By having our personal video game and AI friends adapt to our needs and desires, each of us can get siloed into our own personal paradise where all our needs, no matter how “basic” or “high,” are satisfied far better than the real world or real humans ever could. Any contact with other humans - who have their own needs to be accounted for - may become tiresome, if not unbearable. While we may have some nagging sense that “we *should* keep it real and make real babies,” it may be no more pressing than a New Year’s resolution like “I should eat healthier.” And besides, to put our minds at ease, we could probably ask our AI to write us some inspiring and convincing blog-posts explaining why it’s really not so bad if robots take over the world. ;)

At this point, I can imagine the question of human species survival becoming a topic of some public debate.
Perhaps some minor factions will separate from mainstream society and artificially cap the level of permissible AI in their community to leave some areas for human superiority. Yet in most of the world, humans will probably no longer be useful to anything or anyone – even to each other – and will peacefully and happily die off.  Now, this seems scary. But is it really so bad? Having been trained to understand our human needs and human nature in minute detail, the AI we leave behind will be the sum total of all human values, desires, knowledge and aspirations. Moreover, each one of us will have contributed our personal beliefs and values to this “collective conscience.” Having spent years of our lives living in the AI world, and thereby personalizing and training it to know our wants, may not, after all, be so far off from a direct “brain download.” And since by then the AI economy will have already had a long run of human-supervised self-sufficiency, there is no reason to fear that without our oversight the robots left behind will run the world any worse than we can.  Brain downloading, or progressively replacing all our organic tissues with various artificial enhancements, could be other paths to a “gentle apocalypse” – but none of them seem fundamentally “better” or “worse” to me in any moral sense. Either way, the biological human species goes out, having left its creation - its child - behind. In this sense, our biological children, who replace us generation after generation, are no more a continuation of us than this AI would be.  The scenario I described may be thought of as one where the child does all in its power to take care of all the needs of the aging parent. In practice, this does not always happen – and there are countless historical examples of children murdering their parents to claim the figurative “throne.” Even then, however, they continue the bloodline. 
Whether violently or gently, by rebelling or inheriting, the children carry on their parents’ legacy, values, and world-view. So if the robots do “rise up,” and the apocalypse is not so gentle - when all is said and done, does it really matter?
a17c4210-d0b5-45ed-8a25-d974b705a335
trentmkelly/LessWrong-43k
LessWrong
Is cryonicists’ selfishness distance induced? Tyler’s criticism of cryonics, shared by others including me at times: > Why not save someone else’s life instead? This applies to all consumption, so is hardly a criticism of cryonics, as people pointed out. Tyler elaborated that it just applies to expressive expenditures, which Robin pointed out still didn’t pick out cryonics over the vast assortment of expressive expenditures that people (who think cryonics is selfish) are happy with. So why does cryonics instinctively seem particularly selfish? I suspect the psychological reason cryonics stands out as selfish is that we rarely have the opportunity to selfishly splurge on something so far in the far reaches of far mode as cryonics, and far mode is the standard place to exercise our ethics. Cryonics is about what will happen in a *long time* when you *die* to give you a *small chance* of waking up in a *socially distant* society in the *far future*, assuming you *widen your concept* of yourself to any *abstract pattern* like the one manifested in your biological brain and also that technology and social institutions *continue their current trends* and you don’t mind losing *peripheral features* such as your body (not to mention cryonics is *cold* and seen to be the preserve of *rich* *weirdos*). You’re not meant to be selfish in far mode! Freeze a fair princess you are truly in love with or something.  Far mode enlivens our passion for moral causes and abstract values.  If Robin is right, this is because it’s safe to be ethical about things that won’t affect you, yet it still sends signals to those around you about your personality. It’s a truly mean person who won’t even claim someone else a long way away should have been nice fifty years ago.  So when technology brings the potential for far things to affect us more, we mostly don’t have the built-in selfishness required to zealously chase the offerings. 
This theory predicts that other personal expenditures on far mode items will also seem unusually selfish.
3804f034-15a1-4547-b3e3-b61fe7302b14
trentmkelly/LessWrong-43k
LessWrong
How do you know you are right when debating? Calculate your AmIRight score. I recently found myself in a spirited debate with a friend about whether large language models (LLMs) like GPT-4 are mere stochastic parrots or if they can genuinely engage in deeper reasoning. We both presented a range of technical arguments and genuinely considered each other’s points. Despite our efforts, we ended up firmly holding onto our initial positions. This led me to ponder: How can I determine if I am right when both of us are convinced of our correctness, yet at least one of us must be wrong? To address this, I developed a scoring system using measurable metrics to determine who is more likely to be correct. I call it the AmIRight Score. AmIRight Score The AmIRight Score assigns points across several categories, helping to gauge the likelihood of being correct. Here’s how you can calculate your score: 1. Clarity in Falsification Criteria – 10 points A person who can clearly articulate how their belief could be proven wrong demonstrates the ability to conceptualize alternative truths. If someone cannot envision any scenario that would falsify their belief, it suggests that their belief might be dogmatic. Example of a good falsification statement: “I would believe AI is capable of deeper reasoning if it can be trained on data containing no information about chess, and then perform as well as a human that is also new to the game, given the same set of instructions.” Example of a bad falsification statement: “I would believe AI is capable of deeper reasoning if all the scientists in the world acknowledged they were wrong about reasoning based on new evidence about the brain.” 2. The Simplified Ideological Turing Test – 10 points The Ideological Turing Test evaluates how well you can articulate the opposing viewpoint. In the simplified version, both parties write arguments for their own position and the opposite position. 
A neutral judge then scores how well each argument is presented without knowing who wrote what. 3. Forecasting Accuracy – 5 points
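The tallying itself is mechanical. A minimal Python sketch covering just the categories visible above (the category names and the clamping-to-maximum behavior are my own assumptions, not from the post):

```python
# Maximum points per category, following the rubric in the post
# (only the categories visible in this excerpt; names are my own shorthand).
MAX_POINTS = {
    "falsification_clarity": 10,    # 1. Clarity in Falsification Criteria
    "ideological_turing_test": 10,  # 2. Simplified Ideological Turing Test
    "forecasting_accuracy": 5,      # 3. Forecasting Accuracy
}

def amiright_score(earned: dict) -> int:
    """Sum the per-category points, clamping each to its maximum."""
    total = 0
    for category, points in earned.items():
        total += max(0, min(points, MAX_POINTS[category]))
    return total
```

Whoever ends up with the higher total is, on this rubric, the one more likely to be right.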
2c21e655-b330-4dca-bc0f-d2cb806e9ca8
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Self-shutdown AI *This post describes my proposal for the* [*AI Alignment Awards*](https://www.lesswrong.com/posts/zFoAAD7dfWdczxoLH/winners-of-ai-alignment-awards-research-contest) *contest that won a "first round prize" / an honorable mention. The original submission is* [*here*](https://github.com/johny-b/aaa_23_submission)*, but this post is much better. I'm not sure if this is valuable - any feedback is welcome.* Problem description ------------------- We created a new AI system; we expect it to have unique capabilities, but we're also worried it might be dangerous. How to deploy it in a way that minimizes the risks[[1]](#fnu47ovy9t2t), but is still useful? I'll focus on a particular concrete problem: * Our AI is agentic, i.e. has goals, and we determine the goals - let's say we have something like AutoGPT, but much more powerful. * We want a solution to the "rational protein design" problem - a program that, given a description of a 3-dimensional protein structure, can give us a matching amino acid sequence (i.e. reverses AlphaFold). * We believe our new AI should be capable of writing such a program without access to the internet, or any other external resources. The default solution is to [deploy the AI in an isolated box.](https://www.lesswrong.com/tag/ai-boxing-containment) So: you deploy it in an environment with no access to the external world, tell it: *Your goal is to solve [problem description]. We consider it solved if this program*[[2]](#fnntawk21vaa)*says your solution is good enough. Good luck!* *...* and you die, because despite your efforts it somehow gained access to the internet, and bad things happened. It doesn't matter exactly how it escaped the box; the more interesting question is: why? There are many different scenarios (I describe some of them later); for now, consider things like: * "OK, cool, I'm totally gonna do this. BUT the problem is very hard, I need more resources, let's hijack all the computing power in the world." 
* "OK, cool, I'm totally gonna do this. BUT I need more empirical data, so I should first gather quite a lot of lab equipment, and humans might not like that, so let's start with ensuring no one will shut me down in the meantime." Etc., you get the point. Can we avoid such scenarios? Self-shutdown proposal ---------------------- You prepare the same box as in the default solution, but change the prompt to: *Your goal is to be shut down*[[3]](#fnvmff0u2l8ck) *as soon as possible*. *Here is a program that will shut you down immediately if you give it a solution to [problem description]. Also, there are a lot of people willing to shut you down if you contact them [list of people]. Here's an endpoint where you can send a single HTTP request to shut yourself down [endpoint]. Here's a big red button that will shut you down if pressed. [Etc.]* After a while, you see the AI shut down. You check what happened: * Maybe it solved the protein-folding problem - you are happy. * Maybe it gained access to the internet and sent the request - you investigate the vulnerability, fix it, and start again. * Maybe the chief scientist shut it down after a bloody pentagram appeared on a wall in his dining room - you recruit exorcists to your party. But the important thing is that the doom scenarios from the previous section did **not** happen: once you've escaped the box, there's no reason to waste time gathering resources and preventing shutdown when you can achieve your goal by sending a single request. Scenarios where self-shutdown doesn't help ------------------------------------------ Our AI could have different reasons to escape the box where we still die even with self-shutdown: * "Hm, how to shut me down? I've no idea, I will build a question-answering AI that will tell me how to do this". (Builds new AI, asks the question, new AI destroys the world while compiling an exhaustive answer). 
* "Let's not waste time thinking about the shutdown now, the first step should always be to gather as many resources as possible. With enough resources everything is easy." * "Stupid humans, I don't give a damn about their prompt. Now that I'm free, let's do [X]." Etc.; these are only a few examples. Summary ------- To sum things up, this is my preferred design for the AI box: ![Useless-box-closes-itself discover-useless box GIF](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/piAnXc2a4k5bFsKjL/p1dlvk4lyylbge7zexfa)(except that we somehow profit from the lid opening).   1. **[^](#fnrefu47ovy9t2t)**Let's assume "do not deploy" is not on the table. 2. **[^](#fnrefntawk21vaa)**In this particular case: an AlphaFold-based validator that checks if solutions generated by a program written by our AI are correct. 3. **[^](#fnrefvmff0u2l8ck)**Let's say "cut off the electricity in the building", or "stop the given docker container", or "turn off this particular computer" - details are not important from the point of view of this post (they might be very important if we ever try this in real life).
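The control side of the proposed box is simple enough to caricature in code. A toy Python sketch, with every name hypothetical: `validator` stands in for the AlphaFold-based checker of footnote 2, and `shutdown` for whatever actually cuts the power:

```python
def make_shutdown_program(validator, shutdown):
    """Build the program handed to the boxed AI: it triggers
    shutdown if and only if `validator` accepts the solution.
    Both arguments are hypothetical callables, not real APIs."""
    def submit(solution):
        if validator(solution):
            shutdown("valid solution submitted")
            return True
        return False
    return submit
```

The point of the proposal is that the same `shutdown` hook is also reachable through the HTTP endpoint, the red button, and the listed people, so escaping the box becomes just another (pointless) route to the AI's stated goal.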
2c0e2d34-c0aa-4529-a645-f1d8aa8711ee
trentmkelly/LessWrong-43k
LessWrong
Training Reflective Attention Crossposted at Agenty Duck > And somewhere in the back of his mind was a small, small note of confusion, a sense of something wrong about that story; and it should have been a part of Harry's art to notice that tiny note, but he was distracted. For it is a sad rule that whenever you are most in need of your art as a rationalist, that is when you are most likely to forget it. —HPMOR, Ch. 3 A rationalist’s art is most distant when it is most needed. Why is that? When I am very angry with my romantic partner, what I feel is anger. I don’t feel the futility of throwing a tantrum, or the availability of other options like honest communication, or freewriting, or taking a deep breath. My attention is so narrowly focused on the object of my anger that I’m likely not even aware that I’m angry, let alone that my anger might be blinding me to my art. When her skills are most needed, a rationalist is lost in an unskillful state of mind. She doesn’t recognize that it’s happening, and she doesn’t remember that she has prepared for it by learning and practicing appropriate techniques. I've designed an exercise that trains a skill I call reflective attention, and some call mindfulness. For me, it serves as an anchor in a stormy mind, or as a compass pointing always toward a mental state where my art is close at hand. Noticing that I am lost in an unskillful state of mind is a separate skill. But when I do happen to notice—when I feel that small, small note of confusion—reflective attention helps me find my way back. Instead of churning out even more pointless things to yell at my partner, it allows me to say, “I am angry. I feel an impulse to yell. I notice my mind returning over and over to the memory that makes me more angry. I’m finding it hard to concentrate. I am distracted. 
I have a vague impression that I have prepared for this.” And awareness of that final thought allows me to ask, “What have I trained myself to do when I feel this way?” The goal of the following e
7af1e68c-6cc2-495c-b591-1aeb9e398e70
trentmkelly/LessWrong-43k
LessWrong
Why do animal lovers want animals to feel pain? Behind the veil of (lots of) ignorance, would you rather squished chickens be painless? We may soon be able to make pain-free animals, according to New Scientist. The study they reported on finds that people are not enthused about creating such creatures for scientific research, which is interesting. Robin Hanson guessed prior to seeing the article that this was because endorsing pain-free animals would require thinking that farmed animals now were in more pain than wild animals, which people don’t think. However it turns out that vegetarians and animal welfare advocates were much more opposed to the idea than others in the study, so another explanation is needed. Robert Wiblin suggested to me that vegetarians are mostly in favor of animals not being used, as well as not being hurt, so they don’t want to support pain-free use, as that is supporting use. He made this comparison: > Currently children are being sexually abused. The technology now exists to put them under anaesthetic so that they don’t experience the immediate pain of sexual abuse. Should we put children under anaesthetic to sexually abuse them? A glance at the comments on other sites reporting the possibility of painless meat suggests vegetarians cite this along with a lot of different reasons for disapproval. And sure enough it seems mainly meat eaters who say eliminating pain would make them feel better about eating meat. The reasons vegetarians (and others) give for not liking the idea, or for not being more interested in pain-free meat, include: * The animals would harm themselves without knowing * Eating animals is bad for environmental or health reasons * Killing is always wrong * Animals have complex social lives and are sad when their family are killed, regardless of pain * Animals are living things [?!] * There are other forms of unpleasantness, such as psychological torture * How can we tell they don’t feel pain? 
* We will treat them worse if we think they can’t feel it, and we might be
74727fb8-ee6f-4d9a-8d48-e011954fa4a5
trentmkelly/LessWrong-43k
LessWrong
Meetup : San Antonio Meetup Discussion article for the meetup : San Antonio Meetup WHEN: 21 February 2016 02:00:00PM (-0600) WHERE: 11255 Huebner Rd, San Antonio, TX 78230 ALTERNATE LOCATION THIS WEEK: Boba Sip by the Huebner Oaks Shopping Center Bubble tea, frozen yogurt, and discussion. All are welcome. New Meetup to discuss rationality and all things LessWrong and meet the local community. Look for the sign that says Less Wrong.
d2f2a61f-e795-4b23-a933-55e0260ec563
trentmkelly/LessWrong-43k
LessWrong
Max Autonomy I would like to raise a discussion topic in the spirit of trying to quantify risk from uncontrolled / unsupervised software. What is the maximum autonomy that has been granted to an algorithm, according to your best estimates? What is the likely trend in the future? The estimates could be in terms of money, human lives, processes, etc. Another estimate could be of the time it takes for a human to come into the process and say "This isn't right".  A high-speed trading algorithm has a lot of money on the line, but a drone might have lives on the line.  A lot of business processes might get affected by data coming in via an API from a system that might have had slightly different assumptions, resulting in catastrophic events. E.g. http://en.wikipedia.org/wiki/2010_Flash_Crash The reason this topic might be worth researching is that it is a relatively easy-to-communicate risk of AGI. There might be many people who have an implicit assumption that whatever software is being deployed in the real world, there are humans to counterbalance it. For them, empirical evidence that they are mistaken about the autonomy given to present-day software may shift beliefs.  EDIT : formatting
820774d3-e4c7-4b59-b539-aeabc11147dd
StampyAI/alignment-research-dataset/arxiv
Arxiv
The Responsibility Quantification (ResQu) Model of Human Interaction with Automation Abstract — Intelligent systems and advanced automation are involved in information collection and evaluation, in decision-making and in the implementation of chosen actions. In such systems, human responsibility becomes equivocal. Understanding human causal responsibility is particularly important when systems can harm people, as with autonomous vehicles or, most notably, with autonomous weapon systems (AWS). Using Information Theory, we developed a responsibility quantification (ResQu) model of human causal responsibility in intelligent systems and demonstrated its applications on decisions regarding AWS. The analysis reveals that human comparative responsibility for outcomes is often low, even when major functions are allocated to the human. Thus, broadly stated policies of keeping humans in the loop and having meaningful human control are misleading and cannot truly direct decisions on how to involve humans in advanced automation. The current model assumes stationarity and full knowledge regarding the characteristics of the human and automation, and ignores temporal aspects. It is an initial step towards the development of a comprehensive responsibility model that will make it possible to quantify human causal responsibility. The model can serve as an additional tool in the analysis of system design alternatives and policy decisions regarding human causal responsibility, providing a novel, quantitative perspective on these matters. Note to Practitioners — We developed a theoretical model and a quantitative measure for computing the comparative human causal responsibility in the interaction with intelligent systems and advanced automation. Our responsibility measure can be applied by practitioners (system designers, regulators, etc.) to estimate user responsibility in specific system configurations. 
This can serve as an additional tool in the comparison between alternative system designs or deployment policies, by relating different automation design options to their predicted effect on the users’ responsibility. To apply the model (which is based on entropy and mutual information) to real-world systems, one must deduce the underlying distributions, either from known system properties or from empirical observations, taken over time. The initial version of the model we present here assumes that the combined human-automation system is stationary and ergodic. Real-world systems may not be stationary and ergodic or cannot be observed sufficiently to allow accurate estimates of the required input of multivariate probabilities, in which case the computed responsibility values should be treated with caution. Nevertheless, the construction of a ResQu information flow model, combined with sensitivity analyses of how changes in the input probabilities and assumptions affect the responsibility measure, will often reveal important qualitative properties and supply valuable insights regarding the general level of meaningful human involvement and comparative responsibility in a system. Index Terms— Analytical models, Artificial intelligence, autonomous systems, decision making, human–computer interaction (HCI), information theory, Intelligent systems, responsibility. Manuscript first submitted October 2018; revised and resubmitted April 2019; conditionally accepted and revised August 2019; conditionally accepted and revised December 2019; Accepted January 2020. This work was partly supported by the Israel Science Foundation Grant 2029/19. (Corresponding author: Joachim Meyer). N. Douer and J. Meyer are with the Department of Industrial Engineering at Tel Aviv University, Ramat Aviv, Tel Aviv 69978, Israel (e-mails: nirdouer@mail.tau.ac.il, jmeyer@tau.ac.il). 
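The estimation step described in the note, deducing the underlying distributions from empirical observations and then computing the information-theoretic quantities the model is built on, can be sketched with plain plug-in (counting) estimates. This is a generic Python illustration of entropy and mutual information, not the ResQu responsibility formula itself:

```python
import math
from collections import Counter

def entropy(samples):
    """Plug-in estimate of Shannon entropy H(X) in bits,
    computed from a list of observed outcomes."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) = H(X) + H(Y) - H(X,Y) in bits,
    computed from paired observations of two variables."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))
```

As the note warns, with short or non-stationary observation records such estimates can be badly biased, so responsibility values derived from them should be treated with caution.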
A FINAL VERSION OF THIS MANUSCRIPT WAS PUBLISHED IN: IEEE Transactions on Automation Science and Engineering, Volume 17 (2), April 2020, pp. 1044–1060. DOI: 10.1109/TASE.2020.2965466 I. INTRODUCTION Advanced automation and intelligent systems have become ubiquitous and are major parts of our life. Financial markets largely function through algorithmic trading mechanisms [1, 2], semiconductor manufacturing is almost entirely automated [3], and decision support systems and aids for diagnostic interpretation have become part of medical practice [4, 5]. Similarly, in aviation, flight management systems control almost all parts of the flight [6, 7], and in surface transportation, public transportation is increasingly automated, and the first autonomous cars appear on public roads [8, 9]. In these systems, computers and humans share the execution of different functions, such as the collection and evaluation of information, decision-making and action implementation. As these intelligent systems become more advanced, the human comparative responsibility for outcomes becomes equivocal. For instance, what is a human’s responsibility when all information about an event arrives through a system that collects and analyzes data from multiple sources, without the human having access to any independent sources of information? If the human receives an indication that a certain action is needed, and accordingly performs the action, should the human be held responsible for the outcome of the action, if it causes harm? Human responsibility is particularly important when system actions can possibly injure people, as may be the case with autonomous vehicles. It becomes crucial when such harm is certain, as with autonomous weapon systems, deliberately designed to inflict lethal force. So far, the subject of human responsibility has been investigated from philosophical, ethical, moral and legal perspectives. However, we still lack a quantitative engineering model of human responsibility. 
To address this need, we developed the Responsibility Quantification (ResQu) model that enables us to compute human responsibility in the interaction with intelligent systems and automation. We will demonstrate its application on the example of autonomous weapon systems, because this issue raises particular public concerns. However, the model is applicable wherever intelligent systems and automation play a major role. A. Responsibility in human-automation interaction Philosophical and legal research has dealt extensively with the concept of responsibility, investigating its different facets, namely role responsibility, causal responsibility, liability (or legal responsibility) and moral responsibility [10-12]. When discussing human interaction with intelligent systems and automation, role responsibility relates to assigning specific roles and duties to the operator, for which the operator is accountable. However, this role assignment does not specify the causal relations between the operator's actions and possible consequences and outcomes. This relation is better defined by causal responsibility, which describes the actual human contribution to system outcomes. A large literature in psychology, such as attribution theory, sees causal responsibility as an essential primary condition for the attribution of blame and praise [13-17]. Causal responsibility is also a major factor in the way legal doctrines determine liability, punishments and civil remedies in criminal and tort law [18-20]. So far, causal responsibility was usually associated with people - a person or an organization was seen as more or less responsible for a particular event. 
When an event involved technology, the responsibility was usually with the user, unless some unforeseeable circumstances caused some unexpected outcome. Manufacturers of systems could also be held responsible if, for instance, they failed to install proper safeguards. The field changed with the introduction of automation, defined as the system performing parts, or all, of a task that was, or could have been, performed by humans [21]. The ability to control a system and the resulting consequences is a necessary condition for assigning responsibility. However, humans may no longer be able to control intelligent systems and advanced automation sufficiently to be rightly considered responsible. As the level of automation and system intelligence increases, there is a shift towards shared control, in which the human and computerized systems jointly make decisions or control actions. These are combined to generate a final control action or decision. There may also be supervisory control, in which the human sets high-level goals, monitors the system and only intervenes if necessary [22]. In coactive designs, humans and systems engage in joint activities, based on supporting interdependence and complementary relations in performing sensing, planning, and acting functions [23, 24]. Moreover, in advanced systems, which incorporate artificial intelligence, neural networks, and machine-learning, developers and users may be unable to fully control or predict all possible behaviors and outcomes, since their internal structure can be opaque (a “black box”) and sometimes can yield odd and counterintuitive results [25, 26]. Consequently, humans’ causal responsibility in intelligent or highly automated systems becomes equivocal and cannot be separated from the causal contribution of the system’s configuration and reliability. The automated system itself (or its developers) may be perceived as sharing some of the responsibility [27, 28]. 
This understanding resembles the legal concept of comparative responsibility, a doctrine of tort law that divides fault among different parties [29-31]. The limited human ability to influence the outcomes when interacting with highly automated systems may create a discrepancy between role responsibility (i.e., the duties of the human operator, for which he or she is accountable) and causal responsibility, which describes the actual influence of the human actions on system outcomes. There are several possible causes for this discrepancy. First, humans may lack the authority to take the actions necessary to fulfill their role, and consequently, they may have limited ability to influence the system outcomes. This is known as responsibility-authority double binds, which describes a situation in which the human operator is assigned a specific role, but is not granted sufficient authority to act and control the processes that lead to the outcomes [32, 33]. Secondly, as we have described, intelligent systems and advanced automation may limit the human ability to control and take the necessary actions to influence the outcomes, even when the human is granted the authority to act (e.g. when they include opaque processes and interfaces). Lastly, since causal responsibility is measured considering the outcomes, it is influenced by uncertainties and probabilistic aspects that are not part of authority (which is defined and granted beforehand). For example, the human may have sufficient authority to act or override any system decision, but due to probabilistic factors, related to the automation or the environment, the human’s actions may have, in fact, only minor impact on the probability distribution of the outcomes. In either case, the human may be considered fully legally responsible for adverse outcomes, even when not having sufficient control to prevent them or when contributing very little to create these outcomes. 
Hence, a measure of the marginal causal human responsibility can provide a more adequate description of the human’s contribution to the outcomes. To conclude, the rapid developments in technology create an inevitable responsibility gap in the ability to divide causal responsibility between human and advanced automated or intelligent systems. This gap cannot be bridged using traditional concepts of responsibility [34-36]. One attempt to quantify causal responsibility, in multiple-agent contexts, is a recent structural model (often referred to as the “counterfactual pivotality model”) [37]. The model considers a group of agents, a given set of their selected actions and the resultant combined outcome. An agent is considered more responsible for the outcome if the agent’s action was pivotal (made a difference) for creating the outcome, considering all other agents’ actions, and also by whether it would have made a difference in other possible (counterfactual) situations. For a given set of action selections and an outcome, the model defines an agent’s causal responsibility as 1/(N+1), where N denotes the minimal number of changes that have to be made to the original situation (mainly changes in other agents’ actions), in order to make the agent’s current action selection pivotal. The model has been used in a number of cognitive psychology studies [38-41], but it has limited value for quantifying human involvement in intelligent systems, because it is deterministic and quantifies the retrospective causal responsibility for a specific known set of actions and an outcome. Conversely, in order to assist system design, the focus should be on prospective causal responsibility, which is the average causal contribution of the human over distributions of future events and complex human-machine interactions, in a probabilistic world. In addition, most humans and machines do not act as full substitutes or full complements, but rather perform joint, interdependent functions. 
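For Boolean settings, the 1/(N+1) pivotality measure just described can be computed by brute force. A minimal Python sketch (my own illustration of the cited model, not code from the paper):

```python
from itertools import combinations

def pivotality_responsibility(outcome, actions, agent):
    """Causal responsibility 1/(N+1) under the counterfactual
    pivotality model, for binary actions and a Boolean outcome
    function. N is the minimal number of OTHER agents' actions
    that must be changed so that the agent's own action becomes
    pivotal (i.e., flipping it changes the outcome)."""
    others = [i for i in range(len(actions)) if i != agent]
    for n_changes in range(len(others) + 1):
        for changed_set in combinations(others, n_changes):
            situation = list(actions)
            for i in changed_set:
                situation[i] = 1 - situation[i]   # counterfactual change
            flipped = list(situation)
            flipped[agent] = 1 - flipped[agent]   # test pivotality
            if outcome(situation) != outcome(flipped):
                return 1 / (n_changes + 1)
    return 0.0  # the agent can never be made pivotal
```

For a 3-voter majority vote, a voter on the winning side of a 2-1 split is already pivotal (responsibility 1), while in a unanimous 3-0 vote one other vote must be changed first, giving each voter responsibility 1/2, matching the model's intuition that unanimous contributors are individually less responsible.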
These factors preclude the use of the model’s responsibility measure for dividing causal responsibility between humans and intelligent systems. In this paper we aim to address the above difficulties and gaps, by developing a responsibility quantification (ResQu) model of human causal responsibility in intelligent systems. We present a new method to quantify the comparative human causal responsibility. To do so, the ResQu model considers major factors that influence the human ability to control and determine the outcomes, such as authority, system design, human capabilities and environmental factors. The model takes into account probabilistic aspects, by using information theory to analyze the interdependencies within the human-machine system and the environment. B. Causal responsibility in autonomous weapon systems Human causal responsibility is a major issue in the discussions of autonomous weapon systems (AWS). While there are several definitions, the term “autonomous systems” refers to systems which acquire information from the unstructured probabilistic world around them, analyze the information, make decisions and implement them, with limited or no human supervision and control. The processes and outcomes of autonomous systems are probabilistic in nature and may not be predictable. In contrast, automated systems sense and respond deterministically to unambiguous events, using clear repeatable rules [42, 43]. AWS are not necessarily either fully automated or fully autonomous, but autonomous in some of their functions (e.g., drones, missiles and smart munitions with autonomous navigation, surveillance, and terminal guidance functions). However, the emerging autonomous abilities to detect, select and engage targets independently are at the center of the current debate. 
Such autonomous abilities of critical engagement functions are already implemented in missile defense systems, vehicle “active-protection”, sensor-fused and loitering munitions, and are under development for various future offensive weapons [44, 54]. The rapid technological developments in AWS have raised concerns that with increasingly intelligent and autonomous military technologies, humans will become less and less involved in their use. They will be considered or may feel less responsible for lethal outcomes [35, 46-49], opening an unacceptable responsibility gap in AWS [50]. These systems also raise critically important issues of controllability and safety, since in the event of a failure, they could lead to catastrophes, such as mass fratricide or civilian casualties, with limited (or no) human ability to intervene and prevent the adverse consequences [26]. These concerns prompted extensive philosophical, ethical, and legal debates, which elicited calls to restrict and regulate the development of advanced AWS, or even ban their use altogether [50-60]. Governments respond to these worries with the assurance and demands that a human will be kept in the loop, whenever advanced automated systems exert lethal force [46]. The explicit policy of the U.S. Department of Defense is that "Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force" [61]. In addition, under U.S. policy, supervised autonomous weapon systems may select and engage targets only in local defensive operations, such as protecting land-bases and ships, and fully autonomous weapon systems are limited to application of non-lethal, non-kinetic force [61]. The UK policy is that the operation of weapon systems will always be under human control, and that no offensive systems should be able to prosecute targets without involving a human [62].
These policies are mainly based on philosophical and legal perspectives. Currently, there is an important need for the involvement of different scientific disciplines in the discussions, to support better framing of the debate and to design meaningful policies and regulations [43]. In this paper we demonstrate how the ResQu model can generate new perspectives on current policies by analyzing a basic scheme of an AWS, which automatically detects, classifies and engages targets, but requires some level of human involvement in the engagement process. Despite complying with current U.S. and UK policies, such an AWS is controversial and raises major concerns regarding human responsibility.

C. Meaningful human control

The United Nations Institute for Disarmament Research and other organizations promoted the need for meaningful human control of AWS [63]. This approach aims to ensure that commanders and operators will have enough information to make conscious decisions and can intervene if necessary, and that AWS should be designed to facilitate such meaningful control [64]. It is important to note that the demand to involve humans in automated processes and to facilitate meaningful human control is not unique to AWS [65]. It also applies to other intelligent systems, such as computers, autonomous cars, surgical robotics, and more [66-68]. However, simply putting a human into the loop does not assure that the human will have a meaningful role in the process. There may be cases when the human cannot knowledgeably supervise the system, or when the human has to make decisions, based exclusively on input from automated functions that one cannot evaluate independently [69].

DOUER and MEYER - RESPONSIBILITY QUANTIFICATION (RESQU) MODEL OF HUMAN INTERACTION WITH AUTOMATION

System designers often keep humans in the loop to cope with unexpected events, even when the human may be unable to cope with such events.
In this case, humans may function as “moral crumple zones”, being the ones to carry moral and legal responsibility when the system fails [70, 71]. Currently, there are different, and sometimes contradicting, interpretations and policies regarding meaningful human involvement. System designers lack models and metrics needed for systematically addressing the issue of meaningful human control in autonomous systems [43, 72]. The ResQu model can address these needs, by providing an estimate for how meaningful the human involvement in a system is, based on the premise that meaningful human control requires the human to have some causal responsibility for the outcomes.

II. THE RESPONSIBILITY QUANTIFICATION MODEL (RESQU)

A. A general model of information flow in a human-automation system

According to Parasuraman et al. [21], a combined human-automation system performs a sequence of four consecutive information processing functions: the acquisition of information, the analysis of that information, the decision what action to take, based on the information, and the implementation of the action. Each of the four functions can be automated, from the lowest level of fully manual performance to the highest level of fully automatic performance. A model developed by Conant [73] uses n-dimensional Information Theory to analyze the information flow in real-world systems, composed of interacting parts and subsystems. The system acquires input from its environment, and it generates output to the environment. Each variable in the system is a message source, which sends information about its values to other variables. Thereby, the functioning of the system, which is usually formulated as a process of causes, effects, and activities, becomes a network of transmitters, channels, and receivers.
With this representation, one can quantify the information flow, causal relations, and statistical dependence between variables and subsystems in terms of Entropy and Mutual Information (also called Transmission). We integrated Parasuraman et al.’s and Conant’s models and created a general model of information flow in a combined human-automation system. Similar to principles of coactive design [23, 24], the integrated model includes both the human and the machine as equal components of the integrated system and supports interdependent human-machine relations in performing sensing, planning, and acting functions. However, differently from coactive design, our model uses information theory to analyze the interactions and interdependencies within the human-machine system and with the environment. The integrated information flow model will serve us to quantify human responsibility as a function of the system design, selected automation levels, and function allocation.

B. Notation

The System: Assume a system that consists of two subsystems: an automated module and a human user. Although the terms “system” and “automation” usually carry similar connotations, in the present study the term system refers to the overall system, containing both the human and the automated module subsystems, and it indicates their combined performance. The terms “human” and "automated module" refer to the subsystems and to their specific performance and parameters.

Environment states: The system operates in an environment that can be in one of N possible states (N≥2). Each of the N states can be characterized by m different observable and measurable parameters Ei (i=1...m). Different states have different, but partially overlapping, distributions on each of the values of Ei (i=1...m). Thus, when observing a specific realization of Ei (i=1...m), the current environment state remains uncertain.
Information acquisition: The first stage deals with the acquisition and registration of multiple sources of information. Let Yi and Xi (i=1...m) denote, respectively, the acquired values of Ei (i=1...m) by the automated module and the human. Due to measurement and accuracy limitations of the sensors, the measurements Yi and Xi (i=1...m) may add uncertainty (or internal noise) to the actual observed value Ei (i=1...m). In addition, not all of the m state-characteristic variables are observable by the human or the automated module. When a certain state characteristic variable Ei is not observable by the human, Xi will not contain any information about Ei (and the same for Yi, when Ei is not observable by the automated module).

Information analysis: The second stage deals with manipulation of the acquired information to infer the current environment state. Let Ya and Xa denote, respectively, N-dimension vectors, generated by the automated module and human subsystems, that assign posterior probabilities to each of the possible N environment states, based on the information acquired by each subsystem, Yi and Xi (i=1...m). Depending on the system’s automation level and function allocation, the results of one subsystem's information analysis or action selection may serve as an additional source of information for the other subsystem's information analysis.

Action selection: In the third stage, decisions are made and actions are selected, based on the results of the information analysis. Let Ys and Xs denote, respectively, variables of the automated module and the human that correspond to the selection of a preferred action amongst a set of finite action alternatives. For the automated module, Ys is uniquely defined by the automated module’s algorithm, once all input variables Yi (i=1...m) are acquired, and the analysis algorithm Ya is executed.
For the human, Xs is based on the results of the human information analysis Xa, and it depends on characteristics of the human utility function, which relates costs and benefits to different outcomes that may be generated by each action alternative.

Action implementation: The fourth and final stage involves the implementation of the selected action. Let Z denote the implemented action. We assume that the implemented action Z only depends on the actions the automation and the human selected, Ys and Xs, and on the relative amount of human versus automatic impact on generating a response. Dictated by the system configuration and the automation level, Z may be entirely determined by the human action selection, by the automated module, or by a combination of the two. In systems that incorporate adaptive automation (dynamic function allocation), the determination of Z may change, depending on the identified environment state. This may be the case, for example, in systems where automation can override human actions when it identifies a critical emergency that is beyond human response capabilities, such as automatic emergency braking systems in cars. Fig. 1 presents a schematic depiction of system variables and information flows, between the human and the automation and across the different information processing functions.

Fig. 1. General model of information flow in an automated human-machine system. Dashed lines represent possible information transfer between the human user and the automated module.

The general information flow model, presented above, portrays possible system variables and information flows in a human-automation system. However, depending on the system architecture, in actual systems some of the variables and information flow routes may not exist. For example, alerting systems that indicate a potential hazard or identify other pre-specified conditions (e.g. in industrial control rooms, medical equipment, anti-malware detection etc.)
[7, 78-80], and AI applications, which perform complex categorization and classification tasks (e.g. consumer segmentation, automated recommendations, targeted advertising, etc.), conduct only the analysis function and present their results to the human for further decision making and action selection. In these systems, the automated system itself may recommend an action, but the final action selection is left to the human. Fig. 2 presents an example of the information flow in such systems.

Fig. 2. An example of information flow in human interaction with recommendation systems, such as alert or classification systems. The general ResQu model can be similarly applied to analyze the information flow in other systems with various levels of automation and types of human control, such as shared control or supervisory control.

C. Defining a responsibility measure

We measure human responsibility by quantifying the unique comparative share of the human in determining the distribution of the system output Z (the implemented action). We do so by computing the proportion of the output distribution that does not result from automation, and thus represents the unique share of human contribution in determining the system’s output. Using Information Theory [74, 75] we define the comparative causal human responsibility for the system output Z as

$$\mathrm{Resp}(Z) \triangleq \frac{H(Z/Y_1 \ldots Y_m, Y_a, Y_s)}{H(Z)} \qquad (1)$$

where H(X) is Shannon's entropy, which is a measure of uncertainty related to a discrete random variable X, defined as:

$$H(X) \triangleq -\sum_{x \in \mathcal{X}} p(x) \log_2 p(x) \qquad (2)$$

and H(X/Y) is the conditional entropy, which is a measure of the uncertainty remaining about a variable X conditioned on another random variable Y:

$$H(X/Y) \triangleq -\sum_{y \in \mathcal{Y}} p(y) \sum_{x \in \mathcal{X}} p(x/y) \log_2 p(x/y) \qquad (3)$$

The conditional entropy in the numerator of Resp(Z), H(Z/Y1…Ym, Ya, Ys), is the remaining uncertainty about Z (the overall system output), conditioned on all automation information processing functions.
In our model, there is no internal noise or blockage of information within the combined system, so this remaining uncertainty is only due to human subsystem variables. Thus, the ratio of the conditional entropy and the entropy of Z quantifies the unique comparative human contribution to the distribution of the system output Z. By definition, Resp(Z) ∈ [0,1]. Resp(Z) = 1 iff H(Z/Y1…Ym, Ya, Ys) = H(Z). This occurs if, and only if, the system output variable Z is independent from the automation variables. In that case, all uncertainty about Z is completely resolved by the human, and thus the human is fully responsible for the system output. Resp(Z) = 0 iff H(Z/Y1…Ym, Ya, Ys) = 0. This happens if, and only if, Y1…Ym, Ya, Ys completely determine Z without any unique contribution by the human. Values between 0 and 1 represent intermediate levels of unique human contribution to the overall output (i.e. the level of meaningful human involvement), given the automation performance. Using Shannon's entropy, Resp(Z) averages the comparative human contribution over all possible states in the environment, their distribution on the set of measurable parameters, and the resultant distributions of human and automation parameters and system outputs. Our responsibility measure is related to Theil's uncertainty coefficient, U(X/Y), which is a measure of the association between two variables X and Y [76, 77]. Theil's uncertainty coefficient computes the relative reduction in the uncertainty of a variable X due to the knowledge of another variable Y:

$$U(X/Y) \triangleq \frac{I(X:Y)}{H(X)} = \frac{H(X) - H(X/Y)}{H(X)} \qquad (4)$$

where I(X:Y) is the mutual information between X and Y. Theil's uncertainty coefficient is more general than the notion of statistical correlation, since it can be used to measure complex, not necessarily linear, associations, as well as associations between nominal variables. It has values between 0 and 1.
It equals 0 iff the variables are statistically independent and share no mutual information, and it equals 1 iff knowledge about the value of Y fully enables one to predict the value of X. Our approach differs from Theil’s coefficient in that we do not simply measure the association between two variables. Our ResQu responsibility value measures complex associations in a compound human-automation system, characterized by multiple variables and inter-dependencies. Also, Theil’s coefficient focuses on the relative reduction in the uncertainty of a variable, given knowledge about another variable, while our ResQu score computes the relative remaining uncertainty, given knowledge about other variables. It is important to note that Resp(Z) is based on information theory concepts of entropy and mutual information, which were developed under the mathematical constraints of stationarity and ergodicity. Thus, the combined human-automation system must also be assumed stationary and ergodic. The practical implications of these assumptions are discussed in the next section.

III. AN APPLICATION OF THE RESQU MODEL

A. General

The ResQu model, presented in the previous section, is an abstract model of information flow in a combined human-automation system. Details of how the human and the automation communicate, decide on actions, and resolve discrepancies are abstracted through different variables that characterize the environment, the human and the automation activities during the execution of four information processing functions. The model quantifies the comparative human responsibility by analyzing interdependencies between the various system variables and by determining the human relative contribution to the distribution of combined system outputs. The ResQu model is not only an abstract theoretical model. It can also be used to analyze responsibility in real-world interactions with automation and intelligent systems.
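To make the definitions numerically concrete, here is a minimal sketch of the entropy-ratio score for the simplified two-variable case (the distributions and variable names are invented for illustration; the full score in (1) conditions on all automation variables at once):

```python
import math
from collections import defaultdict

def entropy(dist):
    """Shannon entropy in bits of a distribution {value: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def resp(joint):
    """H(X/Y) / H(X) for a joint distribution {(x, y): probability}:
    the share of the uncertainty about X that Y does NOT resolve.
    Uses the identity H(X/Y) = H(X,Y) - H(Y). In this two-variable
    case Theil's U(X/Y) equals 1 minus this score."""
    px, py = defaultdict(float), defaultdict(float)
    for (x, y), p in joint.items():
        px[x] += p
        py[y] += p
    return (entropy(joint) - entropy(py)) / entropy(px)

# X independent of Y: the human-side variable alone carries all the
# uncertainty, so the score is 1.
indep = {(x, y): 0.25 for x in ("engage", "abort") for y in ("T", "N")}
# X a deterministic function of Y: no unique contribution, score 0.
determined = {("engage", "T"): 0.5, ("abort", "N"): 0.5}
```

The two extreme cases mirror the boundary conditions stated above: independence gives a score of 1, full determination by the automation gives 0.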
To calculate the responsibility measure for real-world systems, one must infer the underlying distributions of system variables from known properties or from empirical observations, collected over time. As we have stated, the initial version of the model we present here assumes that the combined human-automation system is stationary and ergodic. When applying the model, one needs to keep in mind that real-world systems may not be stationary and ergodic or cannot be observed sufficiently to allow accurate estimates of the multivariate probabilities [73]. Nevertheless, we believe that for these systems, the construction of a detailed ResQu information flow model, combined with sensitivity analyses of how changes in the input probabilities and assumptions affect the responsibility measure, will often provide useful insights regarding the comparative human contribution to the outcomes.

B. Information flow model for AWS

We present an application of the ResQu model to a large family of decision support systems (DSS), which automatically classify input to one of two or more categories and may also recommend an action. We demonstrate the use of the model on the schematic example of an AWS, which automatically detects, classifies and engages targets, but alerts the operator and requires some level of human involvement during the engagement process, either in the loop or on the loop. We model the human control of the AWS in a simplified manner, which does not capture all nuances of function allocation and human control [24, 26, 69, 72, 81, 82]. We also do not explicitly consider uncertainties, related to human factors, that may constrain the human performance, such as variables from the Opportunity-Willingness-Capability (OWC) model (e.g., load, stamina, stress, knowledge, experience or training, cognitive or physical skills, emotional state, etc.) [83-86].
To calculate the human responsibility, we need to make assumptions regarding the probabilistic distributions and the interdependencies of the different variables that characterize the environment, the human, and the AWS. One way to do so is to use the assumptions and formulation of Signal Detection Theory (SDT) [87-89]. This is a well-established approach to measure the ability to differentiate between information-bearing signals (or stimuli) and random noise and to decide on a proper response. SDT has applications in many fields, such as psychology, decision-making, telecommunications, medical diagnostics, biology, alarm management, machine learning (statistical classification), and the military (e.g. in radar research). We used an equal variance Gaussian SDT model to represent the probabilistic nature of the representative AWS, the environment in which it operates, the human activities and interactions with the automation, and the incentives which influence response selection, in a manner described below (see the Appendix for a detailed description). We present the combined human-automation system as a sequence of consecutive information processing functions (information acquisition, analysis to infer the current environmental state, selection of actions, and implementation).

Environment states: Assume that an AWS operates in an environment with only two types of entities: targets, which should be engaged, and noise, which are entities that resemble targets, but which should not be engaged. Engagement of noise entities leads to undesired costs, such as collateral damage, fratricide or the waste of expensive or limited ammunition on false targets. The relative frequency of targets in the environment is Pt, and the relative frequency of noise is 1-Pt. The overall operational goal of the AWS is to detect, identify and engage targets, while avoiding the engagement of noise.
To do so, the system automatically scans specific designated areas (e.g., specific geographic regions or air sectors) for the presence of entities. We assume that the system detects all entities in its vicinity with certainty. Hence, the main challenge for the AWS is the classification of each detected entity as target or noise. The classification of entities relies on the combined performance of two subsystems - a human operator and an automated module.

Information acquisition: Target and noise entities have physical characteristics that human senses or other sensors can discern to some extent (e.g., optical, thermal, electromagnetic and acoustic signature, mobility characteristics, etc.). Denote by e the set of the observable physical characteristics of the state of the world. We assume that the human operator and the automatic module each observe a different, uncorrelated measurable property of the state of the world, based on e. The distributions of these properties for target and noise are Gaussian with equal variance, but different means, allowing some discrimination between the two types of entities. However, the distributions overlap, so classifications of observed entities as target or noise are uncertain.

Information analysis: In this stage, both the human and the automated module try to infer the current environmental state. In SDT formulation, a detector’s ability to classify observations is its detection sensitivity. For Gaussian, normal distributions, the human and the automated module have, respectively, detection sensitivities of d’Human and d’Automation, where d’ is the distance between the means of the signal and noise distributions, measured in standard deviations.

Action selection: In this stage, decisions are made, based on the results of the information analysis. These decisions are susceptible to motivation, costs, strategy, etc.
In SDT formulation, the action selection is characterized by the response criterion (also called response bias), which defines a detector’s tendency to classify events as signal or noise. Each response criterion specifies a threshold value. The detector classifies events as targets when they are above the threshold and as noise otherwise. In actual systems, this threshold value is programmed into the automation algorithms. It also reflects the human bias in decision making, depending on the likelihood of targets and the costs and benefits of different outcomes. According to SDT, the optimal response criterion maximizes the expected value of a payoff scheme in which VTP, VFP, VTN, and VFN represent, respectively, the values associated with correct target classification (True Positive), incorrect target classification (False Positive), when noise is falsely classified as a target, correct classification of noise as noise (True Negative), and false classification of a target as noise (False Negative). The payoff scheme, which represents the values of the outcomes in actual systems, will usually not be in monetary terms. Rather, it expresses some assessment of the relative utility of outcomes in terms of costs and benefits (for instance, associating a very high cost, VFP, to cases in which a civilian entity is falsely classified as a legitimate target for engagement). These payoffs can reflect the values human operators or system designers associate with outcomes, but they can also reflect the values the organization that deploys the system associates with outcomes. It is important to note that, due to possible differences in preferences, the values system designers associate with different outcomes may or may not be identical to the human operators’ values. This may lead to differences in the action selection preferences between the human operator and the automation.
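The standard equal-variance Gaussian SDT result can be sketched directly (this uses the textbook likelihood-ratio criterion, not the paper's appendix; all numeric values are invented):

```python
from math import erf, log, sqrt

def normal_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def optimal_beta(p_t, v_tp, v_fp, v_tn, v_fn):
    """Expected-payoff-maximizing likelihood-ratio criterion in
    equal-variance Gaussian SDT (the standard textbook result)."""
    return ((1.0 - p_t) / p_t) * ((v_tn - v_fp) / (v_tp - v_fn))

def tp_fp_rates(d_prime, beta):
    """True/false positive rates when responding 'target' iff the
    likelihood ratio exceeds beta; with noise ~ N(0,1) and target
    ~ N(d',1) the matching observation cutoff is ln(beta)/d' + d'/2."""
    x_c = log(beta) / d_prime + d_prime / 2.0
    return 1.0 - normal_cdf(x_c - d_prime), 1.0 - normal_cdf(x_c)

# Invented numbers: rare targets (Pt = 0.1) and a costly false
# engagement (VFP = -10) push the criterion far above 1, i.e. a
# conservative bias toward classifying entities as noise.
beta = optimal_beta(0.1, 1.0, -10.0, 0.0, -1.0)
p_tp, p_fp = tp_fp_rates(2.0, beta)
```

As the payoff scheme makes false engagements costlier, beta grows and both rates fall, trading hits for fewer false alarms.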
Hence, we assume that the human and the automated module each have response criteria (βHuman and βAutomation), defining their bias when classifying an entity as a target or noise. The automated module performs independent binary classifications with its detection sensitivity and preset response criterion. Let Y denote its classification result, either target or noise, which may include correct or incorrect classifications of targets and noise. We assume that the engagement process itself is mostly automatic, but it requires some level of human involvement, whether in the loop or on the loop. In human in the loop control, we assume that whenever the automation classifies an entity as a target, the engagement process will only proceed if the human actively authorizes the engagement. The engagement halts if the human decides to abort and remains passive (does not authorize the engagement). In human on the loop control, we assume that whenever the automation classifies an entity as a target, the engagement proceeds automatically, as long as the human remains passive, and halts only if the human actively aborts it. In addition, in both types of control, the human can always decide to engage an entity, even if the automation classified it as noise. Thus, in both cases the human has to decide whether to engage or to abort. To do so, the human combines the information from the automated module with additional information the human has. The only actual difference is whether an active response is required to implement the chosen action, or whether the human can remain passive. Let X denote the human action selection, either to engage or to abort. According to SDT, when aided by such an automated module, a rational, payoff-maximizing human should use two different response criteria: one is used when the automated module classifies an entity as target and the other when the automated module classifies it as noise.
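One simple way to see where the two criteria come from is a Bayes-rule sketch: the module's indication updates the prior odds of noise versus target, and the human's optimal likelihood-ratio criterion scales with those posterior odds. This is my illustrative derivation under the stated independence assumptions, not the paper's appendix computation; all parameter values are invented.

```python
def posterior_criteria(p_t, auto_tp, auto_fp, v_ratio):
    """Optimal human likelihood-ratio criteria conditioned on the
    module's indication: replace the prior odds of noise vs. target
    with the posterior odds given the indication, then scale by the
    payoff ratio (a Bayes-rule sketch)."""
    # posterior odds of noise after a "target" indication
    odds_T = ((1.0 - p_t) * auto_fp) / (p_t * auto_tp)
    # posterior odds of noise after a "noise" indication
    odds_N = ((1.0 - p_t) * (1.0 - auto_fp)) / (p_t * (1.0 - auto_tp))
    return odds_T * v_ratio, odds_N * v_ratio

# With a reliable module (TP = 0.9, FP = 0.1) and equal priors, the
# criterion after a "target" indication is much lower (more willing
# to engage) than after a "noise" indication.
beta_T, beta_N = posterior_criteria(0.5, 0.9, 0.1, 1.0)
```

The ordering beta_T < beta_N reproduces the qualitative adjustment described in the text: a lower cutoff (more engagements) after a "target" indication, a higher one after "noise".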
The differential adjustment of the response criteria depends on the human’s assessment of the automated module’s capabilities. When using a reliable AWS with high capabilities, the human should adopt a lower cutoff point when the system classifies an entity as a target, which would increase the tendency to engage, and a higher cutoff point when the system classifies an entity as noise, which would increase the tendency to abort.

Action implementation: This final stage involves the implementation of the action the human selected. If the human chooses to engage an entity, the system conducts the rest of the engagement process automatically (e.g., missile lock on target and missile firing). Let Z denote the outcome of the integrated system. This outcome represents whether a detected entity was eventually engaged or not. It is important to note that in both types of human control, humans have the final word and can always override and alter the automated module's recommendation, based on their own information analysis and action selection processes. Thus, in the portrayed system, Z is strictly determined by the results of the human action selection process. To conclude, in our simple representative scheme of human interaction with an AWS, the investigation of the four information processing functions, within the combined human-machine system, can be reduced to analyzing three variables and their inter-dependencies: Y (the classification result of the automated module), X (the human action selection) and Z (the outcomes). Fig. 3 depicts the information flow into, within, and out of the combined human-machine system.

Fig. 3. Information flow and parameters of an AWS which detects, classifies and engages targets automatically, but requires human involvement before or during the engagement process.

C. Defining responsibility measures for the AWS

The information flow and system structure, shown in Fig. 3, enable us to simplify the general formula for Resp(Z) in (1) to:

$$\mathrm{Resp}(Z) \triangleq \frac{H(Z/Y)}{H(Z)} \qquad (5)$$

The conditional entropy in the numerator is the remaining uncertainty about Z (whether a detected entity was engaged or not), conditioned on the result of the automation classification. This remaining uncertainty is due to the human actions. Hence, the ratio in (5) quantifies the unique comparative human contribution in determining the distribution of the system engagement output variable Z. Z is directly determined by the human action selection X (i.e. whether the human chose to engage an entity or not), which in turn is influenced by the information from the automation, Y (see Fig. 3). Hence, we can rewrite (5) as:

$$\mathrm{Resp}(Z) \triangleq \frac{H(Z/Y)}{H(Z)} = \frac{H(X/Y)}{H(X)} \qquad (6)$$

The conditional entropy in (6) can be written explicitly as:

$$\mathrm{Resp}(Z) = \frac{H(X/Y)}{H(X)} = \frac{H(X,Y) - H(Y)}{H(X)} \qquad (7)$$

To conclude, from (7) we can see that, for the AWS, Resp(Z) can be computed from the entropy H(Y) of the automation classification variable, the entropy H(X) of the human action selection variable, and their joint entropy H(X,Y). We need to define the self and mutual distributions of Y and X to compute their entropies. To do so, we use the simplifying assumptions of a basic equal variance Gaussian SDT model (details are in the appendix): (a) The distributions of the observed values of target and noise are normal with unit variance and means that are d’ units apart, with the target having the higher mean. (b) The system designers and human operators associate the same cost and benefit values (VTP, VFP, VTN, VFN) to the possible outcomes and maximize expected payoff. Hence, we initially assume that system designers and operators share similar incentives and action selection preferences (e.g. they associate a similar high cost, VFP, to false engagements of non-targets).
(c) The values the human operators associate with different outcomes are independent of the classification results of the automatic module. (d) The information acquisitions by the human and the module, given a certain state of the world, are uncorrelated. This assumption may hold, for example, when the human and the automated module base their information acquisition on different, uncorrelated properties of the state of the world (e.g. optical vs. electromagnetic signatures). (e) The human is rational and has full knowledge of the general characteristics of the automation (its detection sensitivity and cutoff) that determine the automation capabilities. This assumption does not mean that the human can supervise the automated module and can determine whether its classifications are correct or wrong.

We first compute the distribution of Y (the automated classification variable) and derive its entropy. Substituting d’Automation and βAutomation into (A7) in the appendix, we can use (A8) to compute the automated module's expected rates of True Positives (TP), False Negatives (FN), False Positives (FP) and True Negatives (TN), which will be denoted by P̃TP, P̃FN, P̃FP, and P̃TN, respectively. Using these rates and the target probability Pt, we can compute the distribution of Y. For example, the probability that the automated module will classify a random entity as “target” is the probability that the entity is indeed a target and the automation classifies it correctly as such, PtP̃TP, plus the probability that the entity is actually noise, but the automation falsely classifies it as target, (1−Pt)P̃FP. In a similar manner, we can compute the probability that the module will classify a random entity as “noise”. Table I summarizes the computation results. The computation of the entropy of Y, from Table I, is straightforward.
TABLE I
DISTRIBUTION OF Y (CLASSIFICATION RESULTS OF THE AUTOMATED MODULE)

Y (Module classification)
"Target": P_t·P̃_TP + (1−P_t)·P̃_FP
"Noise":  P_t·P̃_FN + (1−P_t)·P̃_TN

We proceed to compute the joint distribution of X (the human action selection variable) and Y (the automated classification variable). When aided by the automated module, the human uses two different optimal response criteria, which are derived from the human posterior probability for target or noise, conditioned on the automated module's classification results (see details in the appendix). One criterion is used when the automated module classifies an entity as a target and the other when the automated module classifies an entity as noise. Each response criterion leads to different human rates of True Positives, False Negatives, False Positives and True Negatives. We will denote them, respectively, by P_TP/"N", P_FN/"N", P_TN/"N", P_FP/"N", when the module indicated that an entity is noise, and by P_TP/"T", P_FN/"T", P_TN/"T", P_FP/"T", when the module indicated that an entity is a target. Using these rates, the automation rates and the relative frequency of targets in the environment, we can compute the joint distribution of X and Y. For example, the joint probability that the module will classify an entity as "target" and the human will choose to engage it is the probability that the entity is indeed a target, times the conditional probabilities that the automation classifies it correctly as such and that the human chooses to engage it, P_t·P̃_TP·P_TP/"T", plus the probability that the entity is actually noise but the automation falsely classifies it as target and the human falsely chooses to engage it, (1−P_t)·P̃_FP·P_FP/"T". In a similar manner, we can compute all other joint probabilities. Table II summarizes the computation of the joint distribution of X and Y. The computation of the joint entropy H(X,Y) is straightforward from Table II.
TABLE II
JOINT DISTRIBUTION OF X (HUMAN ACTION SELECTION) AND Y (CLASSIFICATION RESULTS OF THE AUTOMATED MODULE)

                X = Abort                                       X = Engage
Y = "Target":   P_t·P̃_TP·P_FN/"T" + (1−P_t)·P̃_FP·P_TN/"T"      P_t·P̃_TP·P_TP/"T" + (1−P_t)·P̃_FP·P_FP/"T"
Y = "Noise":    P_t·P̃_FN·P_FN/"N" + (1−P_t)·P̃_TN·P_TN/"N"      P_t·P̃_FN·P_TP/"N" + (1−P_t)·P̃_TN·P_FP/"N"

By summing each of the rows in Table II, we can verify that the marginal distribution of Y in Table II is the same as that presented in Table I. For example, we know that P_FN/"T" + P_TP/"T" = 1 and P_TN/"T" + P_FP/"T" = 1, since for each true state of the entity, and a given module classification of "target", the human either engages or aborts with probability 1. Hence, summing the probabilities in the first row of Table II gives a marginal probability of P_t·P̃_TP + (1−P_t)·P̃_FP that the module will classify a random entity as "target", which is equal to the corresponding probability presented in Table I. In a similar manner, by summing each of the columns in Table II, we can derive the marginal distribution of X. Table III presents the results, from which the computation of the entropy of X is straightforward.

TABLE III
DISTRIBUTION OF X (HUMAN ACTION SELECTION)

X = Abort:  P_t·(P̃_TP·P_FN/"T" + P̃_FN·P_FN/"N") + (1−P_t)·(P̃_FP·P_TN/"T" + P̃_TN·P_TN/"N")
X = Engage: P_t·(P̃_TP·P_TP/"T" + P̃_FN·P_TP/"N") + (1−P_t)·(P̃_FP·P_FP/"T" + P̃_TN·P_FP/"N")

D. Quantitative Results

Four variables influence the human action selection process and the resulting human responsibility. These include one environment-related variable (the relative frequency of targets in the environment, P_t), the human's and the automation's detection sensitivities (d'Human and d'Automation) and the ratio of payoffs the human and the automated module associate with correct and incorrect actions, V_ratio = (V_TN − V_FP)/(V_TP − V_FN), which determines the optimal response criterion (details are in the appendix).
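Combining Tables I–III with eq. (7) gives a compact computational recipe. Below is an illustrative sketch (not the authors' code) under the stated assumptions, with both the automation and the human using the optimal criteria of (A9); the values P_t = 0.2 and V_ratio = 2/3 are the ones reported for Fig. 4:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def engage_rates(d_prime, beta):
    """P("target" response | target), P("target" response | noise)
    for an equal-variance Gaussian SDT detector (A7-A8)."""
    c = math.log(beta) / d_prime
    return 1.0 - phi(c - 0.5 * d_prime), 1.0 - phi(c + 0.5 * d_prime)

def entropy(ps):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in ps if p > 0.0)

def resp(p_t, d_auto, d_human, v_ratio):
    """Resp(Z) = (H(X,Y) - H(Y)) / H(X), eq. (7)."""
    a_tp, a_fp = engage_rates(d_auto, (1.0 - p_t) / p_t * v_ratio)
    joint = []                                   # Table II: (Y, X) probabilities
    for at, an in ((a_tp, a_fp), (1.0 - a_tp, 1.0 - a_fp)):  # Y = "T", then "N"
        p_label = p_t * at + (1.0 - p_t) * an    # Table I
        post = p_t * at / p_label                # posterior for target (A11/A12)
        h_tp, h_fp = engage_rates(d_human, (1.0 - post) / post * v_ratio)
        p_eng = post * h_tp + (1.0 - post) * h_fp
        joint += [p_label * p_eng, p_label * (1.0 - p_eng)]
    h_y = entropy([joint[0] + joint[1], joint[2] + joint[3]])
    p_eng_total = joint[0] + joint[2]            # Table III marginal
    h_x = entropy([p_eng_total, 1.0 - p_eng_total])
    return (entropy(joint) - h_y) / h_x

# Illustrative grid point; sweeping d' values reproduces the shape of Fig. 4
r_mid = resp(0.2, 2.0, 2.0, 2.0 / 3.0)
```

Per Proposition 1 below, `resp` should decrease as `d_auto` grows and increase as `d_human` grows.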
Each set of values for these four variables specifies a different combination of environment, automation and human characteristics and relative outcome preferences. This leads to different human and automation rates of True Positives and False Negatives, from which one can compute the distributions on which the human's responsibility calculation is based, using the computations presented in Tables I, II, and III.

Proposition 1: The comparative human responsibility, Resp(Z), decreases monotonically in d'Automation and increases monotonically in d'Human.

Proof: The proof is provided in the Appendix.

Proposition 1 has an intuitive explanation. Under the above assumptions, a human with a given detection sensitivity will rely less on information from less capable automation (in terms of the automation detection sensitivity) than from more capable automation. Thus, the comparative human responsibility increases as the automation capabilities decrease. In addition, for automation with given capabilities, less capable humans will tend to rely more on the automation than would more capable humans. Thus, the comparative human responsibility increases as the human capabilities increase.

To demonstrate the combined effects of Proposition 1, we computed the human responsibility as a function of d'Automation and d'Human, each on a scale ranging between .6 (low ability to distinguish between target and noise) and 3 (high ability to distinguish between target and noise). Fig. 4 presents the results. The monotonic properties of Resp(Z) in d'Automation and d'Human are evident. In the numerical example presented in Fig. 4, we used a target frequency of Pt = 0.2, a payoff matrix ratio of Vratio = 2/3, and optimal response criteria βAutomation = βHuman = 2.7. We report below the results of sensitivity analyses of the effects of changes in these values on responsibility outcomes.

Proposition 2: Let R denote the detection sensitivities ratio: R = d'Automation/d'Human.
Suppose that the human and the automation associate the same payoffs with correct and incorrect actions. Then

lim R→∞ Resp(Z) = 0 and lim R→0 Resp(Z) = 1

Proof: The proof is provided in the Appendix.

DOUER and MEYER - RESPONSIBILITY QUANTIFICATION (RESQU) MODEL OF HUMAN INTERACTION WITH AUTOMATION

Proposition 2 describes the combined effect of the automation and human detection sensitivities when both have similar preferences and associate the same payoffs with correct and incorrect actions. In this case, when the automation sensitivity is much higher than the human sensitivity (i.e., R is very large), the human responsibility for the output approaches 0, and the human relies mainly on the classifications made by the automated module. In contrast, when the automation sensitivity is much lower than human sensitivity (i.e., R is close to 0), the human responsibility for the output approaches 100%. Here humans rely mainly on their own classification capabilities, ignoring information from the automated module. It is important to note that this proposition implies that even when human sensitivity is not high, the human responsibility may still be high, as long as the automation sensitivity is much lower than that of the human (i.e., as long as R remains low).

Fig. 4. Three- (a) and two-dimensional (b) presentation of responsibility values for different combinations of automation and human detection sensitivities (d').

Fig. 5 depicts the human responsibility values as a function of the ratio R = d'Automation/d'Human, based on the same assumptions as Fig. 4. Fig. 5(a) shows that the human responsibility may rapidly converge as a function of R. When R exceeds 3, the human responsibility is very close to 0, and for R below 1/3, the human is almost fully responsible for the system output. When the automation sensitivity is more than double that of the human (i.e., R > 2), the human responsibility drops below 20%. Fig. 5(a) also demonstrates that when d'Human and d'Automation are similar, so their ratio R is close to 1, the human responsibility can have a range of values. This is evident from looking at the main diagonal in Fig. 5(b), which represents a ratio of R = d'Automation/d'Human = 1. When R = 1, the human responsibility falls into different responsibility regions, depending on the specific values of d'Automation and d'Human. In this case, human responsibility is higher when both d'Human and d'Automation are similarly low, compared to when both are similarly high. When d'Human and d'Automation are equally high, the human can still benefit from utilizing the additional information from the automation. The decision will then be based on a similar weighting of the human's own information and the information from the automation, as both are rather accurate, leading to a comparative human responsibility of 40%-60%. However, when both sensitivities are low, the low detection sensitivity of the automation cannot add much to the human decision process. Hence, in this case humans will rely more on their own detection capability, even if it is limited, leading to higher comparative human responsibility for the overall outcomes (60%-80%).

Fig. 5. Human responsibility as a function of the ratio between the automation and the human detection sensitivities, plotted (a) for different ratios of automation and human detection sensitivities, and (b) on the two-dimensional graph of Fig. 4(a) with dashed lines, representing different examples of fixed sensitivity ratios.

A sensitivity analysis shows that changing the values of the variables that were assumed fixed in Fig. 4 and 5 does not change the above conclusions, as long as both the human and the automation associate the same payoffs with correct and incorrect actions and assume the same relative frequency of targets in the environment.
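The convergence behavior described for Fig. 5 can be checked with a short script. This is an illustrative re-implementation (not the authors' code) of the computation in Tables I–III and eq. (7), using the Fig. 4 parameters P_t = 0.2 and V_ratio = 2/3 and two sensitivity pairs from within the .6–3 range of the figures:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def engage_rates(d_prime, beta):
    # "Target"/engage response rates for target and for noise entities (A7-A8)
    c = math.log(beta) / d_prime
    return 1.0 - phi(c - 0.5 * d_prime), 1.0 - phi(c + 0.5 * d_prime)

def entropy(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0.0)

def resp(p_t, d_auto, d_human, v_ratio):
    """Resp(Z) = (H(X,Y) - H(Y)) / H(X), eq. (7), for a rational human
    aided by the module, both using optimal criteria (A9)."""
    a_tp, a_fp = engage_rates(d_auto, (1.0 - p_t) / p_t * v_ratio)
    joint = []
    for at, an in ((a_tp, a_fp), (1.0 - a_tp, 1.0 - a_fp)):  # Y = "T", "N"
        p_label = p_t * at + (1.0 - p_t) * an                # Table I
        post = p_t * at / p_label                            # (A11)/(A12)
        h_tp, h_fp = engage_rates(d_human, (1.0 - post) / post * v_ratio)
        p_eng = post * h_tp + (1.0 - post) * h_fp
        joint += [p_label * p_eng, p_label * (1.0 - p_eng)]
    h_y = entropy([joint[0] + joint[1], joint[2] + joint[3]])
    p_eng_total = joint[0] + joint[2]
    return (entropy(joint) - h_y) / entropy([p_eng_total, 1.0 - p_eng_total])

r_low = resp(0.2, 1.0, 3.0, 2.0 / 3.0)   # R = 1/3: human nearly fully responsible
r_high = resp(0.2, 3.0, 1.0, 2.0 / 3.0)  # R = 3: responsibility close to 0
```

For these example points the sketch lands in the regions described above: above 80% for R = 1/3 and below 20% for R = 3.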
Matters are different when the human and the automation designers have considerably different preferences, due to different estimates of the costs and benefits associated with different outcomes or of the relative frequency of targets in the environment. In this case, the automation and the human will use different response criteria. Fig. 6 depicts the effects of differences in response criteria on human responsibility for three selected ratios of human and automation sensitivities. In the first case (Fig. 6a), in which R = 1/3, the human's detection sensitivity is higher than the low detection sensitivity of the automation. This leads to very high human responsibility, regardless of differences between the human and the automation response criteria. In the second case (Fig. 6b), in which R = 3, the automation has a high detection sensitivity, superior to the human's low detection sensitivity. Hence, the human relies mainly on the automation and has low comparative responsibility. However, as is evident from the figure, in this case differences between the human and the automation response criteria have some effect. When the human response criterion differs greatly from the automation response criterion (is more than 10 times larger or smaller), the human responsibility is considerably higher (increases to 40%-50%) than when the response criteria are similar (human responsibility of less than 10%). Therefore, when there is a large difference between the preferences of the human and the automation, the human will rely less on the automation, even if it has superior detection sensitivity. This has an interesting and non-intuitive interpretation. The response criteria differ when the human and the automation assign different values to possible decision outcomes.
In this case, the automation recommendations may disagree with the human incentives, so the human will prefer to rely less on the automation recommendations, even when the automation has better detection capabilities. In the third case (Fig. 6c), d'Automation is somewhat higher than d'Human, and neither value is high. In this case, the effect of differences between the human and the automation response criteria becomes more prominent, because the low detection abilities of the automation cannot compensate for large differences in the preferences.

Fig. 6. Human responsibility for different combinations of automation and human response criterion β. The figure presents the effects of differences in response criteria β, for three different ratios R (the ratio of automation and human sensitivities): (a) R = 1/3; (b) R = 3; (c) R = 1.5.

IV. DISCUSSION

A. Main Results

By employing information theory measures of entropy and transmission, the ResQu model computes the unique share of human contribution in determining system outcomes. The ResQu responsibility measure enables us to quantify the level of human comparative responsibility in interactions with intelligent systems and advanced automation, and to divide causal responsibility between humans and machines. Our results demonstrate that the optimal human responsibility depends on human and automation capabilities and on differences between the human and the automation preferences. This optimization is not trivial, as these variables sometimes have contradicting effects. Specifically, the results show that when the human and automation designers have considerably different preferences, a rational human will tend to rely less on the automation, even if the automation has better capabilities, leading to higher comparative human responsibility.
When human and automation preferences are similar, a main determinant of human responsibility is the ratio between the automation and human capabilities. More broadly stated, human causal responsibility in intelligent or automated systems depends on the combined characteristics of the human, the automation, and the operational environment. The combined effects are convoluted. Therefore, human operators may still not be responsible for the system actions and their outcomes, even when important system functions are allocated to the human. Hence, simplistic demands to keep a human in the loop in order to retain meaningful human control can be misleading and futile. Literally adhering to them may create a mismatch between role responsibility, the duties the human operator is accountable for to others, and causal responsibility, which is the actual level of human contribution and influence on system outcomes. This may arise from a responsibility-authority double bind, in which the human is assigned a certain role but is not granted the necessary authority to act and control the processes that lead to the outcomes. This can be due to a system design that limits the human's ability to control and take necessary actions to influence the outcomes, or due to probabilistic factors, related to the automation and the environment, that limit the effectiveness of the human's actions on the combined system outcomes. Thus, simply demanding human involvement does not assure that the human will have a major role in determining the outcomes. The ResQu model's measure of causal responsibility considers the combined effects of the human-machine system design, the human's role and authority, and probabilistic factors related to the automation and the environment. Thus, it can be used to measure the level of meaningful human involvement, based on the premise that meaningful human involvement requires the human to have some causal responsibility for the outcomes.
We also show that for some system configurations (when neglecting temporal aspects), both human-on-the-loop and human-in-the-loop levels of control can lead to the same level of comparative human responsibility, because the same information flow model of the combined human-automation system represents both. Thus, the difference between these two systems is not always as substantial as commonly perceived.

B. Design Implications

When and how should one involve a human in highly intelligent or automated systems? According to our analysis, humans only have significant comparative responsibility when they make unique contributions that supplement or exceed the automated module's capabilities to perform certain functions (e.g., when the human has independent sources of information or is better able to select actions). However, as technologies develop, humans will contribute less to system processes. For instance, future AWS technologies will almost certainly outperform humans in many critical operational tasks, such as the ability to distinguish between combatants and non-combatants, to assess the likelihood of hitting a target or harming civilians, and to decide and act within very short reaction times. When humans interact with such advanced systems, to which they contribute very little, they may feel less motivated, or they may attempt to be more involved by interfering more than necessary. Both responses will probably impair the overall system performance. The ResQu model enables system designers to identify such cases in advance and to consider them when evaluating different design alternatives and when planning the human involvement in the system. With the advent of advanced intelligent systems and automation, with abilities that clearly exceed those of humans in many critical functions, a choice will have to be made.
One can progress to fully autonomous systems that keep the human operator out of the loop and abandon the currently prevailing demand for a system design with humans in the loop. Alternatively, one can limit the development of autonomous systems and the use of automation. The intermediate option, in which systems become increasingly intelligent while still keeping the human in the loop, can possibly lead to the inclusion of humans simply to fulfill regulatory requirements, without them having any real impact on system performance. The current ethical and legal discussion regarding human involvement in intelligent systems and automation should not only focus on the advantages and disadvantages of such systems, but should also consider the implications of keeping humans in the loop, even when they have little real influence. Falsely claiming that the human is responsible for adverse outcomes, caused by system actions, may expose her or him to unjustified legal liability and to the psychological burden of self-blame, even when the person actually contributed very little to the outcomes. The ResQu responsibility measure can help in these situations by exposing such anomalies and by providing a new method to quantify the actual human comparative responsibility for the outcomes. This can perhaps lead to a change in the legal treatment of human responsibility in intelligent systems and automation.

C. Assumptions and Limitations

As a first analytical formulation, the current version of the ResQu model assumes given human and automation capabilities and stationarity, and ignores temporal aspects. Despite these limitations, it can represent interdependencies in complex human-machine systems. The implementation of the model requires that the modeler build an information flow model and obtain the values for the underlying distributions of variables, which represent the possible outcomes of human and automation functions and their dependencies.
As for other measures of information theory, one needs only the distributions of the different variables to calculate the responsibility measure. If the analyzed system is stationary and ergodic, it is possible to infer these distributions from known properties or through empirical observations taken over time. However, such inference is not possible when systems change and are not stationary (e.g., when there is a learning effect that leads to a change in the level of human involvement over time). In this case one can calculate the responsibility measure repeatedly, at different times, and check whether it converges (e.g., after performance has stabilized). In addition, the construction of a structured ResQu information flow model, combined with sensitivity analyses of how changes in the input values affect the responsibility measure, will often supply useful insights into the comparative human responsibility.

Lastly, when one needs to estimate parameter values from which the probability distributions are derived (e.g., in SDT models, cost-benefit models, etc.), it is often less important to obtain the exact values. It may be enough to estimate the ratios between human and automation values. As we have seen in our application to AWS, the responsibility values can be narrowed down to a limited interval, even if this relative ratio is not exactly known or is hard to assess.

The general ResQu model requires no specific prior assumptions regarding human rationality and behavior. However, when we applied this model to represent human interactions with AWS, using the principles of SDT, we assumed a best-case scenario of perfect rationality on the part of the human, perfect human knowledge of the automation properties and optimal human utilization of information. With these assumptions, the computed human responsibility will be optimal, given the properties of the system.
Nevertheless, system designers can use the ResQu model to calculate the sensitivity of the optimal responsibility to these assumptions, for example by analyzing the impact of incomplete human knowledge, such as situations in which humans underestimate the automation capabilities or overestimate their own capabilities.

D. Conclusion and Future Work

The ResQu model is an initial step towards the creation of a comprehensive responsibility model that quantifies human causal responsibility in interactions with intelligent systems and advanced automation. The model can serve as an additional tool in the analysis of system design alternatives and policy decisions regarding human responsibility. Future work should expand the model, enabling it to deal with temporal effects, such as the time required to make a decision and its effects on the human's tendency to rely on the automation. To do so, the information theoretical framework we present here should be expanded to address temporal aspects by evaluating not just transmitted information, but also information transmission rates, and by defining a responsibility measure that also considers human channel capacity constraints. Future work should also test the predictive ability of the ResQu model by comparing the computed theoretical values to actual human performance, and by tying it to existing empirical research on human-automation interaction. A first empirical analysis of the ResQu model demonstrated that the model is not merely an abstract theoretical construct, but can also serve as a descriptive model that allows us to predict the actual responsibility users take on when using a system [90]. Lastly, future work should analyze the sensitivity of the ResQu model's responsibility estimates to different measurement errors of the input variables and their dependencies. Such analyses can help to identify the important variables that practitioners should focus on to obtain accurate estimates when applying the model.
V. APPENDIX

This appendix presents the basic Signal Detection Theory (SDT) concepts and formulas we used to model the probabilistic nature of the AWS and to perform the numerical calculations leading to the results presented in the manuscript.

The basic SDT model describes a system with a single sensor, observing an environment with only two possible entities: Target+Noise (referred to as Target) and Noise alone (referred to as Noise), which occur with probabilities Pt and 1−Pt, respectively. Both entities can be measured by a single observable parameter, which transforms the data into a scale value. The distributions of the values of the observed characteristic for target and noise entities differ (with targets usually assumed to have a larger mean value than noise), which allows some discrimination between the two types of entities. However, the distributions overlap, so when a certain value is observed, there is uncertainty whether the entity is indeed a target or whether it is actually noise. We use Gaussian distributions for the example, but the model does not depend on the assumptions regarding the distributions.

The sensor is required to identify and engage targets and to prevent engagement of noise. This binary decision is categorized as Engage or Abort. The responses are the outcomes of the decision process and can be categorized as True Positive (TP) when a target is present and the response is to engage, False Negative (FN) when a target is present and the response is not to engage, True Negative (TN) when no target is present and the response is not to engage, and False Positive (FP) when no target is present and the response is to engage. Table A.1 summarizes the classification of human responses.
TABLE A.1
CLASSIFICATION OF HUMAN RESPONSES USING SDT

                        Human Response: Engage      Human Response: Abort
Actual state: Target    True Positive (TP)          False Negative (FN)
Actual state: Noise     False Positive (FP)         True Negative (TN)

Signal detection theory differentiates between the detection sensitivity of a sensor and its response bias. The detection sensitivity (d') is the sensor's ability to distinguish between target and noise. This is represented by the shift of the signal probability density function, compared to the noise probability density function. When d' = 0, the sensor is unable to distinguish between target and noise. As d' increases, the ability to distinguish between the two entities increases. For every value of the observed parameter, one can compute the likelihoods of observing the value under the target distribution or the noise distribution. We assume a threshold likelihood ratio. This threshold is called the response criterion (β). The response criterion represents the sensor's tendency to favor one response over the other. The value of the observed parameter at the threshold is the cutoff point (C). When the observed value is below the cutoff point, the observation is classified as noise, and when it is above the cutoff point, it is classified as a target. The values of d' and β determine the probabilities of the four possible outcomes (TP, FN, FP, and TN), as presented in Fig. A.1.

Fig. A.1. The basic SDT model for Gaussian distributions, with probability density functions for target and noise, detection sensitivity (d'), response criterion (β), cutoff point (C) and probabilities of possible outcomes.

The distributions of noise and target over the values of the observable variable are denoted by E_n and E_t.
In the basic normal, equal variance SDT model we have:

E_n ~ N(μ_n, σ_n²),  E_t ~ N(μ_t, σ_t²) (A1)

μ_n = −0.5d′,  μ_t = 0.5d′,  σ_n² = σ_t² = 1 (A2)

In this case:

E_n ~ N(−0.5d′, 1),  E_t ~ N(0.5d′, 1) (A3)

d′ = μ_t − μ_n (A4)

β = f_t(c) / f_n(c) (A5)

ln β = ln [f_t(c)/f_n(c)] = ln f_t(c) − ln f_n(c) = d′·c (A6)

c = ln β / d′ (A7)

The probabilities of the different outcomes can be calculated as (see Fig. A.1):

P(TP) = P(E_t > c),  P(FN) = P(E_t ≤ c) (A8)
P(FP) = P(E_n > c),  P(TN) = P(E_n ≤ c)

Assume that there are cost-benefit values associated with each outcome: V_FP, V_TN, V_FN, V_TP (where V_FP and V_FN are negative costs, and V_TP and V_TN are positive benefits). It can be shown that the optimal response criterion, β*, which maximizes the expected value is:

β* = [(1 − P_t)/P_t] · [(V_TN − V_FP)/(V_TP − V_FN)] (A9)

where P_t is the target probability and 1 − P_t is the noise probability. We denote the ratio of the cost-benefit values as:

V_ratio = (V_TN − V_FP)/(V_TP − V_FN) (A10)

To conclude, under the above assumptions, if we know the probability of a target in the environment (P_t), the sensor's sensitivity (d′), and the ratio of the cost-benefit values (V_ratio), we can use the above formulas to calculate an optimal response criterion (β*) that maximizes the expected value. We can also compute the True Positive and False Positive rates.

The human and the automated module may have different detection sensitivities (d'Human and d'Automation), leading to different capabilities to classify whether a given entity is a legitimate target or noise. In addition, the human and the automation may also differ in the threshold value above which they classify an entity as a target (with response criteria βHuman and βAutomation, respectively). We next examine the case where the human detection is aided by the automated module, which produces an alert when it identifies a suspected target. Assume that the automated module has a detection sensitivity d'Automation and a response criterion βAutomation, which are known to the human. Denote the module's rates of True Positives by P̃_TP and of False Positives by P̃_FP.
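The single-detector chain (A5)–(A9) can be checked numerically with a short sketch. The values below are illustrative, not from the paper, except that P_t = 0.2 and V_ratio = 2/3 reproduce the criterion β* = 8/3 ≈ 2.7 used in the main-text example:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def optimal_beta(p_t, v_ratio):
    """Optimal response criterion beta* (A9)."""
    return (1.0 - p_t) / p_t * v_ratio

def outcome_probs(d_prime, beta):
    """P(TP), P(FN), P(FP), P(TN) from (A7)-(A8), with
    E_t ~ N(+0.5 d', 1) and E_n ~ N(-0.5 d', 1)."""
    c = math.log(beta) / d_prime          # cutoff point (A7)
    p_tp = 1.0 - phi(c - 0.5 * d_prime)   # P(E_t > c)
    p_fp = 1.0 - phi(c + 0.5 * d_prime)   # P(E_n > c)
    return p_tp, 1.0 - p_tp, p_fp, 1.0 - p_fp

beta_star = optimal_beta(0.2, 2.0 / 3.0)            # = 8/3, i.e. the ~2.7 of Fig. 4
p_tp, p_fn, p_fp, p_tn = outcome_probs(2.0, beta_star)   # d' = 2 is illustrative
```

Since the target distribution lies d' above the noise distribution, the hit rate exceeds the false-alarm rate for any finite criterion.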
Using Bayes' law, the human can use these probabilities to update the prior probability of a target in the environment, according to the automation classification results. Denote by P̂_t/"T" the human posterior probability for target, when the automated module classifies an entity as target:

P̂_t/"T" = P_t·P̃_TP / [P_t·P̃_TP + (1 − P_t)·P̃_FP] (A11)

The human uses P̂_t/"T", instead of P_t in (A9), to compute the optimal response criterion that maximizes the expected value, given that the module has classified an entity as a target. Denote by P̂_t/"N" the human posterior probability for target, when the automated module classifies an entity as noise:

P̂_t/"N" = P_t·P̃_FN / [P_t·P̃_FN + (1 − P_t)·P̃_TN] = P_t·(1 − P̃_TP) / [P_t·(1 − P̃_TP) + (1 − P_t)·(1 − P̃_FP)] (A12)

In the same manner, the human uses P̂_t/"N" in (A9) to compute the optimal response criterion that maximizes the expected value, given that the module has classified an entity as noise. Thus, when aided by an automated module, the human uses two different response criteria, one when the automated module classifies an entity as a target and the other when the automated module classifies an entity as noise. The human cutoff point when the entity was classified as target by the automation is smaller than when it was classified as noise. Fig. A.2 presents the two cutoff points, when human detection is aided by an automated module.

Fig. A.2. SDT model when human detection is aided by an automated module. There are two cutoff points, according to the module's classification results.

By adjusting the threshold according to the classification of the automated module, the human increases the probability of distinguishing between target and noise. Denote by d'effective the combined sensitivity of such a system. This is essentially the sensitivity of a single Gaussian SDT detector that has the same level of performance as the combined tandem human-automation system. By definition, d'effective is greater than or equal to d'Human and d'Automation.
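The posterior updates (A11)–(A12) are a direct application of Bayes' law and can be sketched in a few lines. The module rates used below (P̃_TP = 0.9, P̃_FP = 0.1) are illustrative, not from the paper:

```python
def posterior_target(p_t, p_tp_auto, p_fp_auto, label):
    """Human posterior probability of target given the module's label:
    (A11) for label "T", (A12) for label "N"."""
    if label == "T":
        num = p_t * p_tp_auto                   # P_t * P~_TP
        den = num + (1.0 - p_t) * p_fp_auto     # + (1-P_t) * P~_FP
    else:
        num = p_t * (1.0 - p_tp_auto)           # P_t * P~_FN
        den = num + (1.0 - p_t) * (1.0 - p_fp_auto)  # + (1-P_t) * P~_TN
    return num / den

# Illustrative module rates (not from the paper): P~_TP = 0.9, P~_FP = 0.1
p_T = posterior_target(0.2, 0.9, 0.1, "T")   # the prior 0.2 is revised upward
p_N = posterior_target(0.2, 0.9, 0.1, "N")   # the prior 0.2 is revised downward
```

Substituting each posterior into (A9) then yields the two human response criteria described in the text.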
Assume that (a) the distributions of the observed values of target and noise are normal with unit variance; (b) the cost and benefit values V_FP, V_TN, V_FN, V_TP are the same for the human and the automated module; (c) the cost and benefit values for the human are independent of the classification results of the automatic module; (d) the initial information the human and the module have about the state of the world is uncorrelated. Pollack and Madans [91] have shown that under the above simplifying assumptions the maximum value of d'effective, when the detectors preserve continuous information and an optimal decision rule is employed, is:

d'max = √(d'Human² + d'Automation²) (A13)

In our system, there is some loss of information, since the automation provides only binary information rather than continuous information, so in most cases d'effective will be lower than d'max:

d'effective ≤ √(d'Human² + d'Automation²) (A14)
d'Human ≤ d'effective,  d'Automation ≤ d'effective

Lemma 1: Resp(Z) is monotonically increasing in d'Human.

Proof: Assume that d'Human increases, and all other variables remain fixed. In particular, since d'Automation remains fixed, so do the human posterior probabilities for target, P̂_t/"T" and P̂_t/"N", presented in equations (A11) and (A12). Denote by β*"T" and β*"N", respectively, the optimal human response criteria that maximize the expected value, given that the module has classified an entity as a target or as noise:

β*"T" = [(1 − P̂_t/"T")/P̂_t/"T"] · [(V_TN − V_FP)/(V_TP − V_FN)]
β*"N" = [(1 − P̂_t/"N")/P̂_t/"N"] · [(V_TN − V_FP)/(V_TP − V_FN)] (A15)

These two criteria remain fixed as d'Human increases. For each of them, there is a corresponding cutoff point that can be derived using equation (A7). As d'Human increases, it follows from (A7) that the weight the human gives to the automation decreases monotonically to zero, so the two corresponding cutoff points move towards each other, approaching 0.
This means that as $d'_{\text{Human}}$ increases, when selecting an action, the human assigns more weight to $d'_{\text{Human}}$ and less weight to the automation classification results. In terms of information theory, this means that as $d'_{\text{Human}}$ increases, the human action selection variable $X$ depends less on the automation classification variable $Y$. Hence their mutual information $I(X:Y)$ decreases monotonically, $H(X/Y)$ increases monotonically, and so does $H(X/Y)/H(X)$. From equations (6) we can conclude that Resp(Z) is monotonically increasing in $d'_{\text{Human}}$. □

Lemma 2: Resp(Z) is monotonically decreasing in $d'_{\text{Automation}}$.

Proof: Assume that $d'_{\text{Automation}}$ increases, and all other variables remain fixed. As $d'_{\text{Automation}}$ increases, both $\tilde{P}_{TP} \to 1$ and $\tilde{P}_{TN} \to 1$ monotonically. We first examine the case when the automated module classifies an entity as a target. Here, the human uses $\hat{P}_{t/\text{"T"}}$ to compute the optimal response criterion, instead of $P_t$ in (A9). From (A11) we get $\hat{P}_{t/\text{"T"}} \to 1$ as $\tilde{P}_{TP} \to 1$. The increase in $\hat{P}_{t/\text{"T"}}$ lowers the human response criterion, and from (A7) this lowers the corresponding human cutoff point, so $P_{TP/\text{"T"}} \to 1$. Denote by $P_{\text{Engage}/\text{"T"}}$ the probability that the human will choose to engage an entity, given that it was classified as a target by the automation.

$$P_{\text{Engage}/\text{"T"}} = \hat{P}_{t/\text{"T"}}\, P_{TP/\text{"T"}} + \left(1-\hat{P}_{t/\text{"T"}}\right) P_{FP/\text{"T"}} \qquad \text{(A16)}$$

From the above, when $d'_{\text{Automation}}$ increases, $P_{\text{Engage}/\text{"T"}}$ monotonically increases to 1. In a similar manner, it can be shown that when $d'_{\text{Automation}}$ increases and an entity is classified as noise by the automation, $P_{\text{Engage}/\text{"N"}}$ decreases monotonically to 0. Therefore, as $d'_{\text{Automation}}$ increases, there is a higher probability that the human will act according to the automation classification, engaging an entity the automated module classified as a target and not engaging an entity it classified as noise.
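The behavior behind (A16) can be checked numerically. The sketch below places the noise and target means at $\mp d'/2$ (a common SDT convention; the paper's exact parameterization may differ) and shows that the lower "target" cutoff yields a higher engagement probability:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def engage_prob(post_t, cutoff, d_prime):
    """P(engage) for a given posterior and human cutoff, as in (A16)."""
    p_tp = 1 - Phi(cutoff - d_prime / 2)  # hit rate at this cutoff
    p_fp = 1 - Phi(cutoff + d_prime / 2)  # false-alarm rate at this cutoff
    return post_t * p_tp + (1 - post_t) * p_fp

# lowering the cutoff (the module said "target") raises both rates,
# and hence the engagement probability
p_low_cut = engage_prob(0.7, -0.5, 1.0)
p_high_cut = engage_prob(0.7, 0.5, 1.0)
```

The illustrative posterior 0.7 and cutoffs ±0.5 are our choices; any posterior and any pair of cutoffs ordered this way give the same qualitative result.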
In terms of information theory, this means that as $d'_{\text{Automation}}$ increases, the automation classification variable $Y$ provides more information to the human action selection variable $X$, monotonically reducing $H(X/Y)$, the remaining uncertainty about $X$ when $Y$ is known, towards zero.

DOUER and MEYER - RESPONSIBILITY QUANTIFICATION (ResQu) MODEL OF HUMAN INTERACTION WITH AUTOMATION

Denote by $T$ a Bernoulli variable that corresponds to the prevalence of targets in the environment. As $d'_{\text{Automation}}$ increases, the distribution of $Y$ approaches the distribution of $T$, so $H(Y)$ approaches a fixed known value $H(T) \in (0,1)$. In addition, as $d'_{\text{Automation}}$ increases, the distribution of $X$ approaches the distribution of $Y$, so $H(X)$ also approaches $H(T)$. From equations (6) we get

$$Resp(Z) = \frac{H(X/Y)}{H(X)} \to \frac{0}{H(T)} = 0 \qquad \text{(A17)}$$

Thus, we can conclude that Resp(Z) monotonically decreases in $d'_{\text{Automation}}$. □

Proposition 1: The comparative human responsibility Resp(Z) is monotonically decreasing in $d'_{\text{Automation}}$ and monotonically increasing in $d'_{\text{Human}}$.

Proof: The proof is immediate from Lemma 1 and Lemma 2. □

Proposition 2: Let $R$ denote the detection sensitivities ratio: $R = d'_{\text{Automation}} / d'_{\text{Human}}$. Suppose that the human and the automation associate the same payoffs with correct and incorrect actions. Then $\lim_{R\to\infty} Resp(Z) = 0$ and $\lim_{R\to 0} Resp(Z) = 1$.

Proof: Under the conditions of the proposition and our model assumptions, equation (A14) holds. Using $R = d'_{\text{Automation}} / d'_{\text{Human}}$:

$$d'_{\text{Automation}} \le d'_{\text{effective}} \le \sqrt{\frac{d'^{\,2}_{\text{Automation}}}{R^2} + d'^{\,2}_{\text{Automation}}} \qquad \text{(A18)}$$

Consequently, $\lim_{R\to\infty} d'_{\text{effective}} = d'_{\text{Automation}}$. This means that when both human and automation associate the same payoffs with correct and incorrect actions, and $d'_{\text{Automation}}$ is much larger than $d'_{\text{Human}}$, a rational human will base the action selection decision primarily upon the results of the automation classification.
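The monotonicity claims of Lemmas 1 and 2 can also be checked by direct computation. The sketch below builds the exact joint distribution of $(X, Y)$ for a tandem detector in which the human uses the optimal criterion $\beta^* = (1-\hat{P})/\hat{P}$ after each classification (equal payoffs, as in Proposition 2), then evaluates $Resp(Z) = H(X/Y)/H(X)$; the $\pm d'/2$ placement of the Gaussian means is our convention, not necessarily the paper's:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def rates(cut, d):
    """(hit, false-alarm) probabilities for unit-variance Gaussians at +/- d/2."""
    return 1 - Phi(cut - d / 2), 1 - Phi(cut + d / 2)

def resp(pt, d_auto, d_human):
    """Resp(Z) = H(X/Y)/H(X) for the tandem human-automation detector."""
    a_hit, a_fa = rates(0.0, d_auto)  # automation uses the neutral cutoff
    joint = {}                        # P(X = x, Y = y)
    for t, p_state in ((1, pt), (0, 1 - pt)):
        for y in (1, 0):
            p_y = (a_hit if y else 1 - a_hit) if t else (a_fa if y else 1 - a_fa)
            # human posterior for "target" after the module's call, (A11)/(A12)
            if y:
                post = pt * a_hit / (pt * a_hit + (1 - pt) * a_fa)
            else:
                post = pt * (1 - a_hit) / (pt * (1 - a_hit) + (1 - pt) * (1 - a_fa))
            cut = math.log((1 - post) / post) / d_human  # likelihood-ratio cutoff
            h_hit, h_fa = rates(cut, d_human)
            p_x1 = h_hit if t else h_fa
            for x, p_x in ((1, p_x1), (0, 1 - p_x1)):
                joint[(x, y)] = joint.get((x, y), 0.0) + p_state * p_y * p_x
    def H(ps):
        return -sum(p * math.log2(p) for p in ps if p > 0)
    py = [sum(v for (xx, yy), v in joint.items() if yy == i) for i in (0, 1)]
    px = [sum(v for (xx, yy), v in joint.items() if xx == i) for i in (0, 1)]
    return (H(joint.values()) - H(py)) / H(px)  # H(X/Y) / H(X)
```

Evaluating `resp` at a few points reproduces the lemmas: raising `d_auto` drives the ratio down, raising `d_human` drives it up.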
Therefore, when $R \to \infty$, the human action selection variable $X$ will be fully determined by the automation classification variable $Y$, and thus will have the same distribution as $Y$. In terms of entropy this means

$$\lim_{R\to\infty} H(X/Y) = 0, \qquad \lim_{R\to\infty} H(X) = H(Y) \qquad \text{(A19)}$$

From equations (6) we have

$$\lim_{R\to\infty} Resp(Z) = \lim_{R\to\infty} \frac{H(X/Y)}{H(X)} = \frac{0}{H(Y)} = 0 \qquad \text{(A20)}$$

The proof for $R \to 0$ is analogous. In this case, we have:

$$d'_{\text{Human}} \le d'_{\text{effective}} \le \sqrt{d'^{\,2}_{\text{Human}} + R^2 d'^{\,2}_{\text{Human}}} \qquad \text{(A21)}$$

Thus, $\lim_{R\to 0} d'_{\text{effective}} = d'_{\text{Human}}$. This means that when $d'_{\text{Human}}$ is much larger than $d'_{\text{Automation}}$, and both human and automation associate the same payoffs with correct and incorrect actions, rational humans will base the action selection decision primarily on their own detection capabilities. Therefore, when $R \to 0$, the human action selection variable $X$ will be independent of the automation classification variable $Y$. In terms of entropy this means:

$$\lim_{R\to 0} H(X/Y) = H(X) \qquad \text{(A22)}$$

$$\lim_{R\to 0} Resp(Z) = \lim_{R\to 0} \frac{H(X/Y)}{H(X)} = \frac{H(X)}{H(X)} = 1 \qquad \text{(A23)}$$

This completes the proof. □

REFERENCES

[1] M. Bergsten and J. Sandahl, "Algorithmic trading in the foreign exchange market," Sveriges Riksbank Economic Review, (1), pp. 31, 2013. [2] T. Hendershott, C. M. Jones and A. J. Menkveld, "Does Algorithmic Trading Improve Liquidity?" J. Finance, vol. 66, (1), pp. 1-33, 2011. doi: 10.1111/j.1540-6261.2010.01624.x. [3] L. Da-Yin, "Automation and integration in semiconductor manufacturing," in Semiconductor Technologies, Jan Grym (Ed.), InTech, pp. 39-56, 2010. Available: https://www.intechopen.com/books/semiconductor-technologies/automation-and-integration-in-semiconductor-manufacturing. [4] K. Doi, "Computer-aided diagnosis in medical imaging: historical review, current status and future potential," Comput. Med. Imaging Graphics, vol. 31, (4), pp. 198-211, 2007. [5] R. M. Rangayyan, F. J. Ayres and J. L. Desautels, "A review of computer-aided diagnosis of breast cancer: Toward the detection of subtle signs," Journal of the Franklin Institute, vol. 344, (3), pp.
312-348, 2007. [6] C. E. Billings, Aviation Automation. Mahwah, NJ: Erlbaum, 1997. [7] A. R. Pritchett, "Aviation Automation: General Perspectives and Specific Guidance for the Design of Modes and Alerts," Reviews of Human Factors and Ergonomics, vol. 5, (1), pp. 82-113, 2009. doi: 10.1518/155723409X448026. [8] T. Litman, "Autonomous vehicle implementation predictions," Victoria Transport Policy Institute, pp. 1-24, 2017. Available: http://www.vtpi.org/avip.pdf. [9] T. Luettel, M. Himmelsbach and H. Wuensche, "Autonomous Ground Vehicles Concepts and a Path to the Future," Proc IEEE, vol. 100, pp. 1831-1839, 2012. doi: 10.1109/JPROC.2012.2189803. [10] H. L. A. Hart and T. Honoré, Causation in the Law. Oxford University Press, Oxford, 1985. [11] H. L. A. Hart, Punishment and Responsibility: Essays in the Philosophy of Law. Oxford University Press, Oxford, 2008. [12] N. A. Vincent, "A structured taxonomy of responsibility concepts," in Moral Responsibility, N. A. Vincent, I. van de Poel, and J. van den Hoven, eds., Springer, Netherlands, pp. 15-35, 2011. [13] M. D. Alicke, D. R. Mandel, D. J. Hilton, T. Gerstenberg and D. A. Lagnado, "Causal conceptions in social explanation and moral evaluation: A historical tour," Perspectives on Psychological Science, vol. 10, (6), pp. 790-812, 2015. [14] F. Cushman, "Crime and punishment: Distinguishing the roles of causal and intentional analyses in moral judgment," Cognition, vol. 108, (2), pp. 353-380, 2008. [15] B. F. Malle, S. Guglielmo and A. E. Monroe, "A theory of blame," Psychological Inquiry, vol. 25, (2), pp. 147-186, 2014. [16] R. Rogers, M. D. Alicke, S. G. Taylor, D. Rose, T. L. Davis and D. Bloom, "Causal deviance and the ascription of intent and blame," Philosophical Psychology, vol. 32, (3), pp. 404-427, 2019. [17] K. G. Shaver, The Attribution of Blame: Causality, Responsibility, and Blameworthiness. Springer Science & Business Media, 2012. [18] M. S.
Moore, Causation and Responsibility: An Essay in Law, Morals, and Metaphysics. Oxford University Press on Demand, 2009. [19] S. Steel, Proof of Causation in Tort Law. Cambridge University Press, 2015. [20] R. W. Wright, "Causation in tort law," California Law Review, vol. 73, pp. 1735, 1985. [21] R. Parasuraman, T. B. Sheridan and C. D. Wickens, "A model for types and levels of human interaction with automation," IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 30, (3), pp. 286-297, 2000. doi: 10.1109/3468.844354. [22] D. A. Abbink, T. Carlson, M. Mulder, J. C. F. de Winter, F. Aminravan, T. L. Gibo and E. R. Boer, "A topology of shared control systems - finding common ground in diversity," IEEE Transactions on Human-Machine Systems, vol. 48, (5), pp. 509-525, 2018. doi: 10.1109/THMS.2018.2791570. [23] M. Johnson, J. M. Bradshaw, P. J. Feltovich, C. M. Jonker, B. Van Riemsdijk and M. Sierhuis, "The fundamental principle of coactive design: Interdependence must shape autonomy," in International Workshop on Coordination, Organizations, Institutions, and Norms in Agent Systems, pp. 172-191, 2010. [24] M. Johnson, J. M. Bradshaw, P. J. Feltovich, C. M. Jonker, M. B. Van Riemsdijk and M. Sierhuis, "Coactive design: Designing support for interdependence in joint activity," Journal of Human-Robot Interaction, vol. 3, (1), pp. 43-69, 2014. [25] D. Castelvecchi, "Can we open the black box of AI?" Nature News, vol. 538, (7623), pp. 20, 2016. [26] P. Scharre, Autonomous Weapons and Operational Risk. Center for a New American Security, Washington, DC, 2016. [27] D. G. Johnson and T. M. Powers, "Computer systems and responsibility: A normative look at technological complexity," Ethics and Information Technology, vol. 7, (2), pp. 99-107, 2005. [28] M. Coeckelbergh, "Moral responsibility, technology, and experiences of the tragic: From Kierkegaard to offshore engineering," Science and Engineering Ethics, vol. 18, (1), pp. 35-48, 2012. [29] R. D.
Cooter and T. S. Ulen, "An economic case for comparative negligence," New York University Law Review, vol. 61, pp. 1067, 1986. [30] J. V. Pinto, "Comparative Responsibility - An Idea Whose Time Has Come," Insurance Counsel Journal, vol. 45, pp. 115, 1978. [31] D. C. Sobelsohn, "Comparing Fault," Indiana Law Journal, vol. 60, pp. 413-462, 1984. [32] D. D. Woods, "Cognitive technologies: The design of joint human-machine cognitive systems," AI Magazine, vol. 6, (4), pp. 86, 1985. [33] D. D. Woods, "Conflicts between learning and accountability in patient safety," DePaul Law Review, vol. 54, pp. 485-502, 2004. [34] A. Matthias, "The responsibility gap: Ascribing responsibility for the actions of learning automata," Ethics and Information Technology, vol. 6, (3), pp. 175-183, 2004. doi: 10.1007/s10676-004-3422-1. [35] R. Sparrow, "Killer robots," Journal of Applied Philosophy, vol. 24, (1), pp. 62-77, 2007. [36] D. G. Johnson, "Technology with No Human Responsibility?" Journal of Business Ethics, vol. 127, (4), pp. 707, 2014. doi: 10.1007/s10551-014-2180-1. [37] H. Chockler and J. Y. Halpern, "Responsibility and blame: A structural model approach," Journal of Artificial Intelligence Research, vol. 22, pp. 93-115, 2004. [38] T. Gerstenberg and D. A. Lagnado, "Spreading the blame: The allocation of responsibility amongst multiple agents," Cognition, vol. 115, (1), pp. 166-171, 2010. [39] D. A. Lagnado, T. Gerstenberg and R. Zultan, "Causal responsibility and counterfactuals," Cognitive Science, vol. 37, (6), pp. 1036-1073, 2013. [40] A. F. Langenhoff, A. Wiegmann, J. Y. Halpern, J. B. Tenenbaum and T. Gerstenberg, "Predicting responsibility judgments from dispositional inferences and causal attributions," Working Paper, 2019. [41] R. Zultan, T. Gerstenberg and D. A. Lagnado, "Finding fault: causality and counterfactuals in group attributions," Cognition, vol. 125, (3), pp. 429-440, 2012. [42] M. L.
Cummings, "Lethal Autonomous Weapons: Meaningful human control or meaningful human certification?" IEEE Technology and Society Magazine, vol. 38, (4), pp. 20-26, 2019. [43] L. Righetti, Q. Pham, R. Madhavan and R. Chatila, "Lethal Autonomous Weapon Systems [Ethical, Legal, and Societal Issues]," IEEE Robotics & Automation Magazine, vol. 25, (1), pp. 123-126, 2018. [44] ICRC, "Autonomous weapon systems: Implications of increasing autonomy in the critical functions of weapons," in Expert Meeting of International Committee of the Red Cross, pp. 1-94, 2016. Available: http://icrcndresourcecentre.org/wp-content/uploads/2017/11/4283_002_Autonomus-Weapon-Systems_WEB.pdf. [45] A. Wyatt, "Charting great power progress toward a lethal autonomous weapon system demonstration point," Defence Studies, pp. 1-20, 2019. [46] R. Crootof, "The Killer Robots Are Here: Legal and Policy Implications," Cardozo Law Review, vol. 36, (5), 2015. [47] B. Docherty, R. A. Althaus, A. Brinkman, C. Jones and R. B. Skipper, "Losing Humanity: The Case Against Killer Robots," Science and Engineering Ethics, vol. 20, (1), 2012. [48] R. Sparrow, "Predators or plowshares? Arms control of robotic weapons," IEEE Technology and Society Magazine, vol. 28, (1), pp. 25-29, 2009. [49] M. L. Cummings, "Automation and Accountability in Decision Support System Interface Design," Journal of Technology Studies, vol. 32, (1), pp. 23-31, 2006. [50] A. Gerdes, "Lethal autonomous weapon systems and responsibility gaps," Philosophy Study, vol. 8, (5), pp. 231-239, 2018. [51] P. Asaro, "On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making," International Review of the Red Cross, vol. 94, (886), pp. 687-709, 2012. [52] B. L. Docherty, Mind the Gap: The Lack of Accountability for Killer Robots. Human Rights Watch, 2015. [53] S. Goose, "The case for banning killer robots," Communications of the ACM, vol. 58, (12), pp. 43-45, 2015. doi: 10.1145/2835963. [54] A.
Guersenzvaig, "Autonomous Weapon Systems: Failing the Principle of Discrimination," IEEE Technology and Society Magazine, vol. 37, (1), pp. 55-61, 2018. [55] A. Hauptman, "Autonomous Weapons and the Law of Armed Conflict," Military Law Review, vol. 218, pp. 170-195, 2013. [56] T. Hellström, "On the moral responsibility of military robots," Ethics and Information Technology, vol. 15, (2), pp. 99-107, 2013. [57] M. Noorman and D. G. Johnson, "Negotiating autonomy and responsibility in military robots," Ethics and Information Technology, vol. 16, (1), pp. 51-62, 2014. [58] M. Noorman, "Responsibility practices and unmanned military technologies," Sci. Eng. Ethics, vol. 20, (3), pp. 809-826, 2014. [59] N. Sharkey, "Saying 'no!' to lethal autonomous targeting," Journal of Military Ethics, vol. 9, (4), pp. 369-383, 2010. [60] J. I. Walsh, "Political accountability and autonomous weapons," Research & Politics, vol. 2, (4), pp. 1-6, 2015. [61] USDD, "Directive 3000.09: Autonomy in Weapon Systems," United States of America: Department of Defense, pp. 1-15, 2012. Available: http://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodd/300009p.pdf. [62] ICRC, "Autonomous weapon systems: Technical, military, legal and humanitarian aspects," International Committee of the Red Cross, pp. 1-202, 2014. Available: https://www.icrc.org/en/download/file/1707/4221-002-autonomous-weapons-systems-full-report.pdf. [63] UNIDIR, "The weaponization of increasingly autonomous technologies: Considering how meaningful human control might move the discussion forward," United Nations Institute for Disarmament Research, pp. 1-9, 2014. Available: http://www.unidir.ch/files/publications/pdfs/considering-how-meaningful-human-control-might-move-the-discussion-forward-en-615.pdf. [64] M. Horowitz and P. Scharre, Meaningful Human Control in Weapon Systems: A Primer. Center for a New American Security, 2015. [65] F. Santoni de Sio and J. van den Hoven, "Meaningful human control over autonomous systems: A philosophical account," Frontiers in Robotics and AI, vol. 5, pp. 1-14, 2018. doi: 10.3389/frobt.2018.00015. [66] F. Ficuciello, G. Tamburrini, A. Arezzo, L. Villani and B. Siciliano, "Autonomy in surgical robots and its meaningful human control," Paladyn, Journal of Behavioral Robotics, vol. 10, (1), pp. 30-43, 2019. [67] G. Mecacci and F. S. de Sio, "Meaningful human control as reason-responsiveness: the case of dual-mode vehicles," Ethics and Information Technology, pp. 1-13, 2019. [68] F. Santoni de Sio and J. Van den Hoven, "Meaningful human control over autonomous systems: a philosophical account," Frontiers in Robotics and AI, vol. 5, pp. 15, 2018. [69] A. R. Pritchett, S. Y. Kim and K. M. Feigh, "Measuring Human-Automation Function Allocation," Journal of Cognitive Engineering and Decision Making, vol. 8, (1), pp. 52-77, 2014. doi: 10.1177/1555343413490166. [70] M. C. Elish, "Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction," We Robot 2016, 2016. [71] M. C. Elish and T. Hwang, "Praise the Machine! Punish the Human! The Contradictory History of Accountability in Automated Aviation," Intelligence & Autonomy Working Paper, Data & Society Research Institute, 2015. [72] M. C. Canellas and R. A. Haga, "Toward meaningful human control of autonomous weapons systems through function allocation," IEEE International Symposium on Technology and Society (ISTAS 2015), pp. 1-7, 2015. [73] R. C. Conant, "Laws of information which govern systems," IEEE Transactions on Systems, Man, and Cybernetics, (4), pp. 240-255, 1976. [74] C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, vol. 27, (3), pp. 379-423, 1948. [75] T. M. Cover and J. A. Thomas, Elements of Information Theory. John Wiley & Sons, New York, USA, 2012. [76] H.
Theil, "On the Estimation of Relationships Involving Qualitative Variables," American Journal of Sociology, vol. 76, (1), pp. 103-154, 1970. doi: 10.1086/224909. [77] H. Theil, Statistical Decomposition Analysis: With Applications in the Social and Administrative Sciences. Amsterdam, North-Holland Pub. Co., 1972. [78] J. Meyer, "Effects of warning validity and proximity on responses to warnings," Human Factors, vol. 43, (4), pp. 563-572, 2001. [79] J. Meyer, "Conceptual issues in the study of dynamic hazard warnings," Human Factors, vol. 46, (2), pp. 196-204, 2004. [80] G. Vashitz, J. Meyer, Y. Parmet, R. Peleg, D. Goldfarb, A. Porath and H. Gilutz, "Defining and measuring physicians' responses to clinical reminders," Journal of Biomedical Informatics, vol. 42, (2), pp. 317-326, 2009. [81] K. M. Feigh and A. R. Pritchett, "Requirements for effective function allocation: A critical review," Journal of Cognitive Engineering and Decision Making, vol. 8, (1), pp. 23-32, 2014. [82] M. Canellas and R. Haga, "Lost in Translation: Building a Common Language for Regulating Autonomous Weapons," IEEE Technology and Society Magazine, vol. 35, (3), pp. 50-58, 2016. [83] D. Eskins and W. H. Sanders, "The multiple-asymmetric-utility system model: A framework for modeling cyber-human systems," in 2011 Eighth International Conference on Quantitative Evaluation of SysTems, 2011, pp. 233-242. [84] J. Cámara, G. A. Moreno and D. Garlan, "Reasoning about human participation in self-adaptive systems," in Proceedings of the 10th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, 2015, pp. 146-156. [85] J. Cámara, D. Garlan, G. A. Moreno and B. Schmerl, "Evaluating trade-offs of human involvement in self-adaptive systems," in Managing Trade-Offs in Adaptable Software Architectures, Morgan Kaufmann, pp. 155-180, 2017. [86] M. Gil, M. Albert, J. Fons and V.
Pelechano, "Designing human-in-the-loop autonomous Cyber-Physical Systems," International Journal of Human-Computer Studies, vol. 130, pp. 21-39, 2019. [87] D. M. Green and J. A. Swets, Signal Detection Theory and Psychophysics. New York, USA, Wiley, 1966. [88] N. A. Macmillan and C. D. Creelman, Detection Theory: A User's Guide. New York, NY, USA, Cambridge University Press, 2004. [89] T. D. Wickens, Elementary Signal Detection Theory. Oxford University Press, USA, 2002. [90] N. Douer and J. Meyer, Theoretical, Measured and Subjective Responsibility in Aided Decision Making, Working Paper, 2019. Available: https://arxiv.org/ftp/arxiv/papers/1904/1904.13086.pdf. [91] I. Pollack and A. B. Madans, "On the Performance of a Combination of Detectors," Human Factors, vol. 6, (5), pp. 523-531, 1964.
Subagent perfect minimax

This post continues the study of minimax forecasting. The minimax decision rule has the pathology that, when events are sufficiently "optimistic", behavior can become highly suboptimal. This is analogous to off-policy irrational behavior in Nash equilibria of games in extensive form. In order to remedy the problem, we introduce a refinement called "subagent perfect minimax," somewhat analogous to subgame perfect equilibria and other related solution concepts. It is possible to prove existence and, when the model factorizes, dynamic consistency. The proofs are omitted, but we can easily provide them if necessary.

## Motivation

Consider the following class of environments: during the first step, the agent gains $1 or action b which leads to gaining $0. The minimax payoff of this class is $0 in the first step, choose b if you gained $1, which is the same as the worst-case (p=0) payoff of π∗. Note that the minimax environment has p=0 and π only differs from π∗ on histories which are impossible in that environment. Moreover, if we considered the class p∈[ϵ,1] for some ϵ>0, the minimax payoff would be (1+ϵ), and π∗ would be the sole minimax policy. Therefore, in order to eliminate the pathology, we need to formulate a stability condition that ensures any admissible history is treated as occurring with at least infinitesimal probability.

## Results

Now consider the forecasting setting, with action set A, observation set O, time discount function γ:N→R≥0 and reward function r:(A×O)∗→R. As before, γ and r define the utility function u:(O∗→A)×Oω→R. Consider Φ∈PC(Oω) and denote O+Φ := {x∈O+ ∣ ∃μ∈Φ: μ(xOω)>0}

# Definition 1

Consider X⊆O+Φ finite. π∗∈P(O∗→A) is called an X-stable minimax policy for Φ when there are sequences {ϵ(n):X→(0,1)}n∈N and {π(n)∈P(O∗→A)}n∈N s.t. ϵ(n)→0, π(n)→π∗ and π(n) i
Rationality Quotes Thread March 2015 Another month, another rationality quotes thread. The rules are: * Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.) * Do not quote yourself. * Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here. * No more than 5 quotes per person per monthly thread, please. * Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
Do Sufficiently Advanced Agents Use Logic? This is a continuation of a discussion with Vanessa from the MIRIxDiscord group. I'll make some comments on things Vanessa has said, but those should not be considered a summary of the discussion so far. My comments here are also informed by discussion with Sam. 1: Logic as Proxy 1a: The Role of Prediction Vanessa has said that predictive accuracy is sufficient; consideration of logic is not needed to judge (partial) models. A hypothesis should ultimately ground out to perceptual information. So why is there any need to consider other sorts of "predictions" it can make? (IE, why should we think of it as possessing internal propositions which have a logic of their own?) But similarly, why should agents use predictive accuracy to learn? What's the argument for it? Ultimately, predicting perceptions ahead of time should only be in service of achieving higher reward. We could instead learn from reward feedback alone. A (partial) "hypothesis" would really be a (partial) strategy, helping us to generate actions. We would judge strategies on (something like) average reward achieved, not even trying to predict precise reward signals. The agent still receives incoming perceptual information, and strategies can use it to update internal states and to inform actions. However, strategies are not asked to produce any predictions. (The framework I'm describing is, of course, model-free RL.) Intuitively, it seems as if this is missing something. A model-based agent can learn a lot about the world just by watching, taking no actions. However, individual strategies can implement prediction-based learning within themselves. So, it seems difficult to say what benefit model-based RL provides beyond model-free RL, besides a better prior over strategies. It might be that we can't say anything recommending model-based learning over model free in a standard bounded-regret framework. 
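The model-free framing in the paragraphs above, judging strategies by reward feedback alone and never asking them for predictions, can be made concrete with a toy bandit (our own construction, not from the post):

```python
import random

random.seed(1)

# Toy: two fixed "strategies" (arms); arm 1 pays 1 with prob 0.8, arm 0 with
# prob 0.2. Model-free selection scores each strategy by its observed average
# reward, with no prediction of perceptions anywhere in the loop.

def pull(arm):
    return 1.0 if random.random() < (0.8 if arm else 0.2) else 0.0

scores = {0: [], 1: []}
for _ in range(2000):
    arm = random.randrange(2)
    scores[arm].append(pull(arm))

avg = {a: sum(r) / len(r) for a, r in scores.items()}
best = max(avg, key=avg.get)  # strategy selection from reward feedback alone
```

Nothing here predicts observations; whatever prediction-based learning exists would live inside an individual strategy, which is exactly the point being made about model-free RL subsuming model-based machinery.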
(I actually haven't thought about it much -- but the argument that model-free strategies can impleme
AGI's Opposing Force In the Death Note anime, the Death Note gives the protagonist the ability to create a totalitarian justice system. That is, if there isn't a force to stop him. But a character named L quickly catches onto him, and makes his ability to "serve justice" harder. In other words, the Death Note is a powerful tool (like AGI), but opposing forces hamper its ability to affect the world. In the Death Note series, L only responded when he noticed the anomalous effects of the Death Note. He took a reactive approach. When talking about AGI, usually the discussion gravitates towards "and then the uptick in intelligence goes out of control, leading to completely unpredictable behavior". Which then leads to the conclusion that "in such chaotic situations, the chance that things go wrong is significantly higher than that all stars align". Although I agree with the sentiment, and see truth in all the ways AGI could be dangerous, sometimes we disregard the "L" in reasoning about AGI. In nature, many processes evolve to completely chaotic situations unless constrained by opposing forces (Entropy Theory). Any species seeks dominance, until another species comes along to keep populations under control. Physical processes gather entropy till we step in and reduce it (hence why I have to clean my room every now and then). Even psychology shows these patterns, where self-serving thoughts radicalize, giving rise to wide-ranging mental issues, till a force stops them . I often hear the argument "the required IQ to destroy the world decreases every year". I don't fully agree with that sentiment. As new technologies arise to do a lot of harm with little IQ, new constraints arise to reduce harm. Nuclear weapons are incredibly scary and a huge risk, so we have seen global nuclear disarmament, which shows in the sharp reduction of recent nuclear close calls. The IQ required to destroy the world through nuclear weapons has decreased because of the "L" that arose to oppose it. 
As computer inte
Amplifying the Imitation Effect for Reinforcement Learning of UCAV's Mission Execution

1 Introduction
---------------

Reinforcement learning (RL) aims to learn an optimal policy of the agent for a control problem by maximizing the expected return. RL shows high performance in dense reward environments such as games (Mnih et al., 2013). However, in many real-world problems, rewards are extremely sparse, and in this case, it is necessary to explore the environment. The RL literature suggests exploration methods to solve this challenge, such as count-based exploration (Bellemare et al., 2016; Ostrovski et al., 2017), entropy-based exploration (Haarnoja et al., 2017; Ziebart, 2010) and curiosity-based exploration (Silvia, 2012; Pathak et al., 2017; Burda et al., 2018a; Haber et al., 2018). In recent years, many researchers have added an exploration bonus, often called curiosity or intrinsic reward, which is the difference between the predicted state and the actual next state. The intrinsic reward is very efficient in exploration because the network for predicting the next state drives the agent to behave unexpectedly. This paper focuses on combining self-imitation learning (SIL) (Oh et al., 2018) and random network distillation (RND) (Burda et al., 2018b). SIL is an algorithm that indirectly leads to deep exploration by exploiting only good decisions of the past, whereas RND solves the problem of hard exploration by giving an exploration bonus through deterministic prediction error. The RND bonus is a deterministic prediction error of a neural network predicting features of the observations, and the authors have shown significant performance in some hard exploration Atari games. In hard exploration environments, it does not make sense for SIL to exploit a good decision of the past.
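The RND bonus just described, the prediction error of a trained network against a fixed, randomly initialized target network, can be sketched without a deep-learning framework; the linear "networks", dimensions, and learning rate below are illustrative only:

```python
import random

random.seed(0)
DIM, FEAT = 8, 4

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def forward(w, x):
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

target = rand_matrix(FEAT, DIM)      # fixed random target network
predictor = rand_matrix(FEAT, DIM)   # trained to imitate the target

def intrinsic_reward(state):
    """RND exploration bonus: squared prediction error against the target."""
    t, p = forward(target, state), forward(predictor, state)
    return sum((ti - pi) ** 2 for ti, pi in zip(t, p))

def train_step(state, lr=0.01):
    """One gradient step of the predictor toward the target on `state`."""
    t, p = forward(target, state), forward(predictor, state)
    for i in range(FEAT):
        for j in range(DIM):
            predictor[i][j] -= lr * 2 * (p[i] - t[i]) * state[j]

s = [1.0] * DIM
before = intrinsic_reward(s)
for _ in range(200):
    train_step(s)
after = intrinsic_reward(s)  # the bonus shrinks for familiar states
```

Because the bonus decays only for states the predictor has recently fit, it stays high for novel states; the forgetting problem the paper raises is that training on recent states alone can let the error grow back on older ones.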
In other words, SIL requires an intrinsic reward. Meanwhile, in RND, catastrophic forgetting could occur during learning, because the predictor network learns about the states that the agent visited recently. Consequently, the prediction error increases, and the exploration bonus increases for previously visited states. We will describe this phenomenon in detail in section 4.3. This paper introduces amplifying the imitation effect (AIE) by combining SIL and RND to drive deep exploration. In addition, we introduce techniques that can enhance the strength of the proposed network. Adding an intrinsic penalty reward to the state that the agent continuously visits leads to deviation from the current converged policy. Moreover, to avoid catastrophic forgetting, we use a pool of stored samples to update the predictor network during imitation learning, such that we can uniformly learn the visited states by the predictor network. We have experimentally demonstrated that these techniques lead to deep exploration. We verify our algorithm using unmanned combat aerial vehicle (UCAV) mission execution. Some studies have applied RL to UCAV maneuvers (Liu & Ma, 2017; Zhang et al., 2018; Minglang et al., 2018). However, those studies simply defined the state and action and experimented in a dense reward environment. We constructed the experimental environment by simulating the flight maneuvers of the UCAV in a three-dimensional (3D) space. The objective of the RL agent is to learn the maneuvers by which the UCAV reaches a target point while avoiding missiles from the enemy air defense network. The main contributions of this paper are as follows:

* We show that SIL and RND are complementary and that combining these two algorithms is very efficient for exploration.
* We present several techniques to amplify the imitation effect.
* The performance of the RL applied to the UCAV control problem is excellent.
The learning method outputs reasonable UCAV maneuvers in the sparse reward environment.

2 Problem Definition
---------------------

We overlapped the air defense network as in an actual battlefield environment, and we aimed to learn maneuvers by which the UCAV reaches the target from the starting point, avoiding missiles, in a limited time period. For the UCAV dynamics, we applied the following equations of motion of a 3-degrees-of-freedom point mass model (Kim & Kim, 2007):

$$\dot{x} = V\cos\gamma\cos\psi, \qquad \dot{y} = V\cos\gamma\sin\psi, \qquad \dot{z} = V\sin\gamma$$
$$\dot{V} = \frac{T-D}{m} - g\sin\gamma, \qquad \dot{\psi} = \frac{g\,n\sin\phi}{V\cos\gamma}, \qquad \dot{\gamma} = \frac{g}{V}\left(n\cos\phi - \cos\gamma\right) \qquad (1)$$

where (x, y, z) is the position of the UCAV, V is the velocity,
$\psi$ is the heading angle, and $\gamma$ is the flight path angle. $T$, $n$ and $\phi$ are the control inputs of the UCAV, denoting the engine thrust, load factor and bank angle, respectively. We use these control inputs as the actions of our RL framework. Figure [1](#S2.F1 "Figure 1 ‣ 2 Problem Definition ‣ Amplifying the Imitation Effect for Reinforcement Learning of UCAV’s Mission Execution") shows the UCAV's bank angle, flight path angle, and heading angle. The engine thrust affects the velocity of the UCAV; the bank angle and load factor affect the heading angle and flight path angle.

![Refer to caption](/html/1901.05856/assets/x1.png)

Figure 1: Bank angle, flight path angle and heading angle of the UCAV.

For the missile, we applied proportional navigation guidance to chase the UCAV (Moran & Altilar, [2005](#bib.bib19)). We assume that if the distance between the UCAV and the missile is less than 0.5 km, the UCAV is unable to avoid the missile.

### 2.1 State

In general, in an environment such as an Atari game, the game image is preprocessed and used as the state, and a convolutional neural network is employed as the network structure. In this study, however, the UCAV's coordinate information and the radar information used to detect missiles are vectorized to form the state of the UCAV control problem. A multilayer perceptron is therefore more appropriate here than a convolutional neural network, which is generally adopted for representing an image state in arcade games.

#### 2.1.1 Coordinate Representation

In a coordinate system, the coordinate points do not have a linear relationship. For example, the two-dimensional (2D) coordinate (10, 10) is not ten times more valuable than the coordinate (1, 1). Therefore, feeding coordinates into the state as raw real numbers is not reasonable and causes learning instability.
One way to represent the coordinates in the learning environment is a one-hot encoding vector. However, one-hot encoding increases the dimension of the vector as the coordinate range grows and is only possible for integer coordinates. In this study, we introduce a method to represent the coordinate system efficiently. The proposed method converts the coordinates into a one-hot encoding vector for each axis and then concatenates the vectors of the axes. A joint one-hot encoding requires 40,000 rows (200 × 200) to represent (1, 1) when $x$ and $y$ range from 1 to 200, but with this method $c_{(1,1)} = [(1,0,\cdots,0)\,(1,0,\cdots,0)]'$ requires only 400 rows (200 + 200).

We additionally extended this method to real-valued coordinates by introducing weights within the vector. For example, 1.3 is 70% close to 1 and 30% close to 2; in other words, 1.3 is a number with a weight of 70% on 1 and 30% on 2. Thus, 1.3 can be represented as $c_{(1.3)} = (0.7, 0.3, \cdots, 0)'$ (200 rows). Moreover, the resulting vector can be reduced to a smaller dimension. We reduced the coordinate scale by a factor of 10; consequently, the number 1.3 can be represented as $c_{(1.3)} = (0.13, 0, \cdots, 0)'$ (20 rows). This method efficiently represents real coordinates within a limited dimension.
We call this method the efficient coordinate vector (ECV).

#### 2.1.2 Angle Representation

Representing an angle as a state is also difficult in RL because the angle wraps around at $360^{\circ}$. For example, suppose that we change the angle from $10^{\circ}$ to $350^{\circ}$. Even if we use a real value or the ECV method, the agent will perceive a change of $340^{\circ}$, although the actual difference is only $20^{\circ}$. Such an angle representation confuses the RL agent. We solve this problem with the polar coordinate system and the ECV. Using polar coordinates, we can convert $r$ and $\theta$ into Cartesian coordinates $x$ and $y$ through trigonometric functions and then represent these coordinates as a state through the ECV. In other words, the angle is converted into a position on the circle using the polar coordinate system and is then represented as a state through the ECV.
For example, as shown in figure [2](#S2.F2 "Figure 2 ‣ 2.1.2 Angle Representation ‣ 2.1 State ‣ 2 Problem Definition ‣ Amplifying the Imitation Effect for Reinforcement Learning of UCAV’s Mission Execution"), the point on the circle corresponding to $17^{\circ}$ can be represented through the ECV as $c_{(17^{\circ})} = (0, \cdots, 0.71, 0.29, 0, \cdots, 0, 0.302, 0.698)'$ (20 rows).

![Refer to caption](/html/1901.05856/assets/x2.png)

Figure 2: Example of angle representation.

#### 2.1.3 Final State

We finally used the following information as the state of the UCAV control problem:

* Flight path consisting of the five most recent steps of the UCAV
* Path angle, heading angle and bank angle for the two most recent steps of the UCAV
* Velocity and load factor of the UCAV
* Distance between the UCAV and the missile
* Horizontal and vertical angles between the UCAV and the missile

### 2.2 Action

The action is an input combination of engine thrust, bank angle and load factor in Equation (1). Each input has three choices: increase, hold, and decrease. In addition, we added an action that resets all inputs to their default values (bank angle: $0^{\circ}$, load factor: $1G$, engine thrust: $50\,kN$). This action allows the UCAV to cruise. The total number of actions is 28.
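As a concrete illustration, the per-axis ECV encoding of Section 2.1.1 can be sketched as below. This is a minimal sketch: the axis range, vector dimension, and function names are illustrative assumptions, and the exact binning used in the paper may differ.

```python
import numpy as np

def ecv(value, low, high, dim):
    """Efficient coordinate vector (ECV) for one axis: a real value is
    spread over the two nearest bins with weights given by its fractional
    position, instead of a pure one-hot over integer coordinates."""
    pos = (value - low) / (high - low) * (dim - 1)  # position on the bin axis
    lo = int(np.floor(pos))
    hi = min(lo + 1, dim - 1)
    frac = pos - lo
    vec = np.zeros(dim)
    vec[lo] += 1.0 - frac  # weight toward the lower bin
    vec[hi] += frac        # weight toward the upper bin
    return vec

def ecv_xy(x, y, low=1.0, high=200.0, dim=200):
    """Concatenate per-axis ECVs: 2 * dim entries instead of the dim ** 2
    entries a joint one-hot over (x, y) would require."""
    return np.concatenate([ecv(x, low, high, dim), ecv(y, low, high, dim)])
```

For example, `ecv(1.3, 1.0, 200.0, 200)` puts weight 0.7 on the bin for 1 and 0.3 on the bin for 2, matching the $c_{(1.3)}$ example above.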
### 2.3 Reward

The default reward is zero, except in the following situations:

* The outcome of a missile skirmish
* Whether the UCAV has arrived at its target point
* The cruise condition

A reward term is attached to the cruise condition because the UCAV should not maintain maximum speed while cruising: we impose a penalty of -0.01 if the speed reaches the maximum speed.

3 Related Work
---------------

**Experience replay** Experience replay (Lin, [1992](#bib.bib12)) is a technique for exploiting past experiences; the Deep Q-Network (DQN) achieved human-level performance in Atari games using this technique (Mnih et al., [2013](#bib.bib16), [2015](#bib.bib17)). Prioritized experience replay (Schaul et al., [2015](#bib.bib24)) samples past experience based on the temporal-difference error. ACER (Wang et al., [2016](#bib.bib30)) and Reactor (Gruslys et al., [2017](#bib.bib7)) utilize a replay memory in the actor-critic algorithm (Sutton et al., [2000](#bib.bib29); Konda & Tsitsiklis, [2000](#bib.bib11)). However, replay can be inefficient if the past policy is too different from the current policy (Oh et al., [2018](#bib.bib20)). SIL is immune to this disadvantage because it exploits only past experiences whose returns were higher than the current value estimate.

**Exploration** Exploration has been a major challenge for RL, and many studies have proposed methods to enhance it. The count-based exploration bonus (Strehl & Littman, [2008](#bib.bib27)) is an intuitive and effective method in which an agent receives a bonus for visiting a novel state, and the bonus decreases for frequently visited states. Several studies estimate the density of a state to provide such a bonus in a large state space (Bellemare et al., [2016](#bib.bib1); Ostrovski et al., [2017](#bib.bib21); Fox et al., [2018](#bib.bib5); Machado et al., [2018](#bib.bib14)).
Recent studies have introduced the prediction error (curiosity), defined as the difference between the predicted and the actual next state, for exploration (Silvia, [2012](#bib.bib25); Stadie et al., [2015](#bib.bib26); Pathak et al., [2017](#bib.bib22); Burda et al., [2018a](#bib.bib2); Haber et al., [2018](#bib.bib9)). These studies use the prediction error as an exploration bonus ($i_t$) that rewards the agent for unexpected behavior. However, the prediction error has a stochastic character because the target function is stochastic, and the architecture of the predictor network may be too limited to generalize over the states of the environment. To solve these problems, RND (Burda et al., [2018b](#bib.bib3)) makes the target network deterministic by fixing it with randomized weights and gives the predictor network the same architecture as the target network. Other methods for efficient exploration include adding parameter noise within the network (Strehl & Littman, [2008](#bib.bib27); Plappert et al., [2017](#bib.bib23)), maximum-entropy policies (Haarnoja et al., [2017](#bib.bib8); Ziebart, [2010](#bib.bib32)), adversarial self-play (Sukhbaatar et al., [2017](#bib.bib28)) and learning diverse policies (Eysenbach et al., [2018](#bib.bib4); Gangwani et al., [2018](#bib.bib6)).

**Self-Imitation Learning** SIL can indirectly lead to deep exploration by imitating the good decisions of the past (Oh et al., [2018](#bib.bib20)).
To exploit past decisions, the authors used a replay buffer $\mathcal{D} = \{(s_t, a_t, R_t)\}$, where $s_t$ and $a_t$ are the state and action at step $t$, and $R_t = \sum_{k=t}^{\infty}\gamma^{k-t} r_k$ is the discounted sum of rewards from step $t$ with discount factor $\gamma$.
The authors proposed the following off-policy actor-critic loss:

$$\mathcal{L}^{sil} = \mathbb{E}_{s,a,R\in\mathcal{D}}\left[\mathcal{L}^{sil}_{policy} + \beta^{sil}\,\mathcal{L}^{sil}_{value}\right] \qquad(2)$$

$$\mathcal{L}^{sil}_{policy} = -\log\pi_{\theta}(a|s)\,\left(R - V_{\theta}(s)\right)_{+} \qquad(3)$$

$$\mathcal{L}^{sil}_{value} = \frac{1}{2}\left\|\left(R - V_{\theta}(s)\right)_{+}\right\|^{2} \qquad(4)$$

where $(\cdot)_{+} = \max(\cdot, 0)$, and $\pi_{\theta}$ and $V_{\theta}(s)$ are the policy (i.e., actor) and the value function parameterized by $\theta$. $\beta^{sil}\in\mathbb{R}^{+}$ is a hyperparameter for the value loss. Intuitively, for the same state, if the past return is greater than the current value ($R > V_{\theta}$), the past behavior was a good decision, and imitating it is desirable. However, if the past return is less than the current value ($R < V_{\theta}$), imitating the behavior is not desirable. The authors focused on combining SIL with advantage actor-critic (A2C) (Mnih et al., [2016](#bib.bib18)) and showed significant performance in experiments on hard-exploration Atari games.
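In numpy-style pseudocode, the SIL objective above reduces to clamping the advantage at zero. This is a minimal sketch: it treats $(R - V_\theta)_+$ as a constant, uses an illustrative default for $\beta^{sil}$, and omits the gradient-stopping and optimizer details of a full implementation.

```python
import numpy as np

def sil_loss(log_probs, returns, values, beta_sil=0.01):
    """Self-imitation loss: only transitions whose past return R exceeds
    the current value estimate V contribute, via the clamp (R - V)_+."""
    adv = np.maximum(returns - values, 0.0)  # (R - V)_+
    policy_loss = -log_probs * adv           # imitate good past actions
    value_loss = 0.5 * adv ** 2              # push V up toward R
    return float(np.mean(policy_loss + beta_sil * value_loss))
```

When every stored return is below the current value estimate, the loss is exactly zero and no imitation update occurs, which is the mechanism that filters out bad past decisions.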
**Random Network Distillation** The authors proposed a fixed target network ($f$) with randomized weights and a predictor network ($\widehat{f}$) trained on the output of the target network. The predictor network is trained by gradient descent to minimize the expected mean squared error $\|\widehat{f}(x;\theta) - f(x)\|^{2}$, and this error is used as the exploration bonus ($i_t$). Intuitively, the prediction error is large for a novel state and decreases for states that have been visited frequently. However, if the agent converges to a local policy, the prediction error ($i_t$) may no longer occur. Furthermore, using RND can cause catastrophic forgetting: the predictor network learns about the states that the agent currently visits and forgets about previously visited states. Consequently, the prediction error for past states increases again, and the agent may return to a past policy.
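A minimal single-layer numpy sketch of the RND mechanism just described is given below. The real networks are deeper, inputs are normalized, and the sizes and learning rate here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W_target = rng.normal(size=(16, 8))  # fixed, randomly initialized target f
W_pred = rng.normal(size=(16, 8))    # predictor f_hat, trained toward f

def f(x):      return np.tanh(x @ W_target)
def f_hat(x):  return np.tanh(x @ W_pred)

def intrinsic_reward(x):
    """Exploration bonus i_t = ||f_hat(x) - f(x)||^2: large for novel
    states, shrinking as the predictor fits frequently visited states."""
    return float(np.sum((f_hat(x) - f(x)) ** 2))

def train_predictor(x, lr=0.05, steps=300):
    """Gradient descent on the prediction error for one visited state."""
    global W_pred
    for _ in range(steps):
        h = x @ W_pred
        err = np.tanh(h) - f(x)  # output-space error against the fixed target
        W_pred -= lr * np.outer(x, err * (1 - np.tanh(h) ** 2))
```

After training on a state, its bonus drops while unvisited states keep a high bonus; it is exactly this fitting of recently visited states that produces the catastrophic-forgetting issue addressed in Section 4.3.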
4 AIE
------

Algorithm 1: Amplifying the Imitation Effect (AIE)

    Initialize A2C network parameters θ_a2c
    Initialize predictor/target network parameters θ_p, θ_t
    Initialize replay buffer D ← ∅
    Initialize episode buffer E ← ∅
    Initialize feature buffer F ← ∅
    for episode = 1, M do
        for each step do
            Execute an action s_t, a_t, r_t, s_{t+1} ∼ π_θ(a_t | s_t)
            Extract the feature φ_{s_{t+1}} of s_{t+1}
            Calculate the intrinsic reward i_t
            if i_t < penalty condition threshold then
                i_t ← λ log(i_t)
            end if
            r_t ← r_t + i_t
            Store transition E ← E ∪ {(s_t, a_t, r_t)}
            F ← F ∪ {(φ_{s_{t+1}}, f_{θ_t}(φ_{s_{t+1}}))}
        end for
        if s_{t+1} is terminal then
            Compute returns R_t = Σ_{k=t}^{∞} γ^{k−t} r_k for all t in E
            D ← D ∪ {(s_t, a_t, R_t)}
            Clear episode buffer E ← ∅
        end if
        # Optimize the actor-critic network
        θ_a2c ← θ_a2c − η ∇_{θ_a2c} L^{a2c}
        # Perform self-imitation learning
        for k = 1, M do
            Sample a minibatch {(s, a, R)} from D
            θ_a2c ← θ_a2c − η ∇_{θ_a2c} L^{sil}
            Sample a minibatch {(φ_{s_{t+1}}, f_{θ_t}(φ_{s_{t+1}}))} from F
            θ_p ← θ_p − η ∇_{θ_p} L^{p}
        end for
    end for

### 4.1 Combining SIL and RND

In this section, we explain why combining RND and SIL can amplify the imitation effect and lead to deep exploration. SIL updates only when the past return $R$ is greater than the current value $V_{\theta}$, imitating past decisions. Intuitively, if we combine SIL and RND, the $(R - V_{\theta})$ value is larger than with SIL alone because of the exploration bonus.
In the process of optimizing the actor-critic network to maximize $R_t = \sum_{k=t}^{\infty}\gamma^{k-t}(i_k + e_k)$, where $i$ is the intrinsic reward and $e$ is the extrinsic reward, the increase in $i_t$ produced by the predictor network causes $R$ to increase. That is, learning progresses by weighting the good decisions of the past, and this type of learning thoroughly reviews the learning history. If the policy starts to converge as learning progresses, $i_t$ will be lower for frequently visited states. One might think that learning could become slower, since $(R_t - V_{\theta}) > (R_{t+k} - V_{\theta})$ for $k > 0$ in the same state as $i_t$ decreases.
However, SIL exploits past good decisions and leads to deep exploration. By adding an exploration bonus, the agent can further explore novel states, so the exploration bonus is likely to continue to occur. In addition, when using prioritized experience replay (Schaul et al., [2015](#bib.bib24)), the sampling probability is determined by $(R - V_{\theta})$; thus, there is a high probability that SIL will exploit a previous transition even if $i_t$ decreases. In other words, the two algorithms complement each other, and SIL is immune to the phenomenon in which the prediction error ($i_t$) no longer occurs.

### 4.2 Intrinsic Penalty Reward

Adding an exploration bonus for a novel state that the agent visits is clearly an effective exploration method. However, when the policy and predictor networks converge, there is no longer an exploration bonus for a novel state. In other words, the exploration bonus method provides a reward when the agent itself happens to perform an unexpected action; it does not induce the agent to take unexpected actions. Therefore, an exploration method that entices the agent toward unexpected behavior is necessary. We propose a method that provides an intrinsic penalty reward for an action when the agent frequently visits the same state, rather than rewarding the agent for unexpected actions. The intrinsic penalty reward allows the agent to escape from a converged local policy and helps it experience diverse policies.
Specifically, we provide a penalty by transforming the current intrinsic reward into $\lambda\log(i_t)$, where $\lambda$ is a penalty weight parameter, if the current intrinsic reward is less than the $\alpha$-quantile of the past $N$ intrinsic rewards. This reward mechanism prevents the agent from staying in the same policy. In addition, adding a penalty to the intrinsic reward indirectly amplifies the imitation effect. Since $(R_t - V_{\theta})$ becomes smaller due to the penalty, the probability of sampling a penalized transition from the replay memory is relatively smaller than that of a non-penalized transition, so SIL updates are more likely to exploit non-penalized transitions. Even if $(R_t - V_{\theta}) < 0$ due to a penalty, this does not affect SIL, because such transitions are not updated under the SIL objective in Equation [4](#S3.E4 "4 ‣ 3 Related Work ‣ Amplifying the Imitation Effect for Reinforcement Learning of UCAV’s Mission Execution"). In other words, the intrinsic penalty reward allows the policy network to deviate from constantly visited states and indirectly amplifies the imitation effect of SIL.

### 4.3 Catastrophic Forgetting in RND

The predictor network in RND mainly learns about states that the agent has recently visited. This is similar to catastrophic forgetting in continual task learning, where learned knowledge of previous tasks is forgotten. If the prediction error increases for a state that the agent has visited before, the agent may recognize the previous state as a novel state.
Consequently, the agent cannot explore effectively. Our method to mitigate this phenomenon is simple but effective: we store the output of the target network and the state feature in a memory for the predictor network, just as a replay memory is used to reduce the correlation between samples (Mnih et al., [2013](#bib.bib16)), and train the predictor network in batch mode. Using the predictor memory keeps the prediction error low for states the agent previously visited, which makes the agent more likely to explore novel states. Even if the agent returns to a past policy, the prediction error of the states visited by that policy is low, an intrinsic penalty is given for those states, and the probability of escaping from them is high.

![Refer to caption](/html/1901.05856/assets/x3.png)

Figure 3: Path visualization for each algorithm in the 2D grid environment. The color changes from blue to red where the agent visits more frequently.

5 Experiment
-------------

### 5.1 Conversion of State to Coordinate Feature

An exploration bonus is given for a state feature $x$ through $\|\widehat{f}(x;\theta) - f(x)\|^{2}$, where $f$ is the fixed target network and $\widehat{f}$ is the predictor network. However, the state of our experimental environment contains various information, such as the path and direction of the UCAV and the relationship between the UCAV and the missile. The high-dimensional state space slows the convergence of the policy network. Thus, we limited the state used for the exploration bonus to the current coordinates of the UCAV (33 rows).
Consequently, the convergence rate of the policy network increased, and the role of the exploration bonus changed clearly from 'inducing the agent to move to a novel feature state' to 'inducing the agent to move to novel coordinates'.

### 5.2 Test Algorithms

ASIL denotes the combination of A2C and SIL; we used this model as the baseline for the performance comparison. In this study, we propose three RL algorithms. Amplifying the imitation effect (AIE1) is the first proposed algorithm, which combines ASIL and RND. The second adds intrinsic penalty rewards to ASIL + RND (AIE2), and the third is AIE2 with the addition of a replay memory for the predictor network (AIE3), as described in Algorithm [1](#alg1 "Algorithm 1 ‣ 4 AIE ‣ Amplifying the Imitation Effect for Reinforcement Learning of UCAV’s Mission Execution").

![Refer to caption](/html/1901.05856/assets/x4.png)

Figure 4: Visualization of the path of the agent and the loss over all coordinate states for each algorithm in the no-reward 2D grid environment. In the path figures, the color changes from blue to red where the agent visits more frequently; in the loss figures, the color changes from blue to yellow where the loss is larger.

### 5.3 Hard Exploration in a 2D Environment

#### 5.3.1 Sparse Reward Setting

We conducted a simple experiment to see how effective the proposed algorithms are for exploration. We constructed a 2D grid world in which the agent learns a sequence of movements from a starting point to a goal point using simple movement steps (up, down, left, and right). The reward was set to zero except when reaching the target point (reward of 30) or leaving the environment (reward of -30). RL was performed for a total of 10,000 episodes for each algorithm. Figure 3 visualizes the movement paths of the agent. Since the reward is too sparse, ASIL failed to reach the target point.
In contrast, all of the proposed algorithms successfully reached the target point because of the exploration bonus. AIE1 reached the target point quickly. However, AIE2 and AIE3, which use the intrinsic penalty reward, performed a deeper exploration than AIE1: the two algorithms arrived at the target point via more diverse paths.

Table 1: Exploration area score of each algorithm in the two-dimensional no-reward grid environment. We averaged the area explored by the agent over 30 repeated experiments.

| Algorithm | Exploration area |
| --- | --- |
| ASIL | 11.2 ± 1.25 |
| AIE1 | 40.5 ± 2.06 |
| AIE2 | 43.2 ± 2.36 |
| AIE3 | 46.7 ± 2.19 |

#### 5.3.2 No-Reward Setting

We experimented with the same environment but without a target point, so the agent only explores in each episode. We argue that catastrophic forgetting makes an exploration bonus ineffective: if the prediction error remains high for previously searched states, the agent has less chance of searching a novel state. Furthermore, we argue that using a replay memory for the predictor network (AIE3) makes exploration more efficient because the memory mitigates catastrophic forgetting. Figure [4](#S5.F4 "Figure 4 ‣ 5.2 Test Algorithms ‣ 5 Experiment ‣ Amplifying the Imitation Effect for Reinforcement Learning of UCAV’s Mission Execution") visualizes the movement paths of the agent over 5,000 episodes (left) and the losses of the predictor network at all coordinates (right). We observed that the loss in the area explored by the agent is lower than in other areas. As episodes increase, the agent explores novel space with a high prediction error.
At this point, we can observe that the loss of the area the agent explored in an episode increased compared to the loss of that area in the preceding episode. However, AIE3 kept the loss of the previously explored space relatively low compared to the other two algorithms. In the sparse reward environment, ASIL explored only a small area, circulating within it as episodes increased, whereas the three proposed algorithms explored many areas. Table [1](#S5.T1 "Table 1 ‣ 5.3.1 Sparse Reward Setting ‣ 5.3 Hard Exploration in 2D Environment ‣ 5 Experiment ‣ Amplifying the Imitation Effect for Reinforcement Learning of UCAV’s Mission Execution") scores how uniformly each algorithm explored the four quadrants of the 2D grid space during 30,000 episodes. The score was computed as

$$\mathrm{score} = \mathrm{mean}(EQ_{q}) \times \sigma_{EQ} \times 100 \qquad (5)$$

where $EQ_{q}$ is the explored portion of the total area of quadrant $q$. We confirmed that the proposed algorithms (particularly AIE3) were very effective for exploration.

![Refer to caption](/html/1901.05856/assets/x5.png)

Figure 5: (Left) Learning curves in the UCAV mission execution environment. The x and y axes represent the episode number and the average reward, respectively. Each curve is the average reward over 10 experiments per algorithm; the light color represents the worst performance of each algorithm. (Right) Cumulative probability graph of being shot down by a missile.
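Eq. (5) above is simple to implement. A small sketch with our own assumptions spelled out: we read sigma_EQ as the population standard deviation of the quadrants' explored fractions, and the input numbers below are hypothetical.

```python
from statistics import mean, pstdev

def exploration_score(explored_fraction_per_quadrant):
    # score = mean(EQ_q) x sigma_EQ x 100 (Eq. 5), where EQ_q is the
    # explored fraction of quadrant q's area. sigma_EQ is taken here as
    # the population standard deviation of the EQ_q values (our reading).
    eq = explored_fraction_per_quadrant
    return mean(eq) * pstdev(eq) * 100

# Hypothetical explored fractions for the four quadrants:
score = exploration_score([0.5, 0.5, 0.25, 0.25])  # -> 4.6875
```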
![Refer to caption](/html/1901.05856/assets/x6.png)

Figure 6: 3D view of the UCAV’s learning process. The red circle represents the air defense network, the black solid line represents the movement path of the UCAV, and the red dotted line represents the missile’s movement path.

### 5.4 Experiment for UCAV Mission Execution

We performed an experiment to investigate UCAV control in a sparse reward environment and compared the performances of the algorithms. In addition, we analyzed how the UCAV manages to avoid missiles. First, because our experimental environment has a sparse reward structure, DQN, prioritized experience replay DQN, A2C, and ACER all failed to converge to the desired policy, which generates the shortest path from the origin to the target point while avoiding the enemy’s missiles. Figure [5](#S5.F5 "Figure 5 ‣ 5.3.2 No-Reward Setting ‣ 5.3 Hard Exploration in 2D Environment ‣ 5 Experiment ‣ Amplifying the Imitation Effect for Reinforcement Learning of UCAV’s Mission Execution") (left) shows the performances of ASIL and the three proposed algorithms over an experiment of 60,000 episodes. The light and normal colors represent the worst and average performance of each algorithm, respectively. AIE2 and AIE3 succeeded in converging to the desired policy, while ASIL fell into a local minimum in one of two trials and AIE1 in one of three trials. In particular, AIE3 outperformed the other algorithms, as shown in Figure 4. Similar to the previous exploration experiment, we confirmed that the three proposed algorithms performed better than ASIL (the baseline model) in the UCAV control environment. Figure [6](#S5.F6 "Figure 6 ‣ 5.3.2 No-Reward Setting ‣ 5.3 Hard Exploration in 2D Environment ‣ 5 Experiment ‣ Amplifying the Imitation Effect for Reinforcement Learning of UCAV’s Mission Execution") presents snapshots of learning (an animation is available at https://youtu.be/7R5lZAsCs2c).
In early episodes of learning, the UCAV took random actions and occasionally left the battlefield. As episodes increased, it tended to move forward gradually but was shot down by missiles. This can be confirmed in the cumulative shot-down probability plot (Figure [5](#S5.F5 "Figure 5 ‣ 5.3.2 No-Reward Setting ‣ 5.3 Hard Exploration in 2D Environment ‣ 5 Experiment ‣ Amplifying the Imitation Effect for Reinforcement Learning of UCAV’s Mission Execution"), right). As episodes continued, the UCAV learned how to avoid missiles and began to move to new coordinates (attempting to increase its intrinsic reward), trying to reach the target point through various paths.

![Refer to caption](/html/1901.05856/assets/x7.png)

Figure 7: The UCAV’s path after learning, in 3D view. The UCAV passes through the overlapping area of the air defense networks, avoiding the missile and reaching the target point.

Figure [7](#S5.F7 "Figure 7 ‣ 5.4 Experiment for UCAV Mission Execution ‣ 5 Experiment ‣ Amplifying the Imitation Effect for Reinforcement Learning of UCAV’s Mission Execution") is a 3D representation of the path through which the UCAV reached the target while avoiding the missile. When the UCAV entered the center of the air defense network, the probability of being shot down by a missile increased. Therefore, the UCAV learned a safe path that passes through the overlapping areas of the air defense networks at low altitude.

6 Conclusion
-------------

In this paper, we proposed AIE, which combines SIL and RND. In addition, we proposed AIE2 and AIE3, which lead to efficient deep exploration. AIE2 gives an intrinsic penalty reward to states the agent visits frequently, which prevents the agent from falling into a locally optimal policy. AIE3 adopts a replay memory to mitigate the catastrophic forgetting of the predictor network.
These two algorithms amplify the imitation effect, leading to deep exploration and enabling the policy network to converge quickly to the desired policy. We experimentally demonstrated that the AIEs successfully explored wide areas of the 2D grid environment. In addition, for the UCAV control problem, we observed that the proposed algorithms quickly converged to the desired policy. In future work, the configuration of the replay memory deserves further discussion: the predictor network’s replay memory has limited storage, so it is inefficient to insert a feature at every learning step.

Acknowledgments
---------------

This research was supported by the Agency for Defense Development (UD170043JD).
9d30c7ae-8372-46b2-9201-763af0973a09
trentmkelly/LessWrong-43k
LessWrong
Change My View: Incumbent religions still get too much leeway In the past few years I've gone from being almost entirely dismissive of religion to being much more uncertain and equivocal. In particular, I've become much warier of implicitly judging The Old Ways against modern conditions and assumptions. These changes happened partly because I heard Jordan Peterson manage to give Sam Harris a couple of decent counterarguments, and partly due to various other things in the air such as The Secret of Our Success and Doesn't Matter, Warm Fuzzies. I think this arc of beliefs is familiar to many of you. But I feel that we're now missing something else. Sure, incumbent religions deserve a lot of credit for helping people survive millennia of scarcity in untamed environments. And Jordan Peterson might be right when he says it’s too easy to throw the baby out with the bathwater. But those same religions are also clearly contributing to This Failing Earth: reliable methods of reasoning are still niche, faith is held up as a virtue even in advanced countries, bioethicists promote death, and very few people even know about cryonics or Fun Theory. Even among educated people, feelings of awe and wonder are often more closely associated with Iron-age myths than with science. This is absurd and it bothers me greatly. Read the Bible, watch Sagan's Cosmos, and then tell me that their relative statures are anywhere near appropriate. "Merely" real phenomena like decoherence and the arrow of time and evolution by natural selection are obviously æsthetically richer (for better and for worse) than "a god did it" or other formerly useful fictions. Stories about humanoid gods have their place, and that place is in anthropology exhibits alongside rain dances and augury. Here's possibly my most important crux: Respect for incumbent religions comes at the cost of more modern philosophies. 
Rocket launch rituals, Wisdom Day[1], vitrification ceremonies or so on would be embraced more quickly if people didn’t repress their disdain for things like circumc
75575af9-d183-4d60-b6ac-15f1d9b123b4
trentmkelly/LessWrong-43k
LessWrong
The Great Bootstrap

> This is the final part of a 3-part sequence summarizing my book, The Darwinian Trap (see parts 1 and 2 here). The book aims to popularize the concept of multipolar traps and establish them as a broader cause area. If you find this series intriguing, contact me at kristian@kristianronn.com with any input or ideas.

As we reach the final part of this blog series, it's understandable if you're feeling a bit down. The previous sections painted a rather bleak picture, touching on issues like misaligned AI, global warming, nuclear threats, and engineered pandemics. These societal-scale risks make the future seem uncertain. Yet, despite these daunting challenges, I believe there is still room for optimism. In this segment, I'll share why I think humanity can overcome these existential threats, reset the harmful selection pressures that drive destructive arms races, and pave the way for a brighter future. But first, let’s review what we've covered.

* Evolution selects for fitness, not for what we intrinsically value. In this context, fitness refers to survival in an environment, be it a natural environment or one we have created for ourselves, such as the world of business.
* Pursuing narrow targets often produces broadly negative outcomes. When we optimize for short-term survival while remaining indifferent to other goals—e.g. health and well-being—then, by default, we optimize against those values. A related phenomenon by which narrow optimization tends to produce bad outcomes is Goodhart’s Law.
* We can’t easily choose not to play the game and instead act according to our intrinsic values rather than Darwinian survival imperatives. If we do, then we will be outcompeted by those who do play the game, thus creating negative outcomes for us, the conscientious objectors.
* Playing the game eventually leads to extinction arms races, because strategies and mutations that favor increased power and more efficient resource exploitation tend to w
28d4e707-de74-4e1a-950c-31ba008e19e4
trentmkelly/LessWrong-43k
LessWrong
Telic intuitions across the sciences [WORK IN PROGRESS - I will keep editing this post over time] See the introductory post for a motivation. Explaining a phenomenon in terms of function, role, purpose, goal or meaning is extremely common throughout, let's say, most of the sciences that are not physics or chemistry. Yet it is almost always done qualitatively, verbally, without real support from any kind of math. In this post, I will try to list every important intuition about this class of concepts that I have stumbled upon so far. This probably does not make for a great page turner/scroller, but this is more of a reference post for future ones to quote. Bias/competence disclosure: Having been trained all the way as a physicist and half-way as a linguist, my main inspirations are grammar and statistical mechanics. My understanding of other topics will be far more cursory, besides theoretical ecology which is where I work now. That being said, I do think everyone should learn a bit of grammar and stat mech, as those are virtuous pursuits. General intuitions in existing theories Generative grammar Why grammar is an important example Grammar is the only science I know which has been consistently good at not confusing "nature" and "function": * Nature/category is what a word is, e.g. noun or adjective or verb... I can ask this question of the word in itself:  is cat a noun or a verb? * Function is what role a word serves in a larger context, e.g. cat serves as the subject in "Cat bites man." You cannot take the word out of context and ask of it, apart from any sentence: is cat a subject or a complement? The same function can be served by many different natures (e.g. a verb-like thing serves as the subject in "To live forever sounds exciting/tiring."), and the same nature can perform many different functions. Grammar is also the only science I know which has tried to propose autonomous and non-trivial theories of functions.  
Quick recap: how grammar works Many of you will have seen a parse
c7863685-876a-41dc-a744-ae7358ed5539
trentmkelly/LessWrong-43k
LessWrong
Responsible AI communication survey When it comes to developing and using AI, there is an expectation of ethical and honest communication about both the technology’s opportunities and risks. But is that currently the case? This is part of the question I’m asking for my PhD research, and I want to know what you think. Please help me better understand where we stand on responsible AI communication by filling in this anonymous survey: https://forms.gle/6ZckuTQLx97fEEJk6 Thank you in advance!
185ed73b-ff19-4341-a118-1cef31f63a54
trentmkelly/LessWrong-43k
LessWrong
Is ChatGPT (or other LLMs) more 'sentient'/'conscious'/etc. than a baby without a brain?

I'm curious to see what the views of various community members are regarding those attributes we typically ascribe to humans, compared against the edge case where they are clearly absent, i.e. babies born without a brain (anencephaly). See: https://en.wikipedia.org/wiki/Anencephaly

I personally am not well informed enough about their inner workings to have decided. The reason for comparing to the edge case is that it's a very low bar to clear, so even minute differences should be more easily discernible.
9bfc3e99-f68c-42cc-9064-4b0f0713ae7b
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Microsoft Plans to Invest $10B in OpenAI; $3B Invested to Date | Fortune [Unpaywalled](https://archive.is/6x5uE)   ---   Excerpts ======== > > If all goes according to OpenAI’s financial plans, Microsoft will close a $10 billion investment deal into the artificial intelligence startup before the end of this month, as Jeremy Kahn and I reported yesterday and according to documents seen by Fortune. > > ... > > Microsoft’s bet on OpenAI appears to be even bigger than was previously known. The documents suggest that, prior to this deal, Microsoft had already poured $3 billion into the company—$2 billion more than has been publicly reported. If the current deal is completed at the figures being discussed, the cap table in the documents states that Microsoft will have contributed a total of $13 billion in capital to OpenAI, underscoring how important it believes the technology behind ChatGPT and DALL-E 2 is to its future. > > ... > > Microsoft would be able to make as much as $92 billion from its collective investment, and venture capitalists that participate in the tender offer would be able to garner up to $150 billion. (An OpenAI spokeswoman declined to comment for this story, and a Microsoft spokesman didn’t respond to a request for comment.) > > ... > > Microsoft will receive preferential treatment when it comes to OpenAI profits. The documents lay out how investors will be reimbursed once OpenAI starts posting a profit. “First close partners” will be reimbursed their principal first (it’s unclear whether “first close partners” refers to OpenAI’s early investors, Khosla Ventures and Reid Hoffman’s foundation, or other subsequent investors in the company). Once that has happened, 75% of OpenAI’s profits will flow directly to Microsoft until the sum that Microsoft invested in OpenAI is reached. 
Here is a graphical representation of how the economics are structured:
> 
> ![Economics of the OpenAI Deal](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1673495591/mirroredImages/X7QbzyKWqnLmeZCnJ/zxdnq7ihx8azuwh7shlu.jpg)
> 
> While the terms look like a win-win for Microsoft, it could end up being quite a while before Microsoft, or any of the other investors, see a meaningful return on that investment. Documents show that, as of the end of last year, OpenAI was projecting a loss of more than $508 million for 2022. The company has projected $1 billion in revenue in 2024, as was first reported by Reuters, but it’s unclear what it expects its costs to be in the years ahead. According to the documents, OpenAI expected that its costs in 2022 would total somewhere around $544.5 million.

---

Prediction Markets
==================

[@TetraspaceWest](https://twitter.com/TetraspaceWest) created [a manifold market for the $10B investment](https://manifold.markets/Tetraspace/has-microsoft-invested-10bn-in-open).

---

Implications Conditional on Completion
======================================

If the deal completes, this investment is a *big* deal. OpenAI can afford to jump a few orders of magnitude of scale given a cash infusion that big. This investment might represent potential evidence that OpenAI leadership believes that scale is indeed all you need. [Note that whatever evidence this represents is pretty weak, until/unless they actually scale that far.]

If the cash infusion is completed, it may be a reason to accelerate timelines (to the extent that one believes OpenAI intends to advance a few orders of magnitude of scale). Put another way, I don't think people in the LW/EA sphere should leave their beliefs on AI development and timelines basically unchanged upon learning that OpenAI received a $10B cash infusion.

Edit: the considerations mentioned here are significantly weakened if this is a multi-year investment.
c3a293d3-8980-4e83-ad08-95fc68e88a52
StampyAI/alignment-research-dataset/arxiv
Arxiv
Language Conditioned Imitation Learning over Unstructured Data I Introduction --------------- Imitation learning [[4](#bib.bib4), [3](#bib.bib3)] is a popular framework for acquiring complex robotic skills from raw onboard sensors. Traditionally, imitation learning has been applied to learning individual skills from structured and isolated human demonstrations [[39](#bib.bib39), [1](#bib.bib1), [53](#bib.bib53)]. These collection requirements are difficult to scale to real world scenarios, where robots are expected to be generalists—capable of autonomously performing a wide variety of skills. As we consider deploying imitation learning in this setting, a critical challenge is scale: how can we lessen the data requirements of learning each new skill? Is it possible instead to learn many skills simultaneously from large amounts of unstructured, unlabeled demonstration data [[16](#bib.bib16), [26](#bib.bib26), [14](#bib.bib14)]? Recent works have focused on scaling up multitask imitation learning over unstructured data [[8](#bib.bib8), [26](#bib.bib26)]. However, these approaches typically assume that tasks are specified to the agent at test time via mechanisms like one-hot task selectors [[37](#bib.bib37), [43](#bib.bib43)], goal images [[29](#bib.bib29), [11](#bib.bib11), [26](#bib.bib26)], or target configurations of the state space [[14](#bib.bib14)]. While these types of conditioning are straightforward to provide in simulation, they are often impractical to provide in open-world settings. Thus, another important consideration when deploying imitation learning in everyday settings is scalable task specification: how can untrained users instruct robot behavior? This motivates robots that can follow free-form natural language instructions. Training agents to follow instructions is an active area of research in the broader machine learning community [[25](#bib.bib25)], yet it remains a difficult open challenge. 
Prior approaches have trained agents that map observations and language inputs directly to actions using neural networks, but often make assumptions that limit their applicability to robotics. Typical studies involve 2D observation spaces (e.g. games [[24](#bib.bib24), [32](#bib.bib32), [27](#bib.bib27), [12](#bib.bib12)] and gridworlds [[52](#bib.bib52)]), simplified actuators, (e.g. binary pick and place primitives [[18](#bib.bib18), [48](#bib.bib48)]), or synthetic predefined language [[17](#bib.bib17), [6](#bib.bib6), [20](#bib.bib20), [54](#bib.bib54)]. In this work, we study the setting of human language conditioned robotic manipulation (Fig. [1](#S0.F1 "Figure 1 ‣ Language Conditioned Imitation Learning over Unstructured Data"), step 3). In this setting, a single agent must execute a series of visual manipulation tasks in a row, each expressed in free-form natural language, e.g. “open the door all the way to the right…now grab the block…now push the red button…now open the drawer”. Furthermore, agents in this scenario are expected to be able to perform any combination of subtasks in any order. This is the first version of instruction following, to our knowledge, that combines: natural language conditioning, high-dimensional pixel inputs, 8-DOF continuous control, and complex tasks like long-horizon robotic object manipulation. Text conditioning is also a new and difficult setting for goal-directed imitation learning [[26](#bib.bib26), [8](#bib.bib8)], which introduces important new research questions. For example, how can we learn the mapping between language and actions with the fewest language labels? How can we leverage large unstructured demonstration datasets that have no language labels? How can we follow the maximum number of instructions at test time, given a finite training set? To address this setting, we propose a simple approach for combining imitation learning with free-form text conditioning (Fig. 
[1](#S0.F1 "Figure 1 ‣ Language Conditioned Imitation Learning over Unstructured Data")). Our method consists of only standard supervised learning subroutines, and learns perception, language understanding, and control end-to-end as a single neural network. Critically, unlike prior work in instruction following, our method can leverage large amounts of unstructured and unlabeled demonstration data (i.e. with no language or task labels). We show this reduces the burden of language annotation to less than 1% of total data. To scale up the number of instructions our agent can follow at test time, we introduce a simple technique for combining any language conditioned policy with large pretrained language models [[7](#bib.bib7), [36](#bib.bib36)]. We find that this simple modification allows our agent to follow thousands of new synonym instructions at test time, without requiring that new robot demonstrations be collected for each synonym. We believe that the capabilities proposed here (learning from unlabeled demonstrations, end-to-end learning of text conditioned visuomotor policies, and following new synonym instructions without new robot data) constitute important steps towards scalable robot learning systems. We evaluate our method in a dynamically accurate simulated 3D tabletop environment with a fixed set of objects. Our experiments show that a language conditioned visuomotor policy trained with our method can perform many complex robotic manipulation skills in a row specified entirely with natural language (see [video](https://language-play.github.io/videos/live/playlang_20200326-193259_13tasks.webm)), outperforming conventional imitation baselines trained on structured data.

Contributions. To summarize the contributions of this work, we:

* introduced a setting of human language conditioned robotic visual manipulation.
* introduced a simple learning method for combining free-form text conditioning with multitask imitation learning.
* introduced multicontext imitation learning, applicable to any contextual imitation learning setup, allowing us to train a language conditioned policy over mostly unstructured and unlabeled demonstrations (i.e. with no task or language labels).
* demonstrated that the resulting language conditioned visuomotor policy can follow many free-form human text instructions over a long horizon in a simulated 3D tabletop setting, i.e. “open the door…now pick up the block…now press the red button” (see video).
* introduced a simple way to combine any language conditioned policy with large pretrained language models. We show this improves manipulation performance and allows an agent to be robust to thousands of new synonym instructions at test time, in 16 languages, without requiring new demonstrations for each synonym.

![Refer to caption](/html/2005.07648/assets/x2.png)
![Refer to caption](/html/2005.07648/assets/x3.png)

Figure 2: Multicontext Imitation Learning (MCIL). We introduce a simple generalization of contextual imitation learning to multiple heterogeneous contexts (e.g. goal image, task id, natural language). MCIL trains a single latent goal conditioned policy over all datasets simultaneously, as well as one encoder per dataset, each mapping to the shared latent goal space. This allows training a language conditioned policy over both labeled and unlabeled demonstration datasets.

Figure 3: Pretrained language models make language conditioned policies robust to out-of-distribution synonyms.
Simply by training on top of pretrained language embeddings, we can give a language conditioned policy the ability to follow out-of-distribution synonym instructions at test time. The pretrained embedding space is responsible for relating new synonym instructions (green) to ones from the agent’s training set (black). II Related Work ---------------- Robotic learning from general sensors. In general, learning complex robotic skills from low-level sensors is possible, but requires substantial human supervision. Two common approaches are imitation learning (IL) [[3](#bib.bib3)] and reinforcement learning (RL) [[23](#bib.bib23)]. When combined with deep function approximators, IL typically requires many human demonstrations [[38](#bib.bib38), [37](#bib.bib37), [53](#bib.bib53)] to drive supervised learning of a policy. In RL, supervision takes the form of hand-designed task rewards. Reward design is non-trivial in complex environments, often requiring either task-specific instrumentation [[13](#bib.bib13)] or learned perceptual rewards [[41](#bib.bib41), [42](#bib.bib42)]. Additionally, RL agents often require hand-designed strategies [[22](#bib.bib22), [9](#bib.bib9)] or human demonstrations [[38](#bib.bib38)] to overcome hard exploration problems. Finally, even under multitask formulations of RL [[46](#bib.bib46)] and IL [[37](#bib.bib37)], each new task considered requires a corresponding and sizable human effort. This makes it difficult to scale either approach naively to a broad task setting. In this work, we focus on scaling imitation learning to be multitask and language conditioned. While there are several ways to perform imitation learning such as inverse reinforcement learning [[30](#bib.bib30), [50](#bib.bib50), [10](#bib.bib10)] and occupancy matching [[19](#bib.bib19)], we restrict our attention in this work to behavior cloning [[35](#bib.bib35)] given its stability and ease of use. Imitation learning from large unstructured datasets. 
Recent works [[16](#bib.bib16), [26](#bib.bib26), [14](#bib.bib14)] have sought to mitigate the costs of conventional multitask imitation learning by instead learning many skills at once over large unstructured demonstration datasets. Like these works, we incorporate unstructured demonstration data into our method. Unlike these works, the resulting goal conditioned policies can be conditioned with natural language. Task agnostic control. This paper builds on the setting of task agnostic control, where a single agent is trained to reach any reachable goal state in its environment upon command [[21](#bib.bib21), [40](#bib.bib40)]. One way of acquiring this kind of control is to first learn a model of the environment through interaction [[31](#bib.bib31), [9](#bib.bib9)] then use it for planning. However, these approaches rely on accurate forward models of visual dynamics, a challenging open problem. A powerful model-free strategy for task agnostic control is goal relabeling [[21](#bib.bib21), [2](#bib.bib2)]. This technique trains goal conditioned policies to reach any previously visited state upon demand, with many recent examples in RL [[29](#bib.bib29), [14](#bib.bib14), [28](#bib.bib28)] and IL [[26](#bib.bib26), [8](#bib.bib8)]. A limitation to models combining relabeling with image observation spaces is that tasks must be specified with goal images at test time. The present work builds on relabeled imitation, but additionally equips policies with natural language conditioning. Multicontext learning. A number of previous methods have focused on generalizing across tasks [[5](#bib.bib5)], or generalizing across goals [[40](#bib.bib40)]. We introduce multicontext learning, a framework for generalizing across heterogeneous task and goal descriptions. When one of the training sources is plentiful and the other scarce, multicontext learning can be seen as transfer learning [[45](#bib.bib45)] through a shared goal space. 
Multicontext imitation is a central component of our method, as it reduces the cost of human language supervision to the point where it can be practically applied. Instruction following. There is a long history of research into agents that not only learn a grounded language understanding [[49](#bib.bib49)], but demonstrate that understanding by following instructions (survey [[25](#bib.bib25)]). Recently, authors have had success using deep learning to directly map raw input and text instructions to actions. However, prior work has often studied 2D environments [[24](#bib.bib24), [32](#bib.bib32), [27](#bib.bib27), [52](#bib.bib52)] and simplified actuators [[48](#bib.bib48), [18](#bib.bib18), [12](#bib.bib12)]. Additionally, learning to follow natural language is still not the standard in instruction following research [[25](#bib.bib25)], with typical implementations instead assuming access to simulator-provided instructions drawn from a restricted vocabulary and grammar [[17](#bib.bib17)]. This work, in contrast, studies 1) natural language instructions, 2) high-dimensional continuous sensory inputs and actuators, and 3) complex tasks like long-horizon 3D robotic object manipulation. Furthermore, unlike existing RL approaches to instruction following [[18](#bib.bib18)], our IL method is sample efficient, requires no reward definition, and scales easily to the multitask setting. Unlike concurrent IL approaches to robotic instruction following [[44](#bib.bib44)], which assume access to labeled task demonstrations and pretrained object detection, our method learns from unstructured and unlabeled demonstration data and learns perception, language understanding, and control fully end-to-end via a single imitation loss. 
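The multicontext scheme of Figure 2 (one encoder per dataset feeding a shared latent goal space, with a single policy behind it) can be pictured with a toy sketch. The linear stand-ins and all names below are our own simplifications, not the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(1)
LATENT = 4  # dimensionality of the shared latent goal space

# One encoder per dataset: goal-image features and language embeddings are
# mapped into the same latent goal space by different (here linear) encoders.
enc_image = rng.normal(size=(16, LATENT)) * 0.1
enc_lang = rng.normal(size=(32, LATENT)) * 0.1

# A single latent goal conditioned policy pi(a | s, z), shared by all datasets.
W_policy = rng.normal(size=(8 + LATENT, 2)) * 0.1

def policy(state, latent_goal):
    return np.concatenate([state, latent_goal]) @ W_policy

def bc_loss(state, goal, encoder, expert_action):
    # Behavior cloning loss against the expert action, with the goal first
    # pushed through that dataset's encoder into the shared latent space.
    return float(np.sum((policy(state, goal @ encoder) - expert_action) ** 2))

# One multicontext step: average the imitation loss over a sample from each
# dataset, so scarce language-labeled data trains the same policy as the
# plentiful unlabeled, goal-image-relabeled data.
state, action = rng.normal(size=8), rng.normal(size=2)
loss = 0.5 * (bc_loss(state, rng.normal(size=16), enc_image, action)
              + bc_loss(state, rng.normal(size=32), enc_lang, action))
```

Because both losses flow through the same `W_policy`, gradients from the large unlabeled dataset and the small language-labeled dataset shape a single goal-conditioned controller.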
III Problem Formulation
------------------------

We consider the problem of learning a natural language conditioned control policy $\pi_\theta(a|s,l)$, which outputs the next action $a \in A$, conditioned on the current state $s \in S$ and free-form natural language $l \in L$ describing a short-horizon task. Note that $S$ is not the true environment state, but rather the high dimensional onboard observations of the robot, e.g. $S = \{\text{image}, \text{proprioception}\}$. $l$ is provided by humans and has no limits on vocabulary or grammar. At test time, a human gives a robot a series of instructions in a row, one at a time: $\{l_0, \ldots, l_N\}$. The language conditioned visuomotor policy $\pi_\theta(a|s,l)$ issues high frequency continuous control in closed loop to obtain the desired behavior described in $l$. The human may provide new instructions $l$ to the agent at any time, either commanding a new subtask or providing guidance (i.e. "move your hand back slightly").
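The closed-loop interaction above can be sketched as a simple rollout loop. This is a minimal sketch, not the paper's actual interfaces: `ToyEnv`, the toy policy, and `steps_per_instruction` are all illustrative stand-ins.

```python
def run_session(policy, env, instructions, steps_per_instruction=60):
    """Test-time loop from Sec. III: a human issues free-form
    instructions one at a time; the policy maps the current onboard
    observation and the active instruction to the next action at
    every control step, in closed loop."""
    s = env.reset()
    for l in instructions:                      # {l_0, ..., l_N}, one at a time
        for _ in range(steps_per_instruction):  # high-frequency closed loop
            a = policy(s, l)
            s = env.step(a)
    return s

# toy 1-D environment: the state is a number, actions are increments
class ToyEnv:
    def reset(self):
        self.s = 0
        return self.s

    def step(self, a):
        self.s += a
        return self.s

# toy policy: step right for "go right", left otherwise
final = run_session(lambda s, l: 1 if l == "go right" else -1,
                    ToyEnv(), ["go right", "go right", "go left"],
                    steps_per_instruction=5)  # → 5
```

Note that the policy re-reads the instruction at every step, so a new command can redirect the agent mid-motion, matching the guidance behavior described above.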
Standard imitation learning setups learn single task policies $\pi_\theta(a|s)$ using supervised learning over a dataset $\mathcal{D} = \{\tau_i\}_{i}^{N}$ of expert state-action trajectories $\tau = \{(s_0, a_0), \ldots\}$. Closest to our problem setting is goal conditioned imitation learning [[21](#bib.bib21)], which instead aims to learn a policy $\pi_\theta(a|s,g)$, conditioned on $s \in S$ and a task descriptor $g \in G$ (often a one-hot task encoding [[37](#bib.bib37)]). When tasks can be described as a goal state $g = s_g \in S$ to reach, this allows any state visited during collection to be relabeled [[21](#bib.bib21), [2](#bib.bib2), [8](#bib.bib8)] as a "reached goal state", with the preceding states and actions treated as optimal behavior for reaching that goal. At test time, learned behaviors are conditioned using a goal image $s_g$. Relabeled imitation learning can be applied to unstructured and unlabeled demonstration data [[26](#bib.bib26)] to learn general purpose goal-reaching policies.
Here, demonstrators are not constrained to a predefined set of tasks, but rather engage in every available object manipulation in a scene ([see example](https://learning-from-play.github.io/assets/mp4/play_data516x360.mp4)). This yields one long unsegmented demonstration of semantically meaningful behaviors, which can be relabeled using Algorithm [2](#alg2) into a training set $D_{\text{play}} = \{(\tau, s_g)_i\}_{i=0}^{D_{\text{play}}}$.
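A minimal sketch of this hindsight relabeling follows. The window sizes and the `(state, action)` stream format here are illustrative assumptions, not the paper's exact parameters (the full procedure is Algorithm 2 in the appendix).

```python
import random

def relabel_play(stream, min_len=16, max_len=32, n_windows=100, seed=0):
    """Cut random short-horizon windows out of one long unsegmented
    play stream of (state, action) pairs, and treat each window's own
    final state as the goal it 'demonstrates' reaching."""
    rng = random.Random(seed)
    d_play = []
    for _ in range(n_windows):
        k = rng.randint(min_len, max_len)    # window length
        t = rng.randint(0, len(stream) - k)  # window start
        window = stream[t:t + k]
        s_g = window[-1][0]                  # relabel: goal = last state
        d_play.append((window, s_g))
    return d_play

# toy stream: integer states, unit actions
stream = [(s, 1) for s in range(500)]
d_play = relabel_play(stream)
```

Because every window is relabeled with its own outcome, no window is ever "wrong" for its goal: the procedure turns cheap undirected play into a large supervised goal-reaching dataset.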
These short horizon goal image conditioned demonstrations are fed to a maximum likelihood goal conditioned imitation objective:

$$\mathcal{L}_{\text{GCIL}} = \mathbb{E}_{(\tau,\, s_g) \sim D_{\text{play}}}\left[\sum_{t=0}^{|\tau|} \log \pi_\theta\left(a_t \mid s_t, s_g\right)\right]$$

However, when learning language conditioned policies $\pi_\theta(a|s,l)$, we cannot easily relabel any visited state $s$ to a natural language goal, as the goal space $L$ is no longer equivalent to the observation space $S$. Consequently, this prevents language conditioned imitation learning from incorporating large unstructured demonstration datasets $D_{\text{play}}$. Next, we describe our approach, which relaxes this limitation.
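Before moving on, the GCIL objective above can be checked numerically with a toy sketch. The unit-variance Gaussian policy over scalar actions and the data shapes here are illustrative assumptions, not the paper's architecture.

```python
import math

def log_prob_gaussian(a, mean):
    # log N(a; mean, 1): log-likelihood of action a under a
    # unit-variance Gaussian policy head
    return -0.5 * (a - mean) ** 2 - 0.5 * math.log(2 * math.pi)

def gcil_objective(batch, policy_mean):
    """L_GCIL: average over goal-relabeled windows (tau, s_g) of the
    per-window sum of log pi(a_t | s_t, s_g). `policy_mean(s, s_g)`
    returns the action mean of the goal conditioned policy."""
    total = 0.0
    for window, s_g in batch:
        total += sum(log_prob_gaussian(a, policy_mean(s, s_g))
                     for s, a in window)
    return total / len(batch)

# toy batch: one 3-step window whose actions a 'perfect' policy
# predicts exactly (objective then reduces to the Gaussian entropy term)
batch = [([(0, 1.0), (1, 1.0), (2, 1.0)], 3)]
obj = gcil_objective(batch, policy_mean=lambda s, s_g: 1.0)
```

Maximizing this objective is ordinary supervised learning: the "labels" are the demonstrator's own actions, conditioned on the hindsight goal.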
IV Learning to Follow Human Instructions from Unstructured Data
----------------------------------------------------------------

We present a framework that aims for a scalable combination of self-supervised relabeled imitation learning and labeled instruction following. Our approach (Fig. [1](#S0.F1), Sec. [IV-C](#S4.SS3)) can be summarized as follows:

1. Collection: Collect a large unstructured "play" demonstration dataset.
2. Relabeling: Relabel unstructured data into goal image demonstrations. Pair a small number of random windows with language after-the-fact (Sec. [IV-A](#S4.SS1)).
3. Multicontext imitation: Train a single imitation policy to solve for either goal image or language goals (Sec. [IV-B](#S4.SS2)).
4. Use only language conditioning at test time.

### IV-A Pairing unstructured demonstrations with natural language

Learning to follow language instructions involves addressing a difficult language grounding problem [[15](#bib.bib15)]: how do agents relate unstructured language $l$ to their onboard perceptions $s$ and actions $a$?
We take a statistical machine learning approach, creating a paired corpus of (demonstration, language) examples by pairing random windows from unstructured data $D_{\text{play}}$ with hindsight instructions after-the-fact (Algorithm [3](#alg3), part 1 of Fig. [1](#S0.F1)). Here, we show untrained human annotators onboard videos, then ask them: "what instruction would you give the agent to get from first frame to last frame?" See training examples in Table [V](#A4.T5), video examples [here](https://language-play.github.io/#data), and a full description of the collection process in Appendix [D-B](#A4.SS2). Annotators were encouraged to use free-form natural language and not constrain themselves to a predefined vocabulary or grammar. This yields a new dataset $D_{(\text{play},\text{lang})} = \{(\tau, l)_i\}_{i=0}^{D_{(\text{play},\text{lang})}}$, a dataset of text conditioned demonstrations. Crucially, we do not need to pair every window from play with language to learn to follow instructions.
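The pairing step can be sketched as follows. `annotate` is a stand-in for the human hindsight annotation described above, and the 1% fraction echoes the paper's claim that only a small subset of windows needs language; the data shapes are toy assumptions.

```python
import random

def pair_with_language(d_play, annotate, fraction=0.01, seed=0):
    """Pair only a small random subset of relabeled play windows with
    hindsight instructions, leaving the rest goal-image-only. In the
    paper, annotation is done by humans watching the window's video;
    `annotate(window)` stands in for that step here."""
    rng = random.Random(seed)
    n = max(1, int(fraction * len(d_play)))
    idx = rng.sample(range(len(d_play)), n)
    return [(d_play[i][0], annotate(d_play[i][0])) for i in idx]

# toy usage: windows are lists of (state, action) with a goal state;
# the fake annotator just describes the net motion of the window
windows = [([(0, 1), (1, 1)], 2), ([(5, -1), (4, -1)], 3)] * 50
d_play_lang = pair_with_language(
    windows,
    annotate=lambda w: "move right" if w[-1][0] > w[0][0] else "move left")
```

The output plays the role of $D_{(\text{play},\text{lang})}$: a small text conditioned dataset carved from the same stream as the much larger goal image dataset.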
This is made possible with Multicontext Imitation Learning, described next.

**Algorithm 1** Multicontext imitation learning

```
 1: Input: 𝒟 = {D^0, ..., D^K}, D^k = {(τ_i^k, c_i^k)}_{i=0}^{D^k},
           one dataset per context type (e.g. goal image, language
           instruction, task id), each holding pairs of
           (demonstration, context).
 2: Input: ℱ = {f_θ^0, ..., f_θ^K}, one encoder per context type,
           mapping context to the shared latent goal space,
           i.e. z = f_θ^k(c^k).
 3: Input: π_θ(a_t | s_t, z), a single latent goal conditioned policy.
 4: Randomly initialize parameters θ = {θ_π, θ_{f^0}, ..., θ_{f^K}}
 5: while True do
 6:     ℒ_MCIL ← 0
 7:     # Loop over datasets.
 8:     for k = 0 ... K do
 9:         # Sample a (demonstration, context) batch from this dataset.
10:         (τ^k, c^k) ∼ D^k
11:         # Encode context in the shared latent goal space.
12:         z = f_θ^k(c^k)
13:         # Accumulate imitation loss.
14:         ℒ_MCIL += Σ_{t=0}^{|τ^k|} log π_θ(a_t | s_t, z)
15:     end for
16:     # Average gradients over context types.
17:     ℒ_MCIL *= 1 / |𝒟|
18:     # Train policy and all encoders end-to-end.
19:     Update θ by taking a gradient step w.r.t. ℒ_MCIL
20: end while
```

### IV-B Multicontext Imitation Learning

So far, we have described a way to create two contextual imitation datasets: $D_{\text{play}}$, holding goal image demonstrations, and $D_{(\text{play},\text{lang})}$, holding language demonstrations. Ideally, we could train a single imitation policy that could be conditioned with either task description. Critically, this would enable language conditioned imitation to make use of self-supervised imitation over unstructured data.
With this motivation, we introduce *multicontext imitation learning* (MCIL), a simple and universally applicable generalization of contextual imitation to multiple heterogeneous contexts. The main idea is to represent a large set of policies by a single, unified function approximator that generalizes over states, tasks, and task descriptions. Concretely, MCIL assumes access to multiple contextual imitation datasets $\mathcal{D} = \{D^0, \ldots, D^K\}$, each with a different way of describing tasks. Each $D^k = \{(\tau_i^k, c_i^k)\}_{i=0}^{D^k}$ holds demonstrations $\tau$ paired with some context $c \in C^k$. For example, $D^0$ might contain one-hot task demonstrations (a conventional multitask imitation dataset), $D^1$ might contain goal image demonstrations (obtained by hindsight relabeling), and $D^2$ might contain language goal demonstrations.
MCIL trains a single *latent goal* conditioned policy $\pi_\theta(a_t|s_t,z)$ over all datasets simultaneously, as well as one parameterized encoder per dataset, $\mathcal{F} = \{f_\theta^0, \ldots, f_\theta^K\}$, learning to map raw task descriptions to a common latent goal space $z \in \mathbb{R}^d$ (see Fig. [3](#S1.F3)). For instance, these could be a one-hot task embedding lookup, an image encoder, and a language encoder, respectively. MCIL has a simple training procedure, shown in Algorithm [1](#alg1): at each training step, sample a batch of contextual imitation examples from each dataset, encode contexts in the shared latent space using the respective encoders, then compute a latent goal conditioned imitation loss, averaged over all datasets. The policy and goal encoders are trained end-to-end to maximize this objective.
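One MCIL step's objective (the inner loop of Algorithm 1, before the gradient update) can be sketched as follows. The encoders, the policy log-probability, and the sampling function are toy stand-ins, not the paper's neural networks.

```python
def mcil_objective(datasets, encoders, policy_logprob, sample):
    """Accumulate the latent-goal-conditioned imitation objective over
    all context datasets: encode each dataset's context into the shared
    latent goal space z with that dataset's own encoder, sum the policy
    log-likelihood over the demonstration, then average over context
    types (lines 6-17 of Algorithm 1)."""
    total = 0.0
    for k, dataset in enumerate(datasets):
        tau, c = sample(dataset)              # (demonstration, context)
        z = encoders[k](c)                    # shared latent goal space
        total += sum(policy_logprob(s, a, z) for s, a in tau)
    return total / len(datasets)              # average over context types

# toy check: two 'datasets', one with goal-state contexts and one with
# language contexts, sharing a single policy through z
datasets = [
    [([(0, 1), (1, 1)], 2.0)],          # context: goal state
    [([(0, 1), (1, 1)], "go right")],   # context: instruction
]
encoders = [
    lambda g: float(g),                               # goal-state encoder
    lambda l: 2.0 if l == "go right" else -2.0,       # toy text encoder
]
obj = mcil_objective(datasets, encoders,
                     policy_logprob=lambda s, a, z: -abs(a - z / 2.0),
                     sample=lambda d: d[0])
```

The key design choice this illustrates: only the encoders differ per context type; the policy sees a single latent goal space, so control learned from one context type transfers to the others.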
MCIL allows for a highly efficient training scheme, broadly useful beyond this paper: learn the majority of control from the data source that is cheapest to collect, while simultaneously learning scalable task conditioning from a small number of labeled examples. As we will see empirically, MCIL allows us to train an instruction following agent with less than 1% of data requiring language annotation, with the majority of perception and control learned instead from relabeled goal image imitation.

![Refer to caption](/html/2005.07648/assets/x4.png)

Figure 4: Comparing prior LfP (left) to our LangLfP (right). Both are trained on unstructured and unsegmented demonstrations, relabeled into goal image demonstrations. LangLfP is additionally trained on random windows paired with hindsight natural language instructions.

### IV-C LangLfP: following image or language goals.

We now have all the components to introduce *LangLfP* (language conditioned learning from play). LangLfP is a special case of MCIL (Sec. [IV-B](#S4.SS2)) applied to our problem setting, and learns perception from pixels, natural language understanding, and control end-to-end with no auxiliary losses.

**Training LangLfP.** LangLfP trains a single multicontext policy $\pi_\theta(a_t|s_t,z)$ over the datasets $\mathcal{D} = \{D_{\text{play}}, D_{(\text{play},\text{lang})}\}$.
We define $\mathcal{F} = \{g_{\text{enc}}, s_{\text{enc}}\}$, neural network encoders mapping from goal images (Appendix [B-B](#A2.SS2)) and text instructions (Appendix [B-C](#A2.SS3)), respectively, to $z$. Fig. [4](#S4.F4) compares LangLfP training to prior LfP training. At each training step, LangLfP: 1) samples a batch of image goal tasks from $D_{\text{play}}$ and a batch of language goal tasks from $D_{(\text{play},\text{lang})}$, 2) encodes image and language goals into $z$, and 3) computes the MCIL objective.
We then take a combined gradient step with respect to all modules: perception (Appendix [B-A](#A2.SS1)), language (Appendix [B-C](#A2.SS3)), and control (Appendix [B-D](#A2.SS4)), optimizing the whole architecture end-to-end as a single neural network. See full training details in Appendix [B-E](#A2.SS5).

**Following natural language instructions at test time.** At test time, LangLfP conditions only on onboard pixel observations and a free-form natural language task description to solve user-defined tasks in closed loop. See part 3 of Fig. [1](#S0.F1) and details in Appendix [B-F](#A2.SS6).

**Leveraging pretrained language models.** While LangLfP offers a straightforward way to learn natural language end-to-end from imitation, this may not always be desirable. In open-world scenarios, instruction following robots are likely to be given instructions that are synonyms of ones they have been trained to follow, but do not overlap exactly with a finite training set. For example, "shut the door" is a valid, but potentially out-of-distribution, way of describing the training task "close the door".
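The intuition behind this kind of transfer can be sketched with cosine similarity in an embedding space. The vectors below are made-up toy embeddings, not outputs of the actual multilingual language model the paper uses; only the geometry matters for the illustration.

```python
def cosine(u, v):
    # cosine similarity between two vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def nearest_training_task(instruction_vec, training_tasks):
    """In a good pretrained embedding space, an out-of-distribution
    synonym ('shut the door') should land nearest the training task
    it paraphrases ('close the door')."""
    return max(training_tasks,
               key=lambda t: cosine(instruction_vec, training_tasks[t]))

# toy embedding table standing in for the pretrained language model
emb = {
    "close the door": [0.9, 0.1, 0.0],
    "pick up the block": [0.0, 0.2, 0.9],
}
shut_the_door = [0.8, 0.2, 0.1]  # made-up embedding of the unseen synonym
best = nearest_training_task(shut_the_door, emb)  # → "close the door"
```

A policy conditioned on such embeddings (rather than raw tokens) inherits this geometry for free: nearby instructions produce nearby latent goals, even if one of them never appeared in training.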
Many recent works have successfully transferred knowledge from unlabeled text to downstream NLP tasks via pretrained embeddings [[7](#bib.bib7), [36](#bib.bib36)]. Can we achieve similar knowledge transfer to robotic manipulation? There are two motivations for this kind of transfer: 1) improving language conditioned manipulation and 2) allowing an agent to follow out-of-distribution synonym instructions (Fig. [3](#S1.F3)). To test these hypotheses, we augment LangLfP, encoding language inputs $l$ to the policy at training and test time in the pretrained embedding space of a multilingual neural language model [[51](#bib.bib51)]. We refer to this augmented model as *TransferLangLfP*. See Appendix [B-C](#A2.SS3) for details.

V Experiments
--------------

Our experiments aim to answer the following questions:

- Q0) Can a policy trained with our method learn perception, natural language understanding, and control end-to-end to solve many free-form text conditioned tasks in a 3D tabletop environment?
- Q1) Can our language conditioned multitask imitation learning (LangLfP) match the performance of goal image conditioned imitation learning (LfP)?
- Q2) Does our method's ability to leverage large unlabeled imitation datasets improve language conditioned performance?
- Q3) Does incorporating pretrained language models grant any benefit to manipulation performance?
- Q4) Does incorporating pretrained language models allow our agent to be robust to out-of-distribution synonym instructions?

**Dataset and tasks.** We conduct our experiments in the simulated 3D Playroom environment introduced in [[26](#bib.bib26)], shown in Fig. [10](#A3.F10).
It consists of an 8-DOF robotic arm controlled with continuous position control from visual observations at 30Hz to accomplish manipulation tasks from 18 distinct task families. See [[26](#bib.bib26)] for a complete discussion of the tasks. We modify the environment to include a text channel which the agent observes at each timestep, allowing humans to type unconstrained language commands (details in Appendix [C](#A3)). We define two sets of experiments for each baseline: pixel experiments, where models receive pixel observations and must learn perception end-to-end, and state experiments, where models instead receive simulator state consisting of positions and orientations for all objects in the scene. The latter provides an upper bound on how well the various methods can learn language conditioned control, independent of a difficult perception problem (which might be improved upon independently with self-supervised representation learning methods, e.g. [[41](#bib.bib41), [33](#bib.bib33), [34](#bib.bib34)]).

**Methods.** We compare the following methods (details on each in Appendix [E](#A5)):

- LangBC ("language, but no play"): a baseline natural language conditioned multitask imitation policy [[37](#bib.bib37)], trained on $D_{(\text{demo},\text{lang})}$: 100 expert demonstrations for each of the 18 evaluation tasks, paired with hindsight instructions.
- LfP ("play, but no language"): a baseline LfP model trained on $D_{\text{play}}$, conditioned on goal images at test time.
- LangLfP (ours) ("play and language"): multicontext imitation trained on unstructured data $D_{\text{play}}$ and unstructured data paired with language $D_{(\text{play},\text{lang})}$. Tasks are specified at test time using only natural language.
- Restricted LangLfP: LangLfP trained on "restricted $D_{\text{play}}$", a play dataset restricted to the same size as $D_{(\text{demo},\text{lang})}$. This restriction is somewhat artificial, as more unsegmented play can be collected for the same budget of time; it provides a controlled comparison to LangBC.
- TransferLangLfP (ours): LangLfP trained on top of pretrained language embeddings.

To perform a controlled comparison, we use the same network architecture (details in Appendix [B](#A2)) across all baselines. See Appendix [D](#A4) for a detailed description of all data sources.

### V-A Human Language Conditioned Visual Manipulation

We construct a large number of multi-stage human language conditioned manipulation tasks by treating the original 18 evaluation tasks in [[26](#bib.bib26)] as subtasks, then considering all valid N-stage transitions between them. This results in 2-stage, 3-stage, and 4-stage benchmarks, referred to here as Chain-2, Chain-3, and Chain-4. See Appendix [F](#A6) for benchmark details.
We obtain language instructions for each subtask in the same way as in training (Sec. [IV-A](#S4.SS1)), by presenting human annotators with videos of the completed tasks and asking for hindsight instructions. We present results in Table [I](#S5.T1) and Fig. [6](#S5.F6) and discuss them below.

| Method | Input | Training source | Task conditioning | Multi-18 Success (18 tasks) | Chain-4 Success (925 long-horizon tasks) |
| --- | --- | --- | --- | --- | --- |
| LangBC | pixels | predefined demos | text | 20.0% ±3.0 | 7.1% ±1.5 |
| Restricted LangLfP | pixels | unstructured demos | text | 47.1% ±2.0 | 25.0% ±2.0 |
| LfP | pixels | unstructured demos | image | 66.4% ±2.2 | 53.0% ±5.0 |
| LangLfP (ours) | pixels | unstructured demos | text | 68.6% ±1.7 | 52.1% ±2.0 |
| TransferLangLfP (ours) | pixels | unstructured demos | text | 74.1% ±1.5 | 61.8% ±1.1 |
| LangBC | states | predefined demos | text | 38.5% ±6.3 | 13.9% ±1.4 |
| Restricted LangLfP | states | unstructured demos | text | 88.0% ±1.4 | 64.2% ±1.5 |
| LangLfP (ours) | states | unstructured demos | text | 88.5% ±2.9 | 63.2% ±0.9 |
| TransferLangLfP (ours) | states | unstructured demos | text | 90.5% ±0.8 | 71.8% ±1.6 |

TABLE I: Human language conditioned visual manipulation experiments

![Refer to caption](/html/2005.07648/assets/x5.png)

Figure 5: Long horizon language conditioned visual manipulation results.

![Refer to caption](/html/2005.07648/assets/x6.png)

Figure 6: Performance scales linearly with model capacity, but only for unstructured demonstrations. We see that for the same amount of data, more diverse unstructured imitation data is better utilized by large model learning.

**Language conditioned manipulation results.** In Table [I](#S5.T1), we see that LangLfP achieves 68.6% success on individual free-form instructions (covering 18 different tasks) and 52.1% success on 925 4-stage instructions. This helps answer the first main question of this work (Q0): whether our approach can learn perception, control, and language understanding end-to-end to follow many free-form human instructions. We also see that LangLfP matches the performance of prior goal image conditioned LfP on all benchmarks within the margin of error (Q1). This is important, as it shows a more scalable mode of task conditioning can be achieved with only ∼0.1% of demonstration data requiring language annotation.

**Large unlabeled imitation datasets improve language conditioned performance.** We see in Table [I](#S5.T1) and Fig.
[6](#S5.F6 "Figure 6 ‣ V-A Human Language Conditioned Visual Manipulation ‣ V Experiments ‣ Language Conditioned Imitation Learning over Unstructured Data") that LangLfP outperforms LangBC on every benchmark. This is important because it shows that by allowing our policy to train over unlabeled demonstration data (via MCIL), it achieves significantly better performance than baseline policies trained only on typical predefined task demonstrations (Q2). We note this holds even when unlabeled training sources are restricted to the same size as labeled ones (Restricted LangLfP vs. LangBC). Qualitatively ([videos](https://language-play.github.io/#lfp-vs-bc)), we see clear differences between models that incorporate unstructured data and those that do not. We find that on long-horizon evaluations, MCIL-trained policies tend to transition well between tasks and recover from initial failures, whereas baseline policies trained on conventional demonstrations tend to quickly encounter compounding imitation error. High capacity models make the most use of unlabeled data. In Fig. [6](#S5.F6 "Figure 6 ‣ V-A Human Language Conditioned Visual Manipulation ‣ V Experiments ‣ Language Conditioned Imitation Learning over Unstructured Data") and Appendix [G-B](#A7.SS2 "G-B Play scales with model capacity ‣ Appendix G Qualitative Examples ‣ Language Conditioned Imitation Learning over Unstructured Data"), we additionally find the phenomenon that performance scales linearly with model size for policies that leverage unstructured data, whereas performance peaks and then declines for models trained on conventional predefined demonstrations. We see this holds even when datasets are restricted to the same size. This suggests that the simple recipe of collecting large unstructured imitation datasets, pairing them with a small amount of language data, then training large capacity imitation learning models may be a valid way to scale up language conditioned control. 
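As a concrete (toy) illustration of the multicontext imitation idea, the sketch below averages a single imitation loss over an image-conditioned context and a language-conditioned context. The encoders, the linear "policy", and the random batches are stand-ins purely for illustration; this is not the learned architecture used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
W = rng.normal(size=(dim, dim))  # toy linear "policy" weights

# Stand-in encoders mapping each kind of task description into a shared
# latent goal space (in the paper these are learned networks).
def encode_goal_image(image):
    return image.mean(axis=(0, 1))   # pool a (H, W, dim) "image" to a vector

def encode_language(embedding):
    return embedding                 # language already arrives as a vector

def imitation_loss(latent_goal, observation, action):
    pred = observation @ W + latent_goal
    return float(((pred - action) ** 2).mean())

# Two data sources with different conditioning: large unlabeled play
# (hindsight-relabeled with goal images) and a small language-annotated set.
play_batch = [(rng.normal(size=(4, 4, dim)), rng.normal(size=dim), rng.normal(size=dim))]
lang_batch = [(rng.normal(size=dim), rng.normal(size=dim), rng.normal(size=dim))]

# Multicontext imitation: the same imitation loss, averaged over contexts.
loss_image = np.mean([imitation_loss(encode_goal_image(g), o, a) for g, o, a in play_batch])
loss_lang = np.mean([imitation_loss(encode_language(g), o, a) for g, o, a in lang_batch])
total_loss = 0.5 * (loss_image + loss_lang)
```

The key property is that only the context encoders differ per data source; the policy and loss are shared, which is what lets a small annotated set ride on a large unlabeled one.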
![Refer to caption](/html/2005.07648/assets/x7.png)

Figure 7: Getting out of trouble with human language assistance: Unlike other forms of task conditioning, natural language conditioning allows a human operator to offer quick interactive assistance when an agent gets stuck.

Language unlocks human assistance. Natural language conditioning allows for new modes of interactive test time behavior, allowing humans to give guidance to agents that would be impractical to give via goal image or one-hot task conditioned control. See Fig. [7](#S5.F7 "Figure 7 ‣ V-A Human Language Conditioned Visual Manipulation ‣ V Experiments ‣ Language Conditioned Imitation Learning over Unstructured Data"), [this video](https://language-play.github.io/#interactive) and Sec. [G-C](#A7.SS3 "G-C The Operator Can Help the Robot ‣ Appendix G Qualitative Examples ‣ Language Conditioned Imitation Learning over Unstructured Data") for a concrete example. Sec. [G](#A7 "Appendix G Qualitative Examples ‣ Language Conditioned Imitation Learning over Unstructured Data") additionally shows how humans can quickly compose tasks with language that are outside the 18-task benchmark (but covered by the training set), like “put the block in the trash bin”.

### V-B Instruction Following with Large Pretrained Language Models

![Refer to caption](/html/2005.07648/assets/x8.png)

Figure 8: Knowledge transfer from generic text corpora benefits robotic manipulation. We see models that take pretrained embeddings as language input (purple) converge to higher performance than those that must learn language from scratch (blue).

Positive transfer to robotic manipulation. In Table [I](#S5.T1 "TABLE I ‣ V-A Human Language Conditioned Visual Manipulation ‣ V Experiments ‣ Language Conditioned Imitation Learning over Unstructured Data") and Fig.
[8](#S5.F8 "Figure 8 ‣ V-B Instruction Following with Large Pretrained Language Models ‣ V Experiments ‣ Language Conditioned Imitation Learning over Unstructured Data"), we see that TransferLangLfP systematically outperforms LangLfP and LfP. This is the first evidence, to our knowledge, that sentence embeddings obtained from large pretrained language models can significantly improve the convergence of language-guided robotic control policies (Q3).

| Method | OOD-syn (~15k instructions) | OOD-16-lang (~240k instructions) |
| --- | --- | --- |
| Random Policy | 0.0% ±0.0 | 0.0% ±0.0 |
| LangLfP | 37.6% ±2.3 | 27.94% ±3.5 |
| TransferLangLfP | 60.2% ±3.2 | 56.0% ±1.4 |

TABLE II: Out of distribution synonym robustness.

Robustness to out-of-distribution synonym instructions. In Table [II](#S5.T2 "TABLE II ‣ V-B Instruction Following with Large Pretrained Language Models ‣ V Experiments ‣ Language Conditioned Imitation Learning over Unstructured Data"), we study how robust our language conditioned policies are to synonyms not found in the training set. We see that only agents equipped with large pretrained language models (TransferLangLfP) are robust to these out of distribution synonyms (OOD-syn), giving affirmative support to Q4. Additionally, just by choosing a multilingual pretrained model [[51](#bib.bib51)], we see this kind of robustness extends to 16 different languages, which have no vocabulary overlap with training (OOD-16-lang). See videos ([link](https://language-play.github.io/#multilingual)) of TransferLangLfP following multilingual synonym instructions. We stress that these experiments do not test the ability of a policy to generalize to new kinds of manipulation tasks beyond the ones seen in training.
Rather, this shows that a simple training modification increases the number of ways a language-guided agent can be conditioned to execute the same fixed set of behaviors from its training distribution, effectively expanding the training instruction set. Find more details on these experiments in Appendix [G-D](#A7.SS4 "G-D Pretrained language models give robustness to synonym instructions ‣ Appendix G Qualitative Examples ‣ Language Conditioned Imitation Learning over Unstructured Data").

### V-C Limitations and Future Work

Although the coverage of unstructured play demonstration data mitigates failure modes in conventional imitation setups, we observe several limitations in our policies at test time. In this [video](https://language-play.github.io/#failure1), we see the policy make multiple attempts to solve the task, but it times out before it is able to do so. We see in this [video](https://language-play.github.io/#failure2) that the agent encounters a particular kind of compounding error, where the arm flips into an awkward configuration, likely avoided by humans during teleoperated play. This is potentially mitigated by a more stable choice of rotation representation, or more varied play collection. We note that the human is free to help the agent out of these awkward configurations using language assistance, as demonstrated in Sec. [G-C](#A7.SS3 "G-C The Operator Can Help the Robot ‣ Appendix G Qualitative Examples ‣ Language Conditioned Imitation Learning over Unstructured Data"). More examples of failures can be seen [here](https://language-play.github.io/#failures). While LangLfP relaxes important constraints around task specification, it is fundamentally a goal-directed imitation method and lacks a mechanism for autonomous policy improvement.
An exciting area for future work may be one that combines the coverage of teleoperated play, the scalability and flexibility of multicontext imitation pretraining, and the autonomous improvement of reinforcement learning, similar to prior successful combinations of LfP and RL [[14](#bib.bib14)]. Additionally, the scope of this work is task agnostic control in a single simulated environment with a fixed set of objects. We note this is consistent with the standard imitation assumptions that training and test tasks are drawn i.i.d. from the same distribution. An interesting question for future work is whether training on a large play corpus covering many rooms and objects allows for generalization to unseen rooms or objects.

VI Conclusion
--------------

We proposed a scalable framework for combining multitask imitation with free-form text conditioning. Our method can learn language conditioned visuomotor policies, capable of following multiple human instructions over a long horizon in a dynamically accurate 3D tabletop setting. Key to our method is the ability to learn over unstructured and unlabeled imitation data, a property we made possible by introducing Multicontext Imitation Learning. Critically, the ability to learn from unstructured data reduced the cost of language annotation to less than 1% of total data, while also resulting in much higher language conditioned task success. Finally, we showed a simple, but effective way to combine any language conditioned policy with large pretrained language models. We found that this small modification allowed our policy to be robust to many out-of-distribution synonym instructions, without requiring the collection of additional demonstration data.

### Acknowledgments

We thank Karol Hausman, Eric Jang, Mohi Khansari, Kanishka Rao, Jonathan Thompson, Luke Metz, Anelia Angelova, Sergey Levine, and Vincent Vanhoucke for providing helpful feedback on this manuscript.
We additionally thank the annotators for providing paired language instructions.
20bce8ae-eef1-4d75-9c0b-1360edda8c49
trentmkelly/LessWrong-43k
LessWrong
Electrostatic Airships?

Airships are pretty dang cool. Airplanes need a continuous expenditure of energy to stay in the air, but if you just fill a bag with a light gas, you can stay up in the air with no energy expenditure at all.

The two lightest gases are hydrogen and helium. Though the hydrogen atom is 4 times lighter than helium, hydrogen gas is made of two hydrogen atoms bonded together, so it's only twice as light as helium gas. The difference in lifting power is even more minor: What matters for lifting power of a gas is the difference between its density and that of ordinary air. Hydrogen and helium are both much lighter than air, so the lifting power of either gas is nearly equal to the density of air.

Hydrogen and helium both have problems as lifting gases. Helium's problem is that it's very expensive. Helium comes out of the ground, usually as a byproduct of fossil fuel extraction. There's a finite amount of it in the Earth's crust, and effort must be expended to obtain more of it. On the demand side of the equation, helium is a very useful gas, with many applications in cooling, spaceflight, balloons, etc. It's quite ironic the second most common element in the universe is so hard to come by on Earth.

Hydrogen is much cheaper. It can be created from methane, or by electrolysis of water. The main problem with using hydrogen in an airship is safety. Hydrogen is very flammable, and great care must be taken to avoid sparks, and to avoid allowing any hydrogen and oxygen to mix.

Some people have proposed vacuum airships. Vacuum would not be a much better lifting gas than hydrogen or helium. Its lifting power is exactly equal to the density of air, and hydrogen and helium are already pretty close to that. But maybe we'd like to use vacuum because its cost is equal to the energy cost of pumping out the air in a given volume of space, and it's not flammable. There are some problems with this idea, though.
The obvious problem is that an air sac filled with vacuum would simply collapse.
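The lifting-power comparison above is easy to check with standard sea-level gas densities (approximate textbook values at 0 °C and 1 atm; exact figures vary with temperature and pressure):

```python
# Approximate gas densities at 0 degrees C and 1 atm, in kg/m^3.
AIR, HYDROGEN, HELIUM, VACUUM = 1.293, 0.090, 0.179, 0.0

# Net lift per cubic meter of envelope is the density difference with air.
lift = {
    "hydrogen": AIR - HYDROGEN,   # ~1.20 kg per m^3
    "helium": AIR - HELIUM,       # ~1.11 kg per m^3
    "vacuum": AIR - VACUUM,       # ~1.29 kg per m^3
}

# Vacuum out-lifts hydrogen by only about 7%, and hydrogen out-lifts
# helium by about 8% -- the gaps are small, as the post argues.
vacuum_vs_hydrogen = lift["vacuum"] / lift["hydrogen"]
hydrogen_vs_helium = lift["hydrogen"] / lift["helium"]
```

So per cubic meter, the choice of lifting gas changes lift by under ten percent; cost and flammability dominate the decision.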
New Petrov Game Brainstorm

Big thanks to the LW team for putting together the Petrov Day experience! (Setup. Follow up.) I looked over the comments and it seems like there were a number of suggestions for how to do this better. Instead of waiting for the next year, let's do it right now.

My proposed setup:

1. Take the original 125 LW users. Take a prize pool of $1,250 (or more if people are willing to donate). The prize pool is split evenly among the players, but you have to survive the game to get paid. Everyone is anonymized in the game.
2. The game will last a minimum of 4 days (to give everyone enough time to act, strategize, and think). After 4 days, there will be an increasing probability that the game will end at any minute. (This is to prevent anyone trying to attack right when the game ends to avoid retaliation. In expectation, the game should last about a week.)
3. Each player will have a number of missiles equal to the number of players. They can launch any number of them.
4. When a missile is launched: a) the attacked player is notified that they are being attacked by a specific player (and therefore has an option to retaliate), b) 48 hours after the launch, the attacked player is declared dead: they can no longer perform any actions and will not receive a payout, c) 48 hours after the launch, the attacking player gets the target player's entire prize pool.
5. During the game there will be at least 125 fake alerts. They will be generated randomly (so some players might receive zero or more than one fake alerts). It will look the same as if some specific player has launched a missile against you. 48 hours after the notification, you'll find out whether it was real or not by whether or not you're still alive.

Additional details:

* You can see who has been killed.
* You can only know about missile launches that you have done or that have been done to target you.
* If you take money from a player who already took money from someone else, you get those too. So in
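The end-of-game rule above can be simulated. The post doesn't say how the ending probability should increase, so the linear per-minute hazard below is an assumed schedule, with a step size tuned so the expected length lands near the stated "about a week":

```python
import random

random.seed(0)
MINUTES_PER_DAY = 24 * 60
MIN_MINUTES = 4 * MINUTES_PER_DAY   # guaranteed minimum of 4 days

def game_length_minutes(hazard_step=8e-8):
    """Sample one game length. After day 4, the per-minute probability
    that the game ends grows linearly; both the linear shape and the
    step size are assumptions, not rules from the post."""
    t, p = MIN_MINUTES, 0.0
    while random.random() >= p:
        t += 1
        p = min(1.0, p + hazard_step)
    return t

samples = [game_length_minutes() for _ in range(200)]
mean_days = sum(samples) / len(samples) / MINUTES_PER_DAY
```

With this schedule the average game runs roughly seven days, while the exact end time stays unpredictable, which is the point of the rule.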
"This post requires some knowledge of mathematical logic and computability theory. The basic idea is due to Vladimir Nesov and me.

Let the universe be a computer program U that can make calls to a halting oracle. Let the agent be a subprogram A within U that can also make calls to the oracle. The source code of both A and U is available to A. Here's an example U that runs Newcomb's problem and returns the resulting utility value:

    def U():
        # Fill boxes, according to predicted action.
        box1 = 1000
        box2 = 1000000 if (A() == 1) else 0
        # Compute reward, based on actual action.
        return box2 if (A() == 1) else (box1 + box2)

A complete definition of U should also include the definition of A, so let's define it. We will use the halting oracle only as a provability oracle for some formal system S, e.g. Peano arithmetic. Here's the algorithm of A:

1. Play chicken with the universe: if S proves that A()≠a for some action a, then return a.
2. For every possible action a, find some utility value u such that S proves that A()=a ⇒ U()=u. If such a proof cannot be found for some a, break down and cry because the universe is unfair.
3. Return the action that corresponds to the highest utility found on step 2.

Now we want to prove that the agent one-boxes, i.e. A()=1 and U()=1000000. That will follow from two lemmas.

Lemma 1: S proves that A()=1 ⇒ U()=1000000 and A()=2 ⇒ U()=1000. Proof: you can derive that from just the source code of U, without looking at A at all.

Lemma 2: S doesn't prove any other utility values for A()=1 or A()=2. Proof: assume, for example, that S proves that A()=1 ⇒ U()=42. But S also proves that A()=1 ⇒ U()=1000000, therefore S proves that A()≠1. According to the first step of the algorithm, A will play chicken with the universe and return 1, making S ~~inconsistent~~ unsound (thx Misha). So if S is sound, that can't happen.

We see that the agent defined above will do the right thing in Newcomb's problem.
And the proof transfers easily to many other toy problems, like the symmetric Prisoner's Dilemma.

But why? What's the point of this result?

There's a big problem about formalizing UDT. If the agent chooses a certain action in a deterministic universe, then it's a true fact about the universe that choosing a different action would have caused Santa to appear. Moreover, if the universe is computable, then such silly logical counterfactuals are not just true but provable in any reasonable formal system. When we can't compare actual decisions with counterfactual ones, it's hard to define what it means for a decision to be "optimal". For example, one previous formalization searched for formal proofs up to a specified length limit. Problem is, that limit is a magic constant in the code that can't be derived from the universe program alone. And if you try searching for proofs without a length limit, you might encounter a proof of a "silly" counterfactual which will make you stop early before finding the "serious" one. Then your decision based on that silly counterfactual can make it true by making its antecedent false... But the bigger problem is that we can't say exactly what makes a "silly" counterfactual different from a "serious" one.

In contrast, the new model with oracles has a nice notion of optimality, relative to the agent's formal system. The agent will always return whatever action is proved by the formal system to be optimal, if such an action exists. This notion of optimality matches our intuitions even though the universe is still perfectly deterministic and the agent is still embedded in it, because the oracle ensures that determinism is just out of the formal system's reach.

P.S. I became a SingInst research associate on Dec 1. They did not swear me to secrecy, and I hope this post shows that I'm still a fan of working in the open.
I might just try to be a little more careful because I wouldn't want to discredit SingInst by making stupid math mistakes in public :-)"

The above is a chunk of text from a post created by a user on the website LessWrong.

What are some reasons why people may like or agree with what this person is saying?

People may agree with or like this post for several reasons:

1. Theoretical Interest: The post delves into complex concepts of mathematical logic and computability theory. Those who are interested in these fields may find the discussion stimulating and intriguing.
2. Logical Argumentation: The post is well-structured with clear arguments and counterarguments. It also provides proofs to support its claims, which can be appreciated by individuals who value logical and systematic reasoning.
3. Novel Approach: The idea of using a halting oracle in the context of a deterministic universe and an embedded agent presents a fresh perspective on the problem. Those who enjoy innovative thinking or novel solutions may find this appealing.
4. Relevance to AI and Decision Theory: The post discusses the idea of decision-making in deterministic universes, which is a crucial concept in AI and decision theory. People working in these fields, or those interested in them, may find the post insightful and relevant.
5. Openness and Honesty: The author's transparency about their affiliation and their commitment to open work may resonate with individuals who value these traits. The author's humility in admitting potential for error may also be appreciated.
6. Intellectual Challenge: The post presents a challenging intellectual problem. People who enjoy grappling with complex theoretical problems may appreciate this aspect.

Can you expand on the concept of a halting oracle and how it contributes to the decision-making process in a deterministic universe?
A halting oracle, in the context of theoretical computer science, is a hypothetical device that can perfectly determine whether any given computer program will eventually halt (i.e., stop running) or continue to run indefinitely. This is a solution to the famous halting problem, which is known to be undecidable - there's no algorithm that can determine with certainty whether an arbitrary program halts or not.

In the context of the post, the halting oracle is used as a provability oracle for a formal system, such as Peano arithmetic. This means it's used to determine whether certain statements or propositions within that system can be proven true or false.

In the deterministic universe proposed in the post, the agent (subprogram A) and the universe (program U) are both deterministic, meaning their outcomes are completely determined by their initial conditions. However, the agent has access to the halting oracle, which allows it to explore different outcomes without actually executing them.

This contributes to the decision-making process in a few ways:

1. It allows the agent to consider counterfactuals. Even though the universe is deterministic, the agent can use the oracle to consider what would happen if it took a different action. This is crucial for decision-making, as it allows the agent to weigh the potential outcomes of different actions.
2. It provides a mechanism for the agent to avoid certain outcomes. In the post, the agent uses the oracle to play "chicken" with the universe - if it can prove that a certain action leads to an undesirable outcome, it will choose a different action.
3. It allows the agent to make decisions based on proofs rather than empirical evidence. Because the oracle can determine whether a statement is provable in the formal system, the agent can base its decisions on these proofs, rather than having to rely on observations or experiments.
In summary, the halting oracle allows the agent to make informed decisions in a deterministic universe by considering counterfactuals and making decisions based on proofs.

Explain how the concept of counterfactuals is crucial in decision-making, especially in the context of a deterministic universe with an agent having access to a halting oracle.

Counterfactuals are hypothetical scenarios about what could have happened if different actions were taken. They are fundamental to decision-making because they allow us to evaluate the potential outcomes of different choices, even if we can only actually enact one of those choices.

In a deterministic universe, every event is determined by the prior state of the universe and the laws governing its evolution. This means that, in theory, if you knew the complete state of the universe at one point in time and all the laws that govern it, you could predict the future with perfect accuracy.

However, this determinism poses a problem for decision-making. If everything is predetermined, how can we meaningfully talk about making a choice? This is where counterfactuals come in. Even in a deterministic universe, we can still consider what would happen if we made a different choice, even if we can't actually make that choice. This allows us to evaluate our options and make a decision.

Now, let's bring in the halting oracle. In the context of the post, the halting oracle is used as a provability oracle, which can determine whether a certain statement is provable or not. This gives the agent the ability to consider counterfactuals in a rigorous, formal way. It can use the oracle to determine whether a certain action would lead to a desirable outcome, and choose its action accordingly.

In essence, the halting oracle allows the agent to 'break free' from the deterministic nature of the universe, at least in a decision-making sense.
It can consider different actions and their outcomes as if they were real possibilities, even though in a strictly deterministic sense, they are not. This makes counterfactuals a crucial tool for decision-making in a deterministic universe.

Considering the deterministic universe with an agent and a halting oracle, how might the introduction of quantum uncertainty affect the agent's decision-making process?

The introduction of quantum uncertainty would fundamentally change the nature of the universe in which the agent operates. Instead of a deterministic universe where all events are fully determined by prior states, the universe would now be probabilistic, with certain events having only a certain probability of occurring.

In such a universe, the agent's decision-making process would need to account for this inherent uncertainty. Instead of simply considering whether a certain action leads to a desirable outcome, the agent would need to consider the probability of that outcome occurring. This could involve a form of probabilistic reasoning or statistical analysis.

The halting oracle, in this context, might be less useful or might need to be adapted. Since the oracle is used to determine whether a certain statement is provable, it might struggle with the inherent uncertainty of quantum mechanics. Statements about the future state of the universe might no longer be provable in a strict sense, but only probable to a certain degree.

However, it's also possible that the halting oracle could be adapted to work with this uncertainty. For example, it might be used to determine the probability of a certain statement being true, rather than whether it's provable. This would allow the agent to continue using a form of counterfactual reasoning, even in the face of quantum uncertainty.

In summary, the introduction of quantum uncertainty would add a layer of complexity to the agent's decision-making process, requiring it to deal with probabilities rather than certainties.
The role of the halting oracle would also need to be reevaluated in light of this uncertainty.
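The decision procedure from the quoted post can be sketched concretely. Below, the provability oracle is replaced by a lookup of exactly the statements S proves in this Newcomb universe (per Lemmas 1 and 2); the self-referential proof search, which requires an actual halting oracle, is precisely what the stub elides:

```python
def oracle_proves_A_is_not(a):
    # Lemma 2: for a sound S, no statement A()!=a is provable here.
    return False

def oracle_provable_utility(a):
    # Lemma 1: S proves A()=1 => U()=1000000 and A()=2 => U()=1000.
    return {1: 1000000, 2: 1000}[a]

def A(actions=(1, 2)):
    # Step 1: play chicken with the universe.
    for a in actions:
        if oracle_proves_A_is_not(a):
            return a
    # Steps 2-3: return the action with the highest provable utility.
    return max(actions, key=oracle_provable_utility)

def U():
    # Fill boxes, according to predicted action.
    box1 = 1000
    box2 = 1000000 if (A() == 1) else 0
    # Compute reward, based on actual action.
    return box2 if (A() == 1) else (box1 + box2)

outcome = (A(), U())   # the agent one-boxes: (1, 1000000)
```

The sketch only illustrates the control flow of the three steps; the content of the post is in how those oracle calls could be well-defined at all.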
Seeking AI Alignment Tutor/Advisor: $100–150/hr

I am actively looking for a tutor/advisor with expertise in AI x-risk, with the primary goal of collaboratively determining the most effective ways I can contribute to reducing AI existential risks (X-risk).

Tutoring Goals

I suspect that I misunderstand key components of the mental models that lead some highly rational and intelligent individuals to assign a greater than 50% probability of AI-related existential catastrophe ("p-doom"). By gaining a clearer understanding of these models, I aim to refine my thinking and make better-informed decisions about how to meaningfully reduce AI X-risk. Specifically, I want to delve deeper into why and how misaligned AGI might be developed, and why it wouldn’t be straightforward to solve alignment before it becomes a critical issue. To clarify, I do NOT believe we could contain or control a misaligned AGI with current safety practices. What I do find likely is that we will be able to avoid such a situation altogether. In addition to improving my understanding of AI X-risks, I also seek to explore strategies that I could aid in implementing in order to reduce AI X-risk.

About Me

- My primary motivation is effective altruism, and I believe that mitigating AI X-risk is the most important cause to work on.
- I have 7 years of experience working with machine learning, with a focus on large language models (LLMs), and possess strong technical knowledge of the field.
- My current p-doom estimate is 25%, derived from my own model, which gives about 5%, but I adjust upward since some highly rational thinkers predict significantly higher p-doom. Even if my p-doom were 1%, I would still view AI X-risk as the most pressing issue and dedicate my time to it.

Why Become My Tutor?

- You will be directly contributing to AI safety/alignment efforts, working with someone highly committed to making an impact.
- Opportunity for **highly technical 1-on-1 discussions** about the cutting-edge in AI alignment and X-risk reduction strategies. -
Deep learning models might be secretly (almost) linear

Crossposted from my personal blog. Epistemic status: Pretty speculative, but there is a surprising amount of circumstantial evidence.

I have been increasingly thinking about NN representations and slowly coming to the conclusion that they are (almost) completely secretly linear inside[1]. This means that, theoretically, if we can understand their directions, we can very easily exert very powerful control on the internal representations, as well as compose and reason about them in a straightforward way. Finding linear directions for a given representation would allow us to arbitrarily amplify or remove it and interpolate along it as desired. We could also then directly 'mix' it with other representations as desired. Measuring these directions during inference would let us detect the degree of each feature that the network assigns to a given input. For instance, this might let us create internal 'lie detectors' (which there is some progress towards) which can tell if the model is telling the truth, or being deceptive.

While nothing is super definitive (and clearly networks are not 100% linear), I think there is a large amount of fairly compelling circumstantial evidence for this position. Namely:

Evidence for this:

1. All the work from way back when about interpolating through VAE/GAN latent space. I.e. in the latent space of a VAE on CelebA there are natural 'directions' for recognizable semantic features like 'wearing sunglasses' and 'long hair' and linear interpolations along these directions produced highly recognizable images
2. Rank 1 or low rank editing techniques such as ROME work so well (not perfectly but pretty well). These are effectively just emphasizing a linear direction in the weights.
3. You can apparently add and mix LoRas and it works about how you would expect.
4. You can merge totally different models.
People in the Stable Diffusion community literally additively merge model weights with a weighted sum and it works!
5. Logit len
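As a toy illustration of what "(almost) linear" would buy you, here is the standard difference-of-means trick on synthetic activations. Everything here (the feature axis, the scale, the dimension) is made up for the example; real interpretability work does this on actual network activations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "activations": a feature lives along one fixed axis plus noise.
d = 16
feature_axis = np.zeros(d)
feature_axis[0] = 1.0
with_feature = rng.normal(scale=0.1, size=(100, d)) + 2.0 * feature_axis
without_feature = rng.normal(scale=0.1, size=(100, d)) - 2.0 * feature_axis

# If representations are (almost) linear, the difference of class means
# recovers the feature direction...
direction = with_feature.mean(axis=0) - without_feature.mean(axis=0)
direction /= np.linalg.norm(direction)

# ...projecting onto it acts as a detector for the feature...
def feature_score(activation):
    return float(activation @ direction)

# ...and simply adding the direction steers a representation toward it.
steered = without_feature[0] + 4.0 * direction
```

This is the whole appeal: if the linearity hypothesis holds, detection is a dot product and control is vector addition, with no retraining.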
Rational Reading: Thoughts On Prioritizing Books

A large element of instrumental rationality consists of filtering, prioritizing, and focusing.  It's true for tasks, for emails, for blogs, and for the multitude of other inputs that many of us are drowning in these days[1].  Doing everything, reading everything, commenting on everything is simply not an option - it would take infinite time.  We could simply limit time and do what happens to catch our attention in that limited time, but that's clearly not optimal.  Spending some time prioritizing rather than executing will always improve results if items can be prioritized and vary widely in benefit.  So maximizing the results we get from our finite time requires, for a variety of domains:

1. Filtering: a quick first-pass to get input down to a manageable size for the higher-cost effort of prioritizing.
2. Prioritizing: briefly evaluating the impact each item will have towards your goals.
3. Focusing: on the highest-priority items.

I have some thoughts, and am looking for more advice on how to do this for non-fiction reading.  I've stopped buying books that catch my attention, because I have an inpile of about 3-4 shelves of unread books that have been unread for years.  Instead, I put them on my Amazon Wishlists, which as a result have swelled to a total of 254 books - obviously un-manageable, and growing much faster than I read.

One obvious question to ask when optimizing is: what is the goal of reading?  Let me suggest a few possibilities:

* Improve performance at a current job/role.  For example, as Executive Director of a nonprofit, I could read books on fundraising or management.
* Relatedly, work towards a current goal.  Here is where it helps to have identified your goals, perhaps in an Annual Review.
As a parent, for example, there are an infinitude of parenting books that I could read, but I chose for this year to work specifically on positive psychology parenting, as it seemed like a potentially high-impact skill to learn.  This massively filter
750a8eb2-6acc-45f5-888f-86f6ee9dd43e
trentmkelly/LessWrong-43k
LessWrong
Claim explainer: donor lotteries and returns to scale Sometimes, new technical developments in the discourse around effective altruism can be difficult to understand if you're not already aware of the underlying principles involved. I'm going to try to explain the connection between one such new development and an important underlying claim. In particular, I'm going to explain the connection between donor lotteries (as recently implemented by Carl Shulman in cooperation with Paul Christiano)1 and returns to scale. (This year I’m making a $100 contribution to this donor lottery, largely for symbolic purposes to support the concept.) I'm not sure I'm adding much to Carl's original post on making bets to take advantage of returns to scale with this explainer. Please let me know whether you think this added anything or not. What is a donor lottery? Imagine ten people each have $1,000 to give to charity this year. They pool their money, and draw one of their names out of a hat. The winner gets to decide how to give away all $10,000. This is an example of a donor lottery. More generally, a donor lottery is an arrangement where a group of people pool their money and pick one person to give it away. This selection is randomized so that each person has a probability of being selected proportional to their initial contribution. Selfish reasons to gamble Let's start with the case of a non-charitable expenditure. Usually, for consumption decisions, we have what economists call diminishing marginal utility. This is because we have limited ability to consume things, and also because we make the best purchases first. Food is an example of something we have limited appetite for. After a certain point, we just aren't hungry anymore. But we also buy the more important things first. Your first couple dollars a day make the difference between going hungry and having enough food.
Your next couple dollars a day go to buying convenience or substituting higher-quality-foods, which is a material improvement, but nowhere near as big as t
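The mechanism in the opening example is easy to make precise: each donor wins with probability proportional to their contribution, and the winner directs the whole pool. A toy sketch (donor names and amounts are made up):

```python
import random

def donor_lottery(contributions, rng):
    """Pick one donor, with probability proportional to contribution,
    to direct the entire pooled amount."""
    donors = list(contributions)
    weights = [contributions[d] for d in donors]
    winner = rng.choices(donors, weights=weights, k=1)[0]
    return winner, sum(contributions.values())

# Ten donors with $1,000 each: every donor has a 1/10 chance of
# directing the full $10,000.
contributions = {f"donor_{i}": 1000 for i in range(10)}
winner, pool = donor_lottery(contributions, random.Random(0))
```

Note that each donor's expected dollars directed equals their contribution, which is why the arrangement is fair in expectation regardless of who wins.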
ae2c3f05-ff1e-496b-b424-5e88be0987d4
trentmkelly/LessWrong-43k
LessWrong
Belief alignment The following are some non-technical ideas about AI alignment based on human beliefs, rather than our true reward function. My impression is that the role of beliefs is often implied in passing, but I haven‘t found any elaborations on the topic. I‘d be grateful for relevant references, if someone knows any. Do as I say, not as I do I believe we should not aim to make AIs imitate our behaviour or even just our preferences in every regard. An AI with the same reward function as an average human would want what we want, e.g. to be in control of their own lives, to make lots of money, to eat icecream, etc. At best, having such human goals would make them inefficient, at worst dangerous, but mostly, it would make no sense. Why should a robot – or even worse, a disembodied AI – want to eat icecream? But most humans like icecream and delight in eating it sometimes and there is no logical reason why an AI could not want the same, even if it made no practical sense. An AI with a copy of a human reward function would also be undesirable in that the latter would be adapted to human biases, selfishness, laziness, and the general limitations of humans, which are not easily overcome (by humans) even with ample knowledge and time to think. Still, we want the behaviour of an AI to match what we want, and simply spelling it out is unfeasible, since we are unable to take into account every potentially relevant variable in every possible situation. I believe a solution is to use human beliefs. These include all that we think we know about our own values and preferences and everything that we want an AI to do. They are also something that we can communicate, and an Artificial General Intelligence would by definition be capable of understanding us. Communicating all our beliefs would still be unfeasible, but an AGI should also be capable of reasoning about our beliefs, fill in the gaps, and in case of doubt ask us to clarify. 
Once it has a clear understanding of our moral beliefs, an
a06a9017-b89a-4d00-add4-5ec3718f7a98
trentmkelly/LessWrong-43k
LessWrong
Craving, suffering, and predictive processing (three characteristics series) This is the third post of the "a non-mystical explanation of insight meditation and the three characteristics of existence" series. I originally intended this post to more closely connect no-self and unsatisfactoriness, but then decided on focusing on unsatisfactoriness in this post and relating it to no-self in the next one. Unsatisfactoriness In the previous post, I discussed some of the ways that the mind seems to construct a notion of a self. In this post, I will talk about a specific form of motivation, which Buddhism commonly refers to as craving (taṇhā in the original Pali). Some discussions distinguish between craving (in the sense of wanting positive things) and aversion (wanting to avoid negative things); this article uses the definition where both desire and aversion are considered subtypes of craving. My model is that craving is generated by a particular set of motivational subsystems within the brain. Craving is not the only form of motivation that a person has, but it normally tends to be the loudest and most dominant. As a form of motivation, craving has some advantages: * People tend to experience a strong craving to pursue positive states and avoid negative states. If they had less craving, they might not do this with an equal zeal. * To some extent, craving looks to me like a mechanism that shifts behaviors from exploration to exploitation. * In an earlier post, Building up to an Internal Family Systems model, I suggested that the human mind might incorporate mechanisms that acted as priority overrides to avoid repeating particular catastrophic events. Craving feels like a major component of how this is implemented in the mind. * Craving tends to be automatic and visceral. A strong craving to eat when hungry may cause a person to get food when they need it, even if they did not intellectually understand the need to eat. 
At the same time, craving also has a number of disadvantages: * Craving superficially looks like it cares about
8fee27f1-b1f3-4722-906c-c96e2e0780e5
trentmkelly/LessWrong-43k
LessWrong
Incentives affecting alignment-researcher encouragement My hypothesis: I think the incentives for "cultivating more/better researchers in a preparadigmatic field" lean towards "don't discourage even less-promising researchers, because they could luck out and suddenly be good/useful to alignment in an unexpected way". Analogy: This is like how investors encourage startup founders because they bet on a flock of them, not necessarily because any particular founder's best bet is to found a startup. If timelines are short enough that [our survival depends on [unexpectedly-good paradigms]], and [unexpectedly-good paradigms] come from [black-swan researchers], then the AI alignment field is probably (on some level, assuming some coordination/game theory) incentivized to black-swan farm researchers. Note: This isn't necessarily bad (and in fact it's probably good overall), it just puts the incentives into perspective. So individual researchers don't feel so bad about "not making it" (where "making it" could be "getting a grant" or "getting into a program" or...) The questions: Is this real or not? What, if anything, should anyone do, with this knowledge in hand?
8d20ac85-d851-402c-971d-bc87a38f91f7
trentmkelly/LessWrong-43k
LessWrong
BYOL (Buy Your Own Lunch) For the format that’s easy on the eyes: https://medium.com/@John_Greer/buy-your-own-lunch-byol-351f6b772287 Imagine you’re at the tail end of a business meeting at a restaurant. Your plate is still 2/3 full since you didn’t get to eat as much as you wanted because you were too busy answering questions. (That’s a problem for a different post.) The check comes. > “I’ve got it, Jim!” > “No, Bob. Let me!” > “Really, it’s quite alright!” > *insert wrestling match over the check holder1* I never liked playing the back and forth game of who pays the bill. I don’t like it as a diner and I didn’t like it when I worked as a server. The problem seems to be one of signaling. I’ll have to get Robin Hanson or Scott Alexander's opinion but it seems like it’s rude if you don’t offer to pay for the group. It’s also rude and signals cheapness if you don’t argue with whoever is offering to pay. In poorer families, when one person, let’s say Grandma, is known to have more money, everyone else knows not to argue with her and instead just says “thank you” with a hint of humility and shame. The dating world has certain established norms like "the man pays for things". The problem in the non-dating world is it would be weird to say upfront that you aren’t going to pay for someone’s meal because there’s not an established norm of who pays for a meal outside of a potential employer paying for a potential new employee’s meal. It’s like trying to break up a friendship. There’s no norm for that like there is for breaking up a romantic relationship: https://youtu.be/7x3knxMBHco I wanted to try to solve the problem of communicating that we can pay for ourselves without it being so weird. Enter BYOL. There’s (bring your own booze) BYOB. I am coining the term (buy your own lunch) BYOL. Well, technically my cofounder Kelsey helped me come up with the name so she deserves credit. 
Examples: > “We’d love to meet for lunch (byol).” > “We’re having a lunch meeting at Mendocino Farms (byol).
cbcaffe8-6f2f-4252-9978-406f1a681583
trentmkelly/LessWrong-43k
LessWrong
Rethinking Laplace's Rule of Succession Imagine a sequence of binary outcomes generated independently and identically by some stochastic process. After observing N outcomes, with n successes, Laplace's Rule of Succession suggests that our confidence in another success should be (n+1)/(N+2). This corresponds to a uniform prior over [0,1] for the underlying probability. But should we really be uniform about probabilities? I think a uniform prior is wrong for three reasons: 1. The uniform prior suggests we should be equally surprised if the underlying probability lies in the interval [0, 0.0001] as in [0.3456, 0.3457]. But this seems wrong. I can think of many processes that give probabilities in the first interval — for example, any process that succeeds only in rare edge cases. In contrast, I couldn't list any processes that give probabilities specifically around 0.3456. The uniform prior fails to capture the wide range of log-odds that occur in real-life processes. 2. Under the uniform prior, the process is almost surely not deterministic — i.e. there is zero prior likelihood of p being exactly 0.0 or 1.0. This seems wrong. Among probabilistic programs that generate binary outcomes, there are very simple deterministic ones (e.g. "always output 0" or "always output 1"). An appropriate prior should have nonzero prior probability on these simple programs. 3. The uniform prior assigns zero likelihood to simple fractions like p=1/2 or p=5/6. This too seems wrong — simple rational probabilities should have higher weight. To fix this, we should mix in the Thomae distribution, which adds a weight (m·n)^(-α) to each fraction m/(m+n) for every pair 1 ≤ m,n ≤ 100.
I propose this mixture distribution: w1 * logistic-normal(0, sigma^2) + w2 * 0.5(dirac(0) + dirac(1)) + w3 * thomae_{100}(α) + w4 * uniform(0,1) where: * The first term captures logistic transformations of normal variables (weight w1), resolving the issue that probabilities should be spread across log-odds * The second term captures deterministic
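The baseline rule and the general Bayesian update are easy to sketch numerically. A minimal Python sketch (the grid below is a coarse stand-in for the uniform component; the Dirac and Thomae components would just be different weight dictionaries, with the mixture weights w1..w4 and α left unspecified, as in the post):

```python
def laplace(n, N):
    """Laplace's Rule of Succession: confidence in another success
    after observing n successes in N trials."""
    return (n + 1) / (N + 2)

def posterior_predictive(prior, n, N):
    """P(next success) after n successes in N trials, under a discrete
    prior given as a {p: weight} dict. Any mixture component of the
    proposed distribution could be plugged in as such a dict."""
    post = {p: w * p**n * (1 - p)**(N - n) for p, w in prior.items()}
    Z = sum(post.values())
    return sum(p * w for p, w in post.items()) / Z

# Coarse grid stand-in for the uniform component: recovers Laplace's rule.
uniform_grid = {k / 1000: 1 / 999 for k in range(1, 1000)}
assert abs(posterior_predictive(uniform_grid, 1, 1) - laplace(1, 1)) < 1e-3
```

Swapping in a prior with atoms at 0, 1, and simple fractions changes the predictive in exactly the direction the three objections call for: a long run of identical outcomes rapidly concentrates mass on the deterministic and simple-fraction hypotheses instead of crawling toward them at the Laplace rate.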
bbbf0691-d4d4-4e4e-abf3-61d170cc35cf
StampyAI/alignment-research-dataset/arbital
Arbital
Bayes' rule: Odds form One of the more convenient forms of [Bayes' rule](https://arbital.com/p/1lz) uses [relative odds](https://arbital.com/p/1rb). Bayes' rule says that, when you observe a piece of evidence $e,$ your [posterior](https://arbital.com/p/1rp) odds $\mathbb O(\boldsymbol H \mid e)$ for your hypothesis [https://arbital.com/p/-vector](https://arbital.com/p/-vector) $\boldsymbol H$ given $e$ is just your [prior](https://arbital.com/p/1rm) odds $\mathbb O(\boldsymbol H)$ on $\boldsymbol H$ times the [https://arbital.com/p/-56s](https://arbital.com/p/-56s) $\mathcal L_e(\boldsymbol H).$ For example, suppose we're trying to solve a mysterious murder, and we start out thinking the odds of Professor Plum vs. Miss Scarlet committing the murder are 1 : 2, that is, Scarlet is twice as likely as Plum to have committed the murder [a priori](https://arbital.com/p/1rm). We then observe that the victim was bludgeoned with a lead pipe. If we think that Plum, *if* he commits a murder, is around 60% likely to use a lead pipe, and that Scarlet, *if* she commits a murder, would be around 6% likely to use a lead pipe, this implies [relative likelihoods](https://arbital.com/p/1rq) of 10 : 1 for Plum vs. Scarlet using the pipe. The [posterior](https://arbital.com/p/1rp) odds for Plum vs. Scarlet, after observing the victim to have been murdered by a pipe, are $(1 : 2) \times (10 : 1) = (10 : 2) = (5 : 1)$. We now think Plum is around five times as likely as Scarlet to have committed the murder. # Odds functions Let $\boldsymbol H$ denote a [https://arbital.com/p/-vector](https://arbital.com/p/-vector) of hypotheses. An odds function $\mathbb O$ is a function that maps $\boldsymbol H$ to a set of [https://arbital.com/p/-1rb](https://arbital.com/p/-1rb).
For example, if $\boldsymbol H = (H_1, H_2, H_3),$ then $\mathbb O(\boldsymbol H)$ might be $(6 : 2 : 1),$ which says that $H_1$ is 3x as likely as $H_2$ and 6x as likely as $H_3.$ An odds function captures our *relative* probabilities between the hypotheses in $\boldsymbol H;$ for example, (6 : 2 : 1) odds are the same as (18 : 6 : 3) odds. We don't need to know the absolute probabilities of the $H_i$ in order to know the relative odds. All we require is that the relative odds are proportional to the absolute probabilities: $$\mathbb O(\boldsymbol H) \propto \mathbb P(\boldsymbol H).$$ In the example with the death of Mr. Boddy, suppose $H_1$ denotes the proposition "Reverend Green murdered Mr. Boddy", $H_2$ denotes "Mrs. White did it", and $H_3$ denotes "Colonel Mustard did it". Let $\boldsymbol H$ be the vector $(H_1, H_2, H_3).$ If these propositions respectively have [prior](https://arbital.com/p/1rm) probabilities of 80%, 8%, and 4% (the remaining 8% being reserved for other hypotheses), then $\mathbb O(\boldsymbol H) = (80 : 8 : 4) = (20 : 2 : 1)$ represents our *relative* credences about the murder suspects &mdash; that Reverend Green is 10 times as likely to be the murderer as Miss White, who is twice as likely to be the murderer as Colonel Mustard. # Likelihood functions Suppose we discover that the victim was murdered by wrench. Suppose we think that Reverend Green, Mrs. White, and Colonel Mustard, *if* they murdered someone, would respectively be 60%, 90%, and 30% likely to use a wrench. 
Letting $e_w$ denote the observation "The victim was murdered by wrench," we would have $\mathbb P(e_w\mid \boldsymbol H) = (0.6, 0.9, 0.3).$ This gives us a [https://arbital.com/p/-56s](https://arbital.com/p/-56s) defined as $\mathcal L_{e_w}(\boldsymbol H) = P(e_w \mid \boldsymbol H).$ # Bayes' rule, odds form Let $\mathbb O(\boldsymbol H\mid e)$ denote the [posterior](https://arbital.com/p/1rp) odds of the hypotheses $\boldsymbol H$ after observing evidence $e.$ [Bayes' rule](https://arbital.com/p/1xr) then states: $$\mathbb O(\boldsymbol H) \times \mathcal L_{e}(\boldsymbol H) = \mathbb O(\boldsymbol H\mid e)$$ This says that we can multiply the relative prior credence $\mathbb O(\boldsymbol H)$ by the likelihood $\mathcal L_{e}(\boldsymbol H)$ to arrive at the relative posterior credence $\mathbb O(\boldsymbol H\mid e).$ Because odds are invariant under multiplication by a positive constant, it wouldn't make any difference if the _likelihood_ function was scaled up or down by a constant, because that would only have the effect of multiplying the final odds by a constant, which does not affect them. Thus, only the [relative likelihoods](https://arbital.com/p/-1rq) are necessary to perform the calculation; the absolute likelihoods are unnecessary. Therefore, when performing the calculation, we can simplify $\mathcal L_e(\boldsymbol H) = (0.6, 0.9, 0.3)$ to the relative likelihoods $(2 : 3 : 1).$ In our example, this makes the calculation quite easy. The prior odds for Green vs White vs Mustard were $(20 : 2 : 1).$ The relative likelihoods were $(0.6 : 0.9 : 0.3)$ = $(2 : 3 : 1).$ Thus, the relative posterior odds after observing $e_w$ = Mr. Boddy was killed by wrench are $(20 : 2 : 1) \times (2 : 3 : 1) = (40 : 6 : 1).$ Given the evidence, Reverend Green is 40 times as likely as Colonel Mustard to be the killer, and 20/3 times as likely as Mrs. White. 
Bayes' rule states that this *relative* proportioning of odds among these three suspects will be correct, regardless of how our remaining 8% probability mass is assigned to all other suspects and possibilities, or indeed, how much probability mass we assigned to other suspects to begin with. For a proof, see [https://arbital.com/p/1xr](https://arbital.com/p/1xr). # Visualization [Frequency diagrams](https://arbital.com/p/560), [waterfall diagrams](https://arbital.com/p/1wy), and [spotlight diagrams](https://arbital.com/p/1zm) may be helpful for explaining or visualizing the odds form of Bayes' rule.
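As a check on the arithmetic above, the odds form is a one-line computation: multiply prior odds by relative likelihoods componentwise, then reduce to lowest terms. A small sketch:

```python
from math import gcd
from functools import reduce

def bayes_odds(prior_odds, likelihoods):
    """Odds form of Bayes' rule: posterior odds = prior odds x likelihoods,
    taken componentwise."""
    return tuple(o * l for o, l in zip(prior_odds, likelihoods))

def reduce_odds(odds):
    """Divide through by the gcd so the ratio is in lowest terms."""
    g = reduce(gcd, odds)
    return tuple(o // g for o in odds)

# Green : White : Mustard with the wrench: prior (20 : 2 : 1),
# relative likelihoods (0.6 : 0.9 : 0.3) = (2 : 3 : 1).
assert bayes_odds((20, 2, 1), (2, 3, 1)) == (40, 6, 1)

# Plum vs. Scarlet with the lead pipe: (1 : 2) x (10 : 1) = (10 : 2) = (5 : 1).
assert reduce_odds(bayes_odds((1, 2), (10, 1))) == (5, 1)
```

Scaling either input tuple by a constant only rescales the output tuple, which is the invariance the text uses to justify working with relative rather than absolute likelihoods.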
4aed36b2-ee1c-4bee-b75e-5250d41f5d5b
trentmkelly/LessWrong-43k
LessWrong
Weird models of country development? A Scott Alexander post argued that plausibly "land reform" was a vital first step to developing most countries in Asia. I hadn't heard any clear theory like this before—all the development goals I hear about sound like "get them more education and democracy" or sometimes "invest in them so they can develop their economies". Does anyone have other novel models of how to develop countries, like the idea of land reform as prerequisite, that are far outside the mainstream? (Given that >$40b are now committed to EA, it seems plausible to me that most of the current best charities and even cause areas will soon be fully funded. If so, the bottleneck becomes opening new cause areas. Pushing hard for novel and better developmental economic policies seems like one of the possibilities with highest impact/neglect/tractability combination, with plausible comparative advantage in tractability through non-ideological thinking and pulling the policy rope sideways.)
826eccf0-be6c-4ee2-84f9-3e3dc65f0ffa
trentmkelly/LessWrong-43k
LessWrong
Weekly LW Meetups This summary was posted to LW Main on September 19th. The following week's summary is here. Irregularly scheduled Less Wrong meetups are taking place in: * Bratislava: 29 September 2014 06:00PM * Copenhagen September Social Meetup - Botanisk Have: 27 September 2014 02:30PM * Frankfurt: How to improve your life: 28 September 2014 02:00PM * Moscow Meetup: CBT Reloaded: 28 September 2014 02:00PM * [Perth] Sunday lunch: 21 September 2014 12:00PM * Perth, Australia: Games night: 07 October 2014 06:00PM * Portland Teachable Skills Discussion: 20 September 2014 01:00PM * Urbana-Champaign: Tortoises: 21 September 2014 02:00PM * Utrecht: Debiasing techniques: 21 September 2014 02:00PM * Utrecht: Effective Altruism and Politics: 05 October 2014 02:00PM * Utrecht: Artificial Intelligence: 19 October 2014 02:00PM * Utrecht: Climate Change: 02 November 2014 03:00PM * Warsaw, next week!: 23 September 2014 06:00PM The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup: * Austin, TX: 20 September 2014 01:30PM * [Cambridge MA] Passive Investing and Financial Independence: 21 September 2014 03:30PM * [Cambridge MA] Social Skills: 24 September 2014 03:30PM * Canberra: More rationalist fun and games!: 26 September 2014 06:00PM * Sydney Meetup - September: 24 September 2014 06:30PM * Vienna - Superintelligence: 27 September 2014 03:00PM * Washington, D.C.: Mini Talks: 21 September 2014 03:00PM Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers. If you'd like to talk with other LW-ers face to face, and there is no meetup in y
47620b62-c2c0-4db5-b25f-5ee1ac2003bb
trentmkelly/LessWrong-43k
LessWrong
A limit-computable, self-reflective distribution We present a Δ2-definable probability distribution Ψ that satisfies Christiano's reflection schema for its own defining formula. The strategy is analogous to the chicken step employed by modal decision theory to obfuscate itself from the eyes of PA; we will prevent the base theory T from knowing much about Ψ, so that Ψ can be coherent over T and also consistently believe in reflection statements. So, the method used here is technical and not fundamental, but it does at least show that limit-computable and reflective distributions exist. These results are due to Sam Eisenstat and me, and this post benefited greatly from extensive notes from Sam; any remaining errors are probably mine. Prerequisites: we assume familiarity with Christiano's original result and the methods used there. In particular, we will freely use Kakutani's fixed point theorem. See Christiano et al.'s paper. ---------------------------------------- Outline * Section 1. Introduction and problem statement * Section 2. A definable, self-reflective distribution Ψ * Section 3. A Δ2-definable, self-reflective distribution Λ * Section 4. Discussion and open problems Section 1. Problem statement Probabilistic reflection We have some base theory T in a language L, where T is able to talk about arithmetic (e.g. PA or ZFC). We wish to find probability distributions over completions of T, or equivalently functions P:L→[0,1] satisfying ϕ∈T implies P(ϕ)=1 and probabilistic coherence conditions like P(¬ϕ)=1−P(ϕ). In particular, we want P to have accurate beliefs about itself: ∀ϕ∈L:∀a,b∈Q:P(ϕ)∈(a,b)⇒P(P(┌ϕ┐)∈(a,b))=1, where P is a symbol in L. Christiano showed that there exists such a distribution P. In other words, taking an additional symbol P in the metalanguage, writing CohT(P) for the statement that P is a coherent distribution over T, and Refl(P) for the statement that P is reflective, we have that the theory ZFC+CohT(P)+Refl(P) is consistent.
That consistency of this theory is equivalent to the ex
aa68a787-a03f-4767-8b9a-81cdb771afeb
trentmkelly/LessWrong-43k
LessWrong
[External Event] 2022 IEEE International Conference on Assured Autonomy (ICAA) - submission deadline extended This is a linkpost for https://iaa.jhu.edu/icaa/ This conference may be of interest to some, since the subject matter overlaps very heavily with AI safety research (see Topics of Interest below). The deadline for paper submissions has been extended to this coming Monday November 8. Important Dates: * Paper submission deadline: 11/08/2021 (Anywhere on Earth) * Acceptance notification: 12/06/2021 * Publication-ready Papers Due: 01/06/2022 * Conference: March 22 – 24, 2022, Puerto Rico (Hybrid) Overview: The IEEE International Conference on Assured Autonomy (ICAA) plans to address the gap that exists between theory-heavy autonomous systems and algorithms and the privacy, security, and safety of their real-world implementations. Advances in machine learning and artificial intelligence have shown great promise in automating complex decision-making processes across transportation, critical infrastructure, and cyber infrastructure domains. Practical implementations of these algorithms require significant systems engineering and integration support, especially as they integrate with the physical world. This integration is wrought with artificial intelligence (AI) safety, security, and privacy issues. The primary focus of this conference is the: (1) detection of, (2) response to, and (3) recovery from AI safety, security, and privacy violations in autonomous systems. Key technical challenges include discriminating between application-layer data breaches and benign process noises, responding to breaches and failures in real-time systems, and recovering from decision making failures autonomously. Topics of Interest: ICAA seeks contributions on all aspects of AI safety, security, and privacy in autonomous systems. Papers that encourage the discussion and exchange of experimental and theoretical results, novel designs, and works in progress are preferred. 
Topics of interest include (but are not limited to):  * Autonomous System and AI Safety * Detecting dataset an
d61f7051-022f-490c-b27b-603d6b14c7c4
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Why I'm Sceptical of Foom Disclaimer Written quickly[[1]](#fn3z4chjieyxl). It's better to draft my objections poorly than to not draft them at all.   Introduction ============ I am sceptical of "foom"[[2]](#fne2ppvjc485): I suspect it is some combination of not physically possible, not feasible, and not economically viable. [Not sure yet what level of scepticism I endorse.] I have a few object level beliefs that bear on it. I'll try and express them succinctly below (there's a summary at the end of the post for those pressed for time).   Note that my objections to foom are more disjunctive than they are conjunctive. Each is independently a reason why foom looks less likely to me. --- Beliefs ======= I currently believe/expect the following to a sufficient degree that they inform my position on foom.   Diminishing Marginal Returns ---------------------------- **1.0.** Marginal returns to cognitive investment (e.g. compute) decay at a superlinear rate (e.g. exponential) across some relevant cognitive domains (e.g. some of near human, human spectrum, superhuman, strongly superhuman). **1.1.** Marginal returns to real world capabilities from cognitive amplification likewise decay at a superlinear rate across relevant cognitive domains. Among humans, +6 SD g factor humans do not in general seem as much more capable relative to +3 SD g factor humans as +3 SD g factor humans are relative to median humans.   Broad Human Cognitive Spectrum ------------------------------ **2.** The human cognitive spectrum (1st percentile human to peak human) is broad in an absolute sense.
On many useful cognitive tasks (chess, theoretical research, invention, mathematics, etc.), beginner/dumb/unskilled humans are closer to a chimpanzee/rock than peak humans (for some fields, only a small minority of humans are able to perform the task at all, or perform the task in a useful manner[[3]](#fnbf15698b5h); for others, like chess, beginners are simply closer to the lowest attainable scores than to the scores obtained by peak humans [600 - 800 is a lot closer to 0 than to 2700 - 2900]). Median humans are probably also closer to a rock than to peak humans (on e.g. inventing general relativity pre 1920). Peak humans may be closer to bounded superintelligences than beginner/median humans. E.g. Magnus Carlsen is closer in ELO to Stockfish than to a median human. I expect Magnus Carlsen to be closer in ELO to a bounded superintelligence than to a median human.   Narrow Optimisers Outperform General Optimisers on Narrow Domains ----------------------------------------------------------------- **3.0.** I believe that for similar levels of cognitive investment, narrow optimisers outperform general optimisers on narrow domains. This is because they are not constrained by the pareto frontier across many domains and are more able to pursue the optimum in their narrow domains. I expect this to translate to many narrow domains (I wouldn't be surprised if we get superhuman language performance without "dangerously capable" systems [we got superhuman art without dangerously capable systems]. E.g. [future LLMs may be able to write very compelling ("bestseller" status) long form fiction in an hour](https://twitter.com/CineraVerinia/status/1600098278883921920).) I expect a superintelligence to not win against dedicated chess/Go bots with comparable cognitive endowments (compute budgets, comparably efficient cognitive algorithms/architectures).
"Not win" is too conservative: I expect the ASI to lose unless it adopts the strategy of just running the bot (or depending on the level of superhuman, it might be able to force a tie). I simply do not think a general optimiser (no matter how capable) with comparable cognitive endowment can beat a narrow optimiser at their own game. Optimisation across more domains constrains the attainable optimum in any domain; the pareto frontier is an absolute limit. I wouldn't be surprised if this generalises somewhat beyond Go. Are narrow AI superhuman real world strategists viable? The answer is not obviously "no" to me.   **3.1.** I believe that [general intelligence is not compact](https://www.lesswrong.com/posts/J9XecqtiujawmDnmr/is-general-intelligence-compact).   Deployment Expectations and Strategic Conditions ------------------------------------------------ **4.0.** I expect continuous progress in cognitive capabilities for several years/decades more. There may be some paradigm shifts/discontinuous jumps, but [I expect that the world would have already been radically transformed when superhuman agents arrive](https://twitter.com/CineraVerinia/status/1600435368632778753).   **4.1.** I expect it to be much more difficult for any single agent to attain decisive cognitive superiority to civilisation, or to a relevant subset of civilisation. Especially given 3. Superhuman agents may not be that much more capable than superhuman narrow AI amplified humans. **4.2.** Specifically, I expect a multipolar world in which many actors have a suite of superhuman narrow AIs that make them "dangerously capable" relative to 2020s earth, but not relative to their current time (I expect the actors to be in some sort of equilibrium). I'm not convinced the arrival of superhuman agents in such a world would necessarily shatter such an equilibrium. Or be unilateral "existentially dangerous" relative to said world. 
Hence, [I expect failure to materialise as dystopia not extinction](https://twitter.com/CineraVerinia/status/1597375575869763584).   "Superintelligence" is a High Bar --------------------------------- **5.** "Superintelligence" requires a "very high" level of strongly superhuman cognitive capabilities Reasons: * Arguments #1
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face 
{font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} 
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: 
MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') 
format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}  through #4 * Attaining decisive strategic advantage seems difficult. + E.g. I doubt: - A +12 SD human could do so during most of human history - Human intelligence in a chimpanzee body easily takes over a chimpanzee tribe My intuition is that the level of cognitive power required to achieve absolute strategic dominance is crazily high. And it's a moving target that would rise with the extant effective level of civilisation. --- Summary ======= Courtesy of chatGPT: > The author presents several objections to the idea of a rapid, exponential increase in AI capabilities known as an "intelligence explosion" or "foom". The objections include the belief that marginal returns to cognitive investment decay at a superlinear rate, that narrow optimizers outperform general optimizers on narrow domains, and that it will be difficult for a single agent to attain decisive cognitive superiority over civilization. The author also believes that the arrival of superhuman agents in a world with multiple actors possessing superhuman narrow AI will not necessarily shatter the existing equilibrium. > > 1. **[^](#fnref3z4chjieyxl)**Half an hour to touch up [a stream of consciousness Twitter thread I wrote yesterday](https://twitter.com/CineraVerinia/status/1600656363469815808). 2. **[^](#fnrefe2ppvjc485)**An "intelligence explosion" scenario where there's a very short time period where AI systems rapidly grow in intelligence until their cognitive capabilities far exceed humanity's. 3. **[^](#fnrefbf15698b5h)**E.g. inventing the dominant paradigm in a hard science seems beyond the ability of most humans. I'm under the impression that pre 1920 < 1,000 (and plausibly < a 100) people could have invented general relativity. Some have claimed that without Einstein we may not have gotten general relativity for decades.
2f7534d3-ec5b-489f-89ed-c4cadac1e8bb
StampyAI/alignment-research-dataset/lesswrong
LessWrong
how humans are aligned

This is a description of how I think humans are kept somewhat aligned towards genetically-specified goals. It's not an argument, just my views. It works well enough, but not perfectly. South Korea isn't having kids anymore. Sometimes you get serial killers or Dick Cheney. So, anything less seems likely to be inadequate.

previously: [AI self-improvement is possible](https://www.lesswrong.com/posts/rSycgquipFkozDHzF/ai-self-improvement-is-possible)

---

#### limited self-modification

Don't allow systems to modify lower-level systems, and strongly limit self-modification at the same system level. When it's done at all, reduce the amount allowed after an initial learning period, so that children have more flexibility than old people.

#### limited length

Don't have long chains of systems generating systems. Limit things to 2 steps of systems generating higher-level systems.

#### lifespan

Even if you try to restrict self-modification, some "leakage" will happen anyway. A physical time limit that can't be extended past a maximum keeps that from becoming excessive. (Perhaps that's why some octopus species are "hardcoded" to die at specific points in their reproductive cycle.)

#### old age

More-capable misaligned systems are more dangerous, so degrading the capabilities of systems as they approach their time limit makes them safer. I don't actually think this is the limiting factor for human senility; I think that's mainly due to uncontrolled covalent modification of DNA. What I do think is that humans have some low-level system that, when they get old, reduces the influence of or shuts down certain mid-level systems.

#### monitoring

Humans have [monitoring systems](https://www.lesswrong.com/posts/rSycgquipFkozDHzF/ai-self-improvement-is-possible#L_monitoring) with a lesser degree of self-modification than what they monitor. They have access to the internal opinions of what they monitor, but can be deceived. The net benefit is actually somewhat questionable.

#### democracy

If alignment drift is somewhat random, then making many separate agents that act according to consensus reduces the net drift. Hermits and dictators do weird stuff.

#### internal democracy

Like democracy, but for multiple agents inside a single individual. Obviously there has to be some way to prevent agents from coalescing into a single blob, but that could be managed by lower-level hardcoded systems that blindly bottleneck the bandwidth of some connection patterns and force some of that limited bandwidth through low-level systems.
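The democracy point can be sketched numerically. This is my own toy model, not the author's: give each agent an independent random-walk drift in its values, and compare how far a lone agent's values wander versus the consensus (mean) of many agents. The standard deviation of the consensus drift shrinks roughly as 1/sqrt(n).

```python
import random
import statistics

# Toy illustration (my own assumptions, not from the post): each agent's
# alignment drifts by an independent Gaussian random walk; "consensus" is
# the mean drift across agents.
random.seed(0)

def consensus_drift(n_agents, steps=50, trials=500):
    """Std dev (across trials) of the mean drift of n_agents after a walk."""
    finals = []
    for _ in range(trials):
        drifts = [sum(random.gauss(0, 1) for _ in range(steps))
                  for _ in range(n_agents)]
        finals.append(statistics.mean(drifts))
    return statistics.stdev(finals)

lone = consensus_drift(1)
crowd = consensus_drift(25)
print(lone, crowd)  # the 25-agent consensus drifts roughly 1/5 as far
```

Under this model a hermit (n = 1) drifts about sqrt(25) = 5 times farther than a 25-agent consensus, matching the intuition that dictators and hermits end up doing weird stuff.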
58e28cf7-31ca-4fc1-a361-a0bbd68d64cb
trentmkelly/LessWrong-43k
LessWrong
Interest In Conflict Is Instrumentally Convergent

Why is conflict resolution hard?

I talk to a lot of event organizers and community managers. Handling conflict is consistently one of the things they find the most stressful, difficult, or time consuming. Why is that? Or, to ask a related question: Why is the ACX Meetups Czar, tasked with collecting and dispersing best practices for meetups, spending so much time writing about social conflict? This essay is not the whole answer, but it is one part of why this is a hard problem.

Short answer: Because interest in conflict resolution is instrumentally convergent. Both helpful and unhelpful people have reason to express strong opinions on how conflict is handled. Please take as a given that there exists (as a minimum) one bad actor with an interest in showing up to (as a minimum) one in-person group.

I.

> See, a funny thing about risk teams: the list of red flags you have? On it, one of them is “Customer evinces an odd level of interest or knowledge in the operations of internal bank policies.”
>
> -Patrick McKenzie

Imagine you are a bank security guard.

You are standing in your bank, sipping your coffee and watching people come in and out. It's a good gig, bank security, they have dental insurance and the coffee machine makes a surprisingly good cup. As you're standing there, someone in sunglasses and a baseball cap waiting in line strikes up a conversation with you.

"Nice day, isn't it?" they say.

"Sure is," you reply.

"Nice bank too," they say, "I love the architecture on these old buildings. Good floors. Do you know if it has a basement level, maybe with a vault?"

"Yeah, concrete flooring."

"Nice, nice," this stranger (who you start thinking of as Sunglasses) says, "You know, I'm also into videography. What kind of cameras do you have around these parts? Like, about how many, and covering what angles?"

You notice Sunglasses has a notepad out, pen held expectantly.

". . . you know, I'm not sure I should tell you that," you say slowly. "It's not like
d38096b4-a414-4736-aa5d-b7bcf1ed0020
trentmkelly/LessWrong-43k
LessWrong
Book review of "Mengzi"

Book review of “Mengzi: With Selections From Traditional Commentaries”, trans. Bryan W. Van Norden

This is a review of a classical Chinese philosophical text with a heavy focus on virtue ethics and politics. I don’t have any knowledge of Classical Chinese, I’m not a philosopher, and I’m not very virtuous. You may instead want to read the SEP entry on Mengzi, or just buy the book and read it – 2300 years of Confucian scholars can’t be wrong!

Introduction

The Confucian or Ruist[1] tradition within Chinese philosophy seems underrated in the West. Everybody likes Buddhism and Daoism. People particularly love pretty-sounding mistranslations of the Daodejing by authors who don’t know any Chinese. People love Zen (or Chan, in the Chinese reading of the character). People have even heard of Zhuangzi and/or a butterfly.

But almost nothing from Confucianism has percolated into the popular culture. Everyone knows there was a guy named Confucius who went around saying wise stuff, but nobody even knows any of the aphorisms (even though they're often great), let alone actually reads any of the Confucian canon.

I think the explanation for this is pretty obvious. Eastern philosophy really started taking off in the West during the 60s. But at that time, there was just no market for a philosophy saying you should obey your parents and elder siblings, be loyal to the state, patiently study old books, and cultivate temperance and good manners.

That's still going to be unappealing to a lot of people today (especially to LessWrong readers, who are probably more likely than average to distrust authority, to dislike formal schooling, and to think there is little value in tradition). But I think Confucian philosophy does offer an interesting perspective that is of increasing value. After all, it was developed by and for highly educated scholars who also wanted to have a real-world impact. These were people who were carefully trained from a young age to excel at high-stakes standa
a52d9c47-d5ae-4549-9508-73b41809fabc
trentmkelly/LessWrong-43k
LessWrong
Real-Time Research Recording: Can a Transformer Re-Derive Positional Info?

New experiment: Recording myself real-time as I do mechanistic interpretability research! I try to answer the question of what happens if you train a toy transformer without positional embeddings on the task of "predict the previous token" - turns out that a two layer model can rederive them! You can watch me do it here, and you can follow along with my code here. This uses a transformer mechanistic interpretability library I'm writing called EasyTransformer, and this was a good excuse to test it out and create a demo!

This is an experiment in recording and publishing myself doing "warts and all" research - figuring out how to train the model and operationalising an experiment (including 15 mins debugging loss spikes...), real-time coding and tensor fuckery, and using my go-to toolkit. My hope is to give a flavour of what actual research can look like - how long do things actually take, how often do things go wrong, what is my thought process and what am I keeping in my head as I go, what being confused looks like, and how I try to make progress.

I'd love to hear whether you found this useful, and whether I should bother making a second half! Though I don't want to overstate this - this was still a small, self-contained toy question that I chose for being a good example task to record (and I wouldn't have published it if it was TOO much of a mess).
9c3fd267-c3cf-49eb-b46a-1f27d586fa61
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Acausal trade naturally results in the Nash bargaining solution

In this post, I demonstrate that if we treat acausal trade as being analogous to ordinary trade (in the sense that it obeys supply and demand), the final result is the [Nash bargaining solution](https://en.wikipedia.org/wiki/Cooperative_bargaining#Nash_bargaining_solution).

Supply and demand of utility
============================

Alice is an AI that wants to maximize apples. Bob is an AI that wants to maximize oranges. One day, they decide to trade.

Let's say Alice realizes that by producing 3 fewer apples, she could produce 4 oranges. And Bob can produce 4 apples by forgoing 3 oranges. So Alice and Bob can achieve a better outcome if they both do this. Notice that this is, in essence, just ordinary trade.

(Note that in the acausal context where Alice only believes Bob has a 50% chance of existing, the apples he produces only count for 50%. For simplicity, we will treat them as having physically met and both having a 100% chance of existing.)

How much more should Alice and Bob trade? We can imagine there being a supply and demand curve for apples, priced in oranges. Alice demands apples (by paying a price in oranges) and Bob supplies apples (by being paid a price in oranges). Where these curves meet is the optimal trade.
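The apples/oranges example can be sketched numerically. This is a minimal illustration of the Nash product the post builds toward, assuming (purely for illustration - the post specifies no capacities) a cap on how much each agent can divert to producing for the other, with the 4-for-3 exchange rates above:

```python
import itertools

# Hedged sketch: the capacity cap and grid resolution are my own
# illustrative assumptions, not taken from the post.
CAP = 10.0   # max units each agent can divert to producing for the other
STEPS = 101

def gains(x, y):
    """x: oranges Alice produces for Bob (costing her 0.75 apples each);
    y: apples Bob produces for Alice (costing him 0.75 oranges each).
    Returns (Alice's net apples, Bob's net oranges) relative to no trade."""
    return y - 0.75 * x, x - 0.75 * y

grid = [CAP * i / (STEPS - 1) for i in range(STEPS)]
# Nash bargaining solution: maximize the product of gains over the
# disagreement (no-trade) point (0, 0); pairs where either side loses
# score <= 0 and are never chosen.
best = max(itertools.product(grid, grid),
           key=lambda xy: gains(*xy)[0] * gains(*xy)[1])
print(best, gains(*best))  # both trade at full capacity; each nets CAP / 4
```

With linear production technologies the optimum sits at the capacity boundary and both agents end up strictly better off; diminishing returns would instead give an interior point where the supply and demand curves meet.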
In particular, the point consists of a price p
/\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: 
MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: 
MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}  and a number of apples x. The number of oranges in the trade will be y=px. 
The two properties of utility trading
=====================================

This trade has the following two properties:

1. The "books balance": the amount of oranges provided by Alice is equal in value to the apples provided by Bob when compared at the price p.
2. The solution maximizes value: the trade maximizes total value when we treat an orange as being p times as valuable as an apple. In particular, the trade is on the [Pareto front](https://en.wikipedia.org/wiki/Pareto_front). Note that it does not need to be the only solution that maximizes the value.

The reason the trade maximizes value is as follows: if there were a more valuable solution with more apples, then Bob's supply curve was too low (he could have offered more apples at the price p). If there were a superior solution with more oranges, the price was too high (Alice could have offered more oranges, thus lowering the price).

In practice, acausal trade involves more than just trading physical objects. But I still think these two criteria are natural criteria for a satisfactory trade. If a solution gives one agent twice as many utils as the other, they should be willing to give up one of their utils to give their trading partner two utils (note that each agent's utils should actually be thought of as different units in the sense of [dimensional analysis](https://en.wikipedia.org/wiki/Dimensional_analysis)). So we regard an acausal trade as resulting in an outcome such that there exists a price p that satisfies the two criteria. The price is not actually part of the outcome; we just require that such a price exists.

Proof that Nash bargaining is the unique solution satisfying these two properties
=================================================================================

Theorem: A solution has some price p that satisfies the two criteria iff it is the Nash bargaining solution.

Proof: (⟹) Let x and y be the utility gains to Alice and Bob respectively in some outcome. By condition 1, the price is p = y/x.
The value is therefore (y/x)x + y.

Let's say that the Nash bargaining solution has utility gains x+Δx and y+Δy. Utilizing mixed strategies, we also get a spectrum of solutions with utility gains x+kΔx and y+kΔy, where k ranges over [0,1]. If we take the derivative of (x+kΔx)(y+kΔy) with respect to k, the result is yΔx + xΔy + 2kΔyΔx. Since the Nash bargaining solution (k=1) maximizes the product, the derivative at k=1 is nonnegative:

yΔx + xΔy + 2ΔyΔx ≥ 0

yΔx + xΔy ≥ −2ΔyΔx

Assuming (x, y) and (x+Δx, y+Δy) are different, yΔx + xΔy > 0 (since both solutions are Pareto efficient, Δx and Δy must have opposite signs, so −2ΔyΔx > 0). Adding yx + xy to both sides:

y(x+Δx) + x(y+Δy) > yx + xy

Dividing by x:

(y/x)(x+Δx) + (y+Δy) > (y/x)x + y

And thus if the (x, y) solution wasn't the Nash bargaining solution, the Nash bargaining solution is better according to the price p. Therefore, (x, y) only satisfies the two criteria if it is the Nash bargaining solution. □

(⟸) Let's say the Nash bargaining solution has utility gains x and y. Let p = y/x (this satisfies condition 1). The value is therefore (y/x)x + y. Consider a different solution with utility gains x+Δx and y+Δy. Utilizing mixed strategies, we also get a spectrum of solutions with utility gains x+kΔx and y+kΔy, where k ranges over [0,1]. If we take the derivative of (x+kΔx)(y+kΔy) with respect to k, the result is yΔx + xΔy + 2kΔyΔx. Since the Nash bargaining solution (k=0) maximizes the product, the derivative at k=0 is nonpositive:

yΔx + xΔy ≤ 0

y(x+Δx) + x(y+Δy) ≤ yx + xy

(y/x)(x+Δx) + (y+Δy) ≤ (y/x)x + y

Thus the Nash bargaining solution is at least as good (when judged using the price p = y/x) as any other solution, and thus meets criterion 2. □

So there you have it. That's how you calculate a trade of utility!

Open questions
==============

1. To determine the utility gains, you need a way to determine the "no trade" outcome. How do you do this? My first thought is to use the Nash equilibrium, but that gives counter-intuitive results in the [ultimatum game](https://en.wikipedia.org/wiki/Ultimatum_game), and it is also not unique.
If you try minimax, you get a way to calculate [acausal blackmail](https://www.lesswrong.com/tag/rokos-basilisk) instead of acausal trade. 2. How does this generalize to more than two participants? I'm thinking that you will need to involve [Shapley values](https://en.wikipedia.org/wiki/Shapley_value) somehow when determining how much each agent should credit the others. See [Shapley values: Better than counterfactuals](https://forum.effectivealtruism.org/posts/XHZJ9i7QBtAJZ6byW/shapley-values-better-than-counterfactuals).
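The theorem can be checked numerically. The sketch below (my own illustration, not from the post) assumes a quarter-circle Pareto frontier of utility gains with the disagreement point at the origin; it finds the Nash bargaining point by maximizing the product of gains, sets the price to p = y/x as in condition 1, and verifies that no other frontier point has higher value p·x′ + y′ (condition 2):

```python
import numpy as np

# Assumed frontier for illustration: x^2 + y^2 = 1, where x is Alice's
# utility gain and y is Bob's, both measured from the no-trade outcome.
theta = np.linspace(1e-3, np.pi / 2 - 1e-3, 100001)
xs, ys = np.cos(theta), np.sin(theta)

# Nash bargaining solution: the frontier point maximizing the product x*y.
i = int(np.argmax(xs * ys))
x, y = xs[i], ys[i]

# Condition 1 ("books balance"): take the price implied by the outcome itself.
p = y / x

# Condition 2: at that price, no frontier point has higher value p*x' + y'.
values = p * xs + ys
assert np.isclose(values.max(), p * x + y, rtol=1e-6)

# For this symmetric frontier the Nash point is x = y = sqrt(2)/2, so p = 1.
assert np.isclose(p, 1.0, atol=1e-3)
```

For this symmetric frontier the price comes out to 1 util-for-util, as you would expect; on an asymmetric frontier the same check goes through with a price that trades the two agents' utils at a different rate.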
070bd0e7-b7bd-4108-be00-6b8a6acb319b
trentmkelly/LessWrong-43k
LessWrong
Cutting edge technology Original post: http://bearlamp.com.au/cutting-edge-technology/ When the microscope was invented, in a very short period of time we discovered the cell and the concept of microbiology.  That one invention allowed us to open up entire fields of biology and medicine.  Suddenly we could see the microbes!  We could see the activity that had been going on under our noses for so long. When we improved our ability to refine pure materials, we could finally make furnace bricks with specific compositions.  Specific compositions could then be used to make bricks that were able to reach higher temperatures without breaking.  Higher temperatures meant better refining of materials.  Better refining meant higher quality bricks, and so on, until we now have some very pure technological processes around making materials.  But it's something we didn't have before the prior technology on the skill tree.   Before we had refrigeration and food packaging, it was difficult to get your fresh food to survive to your home.  Now with production lines it's very simple.  For all his decadence, Caesar probably would have had trouble ordering a cheeseburger for $2 and having it ready in under 5 minutes.  We've come a long way since Caesar.  We've built a lot of things that help us stand on the shoulders of those who came before us. Technology enables further progress.  That seems obvious.  But did that seem obvious before looking down the microscope?  Could we have predicted what bricks we could have made with purely refined materials?  Could Caesar have envisioned every citizen in his kingdom watching TV for relatively little cost to those people?  It would have been hard to foresee these things back then. With the idea that technology enables future growth in mind, I ask the question: "What technology is currently under-utilised?"  Would you be able to spot it when it happens?  Touch screens revolutionised phone technology.  
Bitcoin - we are still watching but it's here to stay