id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
c435613c-9164-43ea-8338-b5ae2dae1e1b | trentmkelly/LessWrong-43k | LessWrong | My Detailed Notes & Commentary from Secular Solstice
Previously: General Thoughts on Secular Solstice.
This blog post is my scattered notes and ramblings about the individual components (talks and songs) of Secular Solstice in Berkeley. Talks have their title in bold, and I split the post into two columns, with the notes I took about the content of the talk on the left and my comments on the talk on the right. Songs have normal formatting.
Bonfire
The Circle
This feels like a sort of whig history: a history that neglects most of the complexities and culture-dependence of the past in order to advance a teleological narrative. I do not think that whig histories are inherently wrong (although the term has negative connotations). Whig histories should be held to a very strict standard because they make claims about how most or all of human history functions.
The song describes morality in terms of an expanding circle of concern: kin → neighbor → humanity[1] → "feathers, fur, and silicon" → future.
Trying to line these up with historical societies or ideologies is ... difficult. Many societies do not have a concept of 'neighbor,'[2] and some do not understand ethics in terms of circles of moral concern.[3] A few moral systems are universalistic (i.e. they teach that people should have moral concern for all of humanity): Christianity,[4] liberal democracy,[5] and maybe Buddhism.[6] Actually practicing universalism is really hard: Most societies which preach universalism do not live up to its ideals.
Within one of these traditions, the whig version of history can make sense. Over the centuries, Christianity has dramatically expanded and Christian activists from Francis of Assisi to Martin Luther King have made it more true to the ideals of the New Testament. Similarly, liberal democracy has expanded dramatically, extended the right to vote for more people, and gotten better at defending many freedoms. (I don't know what's going on with Buddhism, but its failure to build/maintain a dominant position in India is eviden |
8587d872-d0d5-4510-a1cb-24bc494d581c | trentmkelly/LessWrong-43k | LessWrong | Amending the "General Pupose Intelligence: Arguing the Orthogonality Thesis"
Stuart has worked on further developing the orthogonality thesis, which gave rise to a paper, a non-final version of which you can see here: http://lesswrong.com/lw/cej/general_purpose_intelligence_arguing_the/
This post won't make sense if you haven't been through that.
Today we spent some time going over it and he accepted my suggestion of a minor amendment. Which best fits here.
Besides all the other awkward things that a moral convergentist would have to argue for, namely:
> This argument generalises to other ways of producing the AI. Thus to deny the Orthogonality thesis is to assert that there is a goal system G, such that, among other things:
> 1. There cannot exist any efficient real-world algorithm with goal G.
> 2. If a being with arbitrarily high resources, intelligence, time and goal G, were to try design an efficient real-world algorithm with the same goal, it must fail.
> 3. If a human society were highly motivated to design an efficient real-world algorithm with goal G, and were given a million years to do so along with huge amounts of resources, training and knowledge about AI, it must fail.
> 4. If a high-resource human society were highly motivated to achieve the goals of G, then it could not do so (here the human society is seen as the algorithm).
> 5. Same as above, for any hypothetical alien societies.
> 6. There cannot exist any pattern of reinforcement learning that would train a highly efficient real-world intelligence to follow the goal G.
> 7. There cannot exist any evolutionary or environmental pressures that would evolving highly efficient real world intelligences to follow goal G.
We can add:
8. If there were a threshold of intelligence above which any agent will converge towards the morality/goals asserted by the anti-orthogonalist, there cannot exist any system, composed of a multitude of below-threshold intelligences that will as a whole pursue a different goal (G) than the convergent one (C), without any in |
561927e6-06de-43f1-847b-fd623c7c47bd | trentmkelly/LessWrong-43k | LessWrong | Superintelligence 11: The treacherous turn
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.
----------------------------------------
Welcome. This week we discuss the 11th section in the reading guide: The treacherous turn. This corresponds to Chapter 8.
This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: “Existential catastrophe…” and “The treacherous turn” from Chapter 8
----------------------------------------
Summary
1. The possibility of a first mover advantage + orthogonality thesis + convergent instrumental values suggests doom for humanity (p115-6)
1. First mover advantage implies the AI is in a position to do what it wants
2. Orthogonality thesis implies that what it wants could be all sorts of things
3. Instrumental convergence thesis implies that regardless of its wants, it will try to acquire resources and eliminate threats
4. Humans have resources and may be threats
5. Therefore an AI in a position to do what it wants is likely to want to take our resources and eliminate us. i.e. doom for humanity.
2. One kind of response: why wouldn't the makers of the AI be extremely careful not to develop and release dangerous AIs, or relatedly, why wouldn't someone else shut the whole thing down? (p116)
3. It is hard to observe whether an AI is dangerous via its behavior at a time when you could turn it off, because AIs have convergent instrumental reasons to |
5ccf27d4-59f4-457c-941f-a465406332f2 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Canberra HPMOR Wrap Party!
Discussion article for the meetup : Canberra HPMOR Wrap Party!
WHEN: 15 March 2015 06:00:00PM (+1100)
WHERE: 70/10 Thynne St, Bruce, ACT
Harry Potter and the Methods of Rationality is finally coming to a conclusion! To celebrate, we invite all Canberran Less Wrongers to our wrap-up party. (: Snacks will be provided.
Make sure you read all the new chapters so that we can have some interesting discussions.
Note that this is on Sunday instead of Saturday (π day), and there will be no LW meeting on Saturday.
Discussion article for the meetup : Canberra HPMOR Wrap Party! |
75a47982-3b27-4ad1-8205-08b5d913895c | trentmkelly/LessWrong-43k | LessWrong | Confidence In Opinions, Intensity In Opinion
On a scale of 1 to 100, how sure are you?
It's a good thing to ask yourself from time to time about intense beliefs, especially if you're having a disagreement with someone else smart.
Just putting a number on something is good. If you're in business, putting any number in the high 90's is dangerous and shouldn't happen too often.
Yet, you still have to aggressively and intensely pursue your plans.
You can be only 80% sure you're correct, and still intensely pursue a course of action.
Most people make a mistake: they only go intensely after things they have a very high certainty will work.
But this is backwards. It's absolutely right to say "I'm only 80% sure that going and making a great talk to this group will help develop my business," and to still aggressively pursue giving a great talk.
The same is true with having ridiculously exceptionally good service. You can say, "I'm only 60% sure that doing this is going to lead to more customer loyalty... this might just be a time sink and cost more than it returns. But let's kill it on it, and find it."
You don't need to be highly confident to intensely pursue something.
In fact, intensely pursuing not-certain things seems to be how the world develops. |
ef28f1e1-5a21-4d7e-b6ec-6b350abfd8db | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Call for contributors to the Alignment Newsletter
**TL;DR:** I am looking for (possibly paid) contributors to write summaries and opinions for the Alignment Newsletter. This is currently experimental, but I estimate ~80% chance that it will become long-term, and so I’m looking for people who are likely to contribute at least 20 summaries over the course of their tenure at the newsletter (see caveats in the post). **To apply, read this [doc](https://docs.google.com/document/d/1zzMyG2VmyW5l3yy7rBx4B8pLwGxLnMRRHgFNJEdHLgg/edit#), write an example summary + opinion, and fill out this [form](https://docs.google.com/forms/d/e/1FAIpQLScyZ4CN4A4lj3qxNe0y8pqs8xg_K72I3QskT9lufCqtw9Ul4Q/viewform?usp=sf_link) by Friday, September 6.** I am also looking for someone to take over the work of publishing the newsletter (~1-3 hours per week); please send me an [email](mailto:rohinmshah@berkeley.edu?subject=Interested%20in%20publisher%20role%20for%20Alignment%20Newsletter%20EOM) if you’d be interested in this.
**ETA:** I now have enough interest in the publisher role that I would be shocked if none of them worked out. Feel free to continue expressing interest if you think you'd particularly benefit from doing the work, or if you think you'd be particularly good at it.
### Roles I am looking for
**Publisher:** Once all of the summaries and opinions are written, you would turn them into an actual newsletter, send it out for proofreading, fix any typos found, update the [database](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit#gid=0), etc. This currently takes me around half an hour per newsletter. Ideally, you would also take on some tasks that I haven’t found the time for: improving the visual design of the newsletter, A/B testing different versions to see what people engage with, publicity, and so on, for a total of ~1-3 hours per week.
Since I don’t yet have the setup to pay people to help with the newsletter, I am only looking for expressions of interest. If you think you’d be interested in this role, click [this link](mailto:rohinmshah@berkeley.edu?subject=Interested%20in%20publisher%20role%20for%20Alignment%20Newsletter%20EOM) to email me at <rohinmshah@berkeley.edu> with the subject line “Interested in publisher role for Alignment Newsletter EOM”. If I do end up hiring for the publisher role I’ll reach out to you with more details.
The rest of this doc will be focused on the more substantial role:
**Content creator:** You would choose articles that you’re interested in, and write summaries and opinions for them, that would then be published in the newsletter.
### Why am I looking for content creators?
In the past few months, I haven’t been allocating as much time to the newsletter (you may have noticed they’re coming out every other week now). There have been many other things that seem more important to do. This is both because I’m more optimistic about the other work I’m doing, and because I no longer find it as useful to read papers as I did when I started the newsletter. As a result, I now have over 100 articles that I would probably want to send out, but haven’t gotten around to yet. This is also partly because there’s just more stuff coming out now. (I mentioned some of these points in the [retrospective](https://alignmentforum.org/posts/3onCb5ph3ywLQZMX2/alignment-newsletter-one-year-retrospective).)
Another reason for more content creators is that as I have learned more since starting the newsletter, I have developed my own idiosyncratic beliefs, and I think I have become worse at intuitively interpreting other posts from the author’s perspective rather than my own. (In other words, I would perform worse at an Ideological Turing Test of their position than I would have in the past, unless I put in a lot of effort into it.) I expect that with more writers the newsletter will better reflect a diversity of opinions.
### Why should you do it?
**It’s impactful.** See the [retrospective](https://alignmentforum.org/posts/3onCb5ph3ywLQZMX2/alignment-newsletter-one-year-retrospective) for more on this point. I’m not currently able to get a (normal length) newsletter out every week; you’d likely be causally responsible for getting back to weekly newsletters.
**You will improve your analytical writing skills.** Hopefully clear.
**You’ll learn more about safety by reading papers.** You could do this by yourself, but by summarizing the papers, you’re also providing a valuable service for everyone else.
**You might learn more about AI safety by getting feedback from me.** This is a “might” because I don’t know how much feedback I will end up giving to you about your summaries and opinions that’s actually about key ideas in AI safety (as opposed to feedback about the writing itself).
**You might build career capital.** I certainly have built career capital by creating this newsletter -- it has made me well known in some communities. I don’t know to what extent this will transfer to you.
**You might be paid.** Currently this is experimental, so I haven’t actually thought much about payment. I expect that I could get a grant to pay you if I ended up deciding that it would be worth it. However, it might be that dealing with all of the paperwork + tax implications cancels out any time savings, though I think this is unlikely. If this is an important factor to you, please do let me know when you apply.
### Qualifications
* Likely to contribute at least 20 summaries to the newsletter over time, at least 4 of which are in the first month (for onboarding purposes). Alternatively, if you have deep expertise in a topic that the newsletter covers infrequently, such as formal verification, you should be likely to summarize relevant papers for at least the next 6 months.
* Basic familiarity with AI safety arguments
* Medium familiarity with the topic that you want to write summaries about
* Good writing skills (though I recommend just applying regardless and letting me evaluate based on your example summary)
### Application process
Fill out this [form](https://docs.google.com/forms/d/e/1FAIpQLScyZ4CN4A4lj3qxNe0y8pqs8xg_K72I3QskT9lufCqtw9Ul4Q/viewform?usp=sf_link). The main part of the application is to write an example summary and opinion for an article (which I may send out in the newsletter, if you give me permission to). Ideally you would write a summary on one of the articles from the list below, but if there isn’t an article in the subarea you’d like to write on, you can choose some other article (that hasn’t already been summarized in the newsletter) and summarize that. The whole process should take 1-4 hours, depending on how much time you put into the summary and opinion.
List of articles:
* [Aligning a toy model of optimization](https://www.alignmentforum.org/posts/H5gXpFtg93qDMZ6Xn/aligning-a-toy-model-of-optimization)
* [Four Ways An Impact Measure Could Help Alignment](https://alignmentforum.org/posts/wJK944YqvFwjdbqCP/four-ways-an-impact-measure-could-help-alignment)
* [AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence](https://arxiv.org/abs/1905.10985)
* [Learning to Interactively Learn and Assist](https://interactive-learning.github.io/)
* [Natural Adversarial Examples](http://arxiv.org/abs/1907.07174)
* [On Inductive Biases in Deep Reinforcement Learning](http://arxiv.org/abs/1907.02908) |
d3ceeb82-dfcb-419d-b1be-c0fa36357a98 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Explained Simply: Quantilizers
Standard AI are *optimizers*: they ‘look’ through possible actions they could take, and pick the one that maximises what they care about. This can be dangerous— an AI which maximises in this way needs to care about *exactly* the same things that humans care about, which is [really hard](https://www.lesswrong.com/tag/complexity-of-value)[[1]](#fnsmixvrb0gvj). If you [tell a human](https://ui.stampy.ai/?state=7523_6380r8160r8185r6984r9OGZ-9IDQ-9TDI-8TJV-) to calculate as many digits of pi as possible within a year, they’ll do ‘reasonable’ things towards that goal. An optimizing AI might work out that it could calculate many more digits in a year by taking over another supercomputer— as this is the most effective action, it seems very attractive to the AI.
[Quantilizers](https://intelligence.org/files/QuantilizersSaferAlternative.pdf) are a different approach. Instead of maximizing, they randomly choose from a few of the most effective possible actions:
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31529c1b-5fac-4361-9357-a939ab77dbc3_1452x1010.png)
They work like this:
1. Start with a goal, and a set of possible actions
2. Predict how useful each action will be for achieving the goal
3. Rank the actions from the most to the least useful
4. Pick randomly from the highest fraction only (i.e, the top 10%)
This avoids cases where the AI chooses extreme actions to maximize the goal. The AI chooses somewhat helpful actions instead.
It does leave one question — how do we make a list of possible actions in the first place? One suggestion is to ask a lot of humans to solve the task and train an AI to generate possible things it thinks humans would do. This list can then be used as an input to our quantilizer.
This does make them less effective, of course— firstly by picking less effective actions overall, and secondly by picking actions it thinks humans would take. But this might be worth the reduced risks— indeed, based on your risk tolerance, you can change the % of top actions the quantilizer will consider to make it more effective and riskier or vice versa.
So quantilizers trade some capability in exchange for greater safety, and avoid unintended consequences. They pick from lots of mild actions and very few extreme actions, so the chance of them doing something extreme or unexpected is miniscule.
Quantilizers are a proposed safer approach to AI goals. By randomly choosing from a selection of the top options, they avoid extreme behaviors that could cause harm. More research is needed, but quantilizers show promise as a model for the creation of AI systems that are beneficial but limited in scope. They provide an alternative to goal maximization, which can be dangerous, though they’re just theoretical right now.
1. **[^](#fnrefsmixvrb0gvj)**Humans care about an awful lot of different things, even just one human! |
e327e476-7f09-4b54-b5b6-aae8a231f188 | trentmkelly/LessWrong-43k | LessWrong | The Greedy Doctor Problem
TL;DR: How to reason about people who are smarter than you. A few proposals, interspersed with reinforcement learning and humorous fiction. Ending on a surprising connection to logical inductors.
What is the Greedy Doctor Problem?
I came up with a neat little thought experiment[1] :
> You are very rich and you want to make sure that you stay healthy. But you don't have any medical expertise and, therefore, you want to hire a medical professional to help you monitor your health and diagnose diseases. The medical professional is greedy, i.e. they want to charge you as much money as possible, and they do not (per se) care about your health. They only care about your health as far as they can get money from you. How can you design a payment scheme for the medical professional so that you actually get the ideal treatment?
Over the last few weeks, I've been walking around and bugging people with this question to see what they come up with. Here I want to share some of the things I learned in the process with you, as well as some potential answers. I don't think the question (as presented) is completely well-formed, so the first step to answering it is clarifying the setup and deconfusing the terms. Also, as is typical with thought experiments, I do not have a definitive "solution" and invite you (right now!) to try and come up with something yourself[2].
Some background on the problem
The subtext for the thought experiment is: How should you act when interacting with someone smarter than yourself? What can you say or do, when your interlocutor has thought of everything you might say and more? Should you trust someone's advice, when you can't pinpoint their motivation? As a Ph.D. student, I run into this problem around three to five times a week, when interacting with colleagues or my advisor[3].
After bugging a few people I learned that (of course) I'm not the first person to think about this question. In economics and political science, the situation is known as t |
68b80aeb-1585-4d81-b402-910b1f67dfd9 | trentmkelly/LessWrong-43k | LessWrong | Some Tools For Optimizing Our Media Use
Benevolent and malevolent media producers possess the power to influence society in positive and negative ways. They can do this through agenda setting, framing, priming, spreading memes, altering perceptions of groups and individuals, outright propaganda, and other methods.
I think more attention should be paid to the pathways from content to effects, so that we can optimize our cultural landscape.
Lest this post soon turn into Applause Light Vegas, I’ll now get into some methods I think can be used to sway mass opinion in a direction amenable to making the world better. Many of these methods deal with familiar biases, heuristics, and psychological effects.
Media Use Facilitating Positive Social Change
First, the mass media possesses the power to alter estimates of the likelihood and frequency of specific occurrences. Think back to some of the classic examples of the availability heuristic. When asked to estimate the number of homicides in the USA compared to suicides, people answer that there are far more homicides in the United States, even though the reverse is true. The mass media report on homicides far more often than they report on suicides, so people have more available instances of homicide in their memories and these come to mind more easily. This influences their beliefs about the real world, which can then be politicized to lead to different stances on gun control and education. The priorities of a culture with a homicide problem are not the priorities of a culture with a suicide problem.
This effect is consistent with some theoretical models of the mass media’s impact on society. Cultivation theorists understand the media, especially television, as a system of coherent memes and messages reflecting a society’s dominant ideology. If we accept the fundamental claim of cultivation theory then we should hypothesize exposure to television to be positively correlated with status quo beliefs and attitudes. We might then expect high exposure to non-fic |
e81402a0-2138-479f-bb98-1d9e2c0179b6 | trentmkelly/LessWrong-43k | LessWrong | Rodney Brooks talks about Evil AI and mentions MIRI [LINK]
Rodney Brooks says that "evil" AI is not a big problem:
http://www.rethinkrobotics.com/artificial-intelligence-tool-threat/ |
faaff190-8854-4bef-b4c9-c6a793c3310f | trentmkelly/LessWrong-43k | LessWrong | Good HPMoR scenes / passages?
I'm doing a reading of good fan-fiction at a con this weekend, to counter the many "bad fanfic reading" panels. I want to read an interesting passage from HPMoR, but I can't remember any particular passage myself, and I don't want to re-read the whole thing this week. Can anyone remember any scene or passage that stuck in their mind as impressive?
It should:
* be less than 1000 words
* be understandable to someone who's read Harry Potter, but not HPMoR, with at most one or two sentences of background info (please write those sentences in your reply)
* be good at the things that make HPMoR special, e.g.,
* a clever "exploit" of Muggle society using magic
* gobsmacking you with a new way of looking at something you thought you knew well
* convincing you that something that seems evil is the right thing to do
* Quirrell or Harry creeping you out, scaring you, or making you question your ethics by being ruthlessly rational |
80fcd461-7eb6-4566-bf53-0b9226142bdb | trentmkelly/LessWrong-43k | LessWrong | -
- |
4da7e566-6a4c-4623-83a7-2cc2ca4e97c8 | trentmkelly/LessWrong-43k | LessWrong | The Compendium, A full argument about extinction risk from AGI
We (Connor Leahy, Gabriel Alfour, Chris Scammell, Andrea Miotti, Adam Shimi) have just published The Compendium, which brings together in a single place the most important arguments that drive our models of the AGI race, and what we need to do to avoid catastrophe.
We felt that something like this has been missing from the AI conversation. Most of these points have been shared before, but a “comprehensive worldview” doc has been missing. We’ve tried our best to fill this gap, and welcome feedback and debate about the arguments. The Compendium is a living document, and we’ll keep updating it as we learn more and change our minds.
We would appreciate your feedback, whether or not you agree with us:
* If you do agree with us, please point out where you think the arguments can be made stronger, and contact us if there are ways you’d be interested in collaborating in the future.
* If you disagree with us, please let us know where our argument loses you and which points are the most significant cruxes - we welcome debate.
Here is the twitter thread and the summary:
> The Compendium aims to present a coherent worldview about the extinction risks of artificial general intelligence (AGI), an artificial intelligence that exceeds that of humans, in a way that is accessible to non-technical readers who have no prior knowledge of AI. A reader should come away with an understanding of the current landscape, the race to AGI, and its existential stakes.
>
> AI progress is rapidly converging on building AGI, driven by a brute-force paradigm that is bottlenecked by resources, not insights. Well-resourced, ideologically motivated individuals are driving a corporate race to AGI. They are now backed by Big Tech, and will soon have the support of nations.
>
> People debate whether or not it is possible to build AGI, but most of the discourse is rooted in pseudoscience. Because humanity lacks a formal theory of intelligence, we must operate by the empirical observation that AI c |
004c540d-24cc-4f6e-9c46-111a242fda4e | trentmkelly/LessWrong-43k | LessWrong | Meetup : Paderborn Meetup, November 22th
Discussion article for the meetup : Paderborn Meetup, November 22th
WHEN: 22 November 2012 07:00:00PM (+0100)
WHERE: Gownsmen's Pub, Uni Paderborn, Warburger Straße 100, Paderborn
The first meetup in this city ever.
There is lots of rationality stuff to talk about and the more people come, the more interesting people will be there to meet.
Even if you are new to Lesswrong and have only read some articles, don't be to shy to show up.
I will have a sign.
Everybody is welcome!
Discussion article for the meetup : Paderborn Meetup, November 22th |
8f7f1533-39e2-45e1-9683-deb61d328bd8 | trentmkelly/LessWrong-43k | LessWrong | Per Tribalismum ad Astra
Capitalism is powered by greed. People want to make money, so they look hard for things they can produce and that others want. Unknowingly, however, they are powering the great information-processing machine that is the market. The output of the machine is the efficient allocation of resources and, eventually, wealth.
Something we intuitively consider bad (greed) is made to work for the benefit of the entire society. Even more so: if everyone suddenly stopped being greedy, the economy would likely fall apart, leaving us in a harsh dog-eat-dog world. Or it would at least stop growing and start to slowly deteriorate.
Greed is a powerful motivator that can be used to drive the engine of societal well-being, but it's not the only motivating force around.
Another such force is tribalism, the desire to defend your own and attack others.
However, we haven't tamed tribalism as much as we have greed. We don't use that force to power the improvement of society.
One could maybe argue that tribalism can be used to wage war effectively and that a country could benefit from winning wars. But war is inherently a negative-sum enterprise, so I don't think that really counts.
Or we could argue that tribalism is used in democracies to limit the power of the government: opposition is constantly attacking the government, which in turn prevents it from going fully bonkers. However, even if this is a force for good, it's completely undirected. Opposition is incentivized to attack the government for anything they do, good or bad. There's no self-steering mechanism, such as the one that steers greed toward productive endeavors in the market economy. The more attacks, the better, however dumb, and the whole thing eventually devolves into a political reality show devoid of any real content. This, in turn, undermines public trust in the political system.
Now consider the following, from an article about the making of Twitter's "Community Notes," published in the excellent Asterisk magaz |
3dd18869-afc1-4d75-bdaf-a5c125f9b264 | trentmkelly/LessWrong-43k | LessWrong | What are some alternatives to Shapley values which drop additivity?
I originally asked this on math.stackexchange; after reading Diffractor's Unifying Bargaining sequence (Part 1 here) I'm wondering if there are more insights floating about, so I'm repeating it here.
Shapley values seem to be the standard answer to "how should a coalition split the rewards of their cooperation", but I'm curious about alternatives.
The standard characterization of Shapley values says that Shapley values are the unique coalition payments which satisfy a bunch of properties. Three of them (efficiency, symmetry, and null player) seem pretty necessary for any "reasonable" or "practical" coalition payment rule, but the last one (linearity) does not.
If I didn't care for linearity (or its close synonyms, additivity and aggregation):
* What sorts of payment rules become available?
* What other properties of Shapley values are maintained?
* What other properties would produce a uniquely characterized payment rule?
Alternatively, are any of the other properties also reasonable to drop (for instance, symmetry)? What do you end up with? |
bccd74ff-ad3f-4e95-9a40-8ed7557e9ffe | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Why not to solve alignment by making superintelligent humans?
Okay, hear me out: we already have human level/ general intelligent human aligned agents by the billions. And making new ones artificially seems to be really hard according to lots of alignment researchers. So why haven't I heard or read about projects trying to improve human intelligence? I'm not saying to do it in order to solve the alignment of pure artificial intelligences (although that could be a possibility too), but to the point of getting a hybrid (natural + artificial) superintelligence or just more useful than AIs.
I know that then there is the question about what a superintelligence would do or how having it aligned would help us to not make misaligned ones, and although there are possible good answers to that, those are questions that already exist outside "normal" alignment.
The only similar thing to that that I've heard was Elon Musk talking about Neuralink, but there is a hugeee difference between the things that Elon mentions as aspirations (in particular merging with an AI) and the product that is making right now. I don't see how Brain-Computer Interfaces would improve our decision making that much. A lot of people when they talk about augmenting intelligence seem to bring them up, and of course they could be useful, but again: I'm not talking about using them to solve AI Alignment, but to get around it. I'm wondering if we can find a way of scaling human intelligence in the same way we scale artificial intelligence.
I found a post that briefly [mentions](https://www.lesswrong.com/posts/rpRsksjrBXEDJuHHy/brain-computer-interfaces-and-ai-alignment#Scaling_far_Beyond_Human_Intelligence) similar ideas than mine but under the term BCI, which I don't understand if it's a more abarcative term than "a device that allows you to use other devices with the mind", because as I said, I don't know any device which would improve our decision making that much if we could just use it with our minds.
The clearest strategy that comes to mind is to make artificial neurons communicable with biological ones and then integrable with whole human neural networks. Could that be possible? I know that it could sound crazy, but I guess I'm talking to the people who think aligning an AI is really difficult and that having superintelligences on humanity's side sooner or later seems like the only path forward. |
1f3aed48-851d-4805-87fb-c2e560435623 | trentmkelly/LessWrong-43k | LessWrong | Against Compromise, or, Deciding as a Team without Succombing to Entropy
|
81813359-44ab-4f53-a3d5-d4ed033217ab | trentmkelly/LessWrong-43k | LessWrong | “Flaky breakthroughs” pervade coaching — but no one tracks them
Has someone you know ever had a “breakthrough” from coaching, meditation, or psychedelics — only to later have it fade?
Show tweet
For example, many people experience ego deaths that can last days or sometimes months. But as it turns out, having a sense of self can serve important functions (try navigating a world that expects you to have opinions, goals, and boundaries when you genuinely feel you have none) and finding a better cognitive strategy without downsides is non-trivial. Because the “breakthrough” wasn’t integrated with the conflicts of everyday life, it fades. I call these instances “flaky breakthroughs.”
It’s well-known that flaky breakthroughs are common with psychedelics and meditation, but apparently it’s not well-known that flaky breakthroughs are pervasive in coaching and retreats.
For example, it is common for someone to do some coaching, feel a “breakthrough”, think, “Wow, everything is going to be different from now on,” but feel and act no differently weeks or months later.
Worse, some techniques can even cause bypassing. Such “false breakthroughs” can come with intense positive affect or “cathartic” crying without addressing the underlying issue. (More below.)
Flaky breakthroughs can set people back for years or decades: If someone has a “breakthrough” that unexpectedly reverts, they can become jaded on progress itself. They can learn helplessness and give up on growing. The most depressed person you know has likely had this happen multiple times.
Show tweet
Flaky breakthroughs pervade inner work. Despite this, almost no one — coaches, therapists, retreats, bodyworkers, etc. — tracks whether their breakthroughs last.
Almost no practitioners track whether breakthroughs last.
Earlier this year, I attempted to make a list of “10x Coaches” to refer people to. 20–30 coaches reached out as interested in working with me, and I asked each to share the best evidence that they had facilitated lasting growth for others.
But all anyone could s |
4744ce17-ec6b-4620-942a-92ecdaeeff58 | trentmkelly/LessWrong-43k | LessWrong | Untypical SIA
TL;DR: If I am untypical, then many more observers exist. If this claim is necessarily true, then classical SIA is false.
There is a situation where we could prove something like SIA, let’s call it Untypical SIA. Imagine that there are 100 green dots and one red dot on a bow’s target (and they cover all the space). We know that there were either 10 or 100 attempts to hit the target with a random arrow. We only know that the red dot was eventually hit (maybe we are in it). Now we can give higher credence to the theory that there were 100 attempts (P1(red)=0.6339) over the theory of only 10 attempts (P2(red)=0.0956) to hit the red dot. Credence in 100 attempts = P1/(P1+P2) = 0.86.
The resonating above is made from God’s view and seems to have high epistemic status: it is not an assumption, it is a proof.
Examples of application of such untypical-SIA reasoning in the real life: if there is life on Earth, there are many other planets out there, which tried but fail to develop life. If humans are the only intelligent species, there are many others species that “tried” and fail to become intelligent. If our universe is fine-tuned, there are many other universes which are not fine-tuned, so the multiverse theory is true.
If Untypical SIA is a necessary condition for a larger set of observers, then classical SIA is false.
Untypical SIA requires that the red dot is different from the green dots. If all dots are red, and we learn that one of the dots was hit, it doesn’t give us any new information to update. If uniqueness is the necessary (=if and only if) condition for update when the lack of uniqueness means no update. In that case, classical SIA is false (it is not a problem in the infinite universe, where something like SIA still works, as we should look at relative concentrations of observers in different regions of the multiverse instead; all our SIA-based cosmological constructions are safe herein).
In other words, the fact of my existence is an argument in fa |
290730a1-b6d5-42b0-a238-8d45f8643651 | trentmkelly/LessWrong-43k | LessWrong | is gpt-3 few-shot ready for real applications?
This is a lengthy reply to @the-moti ‘s post here. Creating a new post to limit thread length, and so I can crosspost to LW.
@the-moti says, in part:
> This obviously raises two different questions: 1. Why did you think that no one would use few-shot learning in practice? 2. Why did other people think people would use few-shot learning in practice?
>
> I would be interested in hearing your thoughts on these two points.
—
Thanks for asking!
First of all, I want to emphasize that the GPT-3 paper was not about few-shot GPT-3 as a practical technology.
(This is important, because the paper is the one large body of quantitative evidence we have on few-shot GPT-3 performance.)
This is not just my take on it: before the OpenAI API was announced, all the discussion I saw took for granted that we were talking about a scientific finding and its broader implications. I didn’t see any commentator whose main takeaway was “wow, if I could do this few-shot thing right now, I could build amazing projects with it.”
Indeed, a common theme in critical commentary on my post was that I was too focused on whether few-shot was useful right now with this specific model, whereas the critical commentators were more focused on the implications for even larger models, the confirmation of scaling laws over a new parameter regime, or the illustration-in-principle of a kind of meta-learning. Gwern’s May newsletter is another illustrative primary source for the focus of the discussion in this brief “pre-API” period. (The API was announced on June 11.)
As I read it (perhaps benefitting from hindsight and discussion), the main points of the paper were
(1) bigger models are better at zero/few-shot (i.e. that result from the GPT-2 paper holds over a larger scale),
(2) more “shots” are better when you’re doing zero/few-shot,
(3) there is an interaction effect between 1+2, where larger models benefit more from additional “shots,”
(4) this could actually become a practical approach (eve |
777a51ef-9db4-42ff-9332-401760a6cf4b | trentmkelly/LessWrong-43k | LessWrong | Gwern's "Why Tool AIs Want to Be Agent AIs: The Power of Agency"
I somehow hadn't read this post until now, so I am posting this here in case I am not the only one (and I wasn't able to find a previous linkpost for it). Relevant to relatively recent discussion on AI-as-a-service, but also just good as a broad reference. |
b3da2c56-2cb0-43c2-b377-f2b28d25b27c | trentmkelly/LessWrong-43k | LessWrong | Defining the normal computer control problem
There has been focus on controlling super intelligent artificial intelligence, however we currently can't even control our un-agenty computers without having to resort to formatting and other large scale interventions.
Solving the normal computer control problem might help us solve the super intelligence control problem or allow us to work towards safe intelligence augmentation.
We cannot currently keep our computers doing what we want easily. They can get infected with malware, compromised or they get updates that may be buggy. If you have sufficient expertise you can go in and fix the problem or wipe the system, but this is not ideal.
We do not have control our computers, without resorting to out of band manipulation.
Genes have found a way to control the reptilian brain and also the more powerful mammalian and human brains, somewhat, as discussed in the control problem has already been solved. And the system continues to run, our brains aren't reformatted when we get bad behaviours. Let us call this the normal computer control problem. We don't know how the genes do it, but humans tend to do what they would want (if they wanted things!) despite our flexibility. There is some control there.
This problem of control has been neglected by traditional AI as there it is not trying to solve a cognitive problem. It is not like solving chess or learning to recognize faces. It is not making anything powerful, it is just weeding out the bad programs.
Comparing the normal computer control and AI control problem
The AI control problem has been defined as asking the question
> What prior precautions can the programmers take to successfully prevent the superintelligence from catastrophically misbehaving?
In this language the normal computer control problem can be defined as.
> What type of automated system can we implement to stop a normal general purpose computer system misbehaving (and carry on with its good behaviour ) if it has a malign program in it.
To make |
7743b2b4-5aa2-4813-bcc2-36c9531a5bcb | trentmkelly/LessWrong-43k | LessWrong | [Link] Less Wrong Wiki article with very long summary of Daniel Kahneman's Thinking, Fast and Slow
I've made very extensive notes, along with my assessment, of Daniel Kahneman's Thinking, Fast and Slow, and have passed it around to aspiring rationalist friends who found my notes very useful. So I though I would share these with the Less Wrong community by creating a Less Wrong Wiki article with these notes. Feel free to optimize the article based on your own notes as well. Hope this proves as helpful to you as it did to those others whom I shared my notes with.
|
31a43d38-8168-4f25-b717-25f41defa535 | trentmkelly/LessWrong-43k | LessWrong | The Law of Cultural Proximity
[Not my area of expertise, but I would be surprised if the core thesis was wrong in a significant way. Probably not as original as I think it is. Based on a previous blog post of mine that went in a very different/weird direction. Cross-posted from Grand, Unified, Crazy.]
Introduction
Currently, different human cultures have different behavioural norms around all sorts of things. These norms cover all kinds of personal and interpersonal conduct, and extend into different legal systems in countries around the globe. In politics, this is often talked about in the form of the Overton window, which is the set of political positions that are sufficiently “mainstream” in a given culture to be considered electable. Unsurprisingly, different cultures have different Overton windows. For example, Norway and the United States currently have Overton windows that tend to overlap on some policies (the punishment of theft) but perhaps not on others (social welfare).
Shared norms and a stable, well-defined Overton window are important for the stable functioning of society, since they provide the implicit contract and social fabric on which everything else operates. But what exactly is the scope of a “society” for which that is true? We just talked about the differences between Norway and the U.S., but in a very real sense, Norway and the U.S. share “western culture” when placed in comparison with Iran or North Korea. In the other direction, there are many distinct cultures entirely within the U.S. with different norms around things like gun control. The categories were made for man, not man for the categories.
However blurry these lines are, it might be tempting to assume that they get drawn roughly according to geography; it's certainly reflected in our language (note my use of "western culture" already in this post). But this isn't quite right: the key factor is actually interactional proximity; it's just that in a historical setting geographical and interactional proximity w |
a24adab0-e149-43c3-a2e6-f349f81c55ca | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Announcing AI Safety Support
Contents:
1. What is AI Safety Support
2. Discussion Days
3. Other Online Events
4. AI Safety Resources
5. Mentorship Program
6. AI Alignment Slack
7. Consider Donating
What is [AI Safety Support](https://www.aisafetysupport.org)
------------------------------------------------------------
[AI Safety Support](https://www.aisafetysupport.org/home) has existed as an Incorporated Association in Australia since October and as an initiative with a web page since May 2020.
Our aim is to fill the gaps in that AI Safety career pipeline. We aim to provide operational support to early career and transitioning researchers, so you can engage with the community and test your career fitness. Broadly we just want to help reduce friction and enable people to do the things you are already trying to achieve.
If you are new to AI Safety, we would be happy to talk to you, and to help you figure out what steps to take next. We don’t have all the answers, but we can probably provide you with a better map of the career landscape.
Feel free to reach out or book a call with either one of us:
* Linda: [email](mailto:linda.linsefors@gmail.com), [calendly](https://calendly.com/linda-linsefors/call)
* JJ: [email](mailto:jj.hepboin@gmail.com), [calendly](https://calendly.com/jj-hepboin)
[Discussion Days](https://www.aisafetysupport.org/events/discussion-days)
-------------------------------------------------------------------------
Our longest running project (since June) are our regular [online AI Safety Discussion Days](https://www.aisafetysupport.org/events/discussion-days). Each of these events has a talk, an Icebreaker session and breakout discussions covering whatever is on your mind at the time.
The Discussion Days focus on ongoing research. If you have some ideas you want feedback on, you are welcome to present them as a talk or bring them up during the breakout discussions. However, you are also welcome to just listen and learn, or maybe give feedback on other people's ideas.
Schedule:
* Second Monday every month, [UTC 18:00](https://everytimezone.com/s/48fd68f6) - [21:30](https://everytimezone.com/s/599c7b7e) (Europe/Africa and Americas friendly time)
* Fourth Monday every month (except December), [UTC 08:00](https://everytimezone.com/s/cd44d880) - [11:30](https://everytimezone.com/s/8157f5ed)(Asia/Pacific and Europe/Africa friendly time)
The next one is on **Monday, November 23th**.
[Other Online Events](https://www.aisafetysupport.org/events#h.nsdsrjsayhut)
----------------------------------------------------------------------------
We are not the only one running online AI Safety events these days. We’re collecting all online AI Safety and adjacent events we know about in this [shared calendar](https://www.aisafetysupport.org/events#h.nsdsrjsayhut). *Notice that some events require registration or application.*
If you know of other online AI Safety events, please let us know.
[Lots of AI Safety Links](https://www.aisafetysupport.org/resources/lots-of-links)
----------------------------------------------------------------------------------
To stay up to date with new information, I highly recommend signing up to the [Alignment Newsletter](https://rohinshah.com/alignment-newsletter/) and [80k's AI Safety group](https://groups.google.com/g/david-kruegers-80k-people).
However, there are also lots of more static resources scattered around the internet, such as study guides and research agendas. I’ve tried to list them all [here](https://www.aisafetysupport.org/resources) (though I’m still adding things). Let us know if we forgot something, or miscategorized something.
[Mentorship Program](https://www.aisafetysupport.org/resources/mentorship-program)
----------------------------------------------------------------------------------
We have spoken to a number of people who want some mentorship and to others who are interested in mentoring. So, we are experimenting with a new mentorship program to try to bring these people together. Mentors and mentees can respectively offer and ask for anything they like so this works best if we have more people to match.
As a rule of thumb, if you are at least a 3rd year PhD student, or an independent researcher with a few publications, then you can be a mentor. You might not think you are very senior, but remember that AI Safety is a very new feld, so you are already more senior than most.
We have no rules for who can be a mentee. The more experience you have, the harder it will be for us to find a mentor who knows more than you, but whoever you are, we’ll try to find someone to help you. However, try to be as **specific as you can** about what you want to get out of the mentorship. For example, maybe you have a research project in mind that you want to do?
* [Signup to be a mentor](https://docs.google.com/forms/d/e/1FAIpQLScpR72HNuy-DBlGOwkb36FuIvV-tgoBrNo-ftGs1vAcJ4g-DA/viewform)
* [Application for mentees](https://docs.google.com/forms/d/e/1FAIpQLSeTIZXxWuEQfGqEge62KaAHGQfI1xPEENBrabiSpNq35UP3rQ/viewform)
You can sign up for both!
Signup deadline for the first round is **November 30th**. We will contact everyone and pair you up during December, so expect the actual mentoring to start no earlier than January.
When or if we do a second round depends entirely on the interest we get in the first round, both from mentors and mentees.
[AI Alignment Slack](https://join.slack.com/t/ai-alignment/shared_invite/zt-fkgwbd2b-kK50z~BbVclOZMM9UP44gw)
------------------------------------------------------------------------------------------------------------
We can’t take credit for the existence of [this Slack group](https://join.slack.com/t/ai-alignment/shared_invite/zt-fkgwbd2b-kK50z~BbVclOZMM9UP44gw), since it was created by Andrei Alexandru. But we are helping to grow it, and making use of it as a place to communicate, ask questions, discuss research, and generally help each other out.
In this Slack you’ll find two channels dedicated to our [Discussion Days](https://www.aisafetysupport.org/events/discussion-days), one for [general follow-up discussions](https://app.slack.com/client/T01641E1H6C/C01E5PLKQ30), and one for [asking questions of the last speaker](https://app.slack.com/client/T01641E1H6C/C01EL93VC92).
We have one channel dedicated to [grad school application](https://app.slack.com/client/T01641E1H6C/C01EL93VC92). One idea we have around this is that it would be beneficial for AI Safety-interested students to end up in the same program, which is more likely to happen with some communication and coordination.
There are also channels for [finding study buddies](https://app.slack.com/client/T01641E1H6C/C01EL93VC92), [personal introduction](https://app.slack.com/client/T01641E1H6C/C01EL93VC92), several sub-field specific discussions and more.
[Consider Donating](https://www.aisafetysupport.org/donate)
-----------------------------------------------------------
Since April and June respectively we have both spent most of our working time on AI Safety Support. We can not do this work while also having to hold down separate jobs, and the people we are helping are mostly students with little to no income. We are therefore relying on donations to be able to continue this work. On top of that, a large-ish donation we were expecting has been delayed for an unknown amount of time, which means donations we get now would be extra helpful.
There are a few ways you can donate. We have accounts on both [Patreon](https://www.patreon.com/aiss) and [Ko-fi](https://ko-fi.com/lindalinsefors), for convenient regular donations. Both these platforms have some transaction costs, however. If you prefer sending us money more directly, [just let us know](mailto:jj.hepboin@gmail.com).
A third way you can donate to us is though [Rethink Charity](https://www.rethinkprojects.org/donate). This option will make your donation tax deductible if you live in **Canada** or **US**. If you want to donate more than $1000 through Rethink Charity, please [email them first](mailto:donations@rethinkprojects.org), so they can find the best method to donate for you. Regardless of the amount you will need to let Rethink Charity know that your donation was meant for AI Safety Support. (Please only use this option if you can take advantage of the tax deductions.)
If you are donating from somewhere else and want tax benefits, [let us know](mailto:jj.hepboin@gmail.com), and we will try to arrange something.
Donors looking to support our project through a **US based Donor Advised Fund** can do so by sending their donation to Rethink Charity (EIN 82-5325150). If you need any assistance donation support is available by [emailing Siobhan Brenton](mailto:donations@rethinkprojects.org).
Thank you! |
6f82cc31-9d8d-420e-85c7-58122d62bea2 | trentmkelly/LessWrong-43k | LessWrong | Game theory question -- iterated truel with private information
I was reading the discussion here of how truels with unlimited shots and symmetric complete information favor the weakest truelist. This set me to wondering about somewhat more complicated situations. Suppose you are in a world where there are daily truels for some substantial period of time, say 30 days. As with the linked problem, all hits are fatal and truelists are accurate 50%, 80%, or 100% of the time. Unlike the linked problem, however, rather than possessing complete information about the accuracy of other truelists, you only know your own true accuracy and the %hit of other truelists in earlier iterations, and there is no guarantee that there will be one truelist of each skill level in any given truel.
Now suppose you know that you are a perfect marksman. On which of the iterations would you intentionally miss your first shot? I definitely lack the math strength to offer a good strategy, but I'm sure many others here could do better.
Edit #2 -- I give up on the formatting. |
d382c1a8-559e-407a-9cb9-5e37b6d7dd86 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
1 Introduction
---------------
Reinforcement learning provides a mathematical formalism for learning-based control. By utilizing reinforcement learning, we can automatically acquire near-optimal behavioral skills, represented by policies, for optimizing user-specified reward functions. The reward function defines *what* an agent should do, and a reinforcement learning algorithm determines *how* to do it. While the reinforcement learning algorithms have been an active area of research for decades, the introduction of effective high-capacity function approximators – deep neural networks – into reinforcement learning, along with effective algorithms for training them, has allowed reinforcement learning methods to attain excellent results along a wide range of domains (Tesauro,, [1994](#bib.bib161); Hafner and Riedmiller,, [2011](#bib.bib45); Levine and Koltun,, [2013](#bib.bib90); Mnih et al.,, [2013](#bib.bib100); Levine et al.,, [2016](#bib.bib89); Silver et al.,, [2017](#bib.bib144); Kalashnikov et al.,, [2018](#bib.bib65)).
However, the fact that reinforcement learning algorithms provide a fundamentally *online* learning paradigm is also one of the biggest obstacles to their widespread adoption. The process of reinforcement learning involves iteratively collecting experience by interacting with the environment, typically with the latest learned policy, and then using that experience to improve the policy (Sutton and Barto,, [1998](#bib.bib151)). In many settings, this sort of online interaction is impractical, either because data collection is expensive (e.g., in robotics, educational agents, or healthcare) and dangerous (e.g., in autonomous driving, or healthcare). Furthermore, even in domains where online interaction is feasible, we might still prefer to utilize previously collected data instead – for example, if the domain is complex and effective generalization requires large datasets.
Indeed, the success of machine learning methods across a range of practically relevant problems over the past decade can in large part be attributed to the advent of scalable *data-driven* learning methods, which become better and better as they are trained with more data. Online reinforcement learning is difficult to reconcile with this paradigm. While this was arguably less of an issue when reinforcement learning methods utilized low-dimensional or linear parameterizations, and therefore relied on small datasets for small problems that were easy to collect or simulate (Lange et al.,, [2012](#bib.bib82)), once deep networks are incorporated into reinforcement learning, it is tempting to consider whether the same kind of data-driven learning can be applied with reinforcement learning objectives, thus resulting in *data-driven reinforcement learning* that utilizes only previously collected offline data, without any additional online interaction (Kumar,, [2019](#bib.bib76); Fu et al.,, [2020](#bib.bib29)). See Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems") for a pictorial illustration. A number of recent works have illustrated the power of such an approach in enabling data-driven learning of policies for dialogue (Jaques et al.,, [2019](#bib.bib56)), robotic manipulation behaviors (Ebert et al.,, [2018](#bib.bib22); Kalashnikov et al.,, [2018](#bib.bib65)), and robotic navigation skills (Kahn et al.,, [2020](#bib.bib59)).

Figure 1: Pictorial illustration of classic online reinforcement learning (a), classic off-policy reinforcement learning (b), and offline reinforcement learning (c). In online reinforcement learning (a), the policy πksubscript𝜋𝑘\pi\_{k}italic\_π start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT is updated with streaming data collected by πksubscript𝜋𝑘\pi\_{k}italic\_π start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT itself. In the classic off-policy setting (b), the agent’s experience is appended to a data buffer (also called a replay buffer) 𝒟𝒟\mathcal{D}caligraphic\_D, and each new policy πksubscript𝜋𝑘\pi\_{k}italic\_π start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT collects additional data, such that 𝒟𝒟\mathcal{D}caligraphic\_D is composed of samples from π0,π1,…,πksubscript𝜋0subscript𝜋1…subscript𝜋𝑘\pi\_{0},\pi\_{1},\dots,\pi\_{k}italic\_π start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_π start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_π start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT, and all of this data is used to train an updated new policy πk+1subscript𝜋𝑘1\pi\_{k+1}italic\_π start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT. In contrast, offline reinforcement learning employs a dataset 𝒟𝒟\mathcal{D}caligraphic\_D collected by some (potentially unknown) behavior policy πβsubscript𝜋𝛽{\pi\_{\beta}}italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT. The dataset is collected once, and is not altered during training, which makes it feasible to use large previous collected datasets. The training process does not interact with the MDP at all, and the policy is only deployed after being fully trained.
Unfortunately, such data-driven *offline* reinforcement learning also poses major algorithmic challenges. As we will discuss in this article, many commonly used reinforcement learning methods can learn from off-policy data, but such methods often cannot learn effectively from entire offline data, without any additional on-policy interaction. High-dimensional and expressive function approximation generally exacerbates this issue, since function approximation leaves the algorithms vulnerable to distributional shift, one of the central challenges with offline reinforcement learning. However, the appeal of a fully offline reinforcement learning framework is enormous: in the same way that supervised machine learning methods have enabled data to be turned into generalizable and powerful *pattern recognizers* (e.g., image classifiers, speech recognition engines, etc.), offline reinforcement learning methods equipped with powerful function approximation may enable data to be turned into generalizable and powerful *decision making engines*, effectively allowing anyone with a large enough dataset to turn this dataset into a policy that can optimize a desired utility criterion. From healthcare decision-making support to autonomous driving to robotics, the implications of a reliable and effective offline reinforcement learning method would be immense.
In some application domains, the lack of effective offline reinforcement learning methods has driven research in a number of interesting directions. For example, in robotics and autonomous driving, a rapidly growing research topic is the study of simulation to real-world transfer: training policies with reinforcement learning in simulation and then transferring these policies into the real world (Sadeghi and Levine,, [2017](#bib.bib135); Tan et al.,, [2018](#bib.bib158); Chebotar et al.,, [2019](#bib.bib9)). While this approach is very pragmatic (and often effective), its popularity highlights the deficiency in offline reinforcement learning methods: if it was possible to simply train policies with previously collected data, it would likely be unnecessary in many cases to manually design high-fidelity simulators for simulation-to-real-world transfer. After all, outside of reinforcement learning (e.g., in computer vision, NLP, or speech recognition), transfer from simulation is comparatively much less prevalent, since data-driven learning is so effective.
The goal of this article is to provide the reader with the conceptual tools needed to get started on research in the field of offline reinforcement learning (also called batch reinforcement learning (Ernst et al.,, [2005](#bib.bib23); Riedmiller,, [2005](#bib.bib130); Lange et al.,, [2012](#bib.bib82))), so as to hopefully begin addressing some of these deficiencies. To this end, we will present the offline reinforcement learning problem formulation, and describe some of the challenges associated with this problem setting, particularly in light of recent research on deep reinforcement learning and the interaction between reinforcement learning and high-dimensional function approximator, such as deep networks. We will cover a variety of offline reinforcement learning methods studied in the literature. For each one, we will discuss the conceptual challenges, and initial steps taken to mitigate these challenges. We will then discuss some of the applications of offline reinforcement learning techniques that have already been explored, despite the limitations of current methods, and conclude with some perspectives on future work and open problems in the field.
2 Offline Reinforcement Learning Problem Statement and Overview
----------------------------------------------------------------
In this section, we will introduce the mathematical formalism of reinforcement learning and define our notation, and then set up the offline reinforcement learning problem setting, where the goal is to learn near-optimal policies from previously collected data. Then, we will briefly discuss some of the intuition behind why the offline reinforcement learning problem setting poses some unique challenges, using a supervised behavioral cloning example.
###
2.1 Reinforcement Learning Preliminaries
In this section, we will define basic reinforcement learning concepts, following standard textbook definitions (Sutton and Barto,, [1998](#bib.bib151)). Reinforcement learning addresses the problem of learning to control a dynamical system, in a general sense. The dynamical system is fully defined by a fully-observed or partially-observed Markov decision process (MDP).
######
Definition 2.1 (Markov decision process).
The Markov decision process is defined as a tuple ℳ=(𝒮,𝒜,T,d0,r,γ)ℳ𝒮𝒜𝑇subscript𝑑0𝑟𝛾\mathcal{M}=(\mathcal{S},\mathcal{A},T,d\_{0},r,\gamma)caligraphic\_M = ( caligraphic\_S , caligraphic\_A , italic\_T , italic\_d start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_r , italic\_γ ), where 𝒮𝒮\mathcal{S}caligraphic\_S is a set of states 𝐬∈𝒮𝐬𝒮\mathbf{s}\in\mathcal{S}bold\_s ∈ caligraphic\_S, which may be either discrete or continuous (i.e., multi-dimensional vectors), 𝒜𝒜\mathcal{A}caligraphic\_A is a set of actions 𝐚∈𝒜𝐚𝒜\mathbf{a}\in\mathcal{A}bold\_a ∈ caligraphic\_A, which similarly can be discrete or continuous, T𝑇Titalic\_T defines a conditional probability distribution of the form T(𝐬t+1|𝐬t,𝐚t)𝑇conditionalsubscript𝐬𝑡1subscript𝐬𝑡subscript𝐚𝑡T(\mathbf{s}\_{t+1}|\mathbf{s}\_{t},\mathbf{a}\_{t})italic\_T ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) that describes the dynamics of the system,111We will sometimes use time subscripts (i.e., 𝐬t+1subscript𝐬𝑡1\mathbf{s}\_{t+1}bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT follows 𝐬tsubscript𝐬𝑡\mathbf{s}\_{t}bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT), and sometimes “prime” notation (i.e., 𝐬′superscript𝐬′\mathbf{s}^{\prime}bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT is the state that follows 𝐬𝐬\mathbf{s}bold\_s). Explicit time subscripts can help clarify the notation in finite-horizon settings, while “prime” notation is simpler in infinite-horizon settings where absolute time step indices are less meaningful. d0subscript𝑑0d\_{0}italic\_d start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT defines the initial state distribution d0(𝐬0)subscript𝑑0subscript𝐬0d\_{0}(\mathbf{s}\_{0})italic\_d start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ), r:𝒮×𝒜→:𝑟→𝒮𝒜absentr:\mathcal{S}\times\mathcal{A}\rightarrow\realitalic\_r : caligraphic\_S × caligraphic\_A → defines a reward function, and γ∈(0,1]𝛾01\gamma\in(0,1]italic\_γ ∈ ( 0 , 1 ] is a scalar discount factor.
We will use the fully-observed formalism in most of this article, though the definition for the partially observed Markov decision process (POMDP) is also provided for completeness. The MDP definition can be extended to the partially observed setting as follows:
######
Definition 2.2 (Partially observed Markov decision process).
The partially observed Markov decision process is defined as a tuple ℳ=(𝒮,𝒜,𝒪,T,d0,E,r,γ)ℳ𝒮𝒜𝒪𝑇subscript𝑑0𝐸𝑟𝛾\mathcal{M}=(\mathcal{S},\mathcal{A},\mathcal{O},T,d\_{0},E,r,\gamma)caligraphic\_M = ( caligraphic\_S , caligraphic\_A , caligraphic\_O , italic\_T , italic\_d start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_E , italic\_r , italic\_γ ), where 𝒮𝒮\mathcal{S}caligraphic\_S, 𝒜𝒜\mathcal{A}caligraphic\_A, T𝑇Titalic\_T, d0subscript𝑑0d\_{0}italic\_d start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT, r𝑟ritalic\_r, and γ𝛾\gammaitalic\_γ are defined as before, 𝒪𝒪\mathcal{O}caligraphic\_O is a set of observations, where each observation is given by 𝐨∈𝒪𝐨𝒪\mathbf{o}\in\mathcal{O}bold\_o ∈ caligraphic\_O, and E𝐸Eitalic\_E is an emission function, which defines the distribution E(𝐨t|𝐬t)𝐸conditionalsubscript𝐨𝑡subscript𝐬𝑡E(\mathbf{o}\_{t}|\mathbf{s}\_{t})italic\_E ( bold\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ).
The final goal in a reinforcement learning problem is to learn a policy, which defines a distribution over actions conditioned on states, π(𝐚t|𝐬t)𝜋conditionalsubscript𝐚𝑡subscript𝐬𝑡\pi(\mathbf{a}\_{t}|\mathbf{s}\_{t})italic\_π ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ), or conditioned on observations in the partially observed setting, π(𝐚t|𝐨t)𝜋conditionalsubscript𝐚𝑡subscript𝐨𝑡\pi(\mathbf{a}\_{t}|\mathbf{o}\_{t})italic\_π ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ). The policy may also be conditioned on an observation history, π(𝐚t|𝐨0:t)𝜋conditionalsubscript𝐚𝑡subscript𝐨:0𝑡\pi(\mathbf{a}\_{t}|\mathbf{o}\_{0:t})italic\_π ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_o start\_POSTSUBSCRIPT 0 : italic\_t end\_POSTSUBSCRIPT ). From these definitions, we can derive the *trajectory distribution*. The trajectory is a sequence of states and actions of length H𝐻Hitalic\_H, given by τ=(𝐬0,𝐚0,…,𝐬H,𝐚H)𝜏subscript𝐬0subscript𝐚0…subscript𝐬𝐻subscript𝐚𝐻\tau=(\mathbf{s}\_{0},\mathbf{a}\_{0},\dots,\mathbf{s}\_{H},\mathbf{a}\_{H})italic\_τ = ( bold\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , … , bold\_s start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ), where H𝐻Hitalic\_H may be infinite. The trajectory distribution pπsubscript𝑝𝜋p\_{\pi}italic\_p start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT for a given MDP ℳℳ\mathcal{M}caligraphic\_M and policy π𝜋\piitalic\_π is given by
| | | |
| --- | --- | --- |
| | pπ(τ)=d0(𝐬0)∏t=0Hπ(𝐚t|𝐬t)T(𝐬t+1|𝐬t,𝐚t).subscript𝑝𝜋𝜏subscript𝑑0subscript𝐬0superscriptsubscriptproduct𝑡0𝐻𝜋conditionalsubscript𝐚𝑡subscript𝐬𝑡𝑇conditionalsubscript𝐬𝑡1subscript𝐬𝑡subscript𝐚𝑡p\_{\pi}(\tau)=d\_{0}(\mathbf{s}\_{0})\prod\_{t=0}^{H}\pi(\mathbf{a}\_{t}|\mathbf{s}\_{t})T(\mathbf{s}\_{t+1}|\mathbf{s}\_{t},\mathbf{a}\_{t}).italic\_p start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT ( italic\_τ ) = italic\_d start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ) ∏ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_π ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) italic\_T ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) . | |
This definition can easily be extended into the partially observed setting by including the observations 𝐨tsubscript𝐨𝑡\mathbf{o}\_{t}bold\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT and emission function E(𝐨t|𝐬t)𝐸conditionalsubscript𝐨𝑡subscript𝐬𝑡E(\mathbf{o}\_{t}|\mathbf{s}\_{t})italic\_E ( bold\_o start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ). The reinforcement learning objective, J(π)𝐽𝜋J(\pi)italic\_J ( italic\_π ), can then be written as an expectation under this trajectory distribution:
| | | | |
| --- | --- | --- | --- |
| | J(π)=𝔼τ∼pπ(τ)[∑t=0Hγtr(𝐬t,𝐚t)].𝐽𝜋subscript𝔼similar-to𝜏subscript𝑝𝜋𝜏delimited-[]superscriptsubscript𝑡0𝐻superscript𝛾𝑡𝑟subscript𝐬𝑡subscript𝐚𝑡J(\pi)=\mathbb{E}\_{\tau\sim p\_{\pi}(\tau)}\left[\sum\_{t=0}^{H}\gamma^{t}r(\mathbf{s}\_{t},\mathbf{a}\_{t})\right].italic\_J ( italic\_π ) = blackboard\_E start\_POSTSUBSCRIPT italic\_τ ∼ italic\_p start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT ( italic\_τ ) end\_POSTSUBSCRIPT [ ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_r ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ] . | | (1) |
When H𝐻Hitalic\_H is infinite, it is sometimes also convenient to assume that the Markov chain on (𝐬t,𝐚t)subscript𝐬𝑡subscript𝐚𝑡(\mathbf{s}\_{t},\mathbf{a}\_{t})( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) defined by π(𝐚t|𝐬t)T(𝐬t+1|𝐬t,𝐚t)𝜋conditionalsubscript𝐚𝑡subscript𝐬𝑡𝑇conditionalsubscript𝐬𝑡1subscript𝐬𝑡subscript𝐚𝑡\pi(\mathbf{a}\_{t}|\mathbf{s}\_{t})T(\mathbf{s}\_{t+1}|\mathbf{s}\_{t},\mathbf{a}\_{t})italic\_π ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) italic\_T ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) is ergodic, and define the objective in terms of the expected reward under the stationary distribution of this Markov chain (Sutton and Barto,, [1998](#bib.bib151)). This definition is somewhat complicated by the role of the discount factor. For a full discussion of this topic, we refer the reader to prior work (Thomas,, [2014](#bib.bib163)).
In many cases, we will find it convenient to refer to the marginals of the trajectory distribution pπ(τ)subscript𝑝𝜋𝜏p\_{\pi}(\tau)italic\_p start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT ( italic\_τ ). We will use dπ(𝐬)superscript𝑑𝜋𝐬d^{\pi}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ) to refer to the overall state visitation frequency, averaged over the time steps, and dtπ(𝐬t)subscriptsuperscript𝑑𝜋𝑡subscript𝐬𝑡d^{\pi}\_{t}(\mathbf{s}\_{t})italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) to refer to the state visitation frequency at time step t𝑡titalic\_t.
In this section, we will briefly summarize different types of reinforcement learning algorithms and present definitions. At a high level, all standard reinforcement learning algorithms follow the same basic learning loop: the agent *interacts* with the MDP ℳℳ\mathcal{M}caligraphic\_M by using some sort of *behavior policy*, which may or may not match π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ), by observing the current state 𝐬tsubscript𝐬𝑡\mathbf{s}\_{t}bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, selecting an action 𝐚tsubscript𝐚𝑡\mathbf{a}\_{t}bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, and then observing the resulting next state 𝐬t+1subscript𝐬𝑡1\mathbf{s}\_{t+1}bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT and reward value rt=r(𝐬t,𝐚t)subscript𝑟𝑡𝑟subscript𝐬𝑡subscript𝐚𝑡r\_{t}=r(\mathbf{s}\_{t},\mathbf{a}\_{t})italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = italic\_r ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ). This may repeat for multiple steps, and the agent then uses the observed transitions (𝐬t,𝐚t,𝐬t+1,rt)subscript𝐬𝑡subscript𝐚𝑡subscript𝐬𝑡1subscript𝑟𝑡(\mathbf{s}\_{t},\mathbf{a}\_{t},\mathbf{s}\_{t+1},r\_{t})( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) to update its policy. This update might also utilize previously observed transitions. We will use 𝒟={(𝐬ti,𝐚ti,𝐬t+1i,rti)}𝒟subscriptsuperscript𝐬𝑖𝑡subscriptsuperscript𝐚𝑖𝑡subscriptsuperscript𝐬𝑖𝑡1subscriptsuperscript𝑟𝑖𝑡\mathcal{D}=\{(\mathbf{s}^{i}\_{t},\mathbf{a}^{i}\_{t},\mathbf{s}^{i}\_{t+1},r^{i}\_{t})\}caligraphic\_D = { ( bold\_s start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_s start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) } to denote the set of transitions that are available for the agent to use for updating the policy (“learning”), which may consist of either all transitions seen so far, or some subset thereof.
#### Policy gradients.
One of the most direct ways to optimize the RL objective in Equation [1](#S2.E1 "1 ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems") is to directly estimate its gradient. In this case, we typically assume that the policy is parameterized by a parameter vector θ𝜃\thetaitalic\_θ, and therefore given by πθ(𝐚t|𝐬t)subscript𝜋𝜃conditionalsubscript𝐚𝑡subscript𝐬𝑡\pi\_{\theta}(\mathbf{a}\_{t}|\mathbf{s}\_{t})italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ). For example, θ𝜃\thetaitalic\_θ might denote the weights of a deep network that outputs the logits for the (discrete) actions 𝐚tsubscript𝐚𝑡\mathbf{a}\_{t}bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. In this case, we can express the gradient of the objective with respect to θ𝜃\thetaitalic\_θ as:
| | | | |
| --- | --- | --- | --- |
| | ∇θJ(πθ)=𝔼τ∼pπθ(τ)[∑t=0Hγt∇θlogπθ(𝐚t|𝐬t)(∑t′=tHγt′−tr(𝐬t′,𝐚t′)−b(𝐬t))⏟return estimate A^(𝐬t,𝐚t)],subscript∇𝜃𝐽subscript𝜋𝜃subscript𝔼similar-to𝜏subscript𝑝subscript𝜋𝜃𝜏delimited-[]superscriptsubscript𝑡0𝐻superscript𝛾𝑡subscript∇𝜃subscript𝜋𝜃conditionalsubscript𝐚𝑡subscript𝐬𝑡subscript⏟superscriptsubscriptsuperscript𝑡′𝑡𝐻superscript𝛾superscript𝑡′𝑡𝑟subscript𝐬superscript𝑡′subscript𝐚superscript𝑡′𝑏subscript𝐬𝑡return estimate ^𝐴subscript𝐬𝑡subscript𝐚𝑡\nabla\_{\theta}J(\pi\_{\theta})\!=\!\mathbb{E}\_{\tau\sim p\_{\pi\_{\theta}}(\tau)}\left[\!\sum\_{t=0}^{H}\right.\gamma^{t}\nabla\_{\theta}\log\pi\_{\theta}(\mathbf{a}\_{t}|\mathbf{s}\_{t})\underbrace{\left(\sum\_{t^{\prime}=t}^{H}\gamma^{t^{\prime}-t}r(\mathbf{s}\_{t^{\prime}},\mathbf{a}\_{t^{\prime}})-b(\mathbf{s}\_{t})\right)}\_{\text{return estimate }\hat{A}(\mathbf{s}\_{t},\mathbf{a}\_{t})}\left.\vphantom{\sum\_{t=0}^{H}}\!\right],∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT italic\_J ( italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ) = blackboard\_E start\_POSTSUBSCRIPT italic\_τ ∼ italic\_p start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_τ ) end\_POSTSUBSCRIPT [ ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT roman\_log italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) under⏟ start\_ARG ( ∑ start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT - italic\_t end\_POSTSUPERSCRIPT italic\_r ( bold\_s start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ) - italic\_b ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ) end\_ARG start\_POSTSUBSCRIPT return estimate over^ start\_ARG italic\_A end\_ARG ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT ] , | | (2) |
where the return estimator A^(𝐬t,𝐚t)^𝐴subscript𝐬𝑡subscript𝐚𝑡\hat{A}(\mathbf{s}\_{t},\mathbf{a}\_{t})over^ start\_ARG italic\_A end\_ARG ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) can itself be learned as a separate neural network *critic*, as discussed below, or it can simply be estimated with Monte Carlo samples, in which case we simply generate samples from pπθ(τ)subscript𝑝subscript𝜋𝜃𝜏p\_{\pi\_{\theta}}(\tau)italic\_p start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_τ ), and then sum up the rewards over the time steps of the sampled trajectory. The baseline b(𝐬t)𝑏subscript𝐬𝑡b(\mathbf{s}\_{t})italic\_b ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) can be estimated as the average reward over the sampled trajectories, or by using a value function estimator V(𝐬t)𝑉subscript𝐬𝑡V(\mathbf{s}\_{t})italic\_V ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ), which we discuss in the dynamic programming section.
We can equivalently write this gradient expression as an expectation with respect to dtπ(𝐬t)subscriptsuperscript𝑑𝜋𝑡subscript𝐬𝑡d^{\pi}\_{t}(\mathbf{s}\_{t})italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) as
| | | |
| --- | --- | --- |
| | ∇θJ(πθ)=∑t=0H𝔼𝐬t∼dtπ(𝐬t),𝐚t∼πθ(𝐚t|𝐬t)[γt∇θlogπθ(𝐚t|𝐬t)A^(𝐬t,𝐚t)].subscript∇𝜃𝐽subscript𝜋𝜃superscriptsubscript𝑡0𝐻subscript𝔼formulae-sequencesimilar-tosubscript𝐬𝑡subscriptsuperscript𝑑𝜋𝑡subscript𝐬𝑡similar-tosubscript𝐚𝑡subscript𝜋𝜃conditionalsubscript𝐚𝑡subscript𝐬𝑡delimited-[]superscript𝛾𝑡subscript∇𝜃subscript𝜋𝜃conditionalsubscript𝐚𝑡subscript𝐬𝑡^𝐴subscript𝐬𝑡subscript𝐚𝑡\nabla\_{\theta}J(\pi\_{\theta})\!=\!\sum\_{t=0}^{H}\mathbb{E}\_{\mathbf{s}\_{t}\sim d^{\pi}\_{t}(\mathbf{s}\_{t}),\mathbf{a}\_{t}\sim\pi\_{\theta}(\mathbf{a}\_{t}|\mathbf{s}\_{t})}\left[\gamma^{t}\nabla\_{\theta}\log\pi\_{\theta}(\mathbf{a}\_{t}|\mathbf{s}\_{t})\hat{A}(\mathbf{s}\_{t},\mathbf{a}\_{t})\right].∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT italic\_J ( italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ) = ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT blackboard\_E start\_POSTSUBSCRIPT bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT roman\_log italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) over^ start\_ARG italic\_A end\_ARG ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ] . | |
A common modification is to drop the γtsuperscript𝛾𝑡\gamma^{t}italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT term in front of the gradient, which approximates an average reward setting (Thomas,, [2014](#bib.bib163)). Dropping this term and adopting an infinite-horizon formulation, we can further rewrite the policy gradient as expectation under dπ(𝐬)superscript𝑑𝜋𝐬d^{\pi}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ) as
| | | |
| --- | --- | --- |
| | ∇θJ(πθ)=11−γ𝔼𝐬∼dπ(𝐬t),𝐚∼πθ(𝐚|𝐬)[∇θlogπθ(𝐚|𝐬)A^(𝐬,𝐚)].subscript∇𝜃𝐽subscript𝜋𝜃11𝛾subscript𝔼formulae-sequencesimilar-to𝐬superscript𝑑𝜋subscript𝐬𝑡similar-to𝐚subscript𝜋𝜃conditional𝐚𝐬delimited-[]subscript∇𝜃subscript𝜋𝜃conditional𝐚𝐬^𝐴𝐬𝐚\nabla\_{\theta}J(\pi\_{\theta})\!=\!\frac{1}{1-\gamma}\mathbb{E}\_{\mathbf{s}\sim d^{\pi}(\mathbf{s}\_{t}),\mathbf{a}\sim\pi\_{\theta}(\mathbf{a}|\mathbf{s})}\left[\nabla\_{\theta}\log\pi\_{\theta}(\mathbf{a}|\mathbf{s})\hat{A}(\mathbf{s},\mathbf{a})\right].∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT italic\_J ( italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ) = divide start\_ARG 1 end\_ARG start\_ARG 1 - italic\_γ end\_ARG blackboard\_E start\_POSTSUBSCRIPT bold\_s ∼ italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) , bold\_a ∼ italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) end\_POSTSUBSCRIPT [ ∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT roman\_log italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) over^ start\_ARG italic\_A end\_ARG ( bold\_s , bold\_a ) ] . | |
The constant scaling term 11−γ11𝛾\frac{1}{1-\gamma}divide start\_ARG 1 end\_ARG start\_ARG 1 - italic\_γ end\_ARG is often disregarded. This infinite-horizon formulation is often convenient to work with for analyzing and deriving policy gradient methods. For a full derivation of this gradient, we refer the reader to prior work (Sutton et al.,, [2000](#bib.bib154); Kakade,, [2002](#bib.bib64); Schulman et al.,, [2015](#bib.bib139)). We can summarize a basic Monte Carlo policy gradient algorithm as follows:
Algorithm 1 On-policy policy gradient with Monte Carlo estimator
1:initialize θ0subscript𝜃0\theta\_{0}italic\_θ start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT
2:for iteration k∈[0,…,K]𝑘0…𝐾k\in[0,\dots,K]italic\_k ∈ [ 0 , … , italic\_K ] do
3: sample trajectories {τi}subscript𝜏𝑖\{\tau\_{i}\}{ italic\_τ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT } by running πθk(𝐚t|𝐬t)subscript𝜋subscript𝜃𝑘conditionalsubscript𝐚𝑡subscript𝐬𝑡\pi\_{\theta\_{k}}(\mathbf{a}\_{t}|\mathbf{s}\_{t})italic\_π start\_POSTSUBSCRIPT italic\_θ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ▷▷\triangleright▷ each τisubscript𝜏𝑖\tau\_{i}italic\_τ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT consists of 𝐬i,0,𝐚i,0,…,𝐬i,H,𝐚i,Hsubscript𝐬𝑖0subscript𝐚𝑖0…subscript𝐬𝑖𝐻subscript𝐚𝑖𝐻\mathbf{s}\_{i,0},\mathbf{a}\_{i,0},\dots,\mathbf{s}\_{i,H},\mathbf{a}\_{i,H}bold\_s start\_POSTSUBSCRIPT italic\_i , 0 end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_i , 0 end\_POSTSUBSCRIPT , … , bold\_s start\_POSTSUBSCRIPT italic\_i , italic\_H end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_i , italic\_H end\_POSTSUBSCRIPT
4: compute ℛi,t=∑t′=tHγt′−tr(𝐬i,t,𝐚i,t)subscriptℛ𝑖𝑡superscriptsubscriptsuperscript𝑡′𝑡𝐻superscript𝛾superscript𝑡′𝑡𝑟subscript𝐬𝑖𝑡subscript𝐚𝑖𝑡\mathcal{R}\_{i,t}=\sum\_{t^{\prime}=t}^{H}\gamma^{t^{\prime}-t}r(\mathbf{s}\_{i,t},\mathbf{a}\_{i,t})caligraphic\_R start\_POSTSUBSCRIPT italic\_i , italic\_t end\_POSTSUBSCRIPT = ∑ start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT - italic\_t end\_POSTSUPERSCRIPT italic\_r ( bold\_s start\_POSTSUBSCRIPT italic\_i , italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_i , italic\_t end\_POSTSUBSCRIPT )
5: fit b(𝐬t)𝑏subscript𝐬𝑡b(\mathbf{s}\_{t})italic\_b ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) to {ℛi,t}ℛ𝑖𝑡\{\mathcal{R}{i,t}\}{ caligraphic\_R italic\_i , italic\_t } ▷▷\triangleright▷ use constant bt=1N∑iℛi,tsubscript𝑏𝑡1𝑁subscript𝑖ℛ𝑖𝑡b\_{t}=\frac{1}{N}\sum\_{i}\mathcal{R}{i,t}italic\_b start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = divide start\_ARG 1 end\_ARG start\_ARG italic\_N end\_ARG ∑ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT caligraphic\_R italic\_i , italic\_t, or fit b(𝐬t)𝑏subscript𝐬𝑡b(\mathbf{s}\_{t})italic\_b ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) to {ℛi,t}ℛ𝑖𝑡\{\mathcal{R}{i,t}\}{ caligraphic\_R italic\_i , italic\_t }
6: compute A^(𝐬i,t,𝐚i,t)=ℛi,t−b(𝐬t)^𝐴subscript𝐬𝑖𝑡subscript𝐚𝑖𝑡subscriptℛ𝑖𝑡𝑏subscript𝐬𝑡\hat{A}(\mathbf{s}\_{i,t},\mathbf{a}\_{i,t})=\mathcal{R}\_{i,t}-b(\mathbf{s}\_{t})over^ start\_ARG italic\_A end\_ARG ( bold\_s start\_POSTSUBSCRIPT italic\_i , italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_i , italic\_t end\_POSTSUBSCRIPT ) = caligraphic\_R start\_POSTSUBSCRIPT italic\_i , italic\_t end\_POSTSUBSCRIPT - italic\_b ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT )
7: estimate ∇θkJ(πθk)≈∑i,t∇θklogπθk(𝐚i,t|𝐬i,t)A^(𝐬i,t,𝐚i,t)subscript∇subscript𝜃𝑘𝐽subscript𝜋subscript𝜃𝑘subscript𝑖𝑡subscript∇subscript𝜃𝑘subscript𝜋subscript𝜃𝑘conditionalsubscript𝐚𝑖𝑡subscript𝐬𝑖𝑡^𝐴subscript𝐬𝑖𝑡subscript𝐚𝑖𝑡\nabla\_{\theta\_{k}}J(\pi\_{\theta\_{k}})\approx\sum\_{i,t}\nabla\_{\theta\_{k}}\log\pi\_{\theta\_{k}}(\mathbf{a}\_{i,t}|\mathbf{s}\_{i,t})\hat{A}(\mathbf{s}\_{i,t},\mathbf{a}\_{i,t})∇ start\_POSTSUBSCRIPT italic\_θ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT italic\_J ( italic\_π start\_POSTSUBSCRIPT italic\_θ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) ≈ ∑ start\_POSTSUBSCRIPT italic\_i , italic\_t end\_POSTSUBSCRIPT ∇ start\_POSTSUBSCRIPT italic\_θ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT roman\_log italic\_π start\_POSTSUBSCRIPT italic\_θ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_i , italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_i , italic\_t end\_POSTSUBSCRIPT ) over^ start\_ARG italic\_A end\_ARG ( bold\_s start\_POSTSUBSCRIPT italic\_i , italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_i , italic\_t end\_POSTSUBSCRIPT )
8: update parameters: θk+1←θk+α∇θkJ(πθk)←subscript𝜃𝑘1subscript𝜃𝑘𝛼subscript∇subscript𝜃𝑘𝐽subscript𝜋subscript𝜃𝑘\theta\_{k+1}\leftarrow\theta\_{k}+\alpha\nabla\_{\theta\_{k}}J(\pi\_{\theta\_{k}})italic\_θ start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT ← italic\_θ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT + italic\_α ∇ start\_POSTSUBSCRIPT italic\_θ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT italic\_J ( italic\_π start\_POSTSUBSCRIPT italic\_θ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT )
9:end for
For additional details on standard on-policy policy gradient methods, we refer the reader to prior work (Sutton et al.,, [2000](#bib.bib154); Kakade,, [2002](#bib.bib64); Schulman et al.,, [2015](#bib.bib139)).
#### Approximate dynamic programming.
Another way to optimize the reinforcement learning objective is to observe that, if we can accurately estimate a state or state-action *value function*, it is easy to then recover a near-optimal policy. A value function provides an estimate of the expected cumulative reward that will be obtained by following some policy π(𝐚t|𝐬t)𝜋conditionalsubscript𝐚𝑡subscript𝐬𝑡\pi(\mathbf{a}\_{t}|\mathbf{s}\_{t})italic\_π ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) when starting from a given state 𝐬tsubscript𝐬𝑡\mathbf{s}\_{t}bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, in the case of the state-value function Vπ(𝐬t)superscript𝑉𝜋subscript𝐬𝑡V^{\pi}(\mathbf{s}\_{t})italic\_V start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ), or when starting from a state-action tuple (𝐬t,𝐚t)subscript𝐬𝑡subscript𝐚𝑡(\mathbf{s}\_{t},\mathbf{a}\_{t})( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ), in the case of the state-action value function Qπ(𝐬t,𝐚t)superscript𝑄𝜋subscript𝐬𝑡subscript𝐚𝑡Q^{\pi}(\mathbf{s}\_{t},\mathbf{a}\_{t})italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ). We can define these value functions as:
| | | | |
| --- | --- | --- | --- |
| | Vπ(𝐬t)superscript𝑉𝜋subscript𝐬𝑡\displaystyle V^{\pi}(\mathbf{s}\_{t})italic\_V start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) | =𝔼τ∼pπ(τ|𝐬t)[∑t′=tHγt′−tr(𝐬t,𝐚t)]absentsubscript𝔼similar-to𝜏subscript𝑝𝜋conditional𝜏subscript𝐬𝑡delimited-[]superscriptsubscriptsuperscript𝑡′𝑡𝐻superscript𝛾superscript𝑡′𝑡𝑟subscript𝐬𝑡subscript𝐚𝑡\displaystyle=\mathbb{E}\_{\tau\sim p\_{\pi}(\tau|\mathbf{s}\_{t})}\left[\sum\_{t^{\prime}=t}^{H}\gamma^{t^{\prime}-t}r(\mathbf{s}\_{t},\mathbf{a}\_{t})\right]= blackboard\_E start\_POSTSUBSCRIPT italic\_τ ∼ italic\_p start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT ( italic\_τ | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ ∑ start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT - italic\_t end\_POSTSUPERSCRIPT italic\_r ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ] | |
| | Qπ(𝐬t,𝐚t)superscript𝑄𝜋subscript𝐬𝑡subscript𝐚𝑡\displaystyle Q^{\pi}(\mathbf{s}\_{t},\mathbf{a}\_{t})italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) | =𝔼τ∼pπ(τ|𝐬t,𝐚t)[∑t′=tHγt′−tr(𝐬t,𝐚t)].absentsubscript𝔼similar-to𝜏subscript𝑝𝜋conditional𝜏subscript𝐬𝑡subscript𝐚𝑡delimited-[]superscriptsubscriptsuperscript𝑡′𝑡𝐻superscript𝛾superscript𝑡′𝑡𝑟subscript𝐬𝑡subscript𝐚𝑡\displaystyle=\mathbb{E}\_{\tau\sim p\_{\pi}(\tau|\mathbf{s}\_{t},\mathbf{a}\_{t})}\left[\sum\_{t^{\prime}=t}^{H}\gamma^{t^{\prime}-t}r(\mathbf{s}\_{t},\mathbf{a}\_{t})\right].= blackboard\_E start\_POSTSUBSCRIPT italic\_τ ∼ italic\_p start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT ( italic\_τ | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ ∑ start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT - italic\_t end\_POSTSUPERSCRIPT italic\_r ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ] . | |
From this, we can derive recursive definitions for these value functions, which are given as
| | | | |
| --- | --- | --- | --- |
| | Vπ(𝐬t)superscript𝑉𝜋subscript𝐬𝑡\displaystyle V^{\pi}(\mathbf{s}\_{t})italic\_V start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) | =𝔼𝐚t∼π(𝐚t|𝐬t)[Qπ(𝐬t,𝐚t)]absentsubscript𝔼similar-tosubscript𝐚𝑡𝜋conditionalsubscript𝐚𝑡subscript𝐬𝑡delimited-[]superscript𝑄𝜋subscript𝐬𝑡subscript𝐚𝑡\displaystyle=\mathbb{E}\_{\mathbf{a}\_{t}\sim\pi(\mathbf{a}\_{t}|\mathbf{s}\_{t})}\left[Q^{\pi}(\mathbf{s}\_{t},\mathbf{a}\_{t})\right]= blackboard\_E start\_POSTSUBSCRIPT bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ italic\_π ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ] | |
| | Qπ(𝐬t,𝐚t)superscript𝑄𝜋subscript𝐬𝑡subscript𝐚𝑡\displaystyle Q^{\pi}(\mathbf{s}\_{t},\mathbf{a}\_{t})italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) | =r(𝐬t,𝐚t)+γ𝔼𝐬t+1∼T(𝐬t+1|𝐬t,𝐚t)[Vπ(𝐬t+1)].absent𝑟subscript𝐬𝑡subscript𝐚𝑡𝛾subscript𝔼similar-tosubscript𝐬𝑡1𝑇conditionalsubscript𝐬𝑡1subscript𝐬𝑡subscript𝐚𝑡delimited-[]superscript𝑉𝜋subscript𝐬𝑡1\displaystyle=r(\mathbf{s}\_{t},\mathbf{a}\_{t})+\gamma\mathbb{E}\_{\mathbf{s}\_{t+1}\sim T(\mathbf{s}\_{t+1}|\mathbf{s}\_{t},\mathbf{a}\_{t})}\left[V^{\pi}(\mathbf{s}\_{t+1})\right].= italic\_r ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) + italic\_γ blackboard\_E start\_POSTSUBSCRIPT bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ∼ italic\_T ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ italic\_V start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) ] . | |
We can combine these two equations to express the Qπ(𝐬t,𝐚t)superscript𝑄𝜋subscript𝐬𝑡subscript𝐚𝑡Q^{\pi}(\mathbf{s}\_{t},\mathbf{a}\_{t})italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) in terms of Qπ(𝐬t+1,𝐚t+1)superscript𝑄𝜋subscript𝐬𝑡1subscript𝐚𝑡1Q^{\pi}(\mathbf{s}\_{t+1},\mathbf{a}\_{t+1})italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ):
| | | | | |
| --- | --- | --- | --- | --- |
| | Qπ(𝐬t,𝐚t)superscript𝑄𝜋subscript𝐬𝑡subscript𝐚𝑡\displaystyle Q^{\pi}(\mathbf{s}\_{t},\mathbf{a}\_{t})italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) | =r(𝐬t,𝐚t)+γ𝔼𝐬t+1∼T(𝐬t+1|𝐬t,𝐚t),𝐚t+1∼π(𝐚t+1|𝐬t+1)[Qπ(𝐬t+1,𝐚t+1))].\displaystyle=r(\mathbf{s}\_{t},\mathbf{a}\_{t})+\gamma\mathbb{E}\_{\mathbf{s}\_{t+1}\sim T(\mathbf{s}\_{t+1}|\mathbf{s}\_{t},\mathbf{a}\_{t}),\mathbf{a}\_{t+1}\sim\pi(\mathbf{a}\_{t+1}|\mathbf{s}\_{t+1})}\left[Q^{\pi}(\mathbf{s}\_{t+1},\mathbf{a}\_{t+1}))\right].= italic\_r ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) + italic\_γ blackboard\_E start\_POSTSUBSCRIPT bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ∼ italic\_T ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) , bold\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ∼ italic\_π ( bold\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) ) ] . | | (3) |
We can also express these in terms of the *Bellman operator* for the policy π𝜋\piitalic\_π, which we denote ℬπsuperscriptℬ𝜋\mathcal{B}^{\pi}caligraphic\_B start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT. For example, Equation ([3](#S2.E3 "3 ‣ Approximate dynamic programming. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems")) can be written as Q→π=ℬπQ→πsuperscript→𝑄𝜋superscriptℬ𝜋superscript→𝑄𝜋\vec{Q}^{\pi}=\mathcal{B}^{\pi}\vec{Q}^{\pi}over→ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT = caligraphic\_B start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT over→ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT, where Q→πsuperscript→𝑄𝜋\vec{Q}^{\pi}over→ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT denotes the Q-function Qπsuperscript𝑄𝜋Q^{\pi}italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT represented as a vector of length |𝒮|×|𝒜|𝒮𝒜|\mathcal{S}|\times|\mathcal{A}|| caligraphic\_S | × | caligraphic\_A |. Before moving on to deriving learning algorithms based on these definitions, we briefly discuss some properties of the Bellman operator. This Bellman operator has a unique fixed point that corresponds to the true Q-function for the policy π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ), which can be obtained by repeating the iteration Q→k+1π=ℬπQ→kπsubscriptsuperscript→𝑄𝜋𝑘1superscriptℬ𝜋subscriptsuperscript→𝑄𝜋𝑘\vec{Q}^{\pi}\_{k+1}=\mathcal{B}^{\pi}\vec{Q}^{\pi}\_{k}over→ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT = caligraphic\_B start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT over→ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT, and it can be shown that limk→∞Q→kπ=Q→πsubscript→𝑘subscriptsuperscript→𝑄𝜋𝑘superscript→𝑄𝜋\lim\_{k\rightarrow\infty}\vec{Q}^{\pi}\_{k}=\vec{Q}^{\pi}roman\_lim start\_POSTSUBSCRIPT italic\_k → ∞ end\_POSTSUBSCRIPT over→ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT = over→ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT, which obeys Equation ([3](#S2.E3 "3 ‣ Approximate dynamic programming. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems")) (Sutton and Barto,, [1998](#bib.bib151)). The proof for this follows from the observation that ℬπsuperscriptℬ𝜋\mathcal{B}^{\pi}caligraphic\_B start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT is a contraction in the ℓ∞subscriptℓ\ell\_{\infty}roman\_ℓ start\_POSTSUBSCRIPT ∞ end\_POSTSUBSCRIPT norm (Lagoudakis and Parr,, [2003](#bib.bib81)).
Based on these definitions, we can derive two commonly used algorithms based on dynamic programming: Q-learning and actor-critic methods. To derive Q-learning, we express the policy implicitly in terms of the Q-function, as π(𝐚t|𝐬t)=δ(𝐚t=argmaxQ(𝐬t,𝐚t))𝜋conditionalsubscript𝐚𝑡subscript𝐬𝑡𝛿subscript𝐚𝑡𝑄subscript𝐬𝑡subscript𝐚𝑡\pi(\mathbf{a}\_{t}|\mathbf{s}\_{t})=\delta(\mathbf{a}\_{t}=\arg\max Q(\mathbf{s}\_{t},\mathbf{a}\_{t}))italic\_π ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) = italic\_δ ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = roman\_arg roman\_max italic\_Q ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ), and only learn the Q-function Q(𝐬t,𝐚t)𝑄subscript𝐬𝑡subscript𝐚𝑡Q(\mathbf{s}\_{t},\mathbf{a}\_{t})italic\_Q ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ). By substituting this (implicit) policy into the above dynamic programming equation, we obtain the following condition on the optimal Q-function:
| | | | |
| --- | --- | --- | --- |
| | Q⋆(𝐬t,𝐚t)=r(𝐬t,𝐚t)+γ𝔼𝐬t+1∼T(𝐬t+1|𝐬t,𝐚t)[max𝐚t+1Q⋆(𝐬t+1,𝐚t+1)].superscript𝑄⋆subscript𝐬𝑡subscript𝐚𝑡𝑟subscript𝐬𝑡subscript𝐚𝑡𝛾subscript𝔼similar-tosubscript𝐬𝑡1𝑇conditionalsubscript𝐬𝑡1subscript𝐬𝑡subscript𝐚𝑡delimited-[]subscriptsubscript𝐚𝑡1superscript𝑄⋆subscript𝐬𝑡1subscript𝐚𝑡1Q^{\star}(\mathbf{s}\_{t},\mathbf{a}\_{t})=r(\mathbf{s}\_{t},\mathbf{a}\_{t})+\gamma\mathbb{E}\_{\mathbf{s}\_{t+1}\sim T(\mathbf{s}\_{t+1}|\mathbf{s}\_{t},\mathbf{a}\_{t})}\left[\max\_{\mathbf{a}\_{t+1}}Q^{\star}(\mathbf{s}\_{t+1},\mathbf{a}\_{t+1})\right].italic\_Q start\_POSTSUPERSCRIPT ⋆ end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) = italic\_r ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) + italic\_γ blackboard\_E start\_POSTSUBSCRIPT bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ∼ italic\_T ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ roman\_max start\_POSTSUBSCRIPT bold\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT italic\_Q start\_POSTSUPERSCRIPT ⋆ end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) ] . | | (4) |
We can again express this as Q→=ℬ⋆Q→→𝑄superscriptℬ⋆→𝑄\vec{Q}=\mathcal{B}^{\star}\vec{Q}over→ start\_ARG italic\_Q end\_ARG = caligraphic\_B start\_POSTSUPERSCRIPT ⋆ end\_POSTSUPERSCRIPT over→ start\_ARG italic\_Q end\_ARG in vector notation, where ℬ⋆superscriptℬ⋆\mathcal{B}^{\star}caligraphic\_B start\_POSTSUPERSCRIPT ⋆ end\_POSTSUPERSCRIPT now refers to the Bellman optimality operator. Note however that this operator is not linear, due to the maximization on the right-hand side in Equation ([4](#S2.E4 "4 ‣ Approximate dynamic programming. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems")). To turn this equation into a learning algorithm, we can minimize the difference between the left-hand side and right-hand side of this equation with respect to the parameters of a parametric Q-function estimator with parameters ϕitalic-ϕ\phiitalic\_ϕ, Qϕ(𝐬t,𝐚t)subscript𝑄italic-ϕsubscript𝐬𝑡subscript𝐚𝑡Q\_{\phi}(\mathbf{s}\_{t},\mathbf{a}\_{t})italic\_Q start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ). There are a number of variants of this Q-learning procedure, including variants that fully minimize the difference between the left-hand side and right-hand side of the above equation at each iteration, commonly referred to as fitted Q-iteration (Ernst et al.,, [2005](#bib.bib23); Riedmiller,, [2005](#bib.bib130)), and variants that take a single gradient step, such as the original Q-learning method (Watkins and Dayan,, [1992](#bib.bib176)). The commonly used variant in deep reinforcement learning is a kind of hybrid of these two methods, employing a replay buffer (Lin,, [1992](#bib.bib94)) and taking gradient steps on the Bellman error objective concurrently with data collection (Mnih et al.,, [2013](#bib.bib100)). We write out a general recipe for Q-learning methods in Algorithm [2](#alg2 "Algorithm 2 ‣ Approximate dynamic programming. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems").
Algorithm 2 Generic Q-learning (includes FQI and DQN as special cases)
1:initialize ϕ0subscriptitalic-ϕ0\phi\_{0}italic\_ϕ start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT
2:initialize π0(𝐚|𝐬)=ϵ𝒰(𝐚)+(1−ϵ)δ(𝐚=argmax𝐚Qϕ0(𝐬,𝐚))subscript𝜋0conditional𝐚𝐬italic-ϵ𝒰𝐚1italic-ϵ𝛿𝐚subscript𝐚subscript𝑄subscriptitalic-ϕ0𝐬𝐚\pi\_{0}(\mathbf{a}|\mathbf{s})=\epsilon\mathcal{U}(\mathbf{a})+(1-\epsilon)\delta(\mathbf{a}=\arg\max\_{\mathbf{a}}Q\_{\phi\_{0}}(\mathbf{s},\mathbf{a}))italic\_π start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) = italic\_ϵ caligraphic\_U ( bold\_a ) + ( 1 - italic\_ϵ ) italic\_δ ( bold\_a = roman\_arg roman\_max start\_POSTSUBSCRIPT bold\_a end\_POSTSUBSCRIPT italic\_Q start\_POSTSUBSCRIPT italic\_ϕ start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( bold\_s , bold\_a ) ) ▷▷\triangleright▷ Use ϵitalic-ϵ\epsilonitalic\_ϵ-greedy exploration
3:initialize replay buffer 𝒟=∅𝒟\mathcal{D}=\emptysetcaligraphic\_D = ∅ as a ring buffer of fixed size
4:initialize 𝐬∼d0(𝐬)similar-to𝐬subscript𝑑0𝐬\mathbf{s}\sim d\_{0}(\mathbf{s})bold\_s ∼ italic\_d start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ( bold\_s )
5:for iteration k∈[0,…,K]𝑘0…𝐾k\in[0,\dots,K]italic\_k ∈ [ 0 , … , italic\_K ] do
6: for step s∈[0,…,S−1]𝑠0…𝑆1s\in[0,\dots,S-1]italic\_s ∈ [ 0 , … , italic\_S - 1 ] do
7: 𝐚∼πk(𝐚|𝐬)similar-to𝐚subscript𝜋𝑘conditional𝐚𝐬\mathbf{a}\sim\pi\_{k}(\mathbf{a}|\mathbf{s})bold\_a ∼ italic\_π start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) ▷▷\triangleright▷ sample action from exploration policy
8: 𝐬′∼p(𝐬′|𝐬,𝐚)similar-tosuperscript𝐬′𝑝conditionalsuperscript𝐬′𝐬𝐚\mathbf{s}^{\prime}\sim p(\mathbf{s}^{\prime}|\mathbf{s},\mathbf{a})bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_p ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s , bold\_a ) ▷▷\triangleright▷ sample next state from MDP
9: 𝒟←𝒟∪{(𝐬,𝐚,𝐬′,r(𝐬,𝐚))}←𝒟𝒟𝐬𝐚superscript𝐬′𝑟𝐬𝐚\mathcal{D}\leftarrow\mathcal{D}\cup\{(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime},r(\mathbf{s},\mathbf{a}))\}caligraphic\_D ← caligraphic\_D ∪ { ( bold\_s , bold\_a , bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_r ( bold\_s , bold\_a ) ) } ▷▷\triangleright▷ append to buffer, purging old data if buffer too big
10: end for
11: ϕk,0←ϕk←subscriptitalic-ϕ𝑘0subscriptitalic-ϕ𝑘\phi\_{k,0}\leftarrow\phi\_{k}italic\_ϕ start\_POSTSUBSCRIPT italic\_k , 0 end\_POSTSUBSCRIPT ← italic\_ϕ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT
12: for gradient step g∈[0,…,G−1]𝑔0…𝐺1g\in[0,\dots,G-1]italic\_g ∈ [ 0 , … , italic\_G - 1 ] do
13: sample batch B⊂𝒟𝐵𝒟B\subset\mathcal{D}italic\_B ⊂ caligraphic\_D ▷▷\triangleright▷ B={(𝐬i,𝐚i,𝐬i′,rt)}𝐵subscript𝐬𝑖subscript𝐚𝑖subscriptsuperscript𝐬′𝑖subscript𝑟𝑡B=\{(\mathbf{s}\_{i},\mathbf{a}\_{i},\mathbf{s}^{\prime}\_{i},r\_{t})\}italic\_B = { ( bold\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) }
14: estimate error ℰ(B,ϕk,g)=∑i(Qϕk,g−(ri+γmax𝐚′Qϕk(𝐬′,𝐚′)))2ℰ𝐵subscriptitalic-ϕ𝑘𝑔subscript𝑖superscriptsubscript𝑄subscriptitalic-ϕ𝑘𝑔subscript𝑟𝑖𝛾subscriptsuperscript𝐚′subscript𝑄subscriptitalic-ϕ𝑘superscript𝐬′superscript𝐚′2\mathcal{E}(B,\phi\_{k,g})=\sum\_{i}\left(Q\_{\phi\_{k,g}}-(r\_{i}+\gamma\max\_{\mathbf{a}^{\prime}}Q\_{\phi\_{k}}(\mathbf{s}^{\prime},\mathbf{a}^{\prime}))\right)^{2}caligraphic\_E ( italic\_B , italic\_ϕ start\_POSTSUBSCRIPT italic\_k , italic\_g end\_POSTSUBSCRIPT ) = ∑ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_Q start\_POSTSUBSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_k , italic\_g end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT - ( italic\_r start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT + italic\_γ roman\_max start\_POSTSUBSCRIPT bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT italic\_Q start\_POSTSUBSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ) ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT
15: update parameters: ϕk,g+1←ϕk,g−α∇ϕk,gℰ(B,ϕk,g)←subscriptitalic-ϕ𝑘𝑔1subscriptitalic-ϕ𝑘𝑔𝛼subscript∇subscriptitalic-ϕ𝑘𝑔ℰ𝐵subscriptitalic-ϕ𝑘𝑔\phi\_{k,g+1}\leftarrow\phi\_{k,g}-\alpha\nabla\_{\phi\_{k,g}}\mathcal{E}(B,\phi\_{k,g})italic\_ϕ start\_POSTSUBSCRIPT italic\_k , italic\_g + 1 end\_POSTSUBSCRIPT ← italic\_ϕ start\_POSTSUBSCRIPT italic\_k , italic\_g end\_POSTSUBSCRIPT - italic\_α ∇ start\_POSTSUBSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_k , italic\_g end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT caligraphic\_E ( italic\_B , italic\_ϕ start\_POSTSUBSCRIPT italic\_k , italic\_g end\_POSTSUBSCRIPT )
16: end for
17: ϕk+1←ϕk,G←subscriptitalic-ϕ𝑘1subscriptitalic-ϕ𝑘𝐺\phi\_{k+1}\leftarrow\phi\_{k,G}italic\_ϕ start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT ← italic\_ϕ start\_POSTSUBSCRIPT italic\_k , italic\_G end\_POSTSUBSCRIPT ▷▷\triangleright▷ update parameters
18:end for
Classic Q-learning can be derived as the limiting case where the buffer size is 1, and we take G=1𝐺1G=1italic\_G = 1 gradient steps and collect S=1𝑆1S=1italic\_S = 1 transition samples per iteration, while classic fitted Q-iteration runs the inner gradient descent phase to convergence (i.e., G=∞𝐺G=\inftyitalic\_G = ∞), and uses a buffer size equal to the number of sampling steps S𝑆Sitalic\_S. Note that many modern implementations also employ a *target network*, where the target value ri+γmax𝐚′Qϕk(𝐬′,𝐚′)subscript𝑟𝑖𝛾subscriptsuperscript𝐚′subscript𝑄subscriptitalic-ϕ𝑘superscript𝐬′superscript𝐚′r\_{i}+\gamma\max\_{\mathbf{a}^{\prime}}Q\_{\phi\_{k}}(\mathbf{s}^{\prime},\mathbf{a}^{\prime})italic\_r start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT + italic\_γ roman\_max start\_POSTSUBSCRIPT bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT italic\_Q start\_POSTSUBSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) actually uses ϕLsubscriptitalic-ϕ𝐿\phi\_{L}italic\_ϕ start\_POSTSUBSCRIPT italic\_L end\_POSTSUBSCRIPT, where L𝐿Litalic\_L is a lagged iteration (e.g., the last k𝑘kitalic\_k that is a multiple of 1000). Note that these approximations violate the assumptions under which Q-learning algorithms can be proven to converge. However, recent work suggests that high-capacity function approximators, which correspond to a very large set 𝒬𝒬\mathcal{Q}caligraphic\_Q, generally do tend to make this method convergent in practice, yielding a Q-function that is close to Q→πsuperscript→𝑄𝜋\vec{Q}^{\pi}over→ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT (Fu et al.,, [2019](#bib.bib30); Van Hasselt et al.,, [2018](#bib.bib170)).
#### Actor-critic algorithms.
Actor-critic algorithms combine the basic ideas from policy gradients and approximate dynamic programming. Such algorithms employ *both* a parameterized policy and a parameterized value function, and use the value function to provide a better estimate of A^(𝐬,𝐚)^𝐴𝐬𝐚\hat{A}(\mathbf{s},\mathbf{a})over^ start\_ARG italic\_A end\_ARG ( bold\_s , bold\_a ) for policy gradient calculation. There are a number of different variants of actor-critic methods, including on-policy variants that directly estimate Vπ(𝐬)superscript𝑉𝜋𝐬V^{\pi}(\mathbf{s})italic\_V start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ) (Konda and Tsitsiklis,, [2000](#bib.bib74)), and off-policy variants that estimate Qπ(𝐬,𝐚)superscript𝑄𝜋𝐬𝐚Q^{\pi}(\mathbf{s},\mathbf{a})italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) via a parameterized state-action value function Qϕπ(𝐬,𝐚)subscriptsuperscript𝑄𝜋italic-ϕ𝐬𝐚Q^{\pi}\_{\phi}(\mathbf{s},\mathbf{a})italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT ( bold\_s , bold\_a ) (Haarnoja et al.,, [2018](#bib.bib43), [2017](#bib.bib42); Heess et al.,, [2015](#bib.bib50)). We will focus on the latter class of algorithms, since they can be extended to the offline setting. The basic design of such an algorithm is a straightforward combination of the ideas in dynamic programming and policy gradients. Unlike Q-learning, which directly attempts to learn the optimal Q-function, actor-critic methods aim to learn the Q-function corresponding to the current parameterized policy πθ(𝐚|𝐬)subscript𝜋𝜃conditional𝐚𝐬\pi\_{\theta}(\mathbf{a}|\mathbf{s})italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a | bold\_s ), which must obey the equation
| | | |
| --- | --- | --- |
| | Qπ(𝐬t,𝐚t)=r(𝐬t,𝐚t)+γ𝔼𝐬t+1∼T(𝐬t+1|𝐬t,𝐚t),𝐚t+1∼πθ(𝐚t+1|𝐬t+1)[Qπ(𝐬t+1,𝐚t+1)].superscript𝑄𝜋subscript𝐬𝑡subscript𝐚𝑡𝑟subscript𝐬𝑡subscript𝐚𝑡𝛾subscript𝔼formulae-sequencesimilar-tosubscript𝐬𝑡1𝑇conditionalsubscript𝐬𝑡1subscript𝐬𝑡subscript𝐚𝑡similar-tosubscript𝐚𝑡1subscript𝜋𝜃conditionalsubscript𝐚𝑡1subscript𝐬𝑡1delimited-[]superscript𝑄𝜋subscript𝐬𝑡1subscript𝐚𝑡1Q^{\pi}(\mathbf{s}\_{t},\mathbf{a}\_{t})=r(\mathbf{s}\_{t},\mathbf{a}\_{t})+\gamma\mathbb{E}\_{\mathbf{s}\_{t+1}\sim T(\mathbf{s}\_{t+1}|\mathbf{s}\_{t},\mathbf{a}\_{t}),\mathbf{a}\_{t+1}\sim\pi\_{\theta}(\mathbf{a}\_{t+1}|\mathbf{s}\_{t+1})}\left[Q^{\pi}(\mathbf{s}\_{t+1},\mathbf{a}\_{t+1})\right].italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) = italic\_r ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) + italic\_γ blackboard\_E start\_POSTSUBSCRIPT bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ∼ italic\_T ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) , bold\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ∼ italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) ] . | |
As before, this equation can be expressed in vector form in terms of the Bellman operator for the policy, Q→π=ℬπQ→πsuperscript→𝑄𝜋superscriptℬ𝜋superscript→𝑄𝜋\vec{Q}^{\pi}=\mathcal{B}^{\pi}\vec{Q}^{\pi}over→ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT = caligraphic\_B start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT over→ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT, where Q→πsuperscript→𝑄𝜋\vec{Q}^{\pi}over→ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT denotes the Q-function Qπsuperscript𝑄𝜋Q^{\pi}italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT represented as a vector of length |𝒮|×|𝒜|𝒮𝒜|\mathcal{S}|\times|\mathcal{A}|| caligraphic\_S | × | caligraphic\_A |. We can now instantiate a complete algorithm based on this idea, shown in Algorithm [3](#alg3 "Algorithm 3 ‣ Actor-critic algorithms. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems").
Algorithm 3 Generic off-policy actor-critic
1:initialize ϕ0subscriptitalic-ϕ0\phi\_{0}italic\_ϕ start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT
2:initialize θ0subscript𝜃0\theta\_{0}italic\_θ start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT
3:initialize replay buffer 𝒟=∅𝒟\mathcal{D}=\emptysetcaligraphic\_D = ∅ as a ring buffer of fixed size
4:initialize 𝐬∼d0(𝐬)similar-to𝐬subscript𝑑0𝐬\mathbf{s}\sim d\_{0}(\mathbf{s})bold\_s ∼ italic\_d start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ( bold\_s )
5:for iteration k∈[0,…,K]𝑘0…𝐾k\in[0,\dots,K]italic\_k ∈ [ 0 , … , italic\_K ] do
6: for step s∈[0,…,S−1]𝑠0…𝑆1s\in[0,\dots,S-1]italic\_s ∈ [ 0 , … , italic\_S - 1 ] do
7: 𝐚∼πθk(𝐚|𝐬)similar-to𝐚subscript𝜋subscript𝜃𝑘conditional𝐚𝐬\mathbf{a}\sim\pi\_{\theta\_{k}}(\mathbf{a}|\mathbf{s})bold\_a ∼ italic\_π start\_POSTSUBSCRIPT italic\_θ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) ▷▷\triangleright▷ sample action from current policy
8: 𝐬′∼p(𝐬′|𝐬,𝐚)similar-tosuperscript𝐬′𝑝conditionalsuperscript𝐬′𝐬𝐚\mathbf{s}^{\prime}\sim p(\mathbf{s}^{\prime}|\mathbf{s},\mathbf{a})bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_p ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s , bold\_a ) ▷▷\triangleright▷ sample next state from MDP
9: 𝒟←𝒟∪{(𝐬,𝐚,𝐬′,r(𝐬,𝐚))}←𝒟𝒟𝐬𝐚superscript𝐬′𝑟𝐬𝐚\mathcal{D}\leftarrow\mathcal{D}\cup\{(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime},r(\mathbf{s},\mathbf{a}))\}caligraphic\_D ← caligraphic\_D ∪ { ( bold\_s , bold\_a , bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_r ( bold\_s , bold\_a ) ) } ▷▷\triangleright▷ append to buffer, purging old data if buffer too big
10: end for
11: ϕk,0←ϕk←subscriptitalic-ϕ𝑘0subscriptitalic-ϕ𝑘\phi\_{k,0}\leftarrow\phi\_{k}italic\_ϕ start\_POSTSUBSCRIPT italic\_k , 0 end\_POSTSUBSCRIPT ← italic\_ϕ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT
12: for gradient step g∈[0,…,GQ−1]𝑔0…subscript𝐺𝑄1g\in[0,\dots,G\_{Q}-1]italic\_g ∈ [ 0 , … , italic\_G start\_POSTSUBSCRIPT italic\_Q end\_POSTSUBSCRIPT - 1 ] do
13: sample batch B⊂𝒟𝐵𝒟B\subset\mathcal{D}italic\_B ⊂ caligraphic\_D ▷▷\triangleright▷ B={(𝐬i,𝐚i,𝐬i′,rt)}𝐵subscript𝐬𝑖subscript𝐚𝑖subscriptsuperscript𝐬′𝑖subscript𝑟𝑡B=\{(\mathbf{s}\_{i},\mathbf{a}\_{i},\mathbf{s}^{\prime}\_{i},r\_{t})\}italic\_B = { ( bold\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) }
14: estimate error ℰ(B,ϕk,g)=∑i(Qϕk,g−(ri+γ𝔼𝐚′∼πk(𝐚′|𝐬′)Qϕk(𝐬′,𝐚′)))2ℰ𝐵subscriptitalic-ϕ𝑘𝑔subscript𝑖superscriptsubscript𝑄subscriptitalic-ϕ𝑘𝑔subscript𝑟𝑖𝛾subscript𝔼similar-tosuperscript𝐚′subscript𝜋𝑘conditionalsuperscript𝐚′superscript𝐬′subscript𝑄subscriptitalic-ϕ𝑘superscript𝐬′superscript𝐚′2\mathcal{E}(B,\phi\_{k,g})=\sum\_{i}\left(Q\_{\phi\_{k,g}}-(r\_{i}+\gamma\mathbb{E}\_{\mathbf{a}^{\prime}\sim\pi\_{k}(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})}Q\_{\phi\_{k}}(\mathbf{s}^{\prime},\mathbf{a}^{\prime}))\right)^{2}caligraphic\_E ( italic\_B , italic\_ϕ start\_POSTSUBSCRIPT italic\_k , italic\_g end\_POSTSUBSCRIPT ) = ∑ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ( italic\_Q start\_POSTSUBSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_k , italic\_g end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT - ( italic\_r start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT + italic\_γ blackboard\_E start\_POSTSUBSCRIPT bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_π start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) end\_POSTSUBSCRIPT italic\_Q start\_POSTSUBSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ) ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT
15: update parameters: ϕk,g+1←ϕk,g−αQ∇ϕk,gℰ(B,ϕk,g)←subscriptitalic-ϕ𝑘𝑔1subscriptitalic-ϕ𝑘𝑔subscript𝛼𝑄subscript∇subscriptitalic-ϕ𝑘𝑔ℰ𝐵subscriptitalic-ϕ𝑘𝑔\phi\_{k,g+1}\leftarrow\phi\_{k,g}-\alpha\_{Q}\nabla\_{\phi\_{k,g}}\mathcal{E}(B,\phi\_{k,g})italic\_ϕ start\_POSTSUBSCRIPT italic\_k , italic\_g + 1 end\_POSTSUBSCRIPT ← italic\_ϕ start\_POSTSUBSCRIPT italic\_k , italic\_g end\_POSTSUBSCRIPT - italic\_α start\_POSTSUBSCRIPT italic\_Q end\_POSTSUBSCRIPT ∇ start\_POSTSUBSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_k , italic\_g end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT caligraphic\_E ( italic\_B , italic\_ϕ start\_POSTSUBSCRIPT italic\_k , italic\_g end\_POSTSUBSCRIPT )
16: end for
17: ϕk+1←ϕk,GQ←subscriptitalic-ϕ𝑘1subscriptitalic-ϕ𝑘subscript𝐺𝑄\phi\_{k+1}\leftarrow\phi\_{k,G\_{Q}}italic\_ϕ start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT ← italic\_ϕ start\_POSTSUBSCRIPT italic\_k , italic\_G start\_POSTSUBSCRIPT italic\_Q end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ▷▷\triangleright▷ update Q-function parameters
18: θk,0←θk←subscript𝜃𝑘0subscript𝜃𝑘\theta\_{k,0}\leftarrow\theta\_{k}italic\_θ start\_POSTSUBSCRIPT italic\_k , 0 end\_POSTSUBSCRIPT ← italic\_θ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT
19: for gradient step g∈[0,…Gπ−1]𝑔0…subscript𝐺𝜋1g\in[0,\dots G\_{\pi}-1]italic\_g ∈ [ 0 , … italic\_G start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT - 1 ] do
20: sample batch of states {𝐬i}subscript𝐬𝑖\{\mathbf{s}\_{i}\}{ bold\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT } from 𝒟𝒟\mathcal{D}caligraphic\_D
21: for each 𝐬isubscript𝐬𝑖\mathbf{s}\_{i}bold\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT, sample 𝐚i∼πθk,g(𝐚|𝐬i)similar-tosubscript𝐚𝑖subscript𝜋subscript𝜃𝑘𝑔conditional𝐚subscript𝐬𝑖\mathbf{a}\_{i}\sim\pi\_{\theta\_{k,g}}(\mathbf{a}|\mathbf{s}\_{i})bold\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∼ italic\_π start\_POSTSUBSCRIPT italic\_θ start\_POSTSUBSCRIPT italic\_k , italic\_g end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( bold\_a | bold\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) ▷▷\triangleright▷ do not use actions in the buffer!
22: for each (𝐬i,𝐚i)subscript𝐬𝑖subscript𝐚𝑖(\mathbf{s}\_{i},\mathbf{a}\_{i})( bold\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ), compute A^(𝐬i,𝐚i)=Qϕk+1(𝐬i,𝐚i)−𝔼𝐚∼πk,g(𝐚|𝐬i)[Qϕk+1(𝐬i,𝐚)]^𝐴subscript𝐬𝑖subscript𝐚𝑖subscript𝑄subscriptitalic-ϕ𝑘1subscript𝐬𝑖subscript𝐚𝑖subscript𝔼similar-to𝐚subscript𝜋𝑘𝑔conditional𝐚subscript𝐬𝑖delimited-[]subscript𝑄subscriptitalic-ϕ𝑘1subscript𝐬𝑖𝐚\hat{A}(\mathbf{s}\_{i},\mathbf{a}\_{i})=Q\_{\phi\_{k+1}}(\mathbf{s}\_{i},\mathbf{a}\_{i})-\mathbb{E}\_{\mathbf{a}\sim\pi\_{k,g}(\mathbf{a}|\mathbf{s}\_{i})}[Q\_{\phi\_{k+1}}(\mathbf{s}\_{i},\mathbf{a})]over^ start\_ARG italic\_A end\_ARG ( bold\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) = italic\_Q start\_POSTSUBSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) - blackboard\_E start\_POSTSUBSCRIPT bold\_a ∼ italic\_π start\_POSTSUBSCRIPT italic\_k , italic\_g end\_POSTSUBSCRIPT ( bold\_a | bold\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ italic\_Q start\_POSTSUBSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , bold\_a ) ]
23: ∇θk,gJ(πθk,g)≈1N∇θk,glogπθk,g(𝐬i,𝐚i)A^(𝐬i,𝐚i)subscript∇subscript𝜃𝑘𝑔𝐽subscript𝜋subscript𝜃𝑘𝑔1𝑁subscript∇subscript𝜃𝑘𝑔subscript𝜋subscript𝜃𝑘𝑔subscript𝐬𝑖subscript𝐚𝑖^𝐴subscript𝐬𝑖subscript𝐚𝑖\nabla\_{\theta\_{k,g}}J(\pi\_{\theta\_{k,g}})\approx\frac{1}{N}\nabla\_{\theta\_{k,g}}\log\pi\_{\theta\_{k,g}}(\mathbf{s}\_{i},\mathbf{a}\_{i})\hat{A}(\mathbf{s}\_{i},\mathbf{a}\_{i})∇ start\_POSTSUBSCRIPT italic\_θ start\_POSTSUBSCRIPT italic\_k , italic\_g end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT italic\_J ( italic\_π start\_POSTSUBSCRIPT italic\_θ start\_POSTSUBSCRIPT italic\_k , italic\_g end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) ≈ divide start\_ARG 1 end\_ARG start\_ARG italic\_N end\_ARG ∇ start\_POSTSUBSCRIPT italic\_θ start\_POSTSUBSCRIPT italic\_k , italic\_g end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT roman\_log italic\_π start\_POSTSUBSCRIPT italic\_θ start\_POSTSUBSCRIPT italic\_k , italic\_g end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) over^ start\_ARG italic\_A end\_ARG ( bold\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT )
24: θk,g+1←θk,g+απ∇θk,gJ(πθk,g)←subscript𝜃𝑘𝑔1subscript𝜃𝑘𝑔subscript𝛼𝜋subscript∇subscript𝜃𝑘𝑔𝐽subscript𝜋subscript𝜃𝑘𝑔\theta\_{k,g+1}\leftarrow\theta\_{k,g}+\alpha\_{\pi}\nabla\_{\theta\_{k,g}}J(\pi\_{\theta\_{k,g}})italic\_θ start\_POSTSUBSCRIPT italic\_k , italic\_g + 1 end\_POSTSUBSCRIPT ← italic\_θ start\_POSTSUBSCRIPT italic\_k , italic\_g end\_POSTSUBSCRIPT + italic\_α start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT ∇ start\_POSTSUBSCRIPT italic\_θ start\_POSTSUBSCRIPT italic\_k , italic\_g end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT italic\_J ( italic\_π start\_POSTSUBSCRIPT italic\_θ start\_POSTSUBSCRIPT italic\_k , italic\_g end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT )
25: end for
26: θk+1←θk,Gπ←𝜃𝑘1subscript𝜃𝑘subscript𝐺𝜋\theta{k+1}\leftarrow\theta\_{k,G\_{\pi}}italic\_θ italic\_k + 1 ← italic\_θ start\_POSTSUBSCRIPT italic\_k , italic\_G start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ▷▷\triangleright▷ update policy parameters
27:end for
For more details, we refer the reader to standard textbooks and prior works (Sutton and Barto,, [1998](#bib.bib151); Konda and Tsitsiklis,, [2000](#bib.bib74)).
Actor-critic algorithms are closely related with another class of methods that frequently arises in dynamic programming, called policy iteration (PI) (Lagoudakis and Parr,, [2003](#bib.bib81)). Policy iteration consists of two phases: policy evaluation and policy improvement. The policy evaluation phase computes the Q-function for the current policy π𝜋\piitalic\_π, Qπsuperscript𝑄𝜋Q^{\pi}italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT, by solving for the fixed point such that Qπ=ℬπQπsuperscript𝑄𝜋superscriptℬ𝜋superscript𝑄𝜋Q^{\pi}=\mathcal{B}^{\pi}Q^{\pi}italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT = caligraphic\_B start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT. This can be done via linear programming or solving a system of linear equations, as we will discuss in Section [4](#S4 "4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), or via gradient updates, analogously to line 15 in Algorithm [3](#alg3 "Algorithm 3 ‣ Actor-critic algorithms. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"). The next policy iterate is then computed in the policy improvement phase, by choosing the action that greedily maximizes the Q-value at each state, such that πk+1(𝐚|𝐬)=δ(𝐚=argmax𝐚Qπk(𝐬,𝐚))subscript𝜋𝑘1conditional𝐚𝐬𝛿𝐚subscript𝐚superscript𝑄subscript𝜋𝑘𝐬𝐚\pi\_{k+1}(\mathbf{a}|\mathbf{s})=\delta(\mathbf{a}=\arg\max\_{\mathbf{a}}Q^{\pi\_{k}}(\mathbf{s},\mathbf{a}))italic\_π start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) = italic\_δ ( bold\_a = roman\_arg roman\_max start\_POSTSUBSCRIPT bold\_a end\_POSTSUBSCRIPT italic\_Q start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) ), or by using a gradient based update procedure as is employed in Algorithm [3](#alg3 "Algorithm 3 ‣ Actor-critic algorithms. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems") on line 24. In the absence of function approximation (e.g., with tabular representations) policy iteration produces a monotonically improving sequence of policies, and converges to the optimal policy. Policy iteration can be obtained as a special case of the generic actor-critic algorithm in Algorithm [3](#alg3 "Algorithm 3 ‣ Actor-critic algorithms. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems") when we set GQ=∞subscript𝐺𝑄G\_{Q}=\inftyitalic\_G start\_POSTSUBSCRIPT italic\_Q end\_POSTSUBSCRIPT = ∞ and Gπ=∞subscript𝐺𝜋G\_{\pi}=\inftyitalic\_G start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT = ∞, when the buffer 𝒟𝒟\mathcal{D}caligraphic\_D consists of each and every transition of the MDP.
#### Model-based reinforcement learning.
Model-based reinforcement learning is a general term that refers to a broad class of methods that utilize explicit estimates of the transition or dynamics function T(𝐬t+1|𝐬t,𝐚t)𝑇conditionalsubscript𝐬𝑡1subscript𝐬𝑡subscript𝐚𝑡T(\mathbf{s}\_{t+1}|\mathbf{s}\_{t},\mathbf{a}\_{t})italic\_T ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ), parameterized by a parameter vector ψ𝜓\psiitalic\_ψ, which we will denote Tψ(𝐬t+1|𝐬t,𝐚t)subscript𝑇𝜓conditionalsubscript𝐬𝑡1subscript𝐬𝑡subscript𝐚𝑡T\_{\psi}(\mathbf{s}\_{t+1}|\mathbf{s}\_{t},\mathbf{a}\_{t})italic\_T start\_POSTSUBSCRIPT italic\_ψ end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ). There is no single recipe for a model-based reinforcement learning method. Some commonly used model-based reinforcement learning algorithms learn only the dynamics model Tψ(𝐬t+1|𝐬t,𝐚t)subscript𝑇𝜓conditionalsubscript𝐬𝑡1subscript𝐬𝑡subscript𝐚𝑡T\_{\psi}(\mathbf{s}\_{t+1}|\mathbf{s}\_{t},\mathbf{a}\_{t})italic\_T start\_POSTSUBSCRIPT italic\_ψ end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ), and then utilize it for planning at test time, often by means of model-predictive control (MPC) (Tassa et al.,, [2012](#bib.bib160)) with various trajectory optimization methods (Nagabandi et al.,, [2018](#bib.bib108); Chua et al.,, [2018](#bib.bib12)). Other model-based reinforcement learning methods utilize a learned policy πθ(𝐚t|𝐬t)subscript𝜋𝜃conditionalsubscript𝐚𝑡subscript𝐬𝑡\pi\_{\theta}(\mathbf{a}\_{t}|\mathbf{s}\_{t})italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) in addition to the dynamics model, and employ backpropagation through time to optimize the policy with respect to the expected reward objective (Deisenroth and Rasmussen,, [2011](#bib.bib19)). Yet another set of algorithms employ the model to generate “synthetic” samples to augment the sample set available to model-free reinforcement learning methods. The classic Dyna algorithm uses this recipe in combination with Q-learning and one-step predictions via the model from previously seen states (Sutton,, [1991](#bib.bib150)), while a variety of recently proposed algorithms employ synthetic model-based rollouts with policy gradients (Parmas et al.,, [2019](#bib.bib119); [Kaiser et al., 2019a,](#bib.bib61) ) and actor-critic algorithms (Janner et al.,, [2019](#bib.bib55)). Since there are so many variants of model-based reinforcement learning algorithms, we will not go into detail on each of them in this section, but we will discuss some considerations for offline model-based reinforcement learning in Section [5](#S5 "5 Offline Model-Based Reinforcement Learning ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems").
###
2.2 Offline Reinforcement Learning
The offline reinforcement learning problem can be defined as a *data-driven* formulation of the reinforcement learning problem. The end goal is still to optimize the objective in Equation ([1](#S2.E1 "1 ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems")). However, the agent no longer has the ability to interact with the environment and collect additional transitions using the behavior policy. Instead, the learning algorithm is provided with a *static* dataset of transitions, 𝒟={(𝐬ti,𝐚ti,𝐬t+1i,rti)}𝒟subscriptsuperscript𝐬𝑖𝑡subscriptsuperscript𝐚𝑖𝑡subscriptsuperscript𝐬𝑖𝑡1subscriptsuperscript𝑟𝑖𝑡\mathcal{D}=\{(\mathbf{s}^{i}\_{t},\mathbf{a}^{i}\_{t},\mathbf{s}^{i}\_{t+1},r^{i}\_{t})\}caligraphic\_D = { ( bold\_s start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_s start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) }, and must learn the best policy it can using this dataset. This formulation more closely resembles the standard supervised learning problem statement, and we can regard 𝒟𝒟\mathcal{D}caligraphic\_D as the *training set* for the policy. In essence, offline reinforcement learning requires the learning algorithm to derive a sufficient understanding of the dynamical system underlying the MDP ℳℳ\mathcal{M}caligraphic\_M entirely from a fixed dataset, and then construct a policy π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ) that attains the largest possible cumulative reward *when it is actually used to interact with the MDP*. We will use πβsubscript𝜋𝛽{\pi\_{\beta}}italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT to denote the distribution over states and actions in 𝒟𝒟\mathcal{D}caligraphic\_D, such that we assume that the state-action tuples (𝐬,𝐚)∈𝒟𝐬𝐚𝒟(\mathbf{s},\mathbf{a})\in\mathcal{D}( bold\_s , bold\_a ) ∈ caligraphic\_D are sampled according to 𝐬∼dπβ(𝐬)similar-to𝐬superscript𝑑subscript𝜋𝛽𝐬\mathbf{s}\sim d^{{\pi\_{\beta}}}(\mathbf{s})bold\_s ∼ italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ), and the actions are sampled according to the behavior policy, such that 𝐚∼πβ(𝐚|𝐬)similar-to𝐚subscript𝜋𝛽conditional𝐚𝐬\mathbf{a}\sim{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})bold\_a ∼ italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ).
This problem statement has been presented under a number of different names. The term “off-policy reinforcement learning” is typically used as an umbrella term to denote all reinforcement learning algorithms that can employ datasets of transitions 𝒟𝒟\mathcal{D}caligraphic\_D where the corresponding actions in each transition were collected with any policy *other* than the current policy π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ). Q-learning algorithms, actor-critic algorithms that utilize Q-functions, and many model-based reinforcement learning algorithm are off-policy algorithms. However, off-policy algorithms still often employ additional interaction (i.e., online data collection) during the learning process. Therefore, the term “fully off-policy” is sometimes used to indicate that no additional online data collection is performed. Another commonly used term is “batch reinforcement learning” (Ernst et al.,, [2005](#bib.bib23); Riedmiller,, [2005](#bib.bib130); Lange et al.,, [2012](#bib.bib82)). While this term has been used widely in the literature, it can also cause some amount of confusion, since the use of a “batch” in an iterative learning algorithm can also refer to a method that consumes a batch of data, updates a model, and then obtains a different batch, as opposed to a traditional online learning algorithm, which consumes one sample at a time. In fact, Lange et al., ([2012](#bib.bib82)) further introduces qualifiers “pure” and “growing” batch reinforcement learning to clarify this. To avoid this confusion, we will instead use the term “offline reinforcement learning” in this tutorial.
The offline reinforcement learning problem can be approached using algorithms from each of the four categories covered in the previous section, and in principle any off-policy RL algorithm *could* be used as an offline RL algorithm. For example, a simple offline RL method can be obtained simply by using Q-learning without additional online exploration, using 𝒟𝒟\mathcal{D}caligraphic\_D to pre-populate the data buffer. This corresponds to changing the initialization of 𝒟𝒟\mathcal{D}caligraphic\_D in Algorithm [2](#alg2 "Algorithm 2 ‣ Approximate dynamic programming. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), and setting S=0𝑆0S=0italic\_S = 0. However, as we will discuss later, not all such methods are effective in the offline setting.
###
2.3 Example Scenarios
Before delving deeper into the technical questions surrounding offline RL, we will first discuss a few example scenarios where offline RL might be utilized. These scenarios will help us to understand the factors that we must consider when designing offline RL methods that are not only convergent and principled, but also likely to work well in practice. A more complete discussion of actual applications is provided in Section [6](#S6 "6 Applications and Evaluation ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems").
#### Decision making in health care.
An example health care scenario might formulate a Markov decision process to model the process of diagnosing and treating a patient, where actions correspond to various available interventions (e.g., diagnostic tests and treatments), and observations correspond to the patient’s symptoms and results of diagnostic tests. A partially observed MDP formulation may be most suitable in such cases. Conventional active reinforcement learning in such scenarios might be prohibitively dangerous – even utilizing a fully trained policy to treat a patient is a difficult prospect for clinicians, and deploying a partially trained policy would be out of the question. Therefore, offline RL might be the only viable path to apply reinforcement learning in such settings. Offline data would then be obtained from treatment histories of real patients, with the “actions” that were selected by their physician.
#### Learning goal-directed dialogue policies.
Dialogue can be viewed as interactive sequential decision making problem, which can also be modeled as an MDP, particularly when the dialogue is goal-directed (e.g., a chat bot on an e-commerce website that is offering information about a product to persuade a user to make a purchase). However, since the goal for such agents is to interact successfully with real humans, collecting trials requires interacting with live humans, which may be prohibitively expensive at the scale needed to train effective conversational agents. However, offline data can be collected directly from humans, and may indeed be natural to collect in any application domain where the aim is to partially or completely supplant human operators, by recording the interactions that are already taking place with the human-operated system.
#### Learning robotic manipulation skills.
In a robotic manipulation setting, active reinforcement learning may in fact be quite feasible. However, we might want to learn policies for a variety of robotic skills (e.g., all of the steps necessary to prepare a variety of meals for a home cooking robot) that generalize effectively over different environments and settings. In that case, each skill by itself might require a very large amount of interaction, as we would not only need to collect enough data to learn the skill, but enough data such that this skill generalizes effectively to all the situations (e.g., all the different homes) in which the robot might need to perform it. With offline RL, we could instead imagine including all of the data the robot has ever collected for *all* of its previously learned skills in the data buffer for each new skill that it learns. In this way, some skills could conceivably be acquired without any new data collection, if they can be assembled from parts of previously learned behaviors (e.g., cooking a soup that includes onions and carrots can likely be learned from experience cooking a soup with onions and meat, as well as another soup with carrots and cucumbers). In this way, offline RL can effectively utilize *multi-task* data.
###
2.4 What Makes Offline Reinforcement Learning Difficult?
Offline reinforcement learning is a difficult problem for multiple reasons, some of which are reasonably clear, and some of which might be a bit less clear. Arguably the most obvious challenge with offline reinforcement learning is that, because the learning algorithm must rely entirely on the static dataset 𝒟𝒟\mathcal{D}caligraphic\_D, there is no possibility of improving exploration: exploration is *outside* the scope of the algorithm, so if 𝒟𝒟\mathcal{D}caligraphic\_D does not contain transitions that illustrate high-reward regions of the state space, it may be impossible to discover those high-reward regions. However, because there is nothing that we can do to address this challenge, we will not spend any more time on it, and will instead assume that 𝒟𝒟\mathcal{D}caligraphic\_D adequately covers the space of high-reward transitions to make learning feasible.222It is worth noting that defining this notion formally is itself an open problem, and to the best knowledge of the authors, there are no known non-trivial “sufficiency” conditions on 𝒟𝒟\mathcal{D}caligraphic\_D that allows us to formulate guarantees that any offline reinforcement learning algorithm will recover an optimal or near-optimal policy.
A more subtle but practically more important challenge with offline reinforcement learning is that, at its core, offline reinforcement learning is about making and answering counterfactual queries. Counterfactual queries are, intuitively, “what if” questions. Such queries require forming hypotheses about what *might* happen if the agent were to carry out a course of action different from the one seen in the data. This is a necessity in offline RL, since we if we want the learned policy to perform better than the behavior seen in the dataset 𝒟𝒟\mathcal{D}caligraphic\_D, we must execute a sequence of actions that is in some way different. Unfortunately, this strains the capabilities of many of our current machine learning tools, which are designed around the assumption that the data is independent and identically distributed (i.i.d.). That is, in standard supervised learning, the goal is to train a model that attains good performance (e.g., high accuracy) on data coming from the same distribution as the training data. In offline RL, the whole point is to learn a policy that does something *differently* (presumably better) from the pattern of behavior observed in the dataset 𝒟𝒟\mathcal{D}caligraphic\_D.
The fundamental challenge with making such counterfactual queries is distributional shift: while our function approximator (policy, value function, or model) might be trained under one distribution, it will be evaluated on a different distribution, due both to the change in visited states for the new policy and, more subtly, by the act of maximizing the expected return. This latter point is discussed in more detail in Section [4](#S4 "4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"). Distributional shift issues can be addressed in several ways, with the simplest one being to constrain something about the learning process such that the distributional shift is bounded. For example, by constraining how much the learned policy π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ) differs from the behavior policy πβ(𝐚|𝐬)subscript𝜋𝛽conditional𝐚𝐬{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ), we can bound state distributional shift (Kakade and Langford,, [2002](#bib.bib63); Schulman et al.,, [2015](#bib.bib139)).
In this section, we will provide a short theoretical illustration of how harmful distributional shift can be on the performance of policies in MDPs. In this example, based on Ross et al., ([2011](#bib.bib132)), we will assume that we are provided with optimal action labels 𝐚⋆superscript𝐚⋆\mathbf{a}^{\star}bold\_a start\_POSTSUPERSCRIPT ⋆ end\_POSTSUPERSCRIPT at each state 𝐬∈𝒟𝐬𝒟\mathbf{s}\in\mathcal{D}bold\_s ∈ caligraphic\_D. One might expect that, under such a strong assumption, the performance of our learned policy should be at least as good as the policies that we can learn with reinforcement learning without such optimal action labels. The goal in this analysis will be to bound the number of mistakes made by the learned policy π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ) based on this labeled dataset, denoted as
| | | |
| --- | --- | --- |
| | ℓ(π)=𝔼pπ(τ)[∑t=0Hδ(𝐚t≠𝐚t⋆)].ℓ𝜋subscript𝔼subscript𝑝𝜋𝜏delimited-[]superscriptsubscript𝑡0𝐻𝛿subscript𝐚𝑡superscriptsubscript𝐚𝑡⋆\ell(\pi)=\mathbb{E}\_{p\_{\pi}(\tau)}\left[\sum\_{t=0}^{H}\delta(\mathbf{a}\_{t}\neq\mathbf{a}\_{t}^{\star})\right].roman\_ℓ ( italic\_π ) = blackboard\_E start\_POSTSUBSCRIPT italic\_p start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT ( italic\_τ ) end\_POSTSUBSCRIPT [ ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_δ ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ≠ bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ⋆ end\_POSTSUPERSCRIPT ) ] . | |
If we train π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ) with supervised learning (i.e., standard empirical risk minimization) on this labeled dataset, we have the following result from Ross et al., ([2011](#bib.bib132)):
######
Theorem 2.1 (Behavioral cloning error bound).
If π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ) is trained via empirical risk minimization on 𝐬∼dπβ(𝐬)similar-to𝐬superscript𝑑subscript𝜋𝛽𝐬\mathbf{s}\sim d^{{\pi\_{\beta}}}(\mathbf{s})bold\_s ∼ italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ) and optimal labels 𝐚⋆superscript𝐚normal-⋆\mathbf{a}^{\star}bold\_a start\_POSTSUPERSCRIPT ⋆ end\_POSTSUPERSCRIPT, and attains generalization error ϵitalic-ϵ\epsilonitalic\_ϵ on 𝐬∼dπβ(𝐬)similar-to𝐬superscript𝑑subscript𝜋𝛽𝐬\mathbf{s}\sim d^{{\pi\_{\beta}}}(\mathbf{s})bold\_s ∼ italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ), then ℓ(π)≤C+H2ϵnormal-ℓ𝜋𝐶superscript𝐻2italic-ϵ\ell(\pi)\leq C+H^{2}\epsilonroman\_ℓ ( italic\_π ) ≤ italic\_C + italic\_H start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT italic\_ϵ is the best possible bound on the expected error of the learned policy.
###### Proof.
The proof follows from Theorem 2.1 from Ross et al., ([2011](#bib.bib132)) using the 0-1 loss, and the bound is the best possible bound following the example from Ross and Bagnell, ([2010](#bib.bib131)).
∎
Interestingly, if we allow for additional data collection, where we follow the learned policy π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ) to gather additional states 𝐬∼dπ(𝐬)similar-to𝐬superscript𝑑𝜋𝐬\mathbf{s}\sim d^{\pi}(\mathbf{s})bold\_s ∼ italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ), and then access optimal action labels for these new *on-policy* states, the best possible bound becomes substantially better:
######
Theorem 2.2 (DAgger error bound).
If π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ) is trained via empirical risk minimization on 𝐬∼dπ(𝐬)similar-to𝐬superscript𝑑𝜋𝐬\mathbf{s}\sim d^{\pi}(\mathbf{s})bold\_s ∼ italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ) and optimal labels 𝐚⋆superscript𝐚normal-⋆\mathbf{a}^{\star}bold\_a start\_POSTSUPERSCRIPT ⋆ end\_POSTSUPERSCRIPT, and attains generalization error ϵitalic-ϵ\epsilonitalic\_ϵ on 𝐬∼dπ(𝐬)similar-to𝐬superscript𝑑𝜋𝐬\mathbf{s}\sim d^{\pi}(\mathbf{s})bold\_s ∼ italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ), then ℓ(π)≤C+Hϵnormal-ℓ𝜋𝐶𝐻italic-ϵ\ell(\pi)\leq C+H\epsilonroman\_ℓ ( italic\_π ) ≤ italic\_C + italic\_H italic\_ϵ is the best possible bound on the expected error of the learned policy.
###### Proof.
The proof follows from Theorem 3.2 from Ross et al., ([2011](#bib.bib132)). This is the best possible bound, because the probability of a mistake at any time step is at least ϵitalic-ϵ\epsilonitalic\_ϵ, and ∑t=1Hϵ=Hϵsuperscriptsubscript𝑡1𝐻italic-ϵ𝐻italic-ϵ\sum\_{t=1}^{H}\epsilon=H\epsilon∑ start\_POSTSUBSCRIPT italic\_t = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_ϵ = italic\_H italic\_ϵ.
∎
This means that, even with optimal action labels, we get an error bound that is at best quadratic in the time horizon H𝐻Hitalic\_H in the offline case, but linear in H𝐻Hitalic\_H in the online case. Intuitively, the reason for this gap in performance is that, in the offline case, the learned policy π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ) may enter into states that are far outside of its training distribution, since dπ(𝐬)superscript𝑑𝜋𝐬d^{\pi}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ) may be very different from dπβ(𝐬)superscript𝑑subscript𝜋𝛽𝐬d^{{\pi\_{\beta}}}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ). In these out-of-distribution states, the generalization error bound ϵitalic-ϵ\epsilonitalic\_ϵ no longer holds, since standard empirical risk minimization makes no guarantees about error when encountering out-of-distribution inputs that were not seen during training. Once the policy enters one of these out-of-distribution states, it will keep making mistakes and may remain out-of-distribution for the remainder of the trial, accumulating O(H)𝑂𝐻O(H)italic\_O ( italic\_H ) error. Since there is a non-trivial chance of entering an out-of-distribution state at every one of the H𝐻Hitalic\_H time steps, the overall error therefore scales as O(H2)𝑂superscript𝐻2O(H^{2})italic\_O ( italic\_H start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ). In the on-policy case, such out-of-distribution states are not an issue. Of course, this example is somewhat orthogonal to the main purpose of this tutorial, which is to study offline reinforcement learning algorithms, rather than offline behavioral cloning methods. However, it should serve as a warning, as it indicates that the challenges of distributional shift are likely to cause considerable harm to any policy trained from an offline dataset if care is not taken to minimize its detrimental effects.
3 Offline Evaluation and Reinforcement Learning via Importance Sampling
------------------------------------------------------------------------
In this section, we survey offline reinforcement learning algorithms based on direct estimation of policy return. These methods generally utilize some form of importance sampling to either evaluate the return of a given policy, or to estimate the corresponding policy gradient, corresponding to an offline variant of the policy gradient methods discussed in Section [2.1](#S2.SS1 "2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"). The most direct way to utilize this idea is to employ importance sampling to estimate J(π)𝐽𝜋J(\pi)italic\_J ( italic\_π ) with trajectories sampled from πβ(τ)subscript𝜋𝛽𝜏{\pi\_{\beta}}(\tau)italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_τ ). This is known as *off-policy evaluation*. In principle, once we can evaluate J(π)𝐽𝜋J(\pi)italic\_J ( italic\_π ), we can select the most performant policy. In this section, we review approaches for off-policy evaluation with importance sampling and then discuss how these ideas can be used for offline reinforcement learning.
###
3.1 Off-Policy Evaluation via Importance Sampling
We can naïvely use importance sampling to derive an unbiased estimator of J(π)𝐽𝜋J(\pi)italic\_J ( italic\_π ) that relies on off-policy trajectories:
| | | | |
| --- | --- | --- | --- |
| | J(πθ)𝐽subscript𝜋𝜃\displaystyle J(\pi\_{\theta})italic\_J ( italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ) | =𝔼τ∼πβ(τ)[πθ(τ)πβ(τ)∑t=0Hγtr(𝐬,𝐚)]absentsubscript𝔼similar-to𝜏subscript𝜋𝛽𝜏delimited-[]subscript𝜋𝜃𝜏subscript𝜋𝛽𝜏superscriptsubscript𝑡0𝐻superscript𝛾𝑡𝑟𝐬𝐚\displaystyle=\mathbb{E}\_{\tau\sim{\pi\_{\beta}}(\tau)}\left[\frac{\pi\_{\theta}(\tau)}{{\pi\_{\beta}}(\tau)}\sum\_{t=0}^{H}\gamma^{t}r(\mathbf{s},\mathbf{a})\right]= blackboard\_E start\_POSTSUBSCRIPT italic\_τ ∼ italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_τ ) end\_POSTSUBSCRIPT [ divide start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( italic\_τ ) end\_ARG start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_τ ) end\_ARG ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_r ( bold\_s , bold\_a ) ] | |
| | | =𝔼τ∼πβ(τ)[(∏t=0Hπθ(𝐚t|𝐬t)πβ(𝐚t|𝐬t))∑t=0Hγtr(𝐬,𝐚)]≈∑i=1nwHi∑t=0Hγtrti,absentsubscript𝔼similar-to𝜏subscript𝜋𝛽𝜏delimited-[]superscriptsubscriptproduct𝑡0𝐻subscript𝜋𝜃conditionalsubscript𝐚𝑡subscript𝐬𝑡subscript𝜋𝛽conditionalsubscript𝐚𝑡subscript𝐬𝑡superscriptsubscript𝑡0𝐻superscript𝛾𝑡𝑟𝐬𝐚superscriptsubscript𝑖1𝑛superscriptsubscript𝑤𝐻𝑖superscriptsubscript𝑡0𝐻superscript𝛾𝑡superscriptsubscript𝑟𝑡𝑖\displaystyle=\mathbb{E}\_{\tau\sim{\pi\_{\beta}}(\tau)}\left[\left(\prod\_{t=0}^{H}\frac{\pi\_{\theta}(\mathbf{a}\_{t}|\mathbf{s}\_{t})}{{\pi\_{\beta}}(\mathbf{a}\_{t}|\mathbf{s}\_{t})}\right)\sum\_{t=0}^{H}\gamma^{t}r(\mathbf{s},\mathbf{a})\right]\approx\sum\_{i=1}^{n}w\_{H}^{i}\sum\_{t=0}^{H}\gamma^{t}r\_{t}^{i},= blackboard\_E start\_POSTSUBSCRIPT italic\_τ ∼ italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_τ ) end\_POSTSUBSCRIPT [ ( ∏ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT divide start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_ARG start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_ARG ) ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_r ( bold\_s , bold\_a ) ] ≈ ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_w start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT , | | (5) |
where wti=1n∏t′=0tπθ(𝐚t′i|𝐬t′i)πβ(𝐚t′i|𝐬t′i)superscriptsubscript𝑤𝑡𝑖1𝑛superscriptsubscriptproductsuperscript𝑡′0𝑡subscript𝜋𝜃conditionalsuperscriptsubscript𝐚superscript𝑡′𝑖superscriptsubscript𝐬superscript𝑡′𝑖subscript𝜋𝛽conditionalsuperscriptsubscript𝐚superscript𝑡′𝑖superscriptsubscript𝐬superscript𝑡′𝑖w\_{t}^{i}=\frac{1}{n}\prod\_{t^{\prime}=0}^{t}\frac{\pi\_{\theta}(\mathbf{a}\_{t^{\prime}}^{i}|\mathbf{s}\_{t^{\prime}}^{i})}{{\pi\_{\beta}}(\mathbf{a}\_{t^{\prime}}^{i}|\mathbf{s}\_{t^{\prime}}^{i})}italic\_w start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT = divide start\_ARG 1 end\_ARG start\_ARG italic\_n end\_ARG ∏ start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT divide start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT ) end\_ARG start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT ) end\_ARG and {(𝐬0i,𝐚0i,r0i,𝐬1i,…)}i=1nsuperscriptsubscriptsuperscriptsubscript𝐬0𝑖superscriptsubscript𝐚0𝑖superscriptsubscript𝑟0𝑖superscriptsubscript𝐬1𝑖…𝑖1𝑛\{(\mathbf{s}\_{0}^{i},\mathbf{a}\_{0}^{i},r\_{0}^{i},\mathbf{s}\_{1}^{i},\ldots)\}\_{i=1}^{n}{ ( bold\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT , italic\_r start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT , bold\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT , … ) } start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT are n𝑛nitalic\_n trajectory samples from πβ(τ)subscript𝜋𝛽𝜏{\pi\_{\beta}}(\tau)italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_τ ) (Precup,, [2000](#bib.bib126)). Unfortunately, such an estimator can have very high variance (potentially unbounded if H𝐻Hitalic\_H is infinite) due to the product of importance weights. Self-normalizing the importance weights (i.e., dividing the weights by ∑i=1nwHisuperscriptsubscript𝑖1𝑛superscriptsubscript𝑤𝐻𝑖\sum\_{i=1}^{n}w\_{H}^{i}∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_w start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT) results in the *weighted importance sampling* estimator (Precup,, [2000](#bib.bib126)), which is biased, but can have much lower variance and is still a strongly consistent estimator.
To improve this estimator, we need to take advantage of the statistical structure of the problem. Because rtsubscript𝑟𝑡r\_{t}italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT does not depend on 𝐬t′subscript𝐬superscript𝑡′\mathbf{s}\_{t^{\prime}}bold\_s start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT and 𝐚t′subscript𝐚superscript𝑡′\mathbf{a}\_{t^{\prime}}bold\_a start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT for t′>tsuperscript𝑡′𝑡t^{\prime}>titalic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT > italic\_t, we can drop the importance weights from future time steps, resulting in the *per-decision* importance sampling estimator (Precup,, [2000](#bib.bib126)):
| | | |
| --- | --- | --- |
| | J(πθ)=𝔼τ∼πβ(τ)[∑t=0H(∏t′=0tπθ(𝐚t|𝐬t)πβ(𝐚t|𝐬t))γtr(𝐬,𝐚)]≈1n∑i=1n∑t=0Hwtiγtrti.𝐽subscript𝜋𝜃subscript𝔼similar-to𝜏subscript𝜋𝛽𝜏delimited-[]superscriptsubscript𝑡0𝐻superscriptsubscriptproductsuperscript𝑡′0𝑡subscript𝜋𝜃conditionalsubscript𝐚𝑡subscript𝐬𝑡subscript𝜋𝛽conditionalsubscript𝐚𝑡subscript𝐬𝑡superscript𝛾𝑡𝑟𝐬𝐚1𝑛superscriptsubscript𝑖1𝑛superscriptsubscript𝑡0𝐻superscriptsubscript𝑤𝑡𝑖superscript𝛾𝑡superscriptsubscript𝑟𝑡𝑖J(\pi\_{\theta})=\mathbb{E}\_{\tau\sim{\pi\_{\beta}}(\tau)}\left[\sum\_{t=0}^{H}\left(\prod\_{t^{\prime}=0}^{t}\frac{\pi\_{\theta}(\mathbf{a}\_{t}|\mathbf{s}\_{t})}{{\pi\_{\beta}}(\mathbf{a}\_{t}|\mathbf{s}\_{t})}\right)\gamma^{t}r(\mathbf{s},\mathbf{a})\right]\approx\frac{1}{n}\sum\_{i=1}^{n}\sum\_{t=0}^{H}w\_{t}^{i}\gamma^{t}r\_{t}^{i}.italic\_J ( italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ) = blackboard\_E start\_POSTSUBSCRIPT italic\_τ ∼ italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_τ ) end\_POSTSUBSCRIPT [ ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT ( ∏ start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT divide start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_ARG start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_ARG ) italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_r ( bold\_s , bold\_a ) ] ≈ divide start\_ARG 1 end\_ARG start\_ARG italic\_n end\_ARG ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_w start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT . | |
As before, this estimator can have high variance, and we can form a weighted per-decision importance estimator by normalizing the weights. Unfortunately, in many practical problems, the weighted per-decision importance estimator still has too much variance to be effective.
If we have an approximate model that can be used to obtain an approximation to the Q-values for each state-action tuple (𝐬t,𝐚t)subscript𝐬𝑡subscript𝐚𝑡(\mathbf{s}\_{t},\mathbf{a}\_{t})( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ), which we denote Q^π(𝐬t,𝐚t)superscript^𝑄𝜋subscript𝐬𝑡subscript𝐚𝑡\hat{Q}^{\pi}(\mathbf{s}\_{t},\mathbf{a}\_{t})over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ), we can incorporate it into this estimate. Such an estimate can be obtained, for example, by estimating a model of the MDP transition probability T(𝐬t+1|𝐬t,𝐚t)𝑇conditionalsubscript𝐬𝑡1subscript𝐬𝑡subscript𝐚𝑡T(\mathbf{s}\_{t+1}|\mathbf{s}\_{t},\mathbf{a}\_{t})italic\_T ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) and then solving for the corresponding Q-function, or via any other method for approximating Q-values. We can incorporate these estimates as control variates into the importance sampled estimator to get the best of both:
| | | | |
| --- | --- | --- | --- |
| | J(πθ)≈∑i=1n∑t=0Hγt(wti(rti−Q^πθ(𝐬t,𝐚t))−wt−1i𝔼𝐚∼πθ(𝐚|𝐬t)[Q^πθ(𝐬t,𝐚)]).𝐽subscript𝜋𝜃superscriptsubscript𝑖1𝑛superscriptsubscript𝑡0𝐻superscript𝛾𝑡superscriptsubscript𝑤𝑡𝑖superscriptsubscript𝑟𝑡𝑖superscript^𝑄subscript𝜋𝜃subscript𝐬𝑡subscript𝐚𝑡superscriptsubscript𝑤𝑡1𝑖subscript𝔼similar-to𝐚subscript𝜋𝜃conditional𝐚subscript𝐬𝑡delimited-[]superscript^𝑄subscript𝜋𝜃subscript𝐬𝑡𝐚J(\pi\_{\theta})\approx\sum\_{i=1}^{n}\sum\_{t=0}^{H}\gamma^{t}\left(w\_{t}^{i}\left(r\_{t}^{i}-\hat{Q}^{\pi\_{\theta}}(\mathbf{s}\_{t},\mathbf{a}\_{t})\right)-w\_{t-1}^{i}\mathbb{E}\_{\mathbf{a}\sim\pi\_{\theta}(\mathbf{a}|\mathbf{s}\_{t})}\left[\hat{Q}^{\pi\_{\theta}}(\mathbf{s}\_{t},\mathbf{a})\right]\right).italic\_J ( italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ) ≈ ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_w start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT ( italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT - over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ) - italic\_w start\_POSTSUBSCRIPT italic\_t - 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT blackboard\_E start\_POSTSUBSCRIPT bold\_a ∼ italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a ) ] ) . | | (6) |
This is known as the doubly robust estimator (Jiang and Li,, [2015](#bib.bib57); Thomas and Brunskill,, [2016](#bib.bib164)) because it is unbiased if either πβsubscript𝜋𝛽{\pi\_{\beta}}italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT is known or if the model is correct. We can also form a weighted version by normalizing the weights. More sophisticated estimators can be formed by training the model with knowledge of the policy to be evaluated (Farajtabar et al.,, [2018](#bib.bib27)), and by trading off bias and variance more optimally (Thomas and Brunskill,, [2016](#bib.bib164); Wang et al.,, [2017](#bib.bib174)).
Beyond consistency or unbiased estimates, we frequently desire a (high-probability) guarantee on the performance of a policy. Thomas et al., ([2015](#bib.bib165)) derived confidence bounds based on concentration inequalities specialized to deal with the high variance and potentially large range of the importance weighted estimators. Alternatively, we can construct confidence bounds based on distributional assumptions (e.g., asymptotic normality) (Thomas et al.,, [2015](#bib.bib165)) or via bootstrapping (Thomas et al.,, [2015](#bib.bib165); Hanna et al.,, [2017](#bib.bib49)) which may be less conservative at the cost of looser guarantees.
Such estimators can also be utilized for policy improvement, by searching over policies with respect to their estimated return. In safety-critical applications of offline RL, we would like to improve over the behavior policy with a guarantee that with high probability our performance is no lower than a bound. Thomas et al., ([2015](#bib.bib165)) show that we can search for policies using lower confidence bounds on importance sampling estimators to ensure that the safety constraint is met. Alternatively, we can search over policies in a model of the MDP and bound the error of the estimated model with high probability (Ghavamzadeh et al.,, [2016](#bib.bib35); Laroche et al.,, [2017](#bib.bib84); Nadjahi et al.,, [2019](#bib.bib107)).
###
3.2 The Off-Policy Policy Gradient
Importance sampling can also be used to directly estimate the policy gradient, rather than just obtaining an estimate of the value for a given policy. As discussed in Section [2.1](#S2.SS1 "2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), policy gradient methods aim to optimize J(π)𝐽𝜋J(\pi)italic\_J ( italic\_π ) by computing estimates of the gradient with respect to the policy parameters. We can estimate the gradient with Monte Carlo samples, as in Equation ([2](#S2.E2 "2 ‣ Policy gradients. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems")), but this requires *on-policy* trajectories (i.e., τ∼πθ(τ)similar-to𝜏subscript𝜋𝜃𝜏\tau\sim\pi\_{\theta}(\tau)italic\_τ ∼ italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( italic\_τ )). Here, we extend this approach to the offline setting.
Previous work has generally focused on the *off-policy* setting, where trajectories are sampled from a distinct behavior policy πβ(𝐚|𝐬)≠π(𝐚|𝐬)subscript𝜋𝛽conditional𝐚𝐬𝜋conditional𝐚𝐬{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})\neq\pi(\mathbf{a}|\mathbf{s})italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) ≠ italic\_π ( bold\_a | bold\_s ). However, in contrast to the offline setting, such methods assume we can continually sample new trajectories from πβsubscript𝜋𝛽{\pi\_{\beta}}italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT, while old trajectories are reused for efficiency. We begin with the off-policy setting, and then discuss additional challenges in extending such methods to the offline setting.
Noting the similar structure between J(π)𝐽𝜋J(\pi)italic\_J ( italic\_π ) and the policy gradient, we can adapt the techniques for estimating J(π)𝐽𝜋J(\pi)italic\_J ( italic\_π ) off-policy to the policy gradient
| | | | |
| --- | --- | --- | --- |
| | ∇θJ(πθ)subscript∇𝜃𝐽subscript𝜋𝜃\displaystyle\nabla\_{\theta}J(\pi\_{\theta})∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT italic\_J ( italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ) | =𝔼τ∼πβ(τ)[πθ(τ)πβ(τ)∑t=0Hγt∇θlogπθ(𝐚t|𝐬t)A^(𝐬t,𝐚t)]absentsubscript𝔼similar-to𝜏subscript𝜋𝛽𝜏delimited-[]subscript𝜋𝜃𝜏subscript𝜋𝛽𝜏superscriptsubscript𝑡0𝐻superscript𝛾𝑡subscript∇𝜃subscript𝜋𝜃conditionalsubscript𝐚𝑡subscript𝐬𝑡^𝐴subscript𝐬𝑡subscript𝐚𝑡\displaystyle=\mathbb{E}\_{\tau\sim{\pi\_{\beta}}(\tau)}\left[\frac{{\pi\_{\theta}}(\tau)}{{\pi\_{\beta}}(\tau)}\sum\_{t=0}^{H}\right.\gamma^{t}\nabla\_{\theta}\log\pi\_{\theta}(\mathbf{a}\_{t}|\mathbf{s}\_{t})\hat{A}(\mathbf{s}\_{t},\mathbf{a}\_{t})\left.\vphantom{\sum\_{t=0}^{H}}\right]= blackboard\_E start\_POSTSUBSCRIPT italic\_τ ∼ italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_τ ) end\_POSTSUBSCRIPT [ divide start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( italic\_τ ) end\_ARG start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_τ ) end\_ARG ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT roman\_log italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) over^ start\_ARG italic\_A end\_ARG ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ] | |
| | | =𝔼τ∼πβ(τ)[(∏t=0Hπθ(𝐚t|𝐬t)πβ(𝐚t|𝐬t))∑t=0Hγt∇θlogπθ(𝐚t|𝐬t)A^(𝐬t,𝐚t)]absentsubscript𝔼similar-to𝜏subscript𝜋𝛽𝜏delimited-[]superscriptsubscriptproduct𝑡0𝐻subscript𝜋𝜃conditionalsubscript𝐚𝑡subscript𝐬𝑡subscript𝜋𝛽conditionalsubscript𝐚𝑡subscript𝐬𝑡superscriptsubscript𝑡0𝐻superscript𝛾𝑡subscript∇𝜃subscript𝜋𝜃conditionalsubscript𝐚𝑡subscript𝐬𝑡^𝐴subscript𝐬𝑡subscript𝐚𝑡\displaystyle=\mathbb{E}\_{\tau\sim{\pi\_{\beta}}(\tau)}\left[\left(\prod\_{t=0}^{H}\frac{\pi\_{\theta}(\mathbf{a}\_{t}|\mathbf{s}\_{t})}{{\pi\_{\beta}}(\mathbf{a}\_{t}|\mathbf{s}\_{t})}\right)\sum\_{t=0}^{H}\right.\gamma^{t}\nabla\_{\theta}\log\pi\_{\theta}(\mathbf{a}\_{t}|\mathbf{s}\_{t})\hat{A}(\mathbf{s}\_{t},\mathbf{a}\_{t})\left.\vphantom{\sum\_{t=0}^{H}}\right]= blackboard\_E start\_POSTSUBSCRIPT italic\_τ ∼ italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_τ ) end\_POSTSUBSCRIPT [ ( ∏ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT divide start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_ARG start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_ARG ) ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT roman\_log italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) over^ start\_ARG italic\_A end\_ARG ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ] | |
| | | ≈∑i=1nwHi∑t=0Hγt∇θlogπθ(𝐚ti|𝐬ti)A^(𝐬ti,𝐚ti),absentsuperscriptsubscript𝑖1𝑛superscriptsubscript𝑤𝐻𝑖superscriptsubscript𝑡0𝐻superscript𝛾𝑡subscript∇𝜃subscript𝜋𝜃conditionalsuperscriptsubscript𝐚𝑡𝑖superscriptsubscript𝐬𝑡𝑖^𝐴superscriptsubscript𝐬𝑡𝑖superscriptsubscript𝐚𝑡𝑖\displaystyle\approx\sum\_{i=1}^{n}w\_{H}^{i}\sum\_{t=0}^{H}\gamma^{t}\nabla\_{\theta}\log\pi\_{\theta}(\mathbf{a}\_{t}^{i}|\mathbf{s}\_{t}^{i})\hat{A}(\mathbf{s}\_{t}^{i},\mathbf{a}\_{t}^{i}),≈ ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_w start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT roman\_log italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT ) over^ start\_ARG italic\_A end\_ARG ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT ) , | |
where {(𝐬0i,𝐚0i,r0i,𝐬1i,…)}i=1nsuperscriptsubscriptsuperscriptsubscript𝐬0𝑖superscriptsubscript𝐚0𝑖superscriptsubscript𝑟0𝑖superscriptsubscript𝐬1𝑖…𝑖1𝑛\{(\mathbf{s}\_{0}^{i},\mathbf{a}\_{0}^{i},r\_{0}^{i},\mathbf{s}\_{1}^{i},\ldots)\}\_{i=1}^{n}{ ( bold\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT , italic\_r start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT , bold\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT , … ) } start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT are n𝑛nitalic\_n trajectory samples from πβ(τ)subscript𝜋𝛽𝜏{\pi\_{\beta}}(\tau)italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_τ ) (Precup,, [2000](#bib.bib126); Precup et al.,, [2001](#bib.bib127); Peshkin and Shelton,, [2002](#bib.bib122)). Similarly, we can self-normalize the importance weights resulting in the *weighted importance sampling* policy gradient estimator (Precup,, [2000](#bib.bib126)), which is biased, but can have much lower variance and is still a consistent estimator.
If we use the Monte Carlo estimator with baseline for A^^𝐴\hat{A}over^ start\_ARG italic\_A end\_ARG (i.e., A^(𝐬ti,𝐚ti)=∑t′=tHγt′−trt′−b(𝐬ti)^𝐴superscriptsubscript𝐬𝑡𝑖superscriptsubscript𝐚𝑡𝑖superscriptsubscriptsuperscript𝑡′𝑡𝐻superscript𝛾superscript𝑡′𝑡subscript𝑟superscript𝑡′𝑏superscriptsubscript𝐬𝑡𝑖\hat{A}(\mathbf{s}\_{t}^{i},\mathbf{a}\_{t}^{i})=\sum\_{t^{\prime}=t}^{H}\gamma^{t^{\prime}-t}r\_{t^{\prime}}-b(\mathbf{s}\_{t}^{i})over^ start\_ARG italic\_A end\_ARG ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT ) = ∑ start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT - italic\_t end\_POSTSUPERSCRIPT italic\_r start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT - italic\_b ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT )), then because rtsubscript𝑟𝑡r\_{t}italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT does not depend on 𝐬t′subscript𝐬superscript𝑡′\mathbf{s}\_{t^{\prime}}bold\_s start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT and 𝐚t′subscript𝐚superscript𝑡′\mathbf{a}\_{t^{\prime}}bold\_a start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT for t′>tsuperscript𝑡′𝑡t^{\prime}>titalic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT > italic\_t, we can drop importance weights in the future, resulting in the *per-decision* importance sampling policy gradient estimator (Precup,, [2000](#bib.bib126)):
| | | |
| --- | --- | --- |
| | ∇θJ(πθ)≈∑i=1n∑t=0Hwtiγt(∑t′=tHγt′−twt′iwtirt′−b(𝐬ti))∇θlogπθ(𝐚ti|𝐬ti).subscript∇𝜃𝐽subscript𝜋𝜃superscriptsubscript𝑖1𝑛superscriptsubscript𝑡0𝐻superscriptsubscript𝑤𝑡𝑖superscript𝛾𝑡superscriptsubscriptsuperscript𝑡′𝑡𝐻superscript𝛾superscript𝑡′𝑡superscriptsubscript𝑤superscript𝑡′𝑖superscriptsubscript𝑤𝑡𝑖subscript𝑟superscript𝑡′𝑏superscriptsubscript𝐬𝑡𝑖subscript∇𝜃subscript𝜋𝜃conditionalsuperscriptsubscript𝐚𝑡𝑖superscriptsubscript𝐬𝑡𝑖\nabla\_{\theta}J(\pi\_{\theta})\approx\sum\_{i=1}^{n}\sum\_{t=0}^{H}w\_{t}^{i}\gamma^{t}\left(\sum\_{t^{\prime}=t}^{H}\gamma^{t^{\prime}-t}\frac{w\_{t^{\prime}}^{i}}{w\_{t}^{i}}r\_{t^{\prime}}-b(\mathbf{s}\_{t}^{i})\right)\nabla\_{\theta}\log\pi\_{\theta}(\mathbf{a}\_{t}^{i}|\mathbf{s}\_{t}^{i}).∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT italic\_J ( italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ) ≈ ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_w start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( ∑ start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT - italic\_t end\_POSTSUPERSCRIPT divide start\_ARG italic\_w start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT end\_ARG start\_ARG italic\_w start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT end\_ARG italic\_r start\_POSTSUBSCRIPT italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT - italic\_b ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT ) ) ∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT roman\_log italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT ) . | |
As before, this estimator can have high variance, and we can form a weighted per-decision importance estimator by normalizing the weights. Paralleling the development of doubly robust estimators for policy evaluation, doubly robust estimators for the policy gradient have also been derived ([Gu et al., 2017a,](#bib.bib39) ; Huang and Jiang,, [2019](#bib.bib52); Pankov,, [2018](#bib.bib118); Cheng et al.,, [2019](#bib.bib11)).
Unfortunately, in many practical problems, these estimators have too high variance to be effective.
Practical off-policy algorithms derived from such estimators can also employ regularization, such that the learned policy πθ(𝐚|𝐬)subscript𝜋𝜃conditional𝐚𝐬\pi\_{\theta}(\mathbf{a}|\mathbf{s})italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) does not deviate too far from the behavior policy πβ(𝐚|𝐬)subscript𝜋𝛽conditional𝐚𝐬{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ), thus keeping the variance of the importance weights from becoming too large. One example of such a regularizer is the *soft max* over the (unnormalized) importance weights (Levine and Koltun,, [2013](#bib.bib90)). This regularized gradient estimator ∇θJ¯(πθ)subscript∇𝜃¯𝐽subscript𝜋𝜃\nabla\_{\theta}\bar{J}(\pi\_{\theta})∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT over¯ start\_ARG italic\_J end\_ARG ( italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ) has the following form:
| | | |
| --- | --- | --- |
| | ∇θJ¯(πθ)≈(∑i=1nwHi∑t=0Hγt∇θlogπθ(𝐚ti|𝐬ti)A^(𝐬ti,𝐚ti))+λlog(∑i=1nwHi).subscript∇𝜃¯𝐽subscript𝜋𝜃superscriptsubscript𝑖1𝑛superscriptsubscript𝑤𝐻𝑖superscriptsubscript𝑡0𝐻superscript𝛾𝑡subscript∇𝜃subscript𝜋𝜃conditionalsuperscriptsubscript𝐚𝑡𝑖superscriptsubscript𝐬𝑡𝑖^𝐴superscriptsubscript𝐬𝑡𝑖superscriptsubscript𝐚𝑡𝑖𝜆superscriptsubscript𝑖1𝑛superscriptsubscript𝑤𝐻𝑖\nabla\_{\theta}\bar{J}(\pi\_{\theta})\approx\left(\sum\_{i=1}^{n}w\_{H}^{i}\sum\_{t=0}^{H}\gamma^{t}\nabla\_{\theta}\log\pi\_{\theta}(\mathbf{a}\_{t}^{i}|\mathbf{s}\_{t}^{i})\hat{A}(\mathbf{s}\_{t}^{i},\mathbf{a}\_{t}^{i})\right)+\lambda\log\left(\sum\_{i=1}^{n}w\_{H}^{i}\right).∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT over¯ start\_ARG italic\_J end\_ARG ( italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ) ≈ ( ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_w start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_H end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT roman\_log italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT ) over^ start\_ARG italic\_A end\_ARG ( bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT ) ) + italic\_λ roman\_log ( ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_w start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT ) . | |
It is easy to check that ∑iwHi→1→subscript𝑖superscriptsubscript𝑤𝐻𝑖1\sum\_{i}w\_{H}^{i}\rightarrow 1∑ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT italic\_w start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_i end\_POSTSUPERSCRIPT → 1 as n→∞→𝑛n\rightarrow\inftyitalic\_n → ∞, meaning that this gradient estimator is consistent. However, with a finite number of samples, such an estimator automatically adjusts the policy πθsubscript𝜋𝜃\pi\_{\theta}italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT to ensure that at least one sample has a high (unnormalized) importance weight. More recent deep reinforcement learning algorithms based on importance sampling often employ a sample-based KL-divergence regularizer (Schulman et al.,, [2017](#bib.bib140)), which has a functional form mathematically similar to this one when also utilizing an entropy regularizer on the policy πθsubscript𝜋𝜃\pi\_{\theta}italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT.
###
3.3 Approximate Off-Policy Policy Gradients
The importance-weighted policy objective requires multiplying per-action importance weights over the time steps, which leads to very high variance. We can derive an approximate importance-sampled gradient by using the state distribution of the behavior policy, dπβ(𝐬)superscript𝑑subscript𝜋𝛽𝐬d^{\pi\_{\beta}}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ), in place of that of the current policy, dπ(𝐬)superscript𝑑𝜋𝐬d^{\pi}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ). This results in a biased gradient due to the mismatch in state distributions, but can provide reasonable learning performance in practice.
The corresponding objective, which we will denote Jπβ(πθ)subscript𝐽subscript𝜋𝛽subscript𝜋𝜃J\_{\pi\_{\beta}}(\pi\_{\theta})italic\_J start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ) to emphasize its dependence on the behavior policy’s state distribution, is given by
| | | |
| --- | --- | --- |
| | Jπβ(πθ)=𝔼𝐬∼dπβ(𝐬)[Vπ(𝐬)].subscript𝐽subscript𝜋𝛽subscript𝜋𝜃subscript𝔼similar-to𝐬superscript𝑑subscript𝜋𝛽𝐬delimited-[]superscript𝑉𝜋𝐬J\_{\pi\_{\beta}}(\pi\_{\theta})=\mathbb{E}\_{\mathbf{s}\sim d^{\pi\_{\beta}}(\mathbf{s})}\left[V^{\pi}(\mathbf{s})\right].italic\_J start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ) = blackboard\_E start\_POSTSUBSCRIPT bold\_s ∼ italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ) end\_POSTSUBSCRIPT [ italic\_V start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ) ] . | |
Note that Jπβ(πθ)subscript𝐽subscript𝜋𝛽subscript𝜋𝜃J\_{\pi\_{\beta}}(\pi\_{\theta})italic\_J start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ) and J(πθ)𝐽subscript𝜋𝜃J(\pi\_{\theta})italic\_J ( italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ) differ in the distribution of states under which the return is estimated (dπβ(𝐬)superscript𝑑subscript𝜋𝛽𝐬d^{\pi\_{\beta}}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ) vs. dπ(𝐬)superscript𝑑𝜋𝐬d^{\pi}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s )), making Jπβ(πθ)subscript𝐽subscript𝜋𝛽subscript𝜋𝜃J\_{\pi\_{\beta}}(\pi\_{\theta})italic\_J start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ) a biased estimator for J(πθ)𝐽subscript𝜋𝜃J(\pi\_{\theta})italic\_J ( italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ). This may lead to suboptimal solutions in certain cases (see Imani et al., ([2018](#bib.bib53)) for some examples). However, expectations under state distributions from dπβ(𝐬)superscript𝑑subscript𝜋𝛽𝐬d^{\pi\_{\beta}}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ) can be calculated easily by sampling states from the dataset 𝒟𝒟\mathcal{D}caligraphic\_D in the offline case, removing the need for importance sampling.
Recent empirical work has found that this bias is acceptable in practice (Fu et al.,, [2019](#bib.bib30)). Differentiating the objective and applying a further approximation results in the *off-policy policy gradient* (Degris et al.,, [2012](#bib.bib18)):
| | | | |
| --- | --- | --- | --- |
| | ∇θJπβ(πθ)subscript∇𝜃subscript𝐽subscript𝜋𝛽subscript𝜋𝜃\displaystyle\nabla\_{\theta}J\_{\pi\_{\beta}}(\pi\_{\theta})∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT italic\_J start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ) | =𝔼𝐬∼dπβ(𝐬),𝐚∼πθ(𝐚|𝐬)[Qπθ(𝐬,𝐚)∇θlogπθ(𝐚|𝐬)+∇θQπθ(𝐬,𝐚)]absentsubscript𝔼formulae-sequencesimilar-to𝐬superscript𝑑subscript𝜋𝛽𝐬similar-to𝐚subscript𝜋𝜃conditional𝐚𝐬delimited-[]superscript𝑄subscript𝜋𝜃𝐬𝐚subscript∇𝜃subscript𝜋𝜃conditional𝐚𝐬subscript∇𝜃superscript𝑄subscript𝜋𝜃𝐬𝐚\displaystyle=\mathbb{E}\_{\mathbf{s}\sim d^{\pi\_{\beta}}(\mathbf{s}),\mathbf{a}\sim\pi\_{\theta}(\mathbf{a}|\mathbf{s})}\left[Q^{\pi\_{\theta}}(\mathbf{s},\mathbf{a})\nabla\_{\theta}\log\pi\_{\theta}(\mathbf{a}|\mathbf{s})+\nabla\_{\theta}Q^{\pi\_{\theta}}(\mathbf{s},\mathbf{a})\right]= blackboard\_E start\_POSTSUBSCRIPT bold\_s ∼ italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ) , bold\_a ∼ italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) end\_POSTSUBSCRIPT [ italic\_Q start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) ∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT roman\_log italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) + ∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT italic\_Q start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) ] | |
| | | ≈𝔼𝐬∼dπβ(𝐬),𝐚∼πθ(𝐚|𝐬)[Qπθ(𝐬,𝐚)∇θlogπθ(𝐚|𝐬)].absentsubscript𝔼formulae-sequencesimilar-to𝐬superscript𝑑subscript𝜋𝛽𝐬similar-to𝐚subscript𝜋𝜃conditional𝐚𝐬delimited-[]superscript𝑄subscript𝜋𝜃𝐬𝐚subscript∇𝜃subscript𝜋𝜃conditional𝐚𝐬\displaystyle\approx\mathbb{E}\_{\mathbf{s}\sim d^{\pi\_{\beta}}(\mathbf{s}),\mathbf{a}\sim\pi\_{\theta}(\mathbf{a}|\mathbf{s})}\left[Q^{\pi\_{\theta}}(\mathbf{s},\mathbf{a})\nabla\_{\theta}\log\pi\_{\theta}(\mathbf{a}|\mathbf{s})\right].≈ blackboard\_E start\_POSTSUBSCRIPT bold\_s ∼ italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ) , bold\_a ∼ italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) end\_POSTSUBSCRIPT [ italic\_Q start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) ∇ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT roman\_log italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) ] . | |
Degris et al., ([2012](#bib.bib18)) show that under restrictive conditions, the approximate gradient preserves the local optima of Jπβ(π)subscript𝐽subscript𝜋𝛽𝜋J\_{\pi\_{\beta}}(\pi)italic\_J start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_π ). This approximate gradient is used as a starting point in many widely used deep reinforcement learning algorithms (Silver et al.,, [2014](#bib.bib143); Lillicrap et al.,, [2015](#bib.bib93); Wang et al.,, [2016](#bib.bib175); [Gu et al., 2017b,](#bib.bib40) ).
To compute an estimate of the approximate gradient, we additionally need to estimate Qπθ(𝐬,𝐚)superscript𝑄subscript𝜋𝜃𝐬𝐚Q^{\pi\_{\theta}}(\mathbf{s},\mathbf{a})italic\_Q start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) from the off-policy trajectories. Basic methods for doing this were discussed in Section [2.1](#S2.SS1 "2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), and we defer a more in-depth discussion of offline state-action value function estimators to Section [4](#S4 "4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"). Some estimators use action samples, which required an importance weight to correct from πβsubscript𝜋𝛽{\pi\_{\beta}}italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT samples to πθsubscript𝜋𝜃\pi\_{\theta}italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT samples. Further improvements can be obtained by introducing control variates and clipping importance weights to control variance (Wang et al.,, [2016](#bib.bib175); Espeholt et al.,, [2018](#bib.bib24)).
###
3.4 Marginalized Importance Sampling
If we would like to avoid the bias incurred by using the off-policy state distribution and the high variance from using per-action importance weighting, we can instead attempt to directly estimate the *state-marginal importance ratio* ρπθ(𝐬)=dπθ(𝐬)dπβ(𝐬)superscript𝜌subscript𝜋𝜃𝐬superscript𝑑subscript𝜋𝜃𝐬superscript𝑑subscript𝜋𝛽𝐬\rho^{\pi\_{\theta}}(\mathbf{s})=\frac{d^{\pi\_{\theta}}(\mathbf{s})}{d^{\pi\_{\beta}}(\mathbf{s})}italic\_ρ start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ) = divide start\_ARG italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ) end\_ARG start\_ARG italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ) end\_ARG. An estimator using state marginal importance weights can be shown to be have no greater variance than using the product of per-action importance weights. However, computing this ratio exactly is generally intractable. Only recently have practical methods for estimating the marginalized importance ratio been introduced (Sutton et al.,, [2016](#bib.bib153); Yu,, [2015](#bib.bib182); Hallak et al.,, [2015](#bib.bib47), [2016](#bib.bib48); Hallak and Mannor,, [2017](#bib.bib46); [Nachum et al., 2019a,](#bib.bib104) ; [Zhang et al., 2020a,](#bib.bib187) ; [Nachum et al., 2019b,](#bib.bib106) ).
We discuss some key aspects of these methods next.
Most of these methods utilize some form of dynamic programming to estimate the importance ratio ρπsuperscript𝜌𝜋\rho^{\pi}italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT. Based on the form of the underlying Bellman equation used, we can classify them into two categories: methods that use a “forward” Bellman equation to estimate the importance ratios directly, and methods that use a “backward” Bellman equation on a functional that resembles a value function and then derive the importance ratios from the learned value functional.
#### Forward Bellman equation based approaches.
The state-marginal importance ratio, ρπ(𝐬)superscript𝜌𝜋𝐬\rho^{\pi}(\mathbf{s})italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ), satisfies a kind of “forward” Bellman equation:
| | | | |
| --- | --- | --- | --- |
| | ∀𝐬′,dπβ(𝐬′)ρπ(𝐬′)⏟≔(dπβ∘ρπ)(𝐬′)=(1−γ)d0(𝐬′)+γ∑𝐬,𝐚dπβ(𝐬)ρπ(𝐬)π(𝐚|𝐬)T(𝐬′|𝐬,𝐚)⏟≔(ℬ¯π∘ρπ)(𝐬′).for-allsuperscript𝐬′subscript⏟superscript𝑑subscript𝜋𝛽superscript𝐬′superscript𝜌𝜋superscript𝐬′≔absentsuperscript𝑑subscript𝜋𝛽superscript𝜌𝜋superscript𝐬′
subscript⏟1𝛾subscript𝑑0superscript𝐬′𝛾subscript𝐬𝐚superscript𝑑subscript𝜋𝛽𝐬superscript𝜌𝜋𝐬𝜋conditional𝐚𝐬𝑇conditionalsuperscript𝐬′𝐬𝐚≔absentsuperscript¯ℬ𝜋superscript𝜌𝜋superscript𝐬′\forall\mathbf{s}^{\prime},~{}~{}\underbrace{d^{\pi\_{\beta}}(\mathbf{s}^{\prime})\rho^{\pi}(\mathbf{s}^{\prime})}\_{\coloneqq(d^{\pi\_{\beta}}\circ\rho^{\pi})(\mathbf{s}^{\prime})}=\underbrace{(1-\gamma)d\_{0}(\mathbf{s}^{\prime})+\gamma\sum\_{\mathbf{s},\mathbf{a}}d^{\pi\_{\beta}}(\mathbf{s})\rho^{\pi}(\mathbf{s})\pi(\mathbf{a}|\mathbf{s})T(\mathbf{s}^{\prime}|\mathbf{s},\mathbf{a})}\_{\coloneqq(\bar{\mathcal{B}}^{\pi}\circ\rho^{\pi})(\mathbf{s}^{\prime})}.∀ bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , under⏟ start\_ARG italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) end\_ARG start\_POSTSUBSCRIPT ≔ ( italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ∘ italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ) ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) end\_POSTSUBSCRIPT = under⏟ start\_ARG ( 1 - italic\_γ ) italic\_d start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) + italic\_γ ∑ start\_POSTSUBSCRIPT bold\_s , bold\_a end\_POSTSUBSCRIPT italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ) italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ) italic\_π ( bold\_a | bold\_s ) italic\_T ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s , bold\_a ) end\_ARG start\_POSTSUBSCRIPT ≔ ( over¯ start\_ARG caligraphic\_B end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ∘ italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ) ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) end\_POSTSUBSCRIPT . | | (7) |
This relationship can be leveraged to perform temporal difference updates to estimate the state-marginal importance ratio under the policy.
For example, when stochastic approximation is used, Gelada and Bellemare, ([2019](#bib.bib34)) use the following update rule in order to estimate ρπ(𝐬′)superscript𝜌𝜋superscript𝐬′\rho^{\pi}(\mathbf{s}^{\prime})italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) online:
| | | | |
| --- | --- | --- | --- |
| | ρ^π(𝐬′)←ρ^π(𝐬′)+α[(1−γ)+γπ(𝐚|𝐬)πβ(𝐚|𝐬)ρ^π(𝐬)−ρ^π(𝐬′)],←superscript^𝜌𝜋superscript𝐬′superscript^𝜌𝜋superscript𝐬′𝛼delimited-[]1𝛾𝛾𝜋conditional𝐚𝐬subscript𝜋𝛽conditional𝐚𝐬superscript^𝜌𝜋𝐬superscript^𝜌𝜋superscript𝐬′\hat{\rho}^{\pi}(\mathbf{s}^{\prime})\leftarrow\hat{\rho}^{\pi}(\mathbf{s}^{\prime})+\alpha\left[(1-\gamma)+\gamma\frac{\pi(\mathbf{a}|\mathbf{s})}{{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})}\hat{\rho}^{\pi}(\mathbf{s})-\hat{\rho}^{\pi}(\mathbf{s}^{\prime})\right],over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ← over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) + italic\_α [ ( 1 - italic\_γ ) + italic\_γ divide start\_ARG italic\_π ( bold\_a | bold\_s ) end\_ARG start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) end\_ARG over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ) - over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ] , | | (8) |
with 𝐬∼dπβ(𝐬),𝐚∼πβ(𝐚|𝐬),𝐬′∼T(𝐬′|𝐬,𝐚)formulae-sequencesimilar-to𝐬superscript𝑑subscript𝜋𝛽𝐬formulae-sequencesimilar-to𝐚subscript𝜋𝛽conditional𝐚𝐬similar-tosuperscript𝐬′𝑇conditionalsuperscript𝐬′𝐬𝐚\mathbf{s}\sim d^{\pi\_{\beta}}(\mathbf{s}),\mathbf{a}\sim{\pi\_{\beta}}(\mathbf{a}|\mathbf{s}),\mathbf{s}^{\prime}\sim T(\mathbf{s}^{\prime}|\mathbf{s},\mathbf{a})bold\_s ∼ italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ) , bold\_a ∼ italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) , bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_T ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s , bold\_a ).
Several additional techniques, including using TD(λ𝜆\lambdaitalic\_λ) estimates and automatic adjustment of feature dimensions, have been used to stabilize learning. We refer the readers to Hallak and Mannor, ([2017](#bib.bib46)) and Gelada and Bellemare, ([2019](#bib.bib34)) for a detailed discussion. Gelada and Bellemare, ([2019](#bib.bib34)) also discusses several practical tricks, such as soft-normalization and discounted evaluation, to adapt these methods to deep Q-learning settings, unlike prior work that mainly focuses on linear function approximation. Wen et al., ([2020](#bib.bib178)) view to problem from the lens of power iteration, and propose a variational power method approach to combine function approximation and power iteration to estimate ρπsuperscript𝜌𝜋\rho^{\pi}italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT. Imani et al., ([2018](#bib.bib53)); Zhang et al., ([2019](#bib.bib188)) show that similar methods can be used to estimate the off-policy policy gradient and thus be used in an off-policy actor critic method.
Alternatively, Liu et al., ([2018](#bib.bib95)) propose to use an adversarial approach to obtain the state-marginal importance ratios. From Eq. [7](#S3.E7 "7 ‣ Forward Bellman equation based approaches. ‣ 3.4 Marginalized Importance Sampling ‣ 3 Offline Evaluation and Reinforcement Learning via Importance Sampling ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), they derive a functional
| | | | |
| --- | --- | --- | --- |
| | L(ρ,f)=γ𝔼𝐬,𝐚,𝐬′∼𝒟[(ρ(𝐬)π(𝐚|𝐬)πβ(𝐚|𝐬)−ρ(𝐬′))f(𝐬′)]+(1−γ)𝔼𝐬0∼d0[(1−ρ(𝐬))f(𝐬)]𝐿𝜌𝑓𝛾subscript𝔼similar-to𝐬𝐚superscript𝐬′
𝒟delimited-[]𝜌𝐬𝜋conditional𝐚𝐬subscript𝜋𝛽conditional𝐚𝐬𝜌superscript𝐬′𝑓superscript𝐬′1𝛾subscript𝔼similar-tosubscript𝐬0subscript𝑑0delimited-[]1𝜌𝐬𝑓𝐬L(\rho,f)=\gamma\mathbb{E}\_{\mathbf{s},\mathbf{a},\mathbf{s}^{\prime}\sim\mathcal{D}}\left[\left(\rho(\mathbf{s})\frac{\pi(\mathbf{a}|\mathbf{s})}{{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})}-\rho(\mathbf{s}^{\prime})\right)f(\mathbf{s}^{\prime})\right]+(1-\gamma)\mathbb{E}\_{\mathbf{s}\_{0}\sim d\_{0}}[(1-\rho(\mathbf{s}))f(\mathbf{s})]italic\_L ( italic\_ρ , italic\_f ) = italic\_γ blackboard\_E start\_POSTSUBSCRIPT bold\_s , bold\_a , bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ caligraphic\_D end\_POSTSUBSCRIPT [ ( italic\_ρ ( bold\_s ) divide start\_ARG italic\_π ( bold\_a | bold\_s ) end\_ARG start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) end\_ARG - italic\_ρ ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ) italic\_f ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ] + ( 1 - italic\_γ ) blackboard\_E start\_POSTSUBSCRIPT bold\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∼ italic\_d start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ ( 1 - italic\_ρ ( bold\_s ) ) italic\_f ( bold\_s ) ] | | (9) |
such that L(ρ,f)=0,∀f𝐿𝜌𝑓0for-all𝑓L(\rho,f)=0,\forall~{}fitalic\_L ( italic\_ρ , italic\_f ) = 0 , ∀ italic\_f if and only if ρ=ρπ𝜌superscript𝜌𝜋\rho=\rho^{\pi}italic\_ρ = italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT. Therefore, we can learn ρ𝜌\rhoitalic\_ρ by minimizing a worst-case estimate of L(ρ,f)2𝐿superscript𝜌𝑓2L(\rho,f)^{2}italic\_L ( italic\_ρ , italic\_f ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT, by solving an adversarial, saddle-point optimization: minρmaxfL(ρ,f)2subscript𝜌subscript𝑓𝐿superscript𝜌𝑓2\min\_{\rho}\max\_{f}L(\rho,f)^{2}roman\_min start\_POSTSUBSCRIPT italic\_ρ end\_POSTSUBSCRIPT roman\_max start\_POSTSUBSCRIPT italic\_f end\_POSTSUBSCRIPT italic\_L ( italic\_ρ , italic\_f ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT. Recent work (Mousavi et al.,, [2020](#bib.bib102); [Kallus and Uehara, 2019a,](#bib.bib66) ; [Kallus and Uehara, 2019b,](#bib.bib67) ; Tang et al.,, [2019](#bib.bib159); Uehara and Jiang,, [2019](#bib.bib169)) has refined this approach, in particular removing the need to have access to πβsubscript𝜋𝛽{\pi\_{\beta}}italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT. Once ρ\*superscript𝜌\rho^{\*}italic\_ρ start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT is obtained, Liu et al., ([2019](#bib.bib96)) use this estimator to compute the off-policy policy gradient.
[Zhang et al., 2020a](#bib.bib187) present another off-policy evaluation method that computes the importance ratio for the state-action marginal, ρπ(𝐬,𝐚):=dπ(𝐬,𝐚)dπβ(𝐬,𝐚)assignsuperscript𝜌𝜋𝐬𝐚superscript𝑑𝜋𝐬𝐚superscript𝑑subscript𝜋𝛽𝐬𝐚\rho^{\pi}(\mathbf{s},\mathbf{a}):=\frac{d^{\pi}(\mathbf{s},\mathbf{a})}{d^{\pi\_{\beta}}(\mathbf{s},\mathbf{a})}italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) := divide start\_ARG italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) end\_ARG start\_ARG italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) end\_ARG, by directly optimizing a variant of the Bellman residual error corresponding to a modified forward Bellman equation, that includes actions, shown in Equation [10](#S3.E10 "10 ‣ Forward Bellman equation based approaches. ‣ 3.4 Marginalized Importance Sampling ‣ 3 Offline Evaluation and Reinforcement Learning via Importance Sampling ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems").
| | | | |
| --- | --- | --- | --- |
| | dπβ(𝐬′,𝐚′)ρπ(𝐬′,𝐚′)⏟≔(dπβ∘ρπ)(𝐬′,𝐚′)=(1−γ)d0(𝐬′)π(𝐚′|𝐬′)+γ∑𝐬,𝐚dπβ(𝐬,𝐚)ρπ(𝐬,𝐚)π(𝐚|𝐬)T(𝐬′|𝐬,𝐚)⏟≔(ℬ¯π∘ρπ)(𝐬′,𝐚′).subscript⏟superscript𝑑subscript𝜋𝛽superscript𝐬′superscript𝐚′superscript𝜌𝜋superscript𝐬′superscript𝐚′≔absentsuperscript𝑑subscript𝜋𝛽superscript𝜌𝜋superscript𝐬′superscript𝐚′subscript⏟1𝛾subscript𝑑0superscript𝐬′𝜋conditionalsuperscript𝐚′superscript𝐬′𝛾subscript𝐬𝐚superscript𝑑subscript𝜋𝛽𝐬𝐚superscript𝜌𝜋𝐬𝐚𝜋conditional𝐚𝐬𝑇conditionalsuperscript𝐬′𝐬𝐚≔absentsuperscript¯ℬ𝜋superscript𝜌𝜋superscript𝐬′superscript𝐚′\underbrace{d^{\pi\_{\beta}}(\mathbf{s}^{\prime},\mathbf{a}^{\prime})\rho^{\pi}(\mathbf{s}^{\prime},\mathbf{a}^{\prime})}\_{\coloneqq(d^{\pi\_{\beta}}\circ\rho^{\pi})(\mathbf{s}^{\prime},\mathbf{a}^{\prime})}\!=\!\underbrace{(1-\gamma)d\_{0}(\mathbf{s}^{\prime})\pi(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})\!+\!\gamma\sum\_{\mathbf{s},\mathbf{a}}d^{\pi\_{\beta}}(\mathbf{s},\mathbf{a})\rho^{\pi}(\mathbf{s},\mathbf{a})\pi(\mathbf{a}|\mathbf{s})T(\mathbf{s}^{\prime}|\mathbf{s},\mathbf{a})}\_{\coloneqq(\bar{\mathcal{B}}^{\pi}\circ\rho^{\pi})(\mathbf{s}^{\prime},\mathbf{a}^{\prime})}.under⏟ start\_ARG italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) end\_ARG start\_POSTSUBSCRIPT ≔ ( italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ∘ italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ) ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) end\_POSTSUBSCRIPT = under⏟ start\_ARG ( 1 - italic\_γ ) italic\_d start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) italic\_π ( bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) + italic\_γ ∑ start\_POSTSUBSCRIPT bold\_s , bold\_a end\_POSTSUBSCRIPT italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) italic\_π ( bold\_a | bold\_s ) italic\_T ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s , bold\_a ) end\_ARG start\_POSTSUBSCRIPT ≔ ( over¯ start\_ARG caligraphic\_B end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ∘ italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ) ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) end\_POSTSUBSCRIPT . | | (10) |
Their approach can be derived by applying a divergence metric (such as an f-divergence, which we will review in Section [4](#S4 "4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems")) between the two sides of the modified forward Bellman equation in Equation [10](#S3.E10 "10 ‣ Forward Bellman equation based approaches. ‣ 3.4 Marginalized Importance Sampling ‣ 3 Offline Evaluation and Reinforcement Learning via Importance Sampling ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), while additionally constraining the importance ratio, ρπ(𝐬,𝐚)superscript𝜌𝜋𝐬𝐚\rho^{\pi}(\mathbf{s},\mathbf{a})italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ), to integrate to 1111 in expectation over the dataset, 𝒟𝒟\mathcal{D}caligraphic\_D, to prevent degenerate solutions, as follows
| | | | |
| --- | --- | --- | --- |
| | minρπDf((ℬ¯π∘ρπ)(𝐬,𝐚),(dπβ∘ρπ)(𝐬,𝐚))s.t.𝔼𝐬,𝐚,𝐬′∼𝒟[ρπ(𝐬,𝐚)]=1.subscriptsuperscript𝜌𝜋subscript𝐷𝑓superscript¯ℬ𝜋superscript𝜌𝜋𝐬𝐚superscript𝑑subscript𝜋𝛽superscript𝜌𝜋𝐬𝐚s.t.subscript𝔼similar-to𝐬𝐚superscript𝐬′
𝒟delimited-[]superscript𝜌𝜋𝐬𝐚1\min\_{\rho^{\pi}}~{}~{}D\_{f}\left(\left(\bar{\mathcal{B}}^{\pi}\circ\rho^{\pi}\right)(\mathbf{s},\mathbf{a}),\left(d^{\pi\_{\beta}}\circ\rho^{\pi}\right)(\mathbf{s},\mathbf{a})\right)~{}~{}~{}\text{s.t.}~{}~{}~{}\mathbb{E}\_{\mathbf{s},\mathbf{a},\mathbf{s}^{\prime}\sim\mathcal{D}}[\rho^{\pi}(\mathbf{s},\mathbf{a})]=1.roman\_min start\_POSTSUBSCRIPT italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT italic\_D start\_POSTSUBSCRIPT italic\_f end\_POSTSUBSCRIPT ( ( over¯ start\_ARG caligraphic\_B end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ∘ italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ) ( bold\_s , bold\_a ) , ( italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ∘ italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ) ( bold\_s , bold\_a ) ) s.t. blackboard\_E start\_POSTSUBSCRIPT bold\_s , bold\_a , bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ caligraphic\_D end\_POSTSUBSCRIPT [ italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) ] = 1 . | | (11) |
They further apply tricks inspired from dual embeddings (Dai et al.,, [2016](#bib.bib14)) to make the objective tractable, and to avoid the bias caused due to sampled estimates. We refer the readers to [Zhang et al., 2020a](#bib.bib187) for further discussion.
[Zhang et al., 2020b](#bib.bib189) show that primal-dual solvers may not be able to solve Eqn. [11](#S3.E11 "11 ‣ Forward Bellman equation based approaches. ‣ 3.4 Marginalized Importance Sampling ‣ 3 Offline Evaluation and Reinforcement Learning via Importance Sampling ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), and modify the objective in by replacing the f-divergence with a norm induced by 1/dπβ1superscript𝑑subscript𝜋𝛽1/d^{\pi\_{\beta}}1 / italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT. This creates an optimization problem that is provably convergent under linear function approximation.
| | | | |
| --- | --- | --- | --- |
| | minρπ12||((ℬ¯π∘ρπ)(𝐬,𝐚)−(dπβ∘ρπ)(𝐬,𝐚)||(dπβ)−12+λ2(𝔼𝐬,𝐚,𝐬′∼𝒟[ρπ(𝐬,𝐚)]−1)2.\min\_{\rho^{\pi}}~{}~{}\frac{1}{2}||\left(\left(\bar{\mathcal{B}}^{\pi}\circ\rho^{\pi}\right)(\mathbf{s},\mathbf{a})-\left(d^{\pi\_{\beta}}\circ\rho^{\pi}\right)(\mathbf{s},\mathbf{a})\right||^{2}\_{(d^{\pi\_{\beta}})^{-1}}+\frac{\lambda}{2}(\mathbb{E}\_{\mathbf{s},\mathbf{a},\mathbf{s}^{\prime}\sim\mathcal{D}}[\rho^{\pi}(\mathbf{s},\mathbf{a})]-1)^{2}.roman\_min start\_POSTSUBSCRIPT italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT divide start\_ARG 1 end\_ARG start\_ARG 2 end\_ARG | | ( ( over¯ start\_ARG caligraphic\_B end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ∘ italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ) ( bold\_s , bold\_a ) - ( italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ∘ italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ) ( bold\_s , bold\_a ) | | start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT ( italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ) start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT + divide start\_ARG italic\_λ end\_ARG start\_ARG 2 end\_ARG ( blackboard\_E start\_POSTSUBSCRIPT bold\_s , bold\_a , bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ caligraphic\_D end\_POSTSUBSCRIPT [ italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) ] - 1 ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT . | | (12) |
#### Backward Bellman equation based approaches via convex duality.
Finally, we discuss methods for off-policy evaluation and improvement that utilize a backward Bellman equation – giving rise to a value-function like functional – by exploiting convex duality. Because these methods start from an optimization perspective, they can bring to bear the mature tools of convex optimization and online learning. Lee and He, ([2018](#bib.bib86)) extend a line of work applying to convex optimization techniques to policy optimization (Chen and Wang,, [2016](#bib.bib10); Wang and Chen,, [2016](#bib.bib173); [Dai et al., 2017a,](#bib.bib15) ; [Dai et al., 2017b,](#bib.bib16) ) to the off-policy setting. They prove sample complexity bounds in the off-policy setting, however, extending these results to practical deep RL settings has proven challenging.
[Nachum et al., 2019a](#bib.bib104) develop similar ideas for off-policy evaluation. The key idea is to devise a convex optimization problem with ρπsuperscript𝜌𝜋\rho^{\pi}italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT as its optimal solution. The chosen optimization problem is
| | | | |
| --- | --- | --- | --- |
| | ρπ=argminx:𝒮×𝒜→ℝ12𝔼𝐬,𝐚,𝐬′∼𝒟[x(𝐬,𝐚)2]−𝔼𝐬∼dπ(𝐬),𝐚∼π(𝐚|𝐬)[x(𝐬,𝐚)].superscript𝜌𝜋subscript:𝑥→𝒮𝒜ℝ12subscript𝔼similar-to𝐬𝐚superscript𝐬′
𝒟delimited-[]𝑥superscript𝐬𝐚2subscript𝔼formulae-sequencesimilar-to𝐬superscript𝑑𝜋𝐬similar-to𝐚𝜋conditional𝐚𝐬delimited-[]𝑥𝐬𝐚\rho^{\pi}=\arg\min\_{x:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}}~{}~{}\frac{1}{2}\mathbb{E}\_{\mathbf{s},\mathbf{a},\mathbf{s}^{\prime}\sim\mathcal{D}}\left[x(\mathbf{s},\mathbf{a})^{2}\right]-\mathbb{E}\_{\mathbf{s}\sim d^{\pi}(\mathbf{s}),\mathbf{a}\sim\pi(\mathbf{a}|\mathbf{s})}\left[x(\mathbf{s},\mathbf{a})\right].italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT = roman\_arg roman\_min start\_POSTSUBSCRIPT italic\_x : caligraphic\_S × caligraphic\_A → blackboard\_R end\_POSTSUBSCRIPT divide start\_ARG 1 end\_ARG start\_ARG 2 end\_ARG blackboard\_E start\_POSTSUBSCRIPT bold\_s , bold\_a , bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ caligraphic\_D end\_POSTSUBSCRIPT [ italic\_x ( bold\_s , bold\_a ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ] - blackboard\_E start\_POSTSUBSCRIPT bold\_s ∼ italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ) , bold\_a ∼ italic\_π ( bold\_a | bold\_s ) end\_POSTSUBSCRIPT [ italic\_x ( bold\_s , bold\_a ) ] . | | (13) |
Unfortunately, this objective requires samples from the on-policystate-marginal distribution, dπ(𝐬)superscript𝑑𝜋𝐬d^{\pi}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ). The key insight is a change of variables, x(s,a)=ν(𝐬,𝐚)−𝔼𝐬′∼T(𝐬′|𝐬,𝐚),𝐚′∼π(𝐚′|𝐬′)[ν(𝐬′,𝐚′)]𝑥𝑠𝑎𝜈𝐬𝐚subscript𝔼formulae-sequencesimilar-tosuperscript𝐬′𝑇conditionalsuperscript𝐬′𝐬𝐚similar-tosuperscript𝐚′𝜋conditionalsuperscript𝐚′superscript𝐬′delimited-[]𝜈superscript𝐬′superscript𝐚′x(s,a)=\nu(\mathbf{s},\mathbf{a})-\mathbb{E}\_{\mathbf{s}^{\prime}\sim T(\mathbf{s}^{\prime}|\mathbf{s},\mathbf{a}),\mathbf{a}^{\prime}\sim\pi(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})}[\nu(\mathbf{s}^{\prime},\mathbf{a}^{\prime})]italic\_x ( italic\_s , italic\_a ) = italic\_ν ( bold\_s , bold\_a ) - blackboard\_E start\_POSTSUBSCRIPT bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_T ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s , bold\_a ) , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_π ( bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) end\_POSTSUBSCRIPT [ italic\_ν ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ] and introduce the variable ν(𝐬,𝐚)𝜈𝐬𝐚\nu(\mathbf{s},\mathbf{a})italic\_ν ( bold\_s , bold\_a ) to simplify the expression in Equation [13](#S3.E13 "13 ‣ Backward Bellman equation based approaches via convex duality. ‣ 3.4 Marginalized Importance Sampling ‣ 3 Offline Evaluation and Reinforcement Learning via Importance Sampling ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), and obtain the “backward” dynamic programming procedure shown in Equation [14](#S3.E14 "14 ‣ Backward Bellman equation based approaches via convex duality. ‣ 3.4 Marginalized Importance Sampling ‣ 3 Offline Evaluation and Reinforcement Learning via Importance Sampling ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"). For brevity, we define a modified Bellman operator, ℬ~πν(𝐬,𝐚):=𝔼𝐬′∼T(𝐬′|𝐬,𝐚),𝐚′∼π(𝐚′|𝐬′)[ν(𝐬′,𝐚′)]assignsuperscript~ℬ𝜋𝜈𝐬𝐚subscript𝔼formulae-sequencesimilar-tosuperscript𝐬′𝑇conditionalsuperscript𝐬′𝐬𝐚similar-tosuperscript𝐚′𝜋conditionalsuperscript𝐚′superscript𝐬′delimited-[]𝜈superscript𝐬′superscript𝐚′\tilde{\mathcal{B}}^{\pi}\nu(\mathbf{s},\mathbf{a}):=\mathbb{E}\_{\mathbf{s}^{\prime}\sim T(\mathbf{s}^{\prime}|\mathbf{s},\mathbf{a}),\mathbf{a}^{\prime}\sim\pi(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})}[\nu(\mathbf{s}^{\prime},\mathbf{a}^{\prime})]over~ start\_ARG caligraphic\_B end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT italic\_ν ( bold\_s , bold\_a ) := blackboard\_E start\_POSTSUBSCRIPT bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_T ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s , bold\_a ) , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_π ( bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) end\_POSTSUBSCRIPT [ italic\_ν ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ], that is similar to the expression for ℬπsuperscriptℬ𝜋\mathcal{B}^{\pi}caligraphic\_B start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT without the reward term r(s,a)𝑟𝑠𝑎r(s,a)italic\_r ( italic\_s , italic\_a ).
| | | | |
| --- | --- | --- | --- |
| | minν:𝒮×𝒜→ℝ12𝔼𝐬,𝐚,𝐬′∼𝒟[(ν(𝐬,𝐚)−ℬ~πν(𝐬,𝐚))2]−𝔼𝐬0∼d0(𝐬0),𝐚∼π(𝐚|𝐬0)[ν(𝐬0,𝐚)].subscript:𝜈→𝒮𝒜ℝ12subscript𝔼similar-to𝐬𝐚superscript𝐬′
𝒟delimited-[]superscript𝜈𝐬𝐚superscript~ℬ𝜋𝜈𝐬𝐚2subscript𝔼formulae-sequencesimilar-tosubscript𝐬0subscript𝑑0subscript𝐬0similar-to𝐚𝜋conditional𝐚subscript𝐬0delimited-[]𝜈subscript𝐬0𝐚\min\_{\nu:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}}~{}~{}\frac{1}{2}\mathbb{E}\_{\mathbf{s},\mathbf{a},\mathbf{s}^{\prime}\sim\mathcal{D}}\left[\left(\nu(\mathbf{s},\mathbf{a})-\tilde{\mathcal{B}}^{\pi}\nu(\mathbf{s},\mathbf{a})\right)^{2}\right]-\mathbb{E}\_{\mathbf{s}\_{0}\sim d\_{0}(\mathbf{s}\_{0}),\mathbf{a}\sim\pi(\mathbf{a}|\mathbf{s}\_{0})}\left[\nu(\mathbf{s}\_{0},\mathbf{a})\right].roman\_min start\_POSTSUBSCRIPT italic\_ν : caligraphic\_S × caligraphic\_A → blackboard\_R end\_POSTSUBSCRIPT divide start\_ARG 1 end\_ARG start\_ARG 2 end\_ARG blackboard\_E start\_POSTSUBSCRIPT bold\_s , bold\_a , bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ caligraphic\_D end\_POSTSUBSCRIPT [ ( italic\_ν ( bold\_s , bold\_a ) - over~ start\_ARG caligraphic\_B end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT italic\_ν ( bold\_s , bold\_a ) ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ] - blackboard\_E start\_POSTSUBSCRIPT bold\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∼ italic\_d start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ) , bold\_a ∼ italic\_π ( bold\_a | bold\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ italic\_ν ( bold\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , bold\_a ) ] . | | (14) |
Remarkably, Equation [14](#S3.E14 "14 ‣ Backward Bellman equation based approaches via convex duality. ‣ 3.4 Marginalized Importance Sampling ‣ 3 Offline Evaluation and Reinforcement Learning via Importance Sampling ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems") does not require on-policy samples to evaluate. Given an optimal solution for the objective in Equation [14](#S3.E14 "14 ‣ Backward Bellman equation based approaches via convex duality. ‣ 3.4 Marginalized Importance Sampling ‣ 3 Offline Evaluation and Reinforcement Learning via Importance Sampling ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), denoted as ν\*superscript𝜈\nu^{\*}italic\_ν start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT, we can obtain the density ratio, ρπsuperscript𝜌𝜋\rho^{\pi}italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT, using the relation, ρπ(𝐬,𝐚)=ν\*(𝐬,𝐚)−ℬ~πν\*(𝐬,𝐚)superscript𝜌𝜋𝐬𝐚superscript𝜈𝐬𝐚superscript~ℬ𝜋superscript𝜈𝐬𝐚\rho^{\pi}(\mathbf{s},\mathbf{a})=\nu^{\*}(\mathbf{s},\mathbf{a})-\tilde{\mathcal{B}}^{\pi}\nu^{\*}(\mathbf{s},\mathbf{a})italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) = italic\_ν start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) - over~ start\_ARG caligraphic\_B end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT italic\_ν start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ). The density ratio can then be used for off-policy evaluation and improvement.
[Nachum et al., 2019b](#bib.bib106) ; Nachum and Dai, ([2020](#bib.bib105)) build on a similar framework to devise an off-policy RL algorithm. The key idea is to obtain an estimate of the on-policy policy gradient for a state-marginal regularized RL objective by solving an optimization problem. The regularizer applied in this family of methods is the f-divergence between the state(-action) marginal of the learned policy and the state-action marginal of the dataset. We will cover f-divergences in detail in Section [4](#S4 "4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems").
The f-divergence regularized RL problem, with a tradeoff factor, α𝛼\alphaitalic\_α, is given by:
| | | | |
| --- | --- | --- | --- |
| | maxπ𝔼𝐬∼dπ(𝐬),𝐚∼π(⋅|𝐬)[r(𝐬,𝐚)]−αDf(dπ(𝐬,𝐚),dπβ(𝐬,𝐚)).\max\_{\pi}~{}~{}\mathbb{E}\_{\mathbf{s}\sim d^{\pi}(\mathbf{s}),\mathbf{a}\sim\pi(\cdot|\mathbf{s})}\left[r(\mathbf{s},\mathbf{a})\right]-\alpha D\_{f}(d^{\pi}(\mathbf{s},\mathbf{a}),d^{\pi\_{\beta}}(\mathbf{s},\mathbf{a})).roman\_max start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT blackboard\_E start\_POSTSUBSCRIPT bold\_s ∼ italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ) , bold\_a ∼ italic\_π ( ⋅ | bold\_s ) end\_POSTSUBSCRIPT [ italic\_r ( bold\_s , bold\_a ) ] - italic\_α italic\_D start\_POSTSUBSCRIPT italic\_f end\_POSTSUBSCRIPT ( italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) , italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) ) . | | (15) |
By exploiting the variational (dual) form of the f-divergence shown below
| | | | |
| --- | --- | --- | --- |
| | Df(p,q)=maxx:𝒮×𝒜→ℝ(𝔼𝐲∼p(𝐲)[x(𝐲)]−𝔼𝐲∼q(𝐲)[f\*(x(𝐲))]),subscript𝐷𝑓𝑝𝑞subscript:𝑥→𝒮𝒜ℝsubscript𝔼similar-to𝐲𝑝𝐲delimited-[]𝑥𝐲subscript𝔼similar-to𝐲𝑞𝐲delimited-[]superscript𝑓𝑥𝐲D\_{f}(p,q)=\max\_{x:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}}\left(\mathbb{E}\_{\mathbf{y}\sim p(\mathbf{y})}[x(\mathbf{y})]-\mathbb{E}\_{\mathbf{y}\sim q(\mathbf{y})}[f^{\*}(x(\mathbf{y}))]\right),italic\_D start\_POSTSUBSCRIPT italic\_f end\_POSTSUBSCRIPT ( italic\_p , italic\_q ) = roman\_max start\_POSTSUBSCRIPT italic\_x : caligraphic\_S × caligraphic\_A → blackboard\_R end\_POSTSUBSCRIPT ( blackboard\_E start\_POSTSUBSCRIPT bold\_y ∼ italic\_p ( bold\_y ) end\_POSTSUBSCRIPT [ italic\_x ( bold\_y ) ] - blackboard\_E start\_POSTSUBSCRIPT bold\_y ∼ italic\_q ( bold\_y ) end\_POSTSUBSCRIPT [ italic\_f start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ( italic\_x ( bold\_y ) ) ] ) , | | (16) |
and then applying a change of variables from x𝑥xitalic\_x to Q𝑄Qitalic\_Q (c.f., ([Nachum et al., 2019a,](#bib.bib104) )) where Q𝑄Qitalic\_Q satisfies
Q(𝐬,𝐚)=𝔼𝐬′∼T(𝐬,𝐚),𝐚′∼π(𝐚′|𝐬′)[r(𝐬,𝐚)−αx(𝐬,𝐚)+γQ(𝐬′,𝐚′)]𝑄𝐬𝐚subscript𝔼formulae-sequencesimilar-tosuperscript𝐬′𝑇𝐬𝐚similar-tosuperscript𝐚′𝜋conditionalsuperscript𝐚′superscript𝐬′delimited-[]𝑟𝐬𝐚𝛼𝑥𝐬𝐚𝛾𝑄superscript𝐬′superscript𝐚′Q(\mathbf{s},\mathbf{a})=\mathbb{E}\_{\mathbf{s}^{\prime}\sim T(\mathbf{s},\mathbf{a}),\mathbf{a}^{\prime}\sim\pi(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})}\left[r(\mathbf{s},\mathbf{a})-\alpha x(\mathbf{s},\mathbf{a})+\gamma Q(\mathbf{s}^{\prime},\mathbf{a}^{\prime})\right]italic\_Q ( bold\_s , bold\_a ) = blackboard\_E start\_POSTSUBSCRIPT bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_T ( bold\_s , bold\_a ) , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_π ( bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) end\_POSTSUBSCRIPT [ italic\_r ( bold\_s , bold\_a ) - italic\_α italic\_x ( bold\_s , bold\_a ) + italic\_γ italic\_Q ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ], we obtain a saddle-point optimization problem for optimizing the regularized RL objective,
| | | | |
| --- | --- | --- | --- |
| | maxπminQsubscript𝜋subscript𝑄\displaystyle\max\_{\pi}~{}\min\_{Q}roman\_max start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT roman\_min start\_POSTSUBSCRIPT italic\_Q end\_POSTSUBSCRIPT | L(Q,πβ,π)≔𝔼𝐬0∼d0(𝐬0),𝐚∼π(⋅|𝐬0)[Q(𝐬0,𝐚)]\displaystyle~{}~{}L(Q,{\pi\_{\beta}},\pi)\coloneqq\mathbb{E}\_{\mathbf{s}\_{0}\sim d\_{0}(\mathbf{s}\_{0}),\mathbf{a}\sim\pi(\cdot|\mathbf{s}\_{0})}\left[Q(\mathbf{s}\_{0},\mathbf{a})\right]italic\_L ( italic\_Q , italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT , italic\_π ) ≔ blackboard\_E start\_POSTSUBSCRIPT bold\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∼ italic\_d start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ) , bold\_a ∼ italic\_π ( ⋅ | bold\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ) end\_POSTSUBSCRIPT [ italic\_Q ( bold\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , bold\_a ) ] | |
| | | +α𝔼𝐬,𝐚∼dπβ(𝐬,𝐚)[f\*(r(𝐬,𝐚)+γ𝔼𝐬′∼T(𝐬,𝐚),𝐚′∼π(𝐚′|𝐬′)[Q(𝐬′,𝐚′)]−Q(𝐬,𝐚)α)].𝛼subscript𝔼similar-to𝐬𝐚
superscript𝑑subscript𝜋𝛽𝐬𝐚delimited-[]superscript𝑓𝑟𝐬𝐚𝛾subscript𝔼formulae-sequencesimilar-tosuperscript𝐬′𝑇𝐬𝐚similar-tosuperscript𝐚′𝜋conditionalsuperscript𝐚′superscript𝐬′delimited-[]𝑄superscript𝐬′superscript𝐚′𝑄𝐬𝐚𝛼\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}+\alpha\mathbb{E}\_{\mathbf{s},\mathbf{a}\sim d^{\pi\_{\beta}}(\mathbf{s},\mathbf{a})}\left[f^{\*}\left(\frac{r(\mathbf{s},\mathbf{a})+\gamma\mathbb{E}\_{\mathbf{s}^{\prime}\sim T(\mathbf{s},\mathbf{a}),\mathbf{a}^{\prime}\sim\pi(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})}[Q(\mathbf{s}^{\prime},\mathbf{a}^{\prime})]-Q(\mathbf{s},\mathbf{a})}{\alpha}\right)\right].+ italic\_α blackboard\_E start\_POSTSUBSCRIPT bold\_s , bold\_a ∼ italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) end\_POSTSUBSCRIPT [ italic\_f start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ( divide start\_ARG italic\_r ( bold\_s , bold\_a ) + italic\_γ blackboard\_E start\_POSTSUBSCRIPT bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_T ( bold\_s , bold\_a ) , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_π ( bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) end\_POSTSUBSCRIPT [ italic\_Q ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ] - italic\_Q ( bold\_s , bold\_a ) end\_ARG start\_ARG italic\_α end\_ARG ) ] . | |
When f(x)=x2𝑓𝑥superscript𝑥2f(x)=x^{2}italic\_f ( italic\_x ) = italic\_x start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT, f\*(x)=x2superscript𝑓𝑥superscript𝑥2f^{\*}(x)=x^{2}italic\_f start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ( italic\_x ) = italic\_x start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT and this objective reduces to applying a regular actor-critic algorithm as discussed in Section [2.1](#S2.SS1 "2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems") along with an additional term minimizing Q-values at the initial state 𝐬0subscript𝐬0\mathbf{s}\_{0}bold\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT.
It can also be shown that the derivative with respect to the policy of L(Q\*,πβ,π)𝐿superscript𝑄subscript𝜋𝛽𝜋L(Q^{\*},{\pi\_{\beta}},\pi)italic\_L ( italic\_Q start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT , italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT , italic\_π ), at the optimal Q-function, Q\*superscript𝑄Q^{\*}italic\_Q start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT, is precisely equal to the on-policy policy gradient in the regularized policy gradient problem:
| | | | |
| --- | --- | --- | --- |
| | ∂∂πL(Q\*,πβ,π)=𝔼𝐬∼dπ(𝐬),𝐚∼π(⋅|𝐬)[Q~π(𝐬,𝐚)⋅∇πlogπ(𝐚|𝐬)],\frac{\partial}{\partial\pi}L(Q^{\*},{\pi\_{\beta}},\pi)=\mathbb{E}\_{\mathbf{s}\sim d^{\pi}(\mathbf{s}),\mathbf{a}\sim\pi(\cdot|\mathbf{s})}\left[\tilde{Q}^{\pi}(\mathbf{s},\mathbf{a})\cdot\nabla\_{\pi}\log\pi(\mathbf{a}|\mathbf{s})\right],divide start\_ARG ∂ end\_ARG start\_ARG ∂ italic\_π end\_ARG italic\_L ( italic\_Q start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT , italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT , italic\_π ) = blackboard\_E start\_POSTSUBSCRIPT bold\_s ∼ italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ) , bold\_a ∼ italic\_π ( ⋅ | bold\_s ) end\_POSTSUBSCRIPT [ over~ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) ⋅ ∇ start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT roman\_log italic\_π ( bold\_a | bold\_s ) ] , | | (17) |
where Q~πsuperscript~𝑄𝜋\tilde{Q}^{\pi}over~ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT is the action-value function corresponding to the regularized RL problem.
Finally, we note that this is a rapidly developing area and recent results suggest that many of these methods can be unified under a single framework (Tang et al.,, [2019](#bib.bib159); Nachum and Dai,, [2020](#bib.bib105)).
###
3.5 Challenges and Open Problems
The methods discussed in this section utilize some form of importance sampling to either estimate the return of the current policy πθsubscript𝜋𝜃\pi\_{\theta}italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT, or estimate the gradient of this return. The policy improvement methods discussed in this section have been developed primarily for a classic off-policy learning setting, where additional data is collected online, but previously collected data is reused to improve efficiency. To the best of our knowledge, such methods have not generally been applied in the offline setting, with the exception of the evaluation and high-confidence improvement techniques in Section [3.1](#S3.SS1 "3.1 Off-Policy Evaluation via Importance Sampling ‣ 3 Offline Evaluation and Reinforcement Learning via Importance Sampling ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems").
Applying such methods in the fully offline setting poses a number of major challenges. First, importance sampling already suffer from high variance, and this variance increases dramatically in the sequential setting, since the importance weights at successive time steps are multiplied together (see, e.g., Equation ([5](#S3.E5 "5 ‣ 3.1 Off-Policy Evaluation via Importance Sampling ‣ 3 Offline Evaluation and Reinforcement Learning via Importance Sampling ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"))), resulting in exponential blowup. Approximate and marginalized importance sampling methods alleviate this issue to some degree by avoiding multiplication of importance weights over multiple time steps, but the fundamental issue still remains: when the behavior policy πβsubscript𝜋𝛽{\pi\_{\beta}}italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT is too different from the current learned policy πθsubscript𝜋𝜃\pi\_{\theta}italic\_π start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT, the importance weights will become degenerate, and any estimate of the return or the gradient will have too much variance to be usable, especially in high-dimensional state and action spaces, or for long-horizon problems. For this reason, importance-sampled estimators are most suitable in the case where the policy only deviates by a limited amount from the behavior policy. In the classic off-policy setting, this is generally the case, since new trajectories are repeatedly collected and added to the dataset using the latest policy, but in the offline setting this is generally not the case. Thus, the maximum improvement that can be reliably obtained via importance sampling is limited by (i) the suboptimality of the behavior policy; (ii) the dimensionality of the state and action space; (iii) the effective horizon of the task. The second challenge is that the most effective of these off-policy policy gradient methods either requires estimating the value function, or the state-marginal density ratios via dynamic programming. As several prior works have shown, and as we will review in Section [4](#S4 "4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), dynamic programming methods suffer from issues pertaining to out-of-distribution queries in completely offline settings, making it hard to stably learn the value function without additional corrections. This problem is not as severe in the classic off-policy setting, which allows additional data collection. Nonetheless, as we will discussion in Section [6](#S6 "6 Applications and Evaluation ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), a number of prior applications have effectively utilized importance sampling in an offline setting.
4 Offline Reinforcement Learning via Dynamic Programming
---------------------------------------------------------
Dynamic programming methods, such as Q-learning algorithms, in principle can offer a more attractive option for offline reinforcement learning as compared to pure policy gradients. As discussed in Section [2.1](#S2.SS1 "2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), dynamic programming methods aim to learn a state or state-action value function, and then either use this value function to directly recover the optimal policy or, as in the case of actor-critic methods, use this value function to estimate a gradient for the expected returns of a policy. Basic offline dynamic programming algorithms can be constructed on the basis of fitted Q-learning methods (Ernst et al.,, [2005](#bib.bib23); Riedmiller,, [2005](#bib.bib130); Hafner and Riedmiller,, [2011](#bib.bib45)), as well as policy iteration methods (Sutton and Barto,, [1998](#bib.bib151)). The generic Q-learning and actor-critic algorithms presented in Algorithm [2](#alg2 "Algorithm 2 ‣ Approximate dynamic programming. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems") and Algorithm [3](#alg3 "Algorithm 3 ‣ Actor-critic algorithms. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems") in Section [2.1](#S2.SS1 "2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems") can in principle be utilized as offline reinforcement learning, simply by setting the number of collection steps S𝑆Sitalic\_S to zero, and initializing the buffer to be non-empty. Such algorithms can be viable for offline RL, and indeed have been used as such even in recent deep reinforcement learning methods. We will discuss an older class of sucm methods, based on linear function approximation, in Section [4.1](#S4.SS1 "4.1 Off-Policy Value Function Estimation with Linear Value Functions ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), but such techniques have also been used effectively with deep neural network function approximators. For example, Kalashnikov et al., ([2018](#bib.bib65)) describes a Q-learning algorithm called QT-Opt that was able to learn effective vision-based robotic grasping strategies from about 500,000 grasping trials logged over the course of previous experiments, but observes that additional online fine-tuning still improved the performance of the policy considerably over the one trained purely from logged data. Some prior works on offline RL have also noted that, for some types of datasets, conventional dynamic programming algorithms, such as deep Q-learning or deterministic actor-critic algorithms, can in fact work reasonably well (Agarwal et al.,, [2019](#bib.bib1)).
However, as we will discuss in Section [4.2](#S4.SS2 "4.2 Distributional Shift in Offline Reinforcement Learning via Dynamic Programming ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), such methods suffer from a number of issues when all online collection is halted, and only offline data is used. These issues essentially amount to distributional shift, as discussed in Section [2.4](#S2.SS4 "2.4 What Makes Offline Reinforcement Learning Difficult? ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"). Solutions to this issue can be broadly grouped into two categories: policy constraint methods, discussed in Section [4.3](#S4.SS3 "4.3 Policy Constraints for Off-Policy Evaluation and Improvement ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), which constrain the learned policy π𝜋\piitalic\_π to lie close to the behavior policy πβsubscript𝜋𝛽{\pi\_{\beta}}italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT, thus mitigating distributional shift, and uncertainty-based methods, discussed in Section [4.4](#S4.SS4 "4.4 Offline Approximate Dynamic Programming with Uncertainty Estimation ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), which attempt to estimate the epistemic uncertainty of Q-values, and then utilize this uncertainty to detect distributional shift. We will discuss both classes of distributional shift corrections, and then conclude with perspectives on remaining challenges and open problems in Section [4.6](#S4.SS6 "4.6 Challenges and Open Problems ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems").
###
4.1 Off-Policy Value Function Estimation with Linear Value Functions
We first discuss standard offline reinforcement learning methods based on value function and policy estimation with linear function approximators, which do not by themselves provide any mitigate for distributional shift, but can work well when good linear features are available. While modern deep reinforcement learning methods generally eschew linear features in favor of non-linear neural network function approximators, linear methods constitute an important class of offline reinforcement learning algorithms in the literature (Lange et al.,, [2012](#bib.bib82); Lagoudakis and Parr,, [2003](#bib.bib81)). We begin with algorithms that use a linear function to approximate the Q-function, such that Qϕ≈𝐟(s,a)Tϕsubscript𝑄italic-ϕ𝐟superscript𝑠𝑎𝑇italic-ϕQ\_{\phi}\approx\mathbf{f}(s,a)^{T}\phiitalic\_Q start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT ≈ bold\_f ( italic\_s , italic\_a ) start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_ϕ, where 𝐟(s,a)∈ℝd𝐟𝑠𝑎superscriptℝ𝑑\mathbf{f}(s,a)\in\mathbb{R}^{d}bold\_f ( italic\_s , italic\_a ) ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_d end\_POSTSUPERSCRIPT is a feature vector associated with a state-action pair.
This parametric Q-function can then be estimated for a given policy π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ) using data from a different behavior policy πβ(𝐚|𝐬)subscript𝜋𝛽conditional𝐚𝐬{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ), with state visitation frequency dπβ(𝐬)superscript𝑑subscript𝜋𝛽𝐬d^{{\pi\_{\beta}}}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ). As discussed in Section [2.1](#S2.SS1 "2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), the Q-function Qπsuperscript𝑄𝜋Q^{\pi}italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT for a given policy π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ) must satisfy the Bellman equation, given in full in Equation ([3](#S2.E3 "3 ‣ Approximate dynamic programming. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems")), and written in Bellman operator notation as Qπ→=ℬπQπ→→superscript𝑄𝜋superscriptℬ𝜋→superscript𝑄𝜋\vec{Q^{\pi}}=\mathcal{B}^{\pi}\vec{Q^{\pi}}over→ start\_ARG italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT end\_ARG = caligraphic\_B start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT over→ start\_ARG italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT end\_ARG.
When linear function approximation is used to represent the Q-function, the Q-function for a policy can be represented as the solution of a linear system, and hence can be computed via least squares minimization, since the Bellman operator ℬπsuperscriptℬ𝜋\mathcal{B}^{\pi}caligraphic\_B start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT is linear. This provides a convenient way to compute Qπsuperscript𝑄𝜋Q^{\pi}italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT directly in closed form, as compared to using gradient descent in Algorithm [3](#alg3 "Algorithm 3 ‣ Actor-critic algorithms. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"). The resulting Q-value estimates can then be used to improve the policy.
We start with a discussion of different ways of solving the linear system for computing Qπsuperscript𝑄𝜋Q^{\pi}italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT. To recap, the Q-function is a linear function on a feature vector 𝐟(𝐬,𝐚)𝐟𝐬𝐚\mathbf{f}(\mathbf{s},\mathbf{a})bold\_f ( bold\_s , bold\_a ), which we can express in tabular form as 𝐅∈ℝ|S||A|×d𝐅superscriptℝ𝑆𝐴𝑑\mathbf{F}\in\mathbb{R}^{|S||A|\times d}bold\_F ∈ blackboard\_R start\_POSTSUPERSCRIPT | italic\_S | | italic\_A | × italic\_d end\_POSTSUPERSCRIPT, such that Q→ϕ=𝐅ϕsubscript→𝑄italic-ϕ𝐅italic-ϕ\vec{Q}\_{\phi}=\mathbf{F}\phiover→ start\_ARG italic\_Q end\_ARG start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT = bold\_F italic\_ϕ. Multiple procedures that can be used to obtain a closed form expression for the optimal ϕitalic-ϕ\phiitalic\_ϕ for a given policy π𝜋\piitalic\_π and, as discussed in by Sutton et al., ([2009](#bib.bib152)) and Lagoudakis and Parr, ([2003](#bib.bib81)), these procedures may each yield different solutions under function approximation. We first summarize two methods for selecting ϕitalic-ϕ\phiitalic\_ϕ to approximate Qπsuperscript𝑄𝜋Q^{\pi}italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT for a given policy π𝜋\piitalic\_π, and then discuss how to utilize these methods in a complete reinforcement learning method.
#### Bellman residual minimization.
The first approach selects ϕitalic-ϕ\phiitalic\_ϕ such that the resulting linear Q-function satisfies the Bellman equation as closely as possible in terms of squared error, which can be obtained via the least squares solution. To derive the corresponding least squares problem, we first write the Bellman equation in terms of the Bellman operator, and expand it using the vectorized expression for the reward function, R→→𝑅\vec{R}over→ start\_ARG italic\_R end\_ARG, and a linear operator representing the dynamics and policy backup, which we denote as Pπsuperscript𝑃𝜋P^{\pi}italic\_P start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT, such that (PπQ→)(𝐬,𝐚)=𝔼𝐬′∼T(𝐬′|𝐬,𝐚),𝐚′∼π(𝐚′|𝐬′)[Q(𝐬′,𝐚′)]superscript𝑃𝜋→𝑄𝐬𝐚subscript𝔼formulae-sequencesimilar-tosuperscript𝐬′𝑇conditionalsuperscript𝐬′𝐬𝐚similar-tosuperscript𝐚′𝜋conditionalsuperscript𝐚′superscript𝐬′delimited-[]𝑄superscript𝐬′superscript𝐚′(P^{\pi}\vec{Q})(\mathbf{s},\mathbf{a})=\mathbb{E}\_{\mathbf{s}^{\prime}\sim T(\mathbf{s}^{\prime}|\mathbf{s},\mathbf{a}),\mathbf{a}^{\prime}\sim\pi(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})}[Q(\mathbf{s}^{\prime},\mathbf{a}^{\prime})]( italic\_P start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT over→ start\_ARG italic\_Q end\_ARG ) ( bold\_s , bold\_a ) = blackboard\_E start\_POSTSUBSCRIPT bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_T ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s , bold\_a ) , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_π ( bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) end\_POSTSUBSCRIPT [ italic\_Q ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ]. The corresponding expression of the Bellman equation is given by
| | | |
| --- | --- | --- |
| | 𝐅ϕ≈ℬπ𝐅ϕ=R→+γPπ𝐅ϕ⟹(𝐅−γPπ𝐅)ϕ≈R→.𝐅italic-ϕsuperscriptℬ𝜋𝐅italic-ϕ→𝑅𝛾superscript𝑃𝜋𝐅italic-ϕ𝐅𝛾superscript𝑃𝜋𝐅italic-ϕ→𝑅\mathbf{F}\phi\approx\mathcal{B}^{\pi}\mathbf{F}\phi=\vec{R}+\gamma P^{\pi}\mathbf{F}\phi\implies\left(\mathbf{F}-\gamma P^{\pi}\mathbf{F}\right)\phi\approx\vec{R}.bold\_F italic\_ϕ ≈ caligraphic\_B start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT bold\_F italic\_ϕ = over→ start\_ARG italic\_R end\_ARG + italic\_γ italic\_P start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT bold\_F italic\_ϕ ⟹ ( bold\_F - italic\_γ italic\_P start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT bold\_F ) italic\_ϕ ≈ over→ start\_ARG italic\_R end\_ARG . | |
Writing out the least squares solution using the normal equations, we obtain
| | | |
| --- | --- | --- |
| | ϕ=((𝐅−γPπ𝐅)T(𝐅−γPπ𝐅))−1(𝐅−γPπ𝐅)TR→.italic-ϕsuperscriptsuperscript𝐅𝛾superscript𝑃𝜋𝐅𝑇𝐅𝛾superscript𝑃𝜋𝐅1superscript𝐅𝛾superscript𝑃𝜋𝐅𝑇→𝑅\phi=\left((\mathbf{F}-\gamma P^{\pi}\mathbf{F})^{T}(\mathbf{F}-\gamma P^{\pi}\mathbf{F})\right)^{-1}(\mathbf{F}-\gamma P^{\pi}\mathbf{F})^{T}\vec{R}.italic\_ϕ = ( ( bold\_F - italic\_γ italic\_P start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT bold\_F ) start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT ( bold\_F - italic\_γ italic\_P start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT bold\_F ) ) start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT ( bold\_F - italic\_γ italic\_P start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT bold\_F ) start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT over→ start\_ARG italic\_R end\_ARG . | |
This expression minimizes the ℓ2subscriptℓ2\ell\_{2}roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT norm of the Bellman residual (the squared difference between the left-hand side and right-hand side of the Bellman equation), and is referred to as the Bellman residual minimizing solution.
#### Least-squares fixed point approximation.
An alternate approach is use projected fixed-point iteration, rather than direct minimization of the Bellman error, which gives rise to the least-squares fixed point approximation. In this approach, instead of minimizing the squared difference between the left-hand side and right-hand side of the Bellman equation, we instead attempt to iterate the Bellman operator to convergence. In the tabular case, as discussed in Section [2.1](#S2.SS1 "2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), we know that iterating Q→k+1←ℬπQ→k←subscript→𝑄𝑘1superscriptℬ𝜋subscript→𝑄𝑘\vec{Q}\_{k+1}\leftarrow\mathcal{B}^{\pi}\vec{Q}\_{k}over→ start\_ARG italic\_Q end\_ARG start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT ← caligraphic\_B start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT over→ start\_ARG italic\_Q end\_ARG start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT converges, as k→∞→𝑘k\rightarrow\inftyitalic\_k → ∞, to Q→πsuperscript→𝑄𝜋\vec{Q}^{\pi}over→ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT. In the case where we use function approximation to represent Q→ksubscript→𝑄𝑘\vec{Q}\_{k}over→ start\_ARG italic\_Q end\_ARG start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT, we cannot set Q→k+1subscript→𝑄𝑘1\vec{Q}\_{k+1}over→ start\_ARG italic\_Q end\_ARG start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT to ℬπQ→ksuperscriptℬ𝜋subscript→𝑄𝑘\mathcal{B}^{\pi}\vec{Q}\_{k}caligraphic\_B start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT over→ start\_ARG italic\_Q end\_ARG start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT precisely, because there may not be any value of the weights ϕitalic-ϕ\phiitalic\_ϕ that represent ℬπQ→ksuperscriptℬ𝜋subscript→𝑄𝑘\mathcal{B}^{\pi}\vec{Q}\_{k}caligraphic\_B start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT over→ start\_ARG italic\_Q end\_ARG start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT exactly. We therefore must employ a *projected* fixed point iteration method, where each iterate ℬπQ→k=ℬπ𝐅ϕksuperscriptℬ𝜋subscript→𝑄𝑘superscriptℬ𝜋𝐅subscriptitalic-ϕ𝑘\mathcal{B}^{\pi}\vec{Q}\_{k}=\mathcal{B}^{\pi}\mathbf{F}\phi\_{k}caligraphic\_B start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT over→ start\_ARG italic\_Q end\_ARG start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT = caligraphic\_B start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT bold\_F italic\_ϕ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT is projected onto the span of 𝐅𝐅\mathbf{F}bold\_F to obtain ϕk+1subscriptitalic-ϕ𝑘1\phi\_{k+1}italic\_ϕ start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT. We can express this projection via a projection operator, ΠΠ\Piroman\_Π, such that the projected Bellman iteration is given by Q→k+1=ΠℬπQ→ksubscript→𝑄𝑘1Πsuperscriptℬ𝜋subscript→𝑄𝑘\vec{Q}\_{k+1}=\Pi\mathcal{B}^{\pi}\vec{Q}\_{k}over→ start\_ARG italic\_Q end\_ARG start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT = roman\_Π caligraphic\_B start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT over→ start\_ARG italic\_Q end\_ARG start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT. We can obtain a solution for this operator by solving the normal equation, and observe that Π=𝐅(𝐅T𝐅)−1𝐅TΠ𝐅superscriptsuperscript𝐅𝑇𝐅1superscript𝐅𝑇\Pi=\mathbf{F}(\mathbf{F}^{T}\mathbf{F})^{-1}\mathbf{F}^{T}roman\_Π = bold\_F ( bold\_F start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT bold\_F ) start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT bold\_F start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT. We can expand out the projected Bellman iteration expression as:
| | | | |
| --- | --- | --- | --- |
| | Q→k+1subscript→𝑄𝑘1\displaystyle\vec{Q}\_{k+1}over→ start\_ARG italic\_Q end\_ARG start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT | =𝐅(𝐅T𝐅)−1𝐅T(R→+γPπQ→k)absent𝐅superscriptsuperscript𝐅𝑇𝐅1superscript𝐅𝑇→𝑅𝛾superscript𝑃𝜋subscript→𝑄𝑘\displaystyle=\mathbf{F}(\mathbf{F}^{T}\mathbf{F})^{-1}\mathbf{F}^{T}(\vec{R}+\gamma P^{\pi}\vec{Q}\_{k})= bold\_F ( bold\_F start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT bold\_F ) start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT bold\_F start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT ( over→ start\_ARG italic\_R end\_ARG + italic\_γ italic\_P start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT over→ start\_ARG italic\_Q end\_ARG start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ) | |
| | 𝐅ϕk+1𝐅subscriptitalic-ϕ𝑘1\displaystyle\mathbf{F}\phi\_{k+1}bold\_F italic\_ϕ start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT | =𝐅(𝐅T𝐅)−1𝐅T(R→+γPπ𝐅ϕk)absent𝐅superscriptsuperscript𝐅𝑇𝐅1superscript𝐅𝑇→𝑅𝛾superscript𝑃𝜋𝐅subscriptitalic-ϕ𝑘\displaystyle=\mathbf{F}(\mathbf{F}^{T}\mathbf{F})^{-1}\mathbf{F}^{T}(\vec{R}+\gamma P^{\pi}\mathbf{F}\phi\_{k})= bold\_F ( bold\_F start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT bold\_F ) start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT bold\_F start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT ( over→ start\_ARG italic\_R end\_ARG + italic\_γ italic\_P start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT bold\_F italic\_ϕ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ) | |
| | ϕk+1subscriptitalic-ϕ𝑘1\displaystyle\phi\_{k+1}italic\_ϕ start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT | =(𝐅T𝐅)−1𝐅T(R→+γPπ𝐅ϕk).absentsuperscriptsuperscript𝐅𝑇𝐅1superscript𝐅𝑇→𝑅𝛾superscript𝑃𝜋𝐅subscriptitalic-ϕ𝑘\displaystyle=(\mathbf{F}^{T}\mathbf{F})^{-1}\mathbf{F}^{T}(\vec{R}+\gamma P^{\pi}\mathbf{F}\phi\_{k}).= ( bold\_F start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT bold\_F ) start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT bold\_F start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT ( over→ start\_ARG italic\_R end\_ARG + italic\_γ italic\_P start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT bold\_F italic\_ϕ start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ) . | | (18) |
By repeatedly applying this recurrence, we can obtain the fixed point of the projected Bellman operator as k→∞→𝑘k\rightarrow\inftyitalic\_k → ∞ (Sutton et al.,, [2009](#bib.bib152)).
Both methods have been used in the literature, and there is no clear consensus on which approach is preferred, though they yield different solutions in general when the true Q-function Qπsuperscript𝑄𝜋Q^{\pi}italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT is not in the span of 𝐅𝐅\mathbf{F}bold\_F (i.e., cannot be represented by any parameter vector ϕitalic-ϕ\phiitalic\_ϕ) (Lagoudakis and Parr,, [2003](#bib.bib81)). We might at first surmise that a good linear fitting method should find the Q-function 𝐅ϕ𝐅italic-ϕ\mathbf{F}\phibold\_F italic\_ϕ that corresponds to a least-squares projection of the true Q→πsuperscript→𝑄𝜋\vec{Q}^{\pi}over→ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT onto the hyperplane defined by 𝐅𝐅\mathbf{F}bold\_F. Unfortunately, *neither* the Bellman residual minimization method nor the least-squares fixed point method in general obtains this solution. The Bellman residual minimization does not in general produce a fixed point of *either* the Bellman operator or the projected Bellman operator. However, the solution obtained via Bellman residual minimization may be closer to the true Q-function in terms of ℓ2subscriptℓ2\ell\_{2}roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT distance, since it is explicitly obtained by minimizing Bellman residual error. Least-squares fixed point iteration obtains a Q-function that is a fixed point of the projected Bellman operator, but may be arbitrarily suboptimal. However, least-squares fixed point iteration can learn solutions that lead, empirically, to better-performing policies (Sutton et al.,, [2009](#bib.bib152); Lagoudakis and Parr,, [2003](#bib.bib81)). In general, there are no theoretical arguments that justify the superiority of one approach over the other. In practice, least-squares fixed-point iteration can give rise to more effective policies as compared to the Bellman residual, while the Bellman residual minimization approach can be more stable and predictable (Lagoudakis and Parr,, [2003](#bib.bib81)).
#### Least squares temporal difference Q-learning (LSTD-Q).
Now that we have covered different approaches to solve a linear system of equations to obtain an approximation to Qπsuperscript𝑄𝜋Q^{\pi}italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT, we describe LSTD-Q, a method for estimating Qπsuperscript𝑄𝜋Q^{\pi}italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT from a static dataset, directly from samples. This method incrementally estimates the terms 𝐅T(𝐅−γPπ𝐅)superscript𝐅𝑇𝐅𝛾superscript𝑃𝜋𝐅\mathbf{F}^{T}(\mathbf{F}-\gamma P^{\pi}\mathbf{F})bold\_F start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT ( bold\_F - italic\_γ italic\_P start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT bold\_F ) and 𝐅TR→superscript𝐅𝑇→𝑅\mathbf{F}^{T}\vec{R}bold\_F start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT over→ start\_ARG italic\_R end\_ARG, which appear in Equation ([18](#S4.E18 "18 ‣ Least-squares fixed point approximation. ‣ 4.1 Off-Policy Value Function Estimation with Linear Value Functions ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems")), and then inverts this sampled estimate to then obtain ϕitalic-ϕ\phiitalic\_ϕ. We defer further discussion on LSTD-Q to Lagoudakis and Parr, ([2003](#bib.bib81)), which also describes several computationally efficient implementations of the LSTD-Q algorithm. Note that the LSTD-Q algorithm is not directly applicable to estimating Q\*superscript𝑄Q^{\*}italic\_Q start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT, the optimal Q-function, since the optimal Bellman equation for Q\*superscript𝑄Q^{\*}italic\_Q start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT is not linear due to the maximization, and thus cannot be solved in closed form.
#### Least squares policy iteration (LSPI).
Finally, we discuss least-squares policy iteration (LSPI), a classical offline reinforcement learning algorithm that performs approximate policy iteration (see discussion in Section [2.1](#S2.SS1 "2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems")) using a linear approximation to the Q-function. LSPI uses LSTD-Q as an intermediate sub-routine to perform approximate policy evaluation, thereby obtaining an estimate for Qπsuperscript𝑄𝜋Q^{\pi}italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT, denoted Qϕsubscript𝑄italic-ϕQ\_{\phi}italic\_Q start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT. Then, the algorithm sets the next policy iterate to be equal to the greedy policy corresponding to the approximate Qϕsubscript𝑄italic-ϕQ\_{\phi}italic\_Q start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT, such that πk+1(𝐚|𝐬)=δ(𝐚=argmax𝐚Qϕ(𝐬,𝐚))subscript𝜋𝑘1conditional𝐚𝐬𝛿𝐚subscript𝐚subscript𝑄italic-ϕ𝐬𝐚\pi\_{k+1}(\mathbf{a}|\mathbf{s})=\delta(\mathbf{a}=\arg\max\_{\mathbf{a}}Q\_{\phi}(\mathbf{s},\mathbf{a}))italic\_π start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) = italic\_δ ( bold\_a = roman\_arg roman\_max start\_POSTSUBSCRIPT bold\_a end\_POSTSUBSCRIPT italic\_Q start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT ( bold\_s , bold\_a ) ).
An important and useful characteristic of LSPI is that it does not require a separate approximate representation for the policy when the actions are discrete, and hence removes any source of error that arises due to function approximation in the actor in actor-critic methods. However, when the action space is continuous, a policy gradient method similar to the generic actor-critic algorithm in Algorithm [3](#alg3 "Algorithm 3 ‣ Actor-critic algorithms. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems") can be used to optimize a parametric policy.
###
4.2 Distributional Shift in Offline Reinforcement Learning via Dynamic Programming
Both the linear and non-linear dynamic programming methods discussed so far, in Section [2.1](#S2.SS1 "2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems") and Section [4.1](#S4.SS1 "4.1 Off-Policy Value Function Estimation with Linear Value Functions ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems") above, can *in principle* learn from offline data, collected according to a different (unknown) behavioral policy πβ(𝐚|𝐬)subscript𝜋𝛽conditional𝐚𝐬{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ), with state visitation frequency dπβ(𝐬)superscript𝑑subscript𝜋𝛽𝐬d^{{\pi\_{\beta}}}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ). However, in practice, these procedures can result in very poor performance, due to the distributional shift issues alluded to in Section [2.4](#S2.SS4 "2.4 What Makes Offline Reinforcement Learning Difficult? ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems").
Distributional shift affects offline reinforcement learning via dynamic programming both at test time and at training time. At test time, the state visitation frequency dπ(𝐬)superscript𝑑𝜋𝐬d^{\pi}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ) induced by a policy trained with offline RL will differ systematically from the state visitation frequency of the training data, dπβ(𝐬)superscript𝑑subscript𝜋𝛽𝐬d^{{\pi\_{\beta}}}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ). This means that, as in the case of policy gradients, a policy trained via an actor-critic method may produce unexpected and erroneous actions in out-of-distribution states 𝐬∼dπβ(𝐬)similar-to𝐬superscript𝑑subscript𝜋𝛽𝐬\mathbf{s}\sim d^{{\pi\_{\beta}}}(\mathbf{s})bold\_s ∼ italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ), as can the implicit greedy policy induced by a Q-function in a Q-learning method. One way to mitigate this distributional shift is to limit how much the learned policy can diverge from the behavior policy. For example, by constraining π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ) such that DKL(π(𝐚|𝐬)∥πβ(𝐚|𝐬))≤ϵD\_{\text{KL}}(\pi(\mathbf{a}|\mathbf{s})\|{\pi\_{\beta}}(\mathbf{a}|\mathbf{s}))\leq\epsilonitalic\_D start\_POSTSUBSCRIPT KL end\_POSTSUBSCRIPT ( italic\_π ( bold\_a | bold\_s ) ∥ italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) ) ≤ italic\_ϵ, we can bound DKL(dπ(𝐬)∥dπβ(𝐬))subscript𝐷KLconditionalsuperscript𝑑𝜋𝐬superscript𝑑subscript𝜋𝛽𝐬D\_{\text{KL}}(d^{\pi}(\mathbf{s})\|d^{{\pi\_{\beta}}}(\mathbf{s}))italic\_D start\_POSTSUBSCRIPT KL end\_POSTSUBSCRIPT ( italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ) ∥ italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ) ) by δ𝛿\deltaitalic\_δ, which is O(ϵ/(1−γ)2)𝑂italic-ϵsuperscript1𝛾2O(\epsilon/(1-\gamma)^{2})italic\_O ( italic\_ϵ / ( 1 - italic\_γ ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ) (Schulman et al.,, [2015](#bib.bib139)). This bound is very loose in practice, but nonetheless suggests that the effects of state distribution shift can potentially be mitigated by bounding how much the learned policy can deviate from the behavior policy that collected the offline training data. This may come at a substantial cost in final performance however, as the behavior policy – and any policy that is close to it – may be much worse than the best policy that can be learned from the offline data.
It should be noted that, for the algorithms discussed previously, state distribution shift affects test-time performance, but has no effect on training, since neither the policy nor the Q-function is ever evaluated at any state that was not sampled from dπβ(𝐬)superscript𝑑subscript𝜋𝛽𝐬d^{{\pi\_{\beta}}}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ). However, the training process *is* affected by *action* distribution shift, because the target values for the Bellman backups in Equation ([3](#S2.E3 "3 ‣ Approximate dynamic programming. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems")) depend on 𝐚t+1∼π(𝐚t+1|𝐬t+1)similar-tosubscript𝐚𝑡1𝜋conditionalsubscript𝐚𝑡1subscript𝐬𝑡1\mathbf{a}\_{t+1}\sim\pi(\mathbf{a}\_{t+1}|\mathbf{s}\_{t+1})bold\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ∼ italic\_π ( bold\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ). While this source of distribution shift may at first seem insignificant, it in fact presents one of the largest obstacles for practical application of approximate dynamic programming methods to offline reinforcement learning. Since computing the target values in Equation ([3](#S2.E3 "3 ‣ Approximate dynamic programming. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems")) requires evaluating Qπ(𝐬t+1,𝐚t+1)superscript𝑄𝜋subscript𝐬𝑡1subscript𝐚𝑡1Q^{\pi}(\mathbf{s}\_{t+1},\mathbf{a}\_{t+1})italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ), where 𝐚t+1∼π(𝐚t+1|𝐬t+1)similar-tosubscript𝐚𝑡1𝜋conditionalsubscript𝐚𝑡1subscript𝐬𝑡1\mathbf{a}\_{t+1}\sim\pi(\mathbf{a}\_{t+1}|\mathbf{s}\_{t+1})bold\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ∼ italic\_π ( bold\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ), the accuracy of the Q-function regression targets depends on the estimate of the Q-value for actions that are outside of the distribution of actions that the Q-function was ever trained on. When π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ) differs substantially from πβ(𝐚|𝐬)subscript𝜋𝛽conditional𝐚𝐬{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ), this discrepancy can result in highly erroneous target Q-values. This issue is further exacerbated by the fact that π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ) is explicitly optimized to maximize 𝔼𝐚∼π(𝐚|𝐬)[Qπ(𝐬,𝐚)]subscript𝔼similar-to𝐚𝜋conditional𝐚𝐬delimited-[]superscript𝑄𝜋𝐬𝐚\mathbb{E}\_{\mathbf{a}\sim\pi(\mathbf{a}|\mathbf{s})}[Q^{\pi}(\mathbf{s},\mathbf{a})]blackboard\_E start\_POSTSUBSCRIPT bold\_a ∼ italic\_π ( bold\_a | bold\_s ) end\_POSTSUBSCRIPT [ italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s , bold\_a ) ]. This means that, if the policy can produce out-of-distribution actions for which the learned Q-function erroneously produces excessively large values, it will learn to do so. This is true whether the policy is represented explicitly, as in actor-critic algorithms, or implicitly, as the greedy policy π(𝐚|𝐬)=δ(𝐚=argmax𝐚′Qπ(𝐬,𝐚′))𝜋conditional𝐚𝐬𝛿𝐚subscriptsuperscript𝐚′superscript𝑄𝜋𝐬superscript𝐚′\pi(\mathbf{a}|\mathbf{s})=\delta(\mathbf{a}=\arg\max\_{\mathbf{a}^{\prime}}Q^{\pi}(\mathbf{s},\mathbf{a}^{\prime}))italic\_π ( bold\_a | bold\_s ) = italic\_δ ( bold\_a = roman\_arg roman\_max start\_POSTSUBSCRIPT bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ). In standard online reinforcement learning, such issues are corrected naturally when the policy acts in the environment, attempting the transitions it (erroneously) believes to be good, and observing that in fact they are not. However, in the offline setting, the policy cannot correct such over-optimistic Q-values, and these errors accumulate over each iteration of training, resulting in arbitrarily poor final results.


Figure 2: Performance of SAC (Haarnoja et al.,, [2018](#bib.bib43)), an actor-critic method, on the HalfCheetah-v2 task in the offline setting, showing return as a function of gradient steps (left) and average learned Q-values on a log scale (right), for different numbers of training points (n𝑛nitalic\_n). Note that an increase the number of samples does not generally prevent the “unlearning effect,” indicating that it is distinct from overfitting. Figure from Kumar et al., ([2019](#bib.bib77)).
This problem manifests itself in practice as an “unlearning” effect when running offline RL via dynamic programming. The experiments in Figure [2](#S4.F2 "Figure 2 ‣ 4.2 Distributional Shift in Offline Reinforcement Learning via Dynamic Programming ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), from Kumar et al., ([2019](#bib.bib77)), show this unlearning effect. The horizontal axis corresponds to the number of gradient updates to the Q-function, and the vertical axis shows the actual return obtained by running the greedy policy for the learned Q-function. The learning curve resembles what we might expect in the case of overfitting: the return first improves, and then sharply falls as training progresses. However, this “overfitting” effect remains even as we increase the size of the dataset, suggesting that the phenomenon is distinct from classic statistical overfitting. As the Q-function is trained longer and longer, the target values become more and more erroneous, and the entire Q-function degrades.
Thus, to effectively implement offline reinforcement learning via dynamic programming, it is crucial to address this out-of-distribution action problem. Previous works have observed several variants of this problem. Fujimoto et al., ([2018](#bib.bib31)) noted that restricting Q-value evaluation only to actions in the dataset avoids erroneous Q-values due to extrapolation error. Kumar et al., ([2019](#bib.bib77)) described the out-of-distribution action problem in terms of distributional shift, which suggests a less restrictive solution based on limiting distributional shift, rather than simply constraining to previously seen actions. A number of more recent works also observe that a variety of constraints can be used to mitigate action distribution shift ([Wu et al., 2019a,](#bib.bib179) ). We provide a unified view of such “policy constraint” methods in the following section.
###
4.3 Policy Constraints for Off-Policy Evaluation and Improvement
The basic idea behind policy constraint methods for offline reinforcement learning via dynamic programming is to ensure, explicitly or implicitly, that regardless of the target values in Equation ([3](#S2.E3 "3 ‣ Approximate dynamic programming. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems")), the distribution over actions under which we compute the target value, π(𝐚′|𝐬′)𝜋conditionalsuperscript𝐚′superscript𝐬′\pi(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})italic\_π ( bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ), is “close” to the behavior distribution πβ(𝐚′|𝐬′)subscript𝜋𝛽conditionalsuperscript𝐚′superscript𝐬′{\pi\_{\beta}}(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ). This ensures that the Q-function is never queried on out-of-distribution actions, which may introduce errors into the computation of the Bellman operator. If all states and actions fed into the Q-function for target value calculations are always in-distribution with respect to the Q-function training set, errors in the Q-function should not accumulate, and standard generalization results from supervised learning should apply. Since the Q-function is evaluated on the same states as the ones on which it is trained, only the action inputs can be out of distribution, when we compute 𝔼𝐚′∼π(𝐚′|𝐬′)[Q(𝐬′,𝐚′)]subscript𝔼similar-tosuperscript𝐚′𝜋conditionalsuperscript𝐚′superscript𝐬′delimited-[]𝑄superscript𝐬′superscript𝐚′\mathbb{E}\_{\mathbf{a}^{\prime}\sim\pi(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})}[Q(\mathbf{s}^{\prime},\mathbf{a}^{\prime})]blackboard\_E start\_POSTSUBSCRIPT bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_π ( bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) end\_POSTSUBSCRIPT [ italic\_Q ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ], and therefore it is sufficient to keep π(𝐚′|𝐬′)𝜋conditionalsuperscript𝐚′superscript𝐬′\pi(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})italic\_π ( bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) close to πβ(𝐚′|𝐬′)subscript𝜋𝛽conditionalsuperscript𝐚′superscript𝐬′{\pi\_{\beta}}(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ). Of course, in practice, the distributions do need to deviate in order for the learned policy to improve over the behavior policy, but by keeping this deviation small, errors due to out-of-distribution action inputs can be kept under control. The different methods in this family differ in terms of the probability metric used to define “closeness” and how this constraint is actually introduced and enforced. We can broadly categorize these methods along these two axes. We will discuss explicit f𝑓fitalic\_f-divergence constraints, which directly add a constraint to the actor update to keep the policy π𝜋\piitalic\_π close to πβsubscript𝜋𝛽{\pi\_{\beta}}italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT in terms of an f𝑓fitalic\_f-divergence (typically the KL-divergence), implicit f𝑓fitalic\_f-divergence constraints, which utilize an actor update that, by construction, keeps π𝜋\piitalic\_π close to πβsubscript𝜋𝛽{\pi\_{\beta}}italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT, and integral probability metric (IPM) constraints, which can express constraints with more favorable theoretical and empirical properties for offline RL. Furthermore, the constraints can be enforced either as direct policy constraints on the actor update, or via a policy penalty added to the reward function or target Q-value.
Formally, we can express the family of policy iteration methods with policy constraints as a fixed point iteration involving iterative (partial) optimization of the following objectives:
| | | | |
| --- | --- | --- | --- |
| | Q^k+1πsubscriptsuperscript^𝑄𝜋𝑘1\displaystyle\hat{Q}^{\pi}\_{k+1}over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT | ←argminQ𝔼(𝐬,𝐚,𝐬′)∼𝒟[(Q(𝐬,𝐚)−(r(𝐬,𝐚)+γ𝔼𝐚′∼πk(𝐚′|𝐬′)[Q^kπ(𝐬′,𝐚′)]))2]←absentsubscript𝑄subscript𝔼similar-to𝐬𝐚superscript𝐬′𝒟delimited-[]superscript𝑄𝐬𝐚𝑟𝐬𝐚𝛾subscript𝔼similar-tosuperscript𝐚′subscript𝜋𝑘conditionalsuperscript𝐚′superscript𝐬′delimited-[]subscriptsuperscript^𝑄𝜋𝑘superscript𝐬′superscript𝐚′2\displaystyle\leftarrow\arg\min\_{Q}\mathbb{E}\_{(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime})\sim\mathcal{D}}\left[\left(Q(\mathbf{s},\mathbf{a})-\left(r(\mathbf{s},\mathbf{a})+\gamma\mathbb{E}\_{\mathbf{a}^{\prime}\sim\pi\_{k}(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})}[\hat{Q}^{\pi}\_{k}(\mathbf{s}^{\prime},\mathbf{a}^{\prime})]\right)\right)^{2}\right]← roman\_arg roman\_min start\_POSTSUBSCRIPT italic\_Q end\_POSTSUBSCRIPT blackboard\_E start\_POSTSUBSCRIPT ( bold\_s , bold\_a , bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ∼ caligraphic\_D end\_POSTSUBSCRIPT [ ( italic\_Q ( bold\_s , bold\_a ) - ( italic\_r ( bold\_s , bold\_a ) + italic\_γ blackboard\_E start\_POSTSUBSCRIPT bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_π start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) end\_POSTSUBSCRIPT [ over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ] ) ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ] | |
| | πk+1subscript𝜋𝑘1\displaystyle\pi\_{k+1}italic\_π start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT | ←argmaxπ𝔼𝐬∼𝒟[𝔼𝐚∼π(𝐚|𝐬)[Q^k+1π(𝐬,𝐚)]] s.t. D(π,πβ)≤ϵ.←absentsubscript𝜋subscript𝔼similar-to𝐬𝒟delimited-[]subscript𝔼similar-to𝐚𝜋conditional𝐚𝐬delimited-[]subscriptsuperscript^𝑄𝜋𝑘1𝐬𝐚 s.t. 𝐷𝜋subscript𝜋𝛽italic-ϵ\displaystyle\leftarrow\arg\max\_{\pi}\mathbb{E}\_{\mathbf{s}\sim\mathcal{D}}\left[\mathbb{E}\_{\mathbf{a}\sim\pi(\mathbf{a}|\mathbf{s})}[\hat{Q}^{\pi}\_{k+1}(\mathbf{s},\mathbf{a})]\right]\text{ s.t. }D(\pi,{\pi\_{\beta}})\leq\epsilon.← roman\_arg roman\_max start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT blackboard\_E start\_POSTSUBSCRIPT bold\_s ∼ caligraphic\_D end\_POSTSUBSCRIPT [ blackboard\_E start\_POSTSUBSCRIPT bold\_a ∼ italic\_π ( bold\_a | bold\_s ) end\_POSTSUBSCRIPT [ over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT ( bold\_s , bold\_a ) ] ] s.t. italic\_D ( italic\_π , italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ) ≤ italic\_ϵ . | |
When the min\minroman\_min and max\maxroman\_max optimizations are not performed to convergence, but instead for a limited number of gradient steps, we recover the general actor-critic method in Algorithm [3](#alg3 "Algorithm 3 ‣ Actor-critic algorithms. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), with the exception of the constraint D(π,πβ)≤ϵ𝐷𝜋subscript𝜋𝛽italic-ϵD(\pi,{\pi\_{\beta}})\leq\epsilonitalic\_D ( italic\_π , italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ) ≤ italic\_ϵ on the policy update.
A number of prior methods instantiate this approach with different choices of D𝐷Ditalic\_D (Kumar et al.,, [2019](#bib.bib77); Fujimoto et al.,, [2018](#bib.bib31); Siegel et al.,, [2020](#bib.bib142)). We will refer to this class of algorithms as policy constraint methods.
In policy penalty methods, the actor-critic algorithm is modified to incorporate the constraint into the Q-values, which forces the policy to not only avoid deviating from πβ(𝐚|𝐬)subscript𝜋𝛽conditional𝐚𝐬{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) at each state, but to also avoid actions that may lead to higher deviation from πβ(𝐚|𝐬)subscript𝜋𝛽conditional𝐚𝐬{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) at future time steps. This can be accomplished by adding a penalty term αD(π(⋅|𝐬),πβ(⋅|𝐬))\alpha D(\pi(\cdot|\mathbf{s}),{\pi\_{\beta}}(\cdot|\mathbf{s}))italic\_α italic\_D ( italic\_π ( ⋅ | bold\_s ) , italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( ⋅ | bold\_s ) ) to the reward function r(𝐬,𝐚)𝑟𝐬𝐚r(\mathbf{s},\mathbf{a})italic\_r ( bold\_s , bold\_a ) leading to the modified reward function r¯(𝐬,𝐚)=r(𝐬,𝐚)−αD(π(⋅|𝐬),πβ(⋅|𝐬))\bar{r}(\mathbf{s},\mathbf{a})=r(\mathbf{s},\mathbf{a})-\alpha D(\pi(\cdot|\mathbf{s}),{\pi\_{\beta}}(\cdot|\mathbf{s}))over¯ start\_ARG italic\_r end\_ARG ( bold\_s , bold\_a ) = italic\_r ( bold\_s , bold\_a ) - italic\_α italic\_D ( italic\_π ( ⋅ | bold\_s ) , italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( ⋅ | bold\_s ) ). The most well-known formulation of policy penalty methods stems from the linearly solvable MDP framework (Todorov,, [2006](#bib.bib167)), or equivalently the control as inference framework (Levine,, [2018](#bib.bib88)), where D𝐷Ditalic\_D is chosen to be the KL-divergence. An equivalent formulation adds the penalty term αD(π(⋅|𝐬),πβ(⋅|𝐬))\alpha D(\pi(\cdot|\mathbf{s}),{\pi\_{\beta}}(\cdot|\mathbf{s}))italic\_α italic\_D ( italic\_π ( ⋅ | bold\_s ) , italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( ⋅ | bold\_s ) ) directly to the target Q-values and the actor objective ([Wu et al., 2019a,](#bib.bib179) ; Jaques et al.,, [2019](#bib.bib56)), resulting in the following algorithm:
| | | | |
| --- | --- | --- | --- |
| | Q^k+1πsubscriptsuperscript^𝑄𝜋𝑘1\displaystyle\hat{Q}^{\pi}\_{k+1}over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT | ←argminQ←absentsubscript𝑄\displaystyle\leftarrow\arg\min\_{Q}← roman\_arg roman\_min start\_POSTSUBSCRIPT italic\_Q end\_POSTSUBSCRIPT | |
| | | 𝔼(𝐬,𝐚,𝐬′)∼𝒟[(Q(𝐬,𝐚)−(r(𝐬,𝐚)+γ𝔼𝐚′∼πk(𝐚′|𝐬′)[Q^kπ(𝐬′,𝐚′)]−αγD(πk(⋅|𝐬′),πβ(⋅|𝐬′))))2]\displaystyle\mathbb{E}\_{(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime})\sim\mathcal{D}}\!\left[\left(Q(\mathbf{s},\mathbf{a})\!-\!\left(r(\mathbf{s},\mathbf{a})\!+\!\gamma\mathbb{E}\_{\mathbf{a}^{\prime}\sim\pi\_{k}(\mathbf{a}^{\prime}|\mathbf{s}^{\prime})}[\hat{Q}^{\pi}\_{k}(\mathbf{s}^{\prime},\mathbf{a}^{\prime})]\!-\!\alpha\gamma\!D(\pi\_{k}(\cdot|\mathbf{s}^{\prime}),{\pi\_{\beta}}(\cdot|\mathbf{s}^{\prime}))\right)\right)^{2}\right]blackboard\_E start\_POSTSUBSCRIPT ( bold\_s , bold\_a , bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ∼ caligraphic\_D end\_POSTSUBSCRIPT [ ( italic\_Q ( bold\_s , bold\_a ) - ( italic\_r ( bold\_s , bold\_a ) + italic\_γ blackboard\_E start\_POSTSUBSCRIPT bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_π start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ( bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) end\_POSTSUBSCRIPT [ over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ] - italic\_α italic\_γ italic\_D ( italic\_π start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ( ⋅ | bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) , italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( ⋅ | bold\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ) ) ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ] | |
| | πk+1subscript𝜋𝑘1\displaystyle\pi\_{k+1}italic\_π start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT | ←argmaxπ𝔼𝐬∼𝒟[𝔼𝐚∼π(𝐚|𝐬)[Q^k+1π(𝐬,𝐚)]−αD(π(⋅|𝐬),πβ(⋅|𝐬))].\displaystyle\leftarrow\arg\max\_{\pi}\mathbb{E}\_{\mathbf{s}\sim\mathcal{D}}\left[\mathbb{E}\_{\mathbf{a}\sim\pi(\mathbf{a}|\mathbf{s})}[\hat{Q}^{\pi}\_{k+1}(\mathbf{s},\mathbf{a})]-\alpha D(\pi(\cdot|\mathbf{s}),{\pi\_{\beta}}(\cdot|\mathbf{s}))\right].← roman\_arg roman\_max start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT blackboard\_E start\_POSTSUBSCRIPT bold\_s ∼ caligraphic\_D end\_POSTSUBSCRIPT [ blackboard\_E start\_POSTSUBSCRIPT bold\_a ∼ italic\_π ( bold\_a | bold\_s ) end\_POSTSUBSCRIPT [ over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT ( bold\_s , bold\_a ) ] - italic\_α italic\_D ( italic\_π ( ⋅ | bold\_s ) , italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( ⋅ | bold\_s ) ) ] . | |
While the basic recipe for policy constraint and policy penalty methods is similar, the particular choice of how the constraints are defined and how they are enforced can make a significant difference in practice. We will discuss these choices next, as well as their tradeoffs.
#### Explicit f𝑓fitalic\_f-divergence constraints.
For any convex function f𝑓fitalic\_f, the corresponding f𝑓fitalic\_f-divergence is defined as:
| | | | |
| --- | --- | --- | --- |
| | Df(π(⋅|𝐬),πβ(⋅|𝐬))=𝔼𝐚∼π(⋅|𝐬)[f(π(𝐚|𝐬)πβ(𝐚|𝐬))].D\_{f}(\pi(\cdot|\mathbf{s}),{\pi\_{\beta}}(\cdot|\mathbf{s}))=\mathbb{E}\_{\mathbf{a}\sim\pi(\cdot|\mathbf{s})}\left[f\left(\frac{\pi(\mathbf{a}|\mathbf{s})}{{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})}\right)\right].italic\_D start\_POSTSUBSCRIPT italic\_f end\_POSTSUBSCRIPT ( italic\_π ( ⋅ | bold\_s ) , italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( ⋅ | bold\_s ) ) = blackboard\_E start\_POSTSUBSCRIPT bold\_a ∼ italic\_π ( ⋅ | bold\_s ) end\_POSTSUBSCRIPT [ italic\_f ( divide start\_ARG italic\_π ( bold\_a | bold\_s ) end\_ARG start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) end\_ARG ) ] . | | (19) |
KL-divergence, χ2superscript𝜒2\chi^{2}italic\_χ start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT-divergence, and total-variation distance are some commonly used members of the f𝑓fitalic\_f-divergence family, corresponding to different choices of function f𝑓fitalic\_f (Nowozin et al.,, [2016](#bib.bib111)). A variational form for the f𝑓fitalic\_f-divergence can also be written as
| | | | |
| --- | --- | --- | --- |
| | Df(π(⋅|𝐬),πβ(⋅|𝐬))=maxx:S×A→ℝ𝔼𝐚∼π(⋅|𝐬)[x(𝐬,𝐚)]−𝔼𝐚′∼πβ(⋅|𝐬)[f\*(x(𝐬,𝐚′))],D\_{f}(\pi(\cdot|\mathbf{s}),{\pi\_{\beta}}(\cdot|\mathbf{s}))=\max\_{x:S\times A\rightarrow\mathbb{R}}\mathbb{E}\_{\mathbf{a}\sim\pi(\cdot|\mathbf{s})}\left[x(\mathbf{s},\mathbf{a})\right]-\mathbb{E}\_{\mathbf{a}^{\prime}\sim{\pi\_{\beta}}(\cdot|\mathbf{s})}\left[f^{\*}(x(\mathbf{s},\mathbf{a}^{\prime}))\right],italic\_D start\_POSTSUBSCRIPT italic\_f end\_POSTSUBSCRIPT ( italic\_π ( ⋅ | bold\_s ) , italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( ⋅ | bold\_s ) ) = roman\_max start\_POSTSUBSCRIPT italic\_x : italic\_S × italic\_A → blackboard\_R end\_POSTSUBSCRIPT blackboard\_E start\_POSTSUBSCRIPT bold\_a ∼ italic\_π ( ⋅ | bold\_s ) end\_POSTSUBSCRIPT [ italic\_x ( bold\_s , bold\_a ) ] - blackboard\_E start\_POSTSUBSCRIPT bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( ⋅ | bold\_s ) end\_POSTSUBSCRIPT [ italic\_f start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ( italic\_x ( bold\_s , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ) ] , | | (20) |
where f\*superscript𝑓f^{\*}italic\_f start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT is the convex conjugate for the convex function f𝑓fitalic\_f. Both the primal form (Equation [19](#S4.E19 "19 ‣ Explicit 𝑓-divergence constraints. ‣ 4.3 Policy Constraints for Off-Policy Evaluation and Improvement ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems")) and the dual variational form (Equation [20](#S4.E20 "20 ‣ Explicit 𝑓-divergence constraints. ‣ 4.3 Policy Constraints for Off-Policy Evaluation and Improvement ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems")) of the f𝑓fitalic\_f-divergence has been used to specify policy constraints. In the dual form, an additional neural network is used to represent the function x𝑥xitalic\_x.
The standard form of “passive dynamics” in linearly solvable MDPs (Todorov,, [2006](#bib.bib167)) or the action prior in control as inference (Levine,, [2018](#bib.bib88)) corresponds to the KL-divergence (primal form), which has also been used in recent work that adapts such a KL-divergence penalty for offline reinforcement learning (Jaques et al.,, [2019](#bib.bib56); [Wu et al., 2019a,](#bib.bib179) ). The KL-divergence, given by DKL(π,πβ)=𝔼𝐚∼π(⋅|𝐬)[logπ(𝐚|𝐬)−logπβ(𝐚|𝐬)]D\_{\mathrm{KL}}(\pi,{\pi\_{\beta}})=\mathbb{E}\_{\mathbf{a}\sim\pi(\cdot|\mathbf{s})}[\log\pi(\mathbf{a}|\mathbf{s})-\log{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})]italic\_D start\_POSTSUBSCRIPT roman\_KL end\_POSTSUBSCRIPT ( italic\_π , italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ) = blackboard\_E start\_POSTSUBSCRIPT bold\_a ∼ italic\_π ( ⋅ | bold\_s ) end\_POSTSUBSCRIPT [ roman\_log italic\_π ( bold\_a | bold\_s ) - roman\_log italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) ], can be computed by sampling action samples 𝐚∼π(⋅|𝐬)\mathbf{a}\sim\pi(\cdot|\mathbf{s})bold\_a ∼ italic\_π ( ⋅ | bold\_s ), and then computing sample-wise estimates of the likelihoods inside the expectation. It is commonly used together with “policy penalty” algorithms, by simply adding an estimate of −αlogπβ(𝐚|𝐬)𝛼subscript𝜋𝛽conditional𝐚𝐬-\alpha\log{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})- italic\_α roman\_log italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) to the reward function, and employing an entropy regularized reinforcement learning algorithm, which analytically adds ℋ(π(⋅|𝐬))\mathcal{H}(\pi(\cdot|\mathbf{s}))caligraphic\_H ( italic\_π ( ⋅ | bold\_s ) ) to the actor objective (Levine,, [2018](#bib.bib88)). One significant disadvantage of this approach is that it requires explicit estimation of the behavior policy (e.g., via behavioral cloning) to fit logπβ(𝐚|𝐬)subscript𝜋𝛽conditional𝐚𝐬\log{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})roman\_log italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ).
Additionally, the sub-family of asymmetrically-relaxed f𝑓fitalic\_f-divergences can be used for the policy constraint. For any chosen convex function f𝑓fitalic\_f, these divergences modify the expression for Dfsubscript𝐷𝑓D\_{f}italic\_D start\_POSTSUBSCRIPT italic\_f end\_POSTSUBSCRIPT to integrate over only those 𝐚𝐚\mathbf{a}bold\_a such that the density ration π(⋅|𝐬)/πβ(𝐚|𝐬)\pi(\cdot|\mathbf{s})/{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})italic\_π ( ⋅ | bold\_s ) / italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) is larger than a pre-defined threshold, thereby not penalizing small density ratios. [Wu et al., 2019a](#bib.bib179) briefly discuss this divergence sub-family, and we refer readers to [Wu et al., 2019b](#bib.bib180) for a detailed description.
#### Implicit f𝑓fitalic\_f-divergence constraints.
The KL-divergence constraint can also be enforced implicitly, as in the case of AWR (Peng et al.,, [2019](#bib.bib121)), AWAC (Nair et al.,, [2020](#bib.bib109)), and ABM (Siegel et al.,, [2020](#bib.bib142)). These methods first solve for the optimal next policy iterate under the KL-divergence constraint, indicated as π¯k+1subscript¯𝜋𝑘1\bar{\pi}\_{k+1}over¯ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT, non-parameterically, and then project it onto the policy function class via supervised regression, implementing the following procedure:
| | | | |
| --- | --- | --- | --- |
| | π¯k+1(𝐚|𝐬)subscript¯𝜋𝑘1conditional𝐚𝐬\displaystyle\bar{\pi}\_{k+1}(\mathbf{a}|\mathbf{s})over¯ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) | ←1Zπβ(𝐚|𝐬)exp(1αQkπ(𝐬,𝐚))←absent1𝑍subscript𝜋𝛽conditional𝐚𝐬1𝛼subscriptsuperscript𝑄𝜋𝑘𝐬𝐚\displaystyle\leftarrow\frac{1}{Z}{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})\exp\left(\frac{1}{\alpha}Q^{\pi}\_{k}(\mathbf{s},\mathbf{a})\right)← divide start\_ARG 1 end\_ARG start\_ARG italic\_Z end\_ARG italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) roman\_exp ( divide start\_ARG 1 end\_ARG start\_ARG italic\_α end\_ARG italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ( bold\_s , bold\_a ) ) | |
| | πk+1subscript𝜋𝑘1\displaystyle\pi\_{k+1}italic\_π start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT | ←argminπDKL(π¯k+1,π)←absentsubscript𝜋subscript𝐷KLsubscript¯𝜋𝑘1𝜋\displaystyle\leftarrow\arg\min\_{\pi}D\_{\mathrm{KL}}(\bar{\pi}\_{k+1},\pi)← roman\_arg roman\_min start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT italic\_D start\_POSTSUBSCRIPT roman\_KL end\_POSTSUBSCRIPT ( over¯ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT , italic\_π ) | |
In practice, the first step can be implemented by weighting samples from πβ(𝐚|𝐬)subscript𝜋𝛽conditional𝐚𝐬{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) (i.e., the data in the buffer 𝒟𝒟\mathcal{D}caligraphic\_D) by importance weights proportional to exp(1αQkπ(𝐬,𝐚))1𝛼subscriptsuperscript𝑄𝜋𝑘𝐬𝐚\exp\left(\frac{1}{\alpha}Q^{\pi}\_{k}(\mathbf{s},\mathbf{a})\right)roman\_exp ( divide start\_ARG 1 end\_ARG start\_ARG italic\_α end\_ARG italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ( bold\_s , bold\_a ) ), and the second step can be implemented via a weighted supervised learning procedure employing these weights. In this way, importance sampling effectively implements a KL-divergence constraint on the policy, with α𝛼\alphaitalic\_α corresponding to the Lagrange multiplier. The Q-function Qkπsubscriptsuperscript𝑄𝜋𝑘Q^{\pi}\_{k}italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT corresponding to the previous policy πksubscript𝜋𝑘\pi\_{k}italic\_π start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT can be estimated with any Q-value or advantage estimator. We refer the reader to related work for a full derivation of this procedure (Peng et al.,, [2019](#bib.bib121); Nair et al.,, [2020](#bib.bib109)).
#### Integral probability metrics (IPMs).
Another choice of the policy constraint D𝐷Ditalic\_D is an integral probability metric, which is a divergence measure with the following functional form dependent on a function class ΦΦ\Phiroman\_Φ:
| | | | |
| --- | --- | --- | --- |
| | DΦ(π(⋅|𝐬),πβ(⋅|𝐬))=supϕ∈Φ,ϕ:S×A→ℝ|𝔼𝐚∼π(⋅|𝐬)[ϕ(𝐬,𝐚)]−𝔼𝐚′∼πβ(⋅|𝐬)[ϕ(𝐬,𝐚′)]|.D\_{\Phi}(\pi(\cdot|\mathbf{s}),{\pi\_{\beta}}(\cdot|\mathbf{s}))=\sup\_{\phi\in\Phi,\phi:S\times A\rightarrow\mathbb{R}}\left|\mathbb{E}\_{\mathbf{a}\sim\pi(\cdot|\mathbf{s})}[\phi(\mathbf{s},\mathbf{a})]-\mathbb{E}\_{\mathbf{a}^{\prime}\sim{\pi\_{\beta}}(\cdot|\mathbf{s})}[\phi(\mathbf{s},\mathbf{a}^{\prime})]\right|.italic\_D start\_POSTSUBSCRIPT roman\_Φ end\_POSTSUBSCRIPT ( italic\_π ( ⋅ | bold\_s ) , italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( ⋅ | bold\_s ) ) = roman\_sup start\_POSTSUBSCRIPT italic\_ϕ ∈ roman\_Φ , italic\_ϕ : italic\_S × italic\_A → blackboard\_R end\_POSTSUBSCRIPT | blackboard\_E start\_POSTSUBSCRIPT bold\_a ∼ italic\_π ( ⋅ | bold\_s ) end\_POSTSUBSCRIPT [ italic\_ϕ ( bold\_s , bold\_a ) ] - blackboard\_E start\_POSTSUBSCRIPT bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( ⋅ | bold\_s ) end\_POSTSUBSCRIPT [ italic\_ϕ ( bold\_s , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ] | . | | (21) |
Different choices for the function class ΦΦ\Phiroman\_Φ give rise to different divergences. For example, when ΦΦ\Phiroman\_Φ consists of all functions with a unit Hilbert norm in a reproducing kernel Hilbert space (RKHS), DΦsubscript𝐷ΦD\_{\Phi}italic\_D start\_POSTSUBSCRIPT roman\_Φ end\_POSTSUBSCRIPT represents the maximum mean discrepancy (MMD) distance, which can alternatively be computed directly from samples as following:
| | | |
| --- | --- | --- |
| | MMD2(π(⋅|𝐬),πβ(⋅|𝐬))=𝔼𝐚∼π(⋅|𝐬),𝐚′∼π(⋅|𝐬)[k(𝐚,𝐚′)]−2𝔼𝐚∼π(⋅|𝐬),𝐚′∼πβ(⋅|𝐬)[k(𝐚,𝐚′)]+𝔼𝐚∼πβ(⋅|𝐬),𝐚′∼πβ(⋅|𝐬)[k(𝐚,𝐚′)],\text{MMD}^{2}(\pi(\cdot|\mathbf{s}),{\pi\_{\beta}}(\cdot|\mathbf{s}))=\mathbb{E}\_{\mathbf{a}\sim\pi(\cdot|\mathbf{s}),\mathbf{a}^{\prime}\sim\pi(\cdot|\mathbf{s})}\left[k(\mathbf{a},\mathbf{a}^{\prime})\right]-\\
2\mathbb{E}\_{\mathbf{a}\sim\pi(\cdot|\mathbf{s}),\mathbf{a}^{\prime}\sim{\pi\_{\beta}}(\cdot|\mathbf{s})}\left[k(\mathbf{a},\mathbf{a}^{\prime})\right]+\mathbb{E}\_{\mathbf{a}\sim{\pi\_{\beta}}(\cdot|\mathbf{s}),\mathbf{a}^{\prime}\sim{\pi\_{\beta}}(\cdot|\mathbf{s})}\left[k(\mathbf{a},\mathbf{a}^{\prime})\right],start\_ROW start\_CELL MMD start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_π ( ⋅ | bold\_s ) , italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( ⋅ | bold\_s ) ) = blackboard\_E start\_POSTSUBSCRIPT bold\_a ∼ italic\_π ( ⋅ | bold\_s ) , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_π ( ⋅ | bold\_s ) end\_POSTSUBSCRIPT [ italic\_k ( bold\_a , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ] - end\_CELL end\_ROW start\_ROW start\_CELL 2 blackboard\_E start\_POSTSUBSCRIPT bold\_a ∼ italic\_π ( ⋅ | bold\_s ) , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( ⋅ | bold\_s ) end\_POSTSUBSCRIPT [ italic\_k ( bold\_a , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ] + blackboard\_E start\_POSTSUBSCRIPT bold\_a ∼ italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( ⋅ | bold\_s ) , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∼ italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( ⋅ | bold\_s ) end\_POSTSUBSCRIPT [ italic\_k ( bold\_a , bold\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ] , end\_CELL end\_ROW | |
where k(⋅,⋅)𝑘⋅⋅k(\cdot,\cdot)italic\_k ( ⋅ , ⋅ ) represents any radial basis kernel, such as the Gaussian or Laplacian kernel. As another example, when ΦΦ\Phiroman\_Φ consists of all functions ϕitalic-ϕ\phiitalic\_ϕ with a unit Lipschitz constant, then DΦsubscript𝐷ΦD\_{\Phi}italic\_D start\_POSTSUBSCRIPT roman\_Φ end\_POSTSUBSCRIPT is equivalent to the first order Wasserstein (W1subscript𝑊1W\_{1}italic\_W start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT) or Earth-mover’s distance:
| | | | |
| --- | --- | --- | --- |
| | W1(π(⋅|𝐬),πβ(⋅|𝐬))=supf,‖f‖L≤1|𝔼𝐚∼π(⋅|𝐬)[f(𝐚)]−𝔼𝐚∼πβ(⋅|𝐬[f(𝐚)]|.W\_{1}(\pi(\cdot|\mathbf{s}),{\pi\_{\beta}}(\cdot|\mathbf{s}))=\sup\_{f,||f||\_{L}\leq 1}\left|\mathbb{E}\_{\mathbf{a}\sim\pi(\cdot|\mathbf{s})}[f(\mathbf{a})]-\mathbb{E}\_{\mathbf{a}\sim{\pi\_{\beta}}(\cdot|\mathbf{s}}[f(\mathbf{a})]\right|.italic\_W start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ( italic\_π ( ⋅ | bold\_s ) , italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( ⋅ | bold\_s ) ) = roman\_sup start\_POSTSUBSCRIPT italic\_f , | | italic\_f | | start\_POSTSUBSCRIPT italic\_L end\_POSTSUBSCRIPT ≤ 1 end\_POSTSUBSCRIPT | blackboard\_E start\_POSTSUBSCRIPT bold\_a ∼ italic\_π ( ⋅ | bold\_s ) end\_POSTSUBSCRIPT [ italic\_f ( bold\_a ) ] - blackboard\_E start\_POSTSUBSCRIPT bold\_a ∼ italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( ⋅ | bold\_s end\_POSTSUBSCRIPT [ italic\_f ( bold\_a ) ] | . | | (22) |
These metrics are appealing because they can often be estimated with non-parametric estimators. We refer the readers to Sriperumbudur et al., ([2009](#bib.bib146)) for a detailed discussion on IPMs. BEAR (Kumar et al.,, [2019](#bib.bib77)) uses the MMD distance to represent the policy constraint, while [Wu et al., 2019a](#bib.bib179) evaluates an instantiation of the first order Wasserstein distance.
#### Tradeoffs between different constraint types.
The KL-divergence constraint, as well as other f𝑓fitalic\_f-divergences, represent a particularly convenient class of constraints, since they readily lend themselves to be used in a policy penalty algorithm by simply augmenting the reward function according to r¯(𝐬,𝐚)=r(𝐬,𝐚)−αf(π(𝐚|𝐬)/πβ(𝐚|𝐬))¯𝑟𝐬𝐚𝑟𝐬𝐚𝛼𝑓𝜋conditional𝐚𝐬subscript𝜋𝛽conditional𝐚𝐬\bar{r}(\mathbf{s},\mathbf{a})=r(\mathbf{s},\mathbf{a})-\alpha f(\pi(\mathbf{a}|\mathbf{s})/{\pi\_{\beta}}(\mathbf{a}|\mathbf{s}))over¯ start\_ARG italic\_r end\_ARG ( bold\_s , bold\_a ) = italic\_r ( bold\_s , bold\_a ) - italic\_α italic\_f ( italic\_π ( bold\_a | bold\_s ) / italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) ), with the important special case of the KL-divergence corresponding to r¯(𝐬,𝐚)=r(𝐬,𝐚)+αlogπβ(𝐚|𝐬)−αlogπ(𝐚|𝐬)¯𝑟𝐬𝐚𝑟𝐬𝐚𝛼subscript𝜋𝛽conditional𝐚𝐬𝛼𝜋conditional𝐚𝐬\bar{r}(\mathbf{s},\mathbf{a})=r(\mathbf{s},\mathbf{a})+\alpha\log{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})-\alpha\log\pi(\mathbf{a}|\mathbf{s})over¯ start\_ARG italic\_r end\_ARG ( bold\_s , bold\_a ) = italic\_r ( bold\_s , bold\_a ) + italic\_α roman\_log italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) - italic\_α roman\_log italic\_π ( bold\_a | bold\_s ), with the last term subsumed inside the entropy regularizer ℋ(π(⋅|𝐬))\mathcal{H}(\pi(\cdot|\mathbf{s}))caligraphic\_H ( italic\_π ( ⋅ | bold\_s ) ) when using a maximum entropy reinforcement learning algorithm (Levine,, [2018](#bib.bib88)). However, KL-divergences and other f𝑓fitalic\_f-divergences are not necessarily ideal for offline reinforcement learning. Consider a setting where the behavior policy is uniformly random. In this case, offline reinforcement learning should, in principle, be very effective. In fact, even standard actor-critic algorithms *without* any policy constraints can perform very well in this setting, since when all actions have equal probability, there are no out-of-distribution actions (Kumar et al.,, [2019](#bib.bib77)). However, a KL-divergence constraint in this setting would restrict the learned policy π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ) from being too concentrated, requiring the constrained algorithm to produce a highly stochastic (and therefore highly suboptimal) policy. Intuitively, an effective policy constraint should prevent the learned policy π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ) from going *outside* the set of actions that have a high probability in the data, but would not prevent it from concentrating around a *subset* of high-probability actions. This suggests that a KL-divergence constraint is not in general the ideal choice.

Figure 3: A comparison of support and distribution constraints on a simple 1D lineworld from Kumar, ([2019](#bib.bib76)). The task requires moving to the goal location (marked as ’G’) starting from ’S’. The behavior policy strongly prefers the left action at each state (b), such that distribution constraint is unable to find the optimal policy (c), whereas support-constraint can successfully obtain the optimal policy (d). We refer to Kumar, ([2019](#bib.bib76)) for further discussion.
In contrast, as argued by Kumar et al., ([2019](#bib.bib77)) and Laroche et al., ([2017](#bib.bib84)), restricting the support of the learned policy π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ) to the support of the behavior distribution πβ(𝐚|𝐬)subscript𝜋𝛽conditional𝐚𝐬{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) may be sufficient to theoretically and empirically obtain an effective offline RL method. In the previous example, if only the support of the learned policy is constrained to lie in the support of the behavior policy, the learned policy can be deterministic and optimal. However, when the behavior policy does not have full support, a support constraint will still prevent out-of-distribution actions. Kumar, ([2019](#bib.bib76)) formalize this intuition and present a simple example of a 1D-lineworld environment, which we reproduce in Figure [3](#S4.F3 "Figure 3 ‣ Tradeoffs between different constraint types. ‣ 4.3 Policy Constraints for Off-Policy Evaluation and Improvement ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems") where constraining distributions can lead to highly suboptimal behavior, that strongly prefers moving leftwards, thus reaching the goal location with only very low likelihood over the course of multiple intermediate steps of decision making, while support constraints still allow for learning the optimal policy, since they are agnostic to the probability density function in this setting.
Which divergence metrics can be used to constrain policy supports? In a finite sample setting, the family of f𝑓fitalic\_f-divergences measures the difference in the probability densities, which makes it unsuitable for support matching. In an MDP with a discrete action-space, a simple choice for constraining the support is a metric that penalizes the total amount of probability mass on out-of-distribution actions under the π𝜋\piitalic\_π distribution, as shown below:
| | | | |
| --- | --- | --- | --- |
| | Dsupport,ε(π(⋅|s),πβ(⋅|s))=∑𝐚∈A,πβ(𝐚|𝐬)≤επ(𝐚|𝐬).D\_{\text{support},\varepsilon}(\pi(\cdot|s),{\pi\_{\beta}}(\cdot|s))=\sum\_{\mathbf{a}\in A,{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})\leq\varepsilon}\pi(\mathbf{a}|\mathbf{s}).italic\_D start\_POSTSUBSCRIPT support , italic\_ε end\_POSTSUBSCRIPT ( italic\_π ( ⋅ | italic\_s ) , italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( ⋅ | italic\_s ) ) = ∑ start\_POSTSUBSCRIPT bold\_a ∈ italic\_A , italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) ≤ italic\_ε end\_POSTSUBSCRIPT italic\_π ( bold\_a | bold\_s ) . | | (23) |
This metric has been used in the context of off-policy bandits (Sachdeva et al.,, [2020](#bib.bib134)), but not in the context of offline reinforcement learning.
When the underlying MDP has a continuous action space and exact support cannot be estimated, Kumar et al., ([2019](#bib.bib77)) show that a finite sample estimate of the MMD distance can be used to approximately constrain supports of the learned policy. Similar to f𝑓fitalic\_f-divergences, the MMD distance still converges to a divergence estimate between the distribution functions of its arguments. However, Kumar et al., ([2019](#bib.bib77)) show experimentally (Figure 7) that, in the finite sample setting, MMD resembles a support constraining metric. The intuition is that the MMD distance does not depend on the specific densities of the behavior distribution or the policy, and can be computed via a kernel-based distance on samples from each distribution, thus making it just sufficient enough for constraining supports when finite samples are used. Some variants of the asymmetric f𝑓fitalic\_f-divergence that asymmetrically penalize the density ratios π(⋅|𝐬)/πβ(⋅|𝐬)\pi(\cdot|\mathbf{s})/{\pi\_{\beta}}(\cdot|\mathbf{s})italic\_π ( ⋅ | bold\_s ) / italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( ⋅ | bold\_s ) can also be used to approximately constrain supports ([Wu et al., 2019b,](#bib.bib180) ; [Wu et al., 2019a,](#bib.bib179) ).
###
4.4 Offline Approximate Dynamic Programming with Uncertainty Estimation
As discussed in Section [4.3](#S4.SS3 "4.3 Policy Constraints for Off-Policy Evaluation and Improvement ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), policy constraints can prevent out-of-distribution action queries to the Q-function when computing the target values. Aside from directly constraining the policy, we can also mitigate the effect of out-of-distribution actions by making the Q-function resilient to such queries, via effective uncertainty estimation. The intuition behind these uncertainty-based methods is that, if we can estimate the *epistemic* uncertainty of the Q-function, we expect this uncertainty to be substantially larger for out-of-distribution actions, and can therefore utilize these uncertainty estimates to produce conservative target values in such cases. In this section, we briefly review such uncertainty-aware formulations of offline approximate dynamic programming methods.
Formally, such methods require learning an uncertainty set or distribution over possible Q-functions from the dataset 𝒟𝒟\mathcal{D}caligraphic\_D, which we can denote 𝒫𝒟(Qπ)subscript𝒫𝒟superscript𝑄𝜋\mathcal{P}\_{\mathcal{D}}(Q^{\pi})caligraphic\_P start\_POSTSUBSCRIPT caligraphic\_D end\_POSTSUBSCRIPT ( italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ). This can utilize explicit modeling of confidence sets, such as by maintaining upper and lower confidence bounds (Jaksch et al.,, [2010](#bib.bib54)), or directly representing samples from the distribution over Q-functions, for example via bootstrap ensembles (Osband et al.,, [2016](#bib.bib114)), or by parameterizing the distribution over Q-values using a known (e.g., Gaussian) distribution with learned parameters (O’Donoghue et al.,, [2018](#bib.bib116)). If calibrated uncertainty sets can be learned, then we can improve the policy using a conservative estimate of the Q-function, which corresponds to the following policy improvement objective:
| | | | |
| --- | --- | --- | --- |
| | πk+1←argmaxπ𝔼𝐬∼𝒟[𝔼𝐚∼π(𝐚|𝐬)[𝔼Qk+1π∼𝒫𝒟(Qπ)[Qk+1π(𝐬,𝐚)]−αUnc(𝒫𝒟(Qπ))]]⏟conservative estimate,←subscript𝜋𝑘1subscript𝜋subscript𝔼similar-to𝐬𝒟subscript⏟delimited-[]subscript𝔼similar-to𝐚𝜋conditional𝐚𝐬delimited-[]subscript𝔼similar-tosubscriptsuperscript𝑄𝜋𝑘1subscript𝒫𝒟superscript𝑄𝜋delimited-[]subscriptsuperscript𝑄𝜋𝑘1𝐬𝐚𝛼Uncsubscript𝒫𝒟superscript𝑄𝜋conservative estimate\pi\_{k+1}\leftarrow\arg\max\_{\pi}\mathbb{E}\_{\mathbf{s}\sim\mathcal{D}}\underbrace{\left[\mathbb{E}\_{\mathbf{a}\sim\pi(\mathbf{a}|\mathbf{s})}\left[\mathbb{E}\_{Q^{\pi}\_{k+1}\sim\mathcal{P}\_{\mathcal{D}}(Q^{\pi})}[Q^{\pi}\_{k+1}(\mathbf{s},\mathbf{a})]-\alpha\text{Unc}(\mathcal{P}\_{\mathcal{D}}(Q^{\pi}))\right]\right]}\_{\text{conservative estimate}},italic\_π start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT ← roman\_arg roman\_max start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT blackboard\_E start\_POSTSUBSCRIPT bold\_s ∼ caligraphic\_D end\_POSTSUBSCRIPT under⏟ start\_ARG [ blackboard\_E start\_POSTSUBSCRIPT bold\_a ∼ italic\_π ( bold\_a | bold\_s ) end\_POSTSUBSCRIPT [ blackboard\_E start\_POSTSUBSCRIPT italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT ∼ caligraphic\_P start\_POSTSUBSCRIPT caligraphic\_D end\_POSTSUBSCRIPT ( italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ) end\_POSTSUBSCRIPT [ italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_k + 1 end\_POSTSUBSCRIPT ( bold\_s , bold\_a ) ] - italic\_α Unc ( caligraphic\_P start\_POSTSUBSCRIPT caligraphic\_D end\_POSTSUBSCRIPT ( italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ) ) ] ] end\_ARG start\_POSTSUBSCRIPT conservative estimate end\_POSTSUBSCRIPT , | | (24) |
where Unc denotes a metric of uncertainty, such that subtracting it provides a conservative estimate of the actual Q-function. The choice of the uncertainty metric Unc depends on the particular choice of uncertainty estimation method, and we discuss this next.
When bootstrap ensembles333It is known in the deep learning literature that, although a true bootstrap ensemble requires resampling the dataset with replacement for every bootstrap, omitting this resampling step and simply using different random initialization for every neural network in the ensemble is sufficient to differentiate the models and provide reasonable uncertainty estimates (Osband et al.,, [2016](#bib.bib114)). In fact, previous work has generally observed little benefit from employ proper resampling (Osband et al.,, [2016](#bib.bib114)), which technically means that all of these methods simply employ regular (non-bootstrapped) ensembles, although the quality of their uncertainty estimates is empirically similar. are used to represent the Q-function, as is common in prior work (Osband et al.,, [2016](#bib.bib114); Eysenbach et al.,, [2017](#bib.bib25); Kumar et al.,, [2019](#bib.bib77); Agarwal et al.,, [2019](#bib.bib1)), typical choices of ‘Unc’ include variance across Q-value predictions of an ensemble of Q-functions (Kumar et al.,, [2019](#bib.bib77)), or maximizing the Q-value with respect to the worst case or all convex combinations of the Q-value predictions of an ensemble (Agarwal et al.,, [2019](#bib.bib1)). When a parametric distribution, such as a Gaussian, is used to represent the Q-function (O’Donoghue et al.,, [2018](#bib.bib116)), a choice of Unc, previously used for exploration, is to use a standard deviation above the mean as an optimistic Q-value estimate for policy improvement. When translated to offline RL, an analogous estimate would be to use conservative Q-values, such as one standard deviation below the mean, for policy improvement.
###
4.5 Conservative Q-Learning and Pessimistic Value Functions
As an alternative to imposing constraints on the policy in an actor-critic framework, an effective approach to offline RL can also be developed by regularizing the value function or Q-function directly to avoid overestimation for out-of-distribution actions ([Kumar et al., 2020b,](#bib.bib80) ). This approach can be appealing for several reasons, such as being applicable to both actor-critic and Q-learning methods, even when a policy is not represented explicitly, and avoiding the need for explicit modeling of the behavior policy. The practical effect of such a method resembles the conservative estimate in Equation ([24](#S4.E24 "24 ‣ 4.4 Offline Approximate Dynamic Programming with Uncertainty Estimation ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems")), but without explicit uncertainty estimation. As discussed by [Kumar et al., 2020b](#bib.bib80) , one simple way to ensure a conservative Q-function is to modify the objective for fitting the Q-function parameters ϕitalic-ϕ\phiitalic\_ϕ on Line 14 of the Q-learning method in Algorithm [2](#alg2 "Algorithm 2 ‣ Approximate dynamic programming. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems") or Line 14 of the actor-critic method in Algorithm [3](#alg3 "Algorithm 3 ‣ Actor-critic algorithms. ‣ 2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems") to add an additional *conservative penalty* term, yielding a modified objective
| | | |
| --- | --- | --- |
| | ℰ~(B,ϕ)=α𝒞(B,ϕ)+ℰ(B,ϕ),~ℰ𝐵italic-ϕ𝛼𝒞𝐵italic-ϕℰ𝐵italic-ϕ\tilde{\mathcal{E}}(B,\phi)=\alpha\mathcal{C}(B,\phi)+\mathcal{E}(B,\phi),over~ start\_ARG caligraphic\_E end\_ARG ( italic\_B , italic\_ϕ ) = italic\_α caligraphic\_C ( italic\_B , italic\_ϕ ) + caligraphic\_E ( italic\_B , italic\_ϕ ) , | |
where different choices for 𝒞(B,ϕ)𝒞𝐵italic-ϕ\mathcal{C}(B,\phi)caligraphic\_C ( italic\_B , italic\_ϕ ) lead to algorithms with different properties. A basic example is the penalty 𝒞CQL0(B,ϕ)=𝔼𝐬∼B,𝐚∼μ(𝐚|𝐬)[Qϕ(𝐬,𝐚)]subscript𝒞subscriptCQL0𝐵italic-ϕsubscript𝔼formulae-sequencesimilar-to𝐬𝐵similar-to𝐚𝜇conditional𝐚𝐬delimited-[]subscript𝑄italic-ϕ𝐬𝐚\mathcal{C}\_{\text{CQL}\_{0}}(B,\phi)=\mathbb{E}\_{\mathbf{s}\sim B,\mathbf{a}\sim\mu(\mathbf{a}|\mathbf{s})}[Q\_{\phi}(\mathbf{s},\mathbf{a})]caligraphic\_C start\_POSTSUBSCRIPT CQL start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_B , italic\_ϕ ) = blackboard\_E start\_POSTSUBSCRIPT bold\_s ∼ italic\_B , bold\_a ∼ italic\_μ ( bold\_a | bold\_s ) end\_POSTSUBSCRIPT [ italic\_Q start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT ( bold\_s , bold\_a ) ]. Intuitively, this penalty minimizes the Q-values at all of the states in the buffer, for actions selected according to the distribution μ(𝐚|𝐬)𝜇conditional𝐚𝐬\mu(\mathbf{a}|\mathbf{s})italic\_μ ( bold\_a | bold\_s ). If μ(𝐚|𝐬)𝜇conditional𝐚𝐬\mu(\mathbf{a}|\mathbf{s})italic\_μ ( bold\_a | bold\_s ) is chosen *adversarially*, for example by *maximizing* the penalty 𝒞CQL0(B,ϕ)subscript𝒞subscriptCQL0𝐵italic-ϕ\mathcal{C}\_{\text{CQL}\_{0}}(B,\phi)caligraphic\_C start\_POSTSUBSCRIPT CQL start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_B , italic\_ϕ ), the effect is that the conservative penalty will push down on high Q-values. Note that the standard Bellman error term ℰ(B,ϕ)ℰ𝐵italic-ϕ\mathcal{E}(B,\phi)caligraphic\_E ( italic\_B , italic\_ϕ ) will still enforce that the Q-values obey the Bellman backup for *in-distribution* actions. Therefore, if the penalty weight α𝛼\alphaitalic\_α is chosen appropriately, the conservative penalty should mostly push down on Q-values for out-of-distribution actions for which the Q-values are (potentially erroneously) high, since in-distribution actions would be “anchored” by the Bellman error ℰ(B,ϕ)ℰ𝐵italic-ϕ\mathcal{E}(B,\phi)caligraphic\_E ( italic\_B , italic\_ϕ ). Indeed, [Kumar et al., 2020b](#bib.bib80) show that, for an appropriately chosen value of α𝛼\alphaitalic\_α, a Q-function trained with this conservative penalty will represent a *lower bound* on the true Q-function Q(𝐬,𝐚)𝑄𝐬𝐚Q(\mathbf{s},\mathbf{a})italic\_Q ( bold\_s , bold\_a ) for the current policy, both in theory and in practice. This results in a provably conservative Q-learning or actor-critic algorithm.
A simple and practical choice for μ(𝐚,𝐬)𝜇𝐚𝐬\mu(\mathbf{a},\mathbf{s})italic\_μ ( bold\_a , bold\_s ) is to use a regularized adversarial objective, such that
| | | |
| --- | --- | --- |
| | μ=argmaxμ𝔼𝐬∼𝒟[𝔼𝐚∼μ(𝐚|𝐬)[Qϕ(𝐬,𝐚)]+ℋ(μ(⋅|𝐬))].\mu=\arg\max\_{\mu}\mathbb{E}\_{\mathbf{s}\sim\mathcal{D}}\left[\mathbb{E}\_{\mathbf{a}\sim\mu(\mathbf{a}|\mathbf{s})}[Q\_{\phi}(\mathbf{s},\mathbf{a})]+\mathcal{H}(\mu(\cdot|\mathbf{s}))\right].italic\_μ = roman\_arg roman\_max start\_POSTSUBSCRIPT italic\_μ end\_POSTSUBSCRIPT blackboard\_E start\_POSTSUBSCRIPT bold\_s ∼ caligraphic\_D end\_POSTSUBSCRIPT [ blackboard\_E start\_POSTSUBSCRIPT bold\_a ∼ italic\_μ ( bold\_a | bold\_s ) end\_POSTSUBSCRIPT [ italic\_Q start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT ( bold\_s , bold\_a ) ] + caligraphic\_H ( italic\_μ ( ⋅ | bold\_s ) ) ] . | |
This choice inherits the lower bound guarantee above, and is simple to compute. The solution to the above optimization is given by μ(𝐚|𝐬)∝exp(Q(𝐬,𝐚))proportional-to𝜇conditional𝐚𝐬𝑄𝐬𝐚\mu(\mathbf{a}|\mathbf{s})\propto\exp(Q(\mathbf{s},\mathbf{a}))italic\_μ ( bold\_a | bold\_s ) ∝ roman\_exp ( italic\_Q ( bold\_s , bold\_a ) ), and we can express 𝒞CQL0(B,ϕ)subscript𝒞subscriptCQL0𝐵italic-ϕ\mathcal{C}\_{\text{CQL}\_{0}}(B,\phi)caligraphic\_C start\_POSTSUBSCRIPT CQL start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_B , italic\_ϕ ) under this choice of μ(𝐚|𝐬)𝜇conditional𝐚𝐬\mu(\mathbf{a}|\mathbf{s})italic\_μ ( bold\_a | bold\_s ) in closed form as 𝒞CQL0(B,ϕ)=E𝐬∼B[log∑𝐚exp(Qϕ(𝐬,𝐚))]subscript𝒞subscriptCQL0𝐵italic-ϕsubscript𝐸similar-to𝐬𝐵delimited-[]subscript𝐚subscript𝑄italic-ϕ𝐬𝐚\mathcal{C}\_{\text{CQL}\_{0}}(B,\phi)=E\_{\mathbf{s}\sim B}[\log\sum\_{\mathbf{a}}\exp(Q\_{\phi}(\mathbf{s},\mathbf{a}))]caligraphic\_C start\_POSTSUBSCRIPT CQL start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_B , italic\_ϕ ) = italic\_E start\_POSTSUBSCRIPT bold\_s ∼ italic\_B end\_POSTSUBSCRIPT [ roman\_log ∑ start\_POSTSUBSCRIPT bold\_a end\_POSTSUBSCRIPT roman\_exp ( italic\_Q start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT ( bold\_s , bold\_a ) ) ]. This expression has a simple intuitive interpretation: the log-sum-exp is dominated by the action with the largest Q-value, and hence this type of penalty tends to minimize whichever Q-value is largest at each state. When the action space is large or continuous, we can estimate this quantity by sampling and reweighting. For example, [Kumar et al., 2020b](#bib.bib80) propose sampling from the current actor (in an actor-critic algorithm) and computing importance weights.
While the approach described above has the appealing property of providing a learned Q-function Qϕ(𝐬,𝐚)subscript𝑄italic-ϕ𝐬𝐚Q\_{\phi}(\mathbf{s},\mathbf{a})italic\_Q start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT ( bold\_s , bold\_a ) that lower bounds the true Q-function, and therefore provably avoids overestimation, it tends to suffer from excessive *underestimation* – that is, it is too conservative. A simple modification, which we refer to as 𝒞CQL1(B,ϕ)subscript𝒞subscriptCQL1𝐵italic-ϕ\mathcal{C}\_{\text{CQL}\_{1}}(B,\phi)caligraphic\_C start\_POSTSUBSCRIPT CQL start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_B , italic\_ϕ ), is to also add a value *maximization* term to balance out the minimization term under μ(𝐚|𝐬)𝜇conditional𝐚𝐬\mu(\mathbf{a}|\mathbf{s})italic\_μ ( bold\_a | bold\_s ), yielding the following expression:
| | | |
| --- | --- | --- |
| | 𝒞CQL1(B,ϕ)=𝔼𝐬∼B,𝐚∼μ(𝐚|𝐬)[Qϕ(𝐬,𝐚)]−𝔼(𝐬,𝐚)∼B[Qϕ(𝐬,𝐚)].subscript𝒞subscriptCQL1𝐵italic-ϕsubscript𝔼formulae-sequencesimilar-to𝐬𝐵similar-to𝐚𝜇conditional𝐚𝐬delimited-[]subscript𝑄italic-ϕ𝐬𝐚subscript𝔼similar-to𝐬𝐚𝐵delimited-[]subscript𝑄italic-ϕ𝐬𝐚\mathcal{C}\_{\text{CQL}\_{1}}(B,\phi)=\mathbb{E}\_{\mathbf{s}\sim B,\mathbf{a}\sim\mu(\mathbf{a}|\mathbf{s})}[Q\_{\phi}(\mathbf{s},\mathbf{a})]-\mathbb{E}\_{(\mathbf{s},\mathbf{a})\sim B}[Q\_{\phi}(\mathbf{s},\mathbf{a})].caligraphic\_C start\_POSTSUBSCRIPT CQL start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_B , italic\_ϕ ) = blackboard\_E start\_POSTSUBSCRIPT bold\_s ∼ italic\_B , bold\_a ∼ italic\_μ ( bold\_a | bold\_s ) end\_POSTSUBSCRIPT [ italic\_Q start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT ( bold\_s , bold\_a ) ] - blackboard\_E start\_POSTSUBSCRIPT ( bold\_s , bold\_a ) ∼ italic\_B end\_POSTSUBSCRIPT [ italic\_Q start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT ( bold\_s , bold\_a ) ] . | |
This conservative penalty minimizes Q-values under the adversarially chosen μ(𝐚|𝐬)𝜇conditional𝐚𝐬\mu(\mathbf{a}|\mathbf{s})italic\_μ ( bold\_a | bold\_s ) distribution, and *maximizes* the values for state-action tuples in the batch. Intuitively, this acts to ensure that high Q-values are only assigned to in-distribution actions. When μ(𝐚|𝐬)𝜇conditional𝐚𝐬\mu(\mathbf{a}|\mathbf{s})italic\_μ ( bold\_a | bold\_s ) is equal to the behavior policy, the penalty is zero on average. While this penalty does *not* produce a Q-function that is a pointwise lower bound on the true Q-function, it is a lower bound in expectation under the current policy, thereby still providing appealing conservatism guarantees, while substantially reducing underestimation in practice. As shown by [Kumar et al., 2020b](#bib.bib80) , this variant produces the best performance in practice.
###
4.6 Challenges and Open Problems
As we discussed in Section [4.1](#S4.SS1 "4.1 Off-Policy Value Function Estimation with Linear Value Functions ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), basic approximate dynamic programming algorithms can perform very poorly in the offline setting due to distributional shift, primarily due to the distributional shift of the actions due to the discrepancy between the behavior policy πβ(𝐚|𝐬)subscript𝜋𝛽conditional𝐚𝐬{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) and the current learned policy π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ), which is used in the target value calculation for the Bellman backup. We discussed how policy constraints and explicit uncertainty estimation can in principle mitigate this problem, but both approaches have a number of shortcomings.
While such explicit uncertainty-based methods are conceptually attractive, it is often very hard to obtain calibrated uncertainty estimates to evaluate 𝒫𝒟(Q^π)subscript𝒫𝒟superscript^𝑄𝜋\mathcal{P}\_{\mathcal{D}}(\hat{Q}^{\pi})caligraphic\_P start\_POSTSUBSCRIPT caligraphic\_D end\_POSTSUBSCRIPT ( over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ) and Unc in practice, especially with modern high-capacity function approximators, such as deep neural networks.
In practice, policy constraint and conservative value function methods so far seem to outperform pure uncertainty-based methods (Fujimoto et al.,, [2018](#bib.bib31)). This might at first appear surprising, since uncertainty estimation has been a very widely used tool in another subfield of reinforcement learning – exploration. In exploration, acting *optimistically* with respect to estimated uncertainty, or utilizing posterior sampling, has been shown to be effective in practice (Osband and Van Roy,, [2017](#bib.bib115)). However, in the setting of offline reinforcement learning, where we instead must act *conservatively* with respect to the uncertainty set, the demands on the fidelity of the uncertainty estimates are much higher. Exploration algorithms only require the uncertainty set to include good behavior – in addition, potentially, to a lot of bad behavior. However, offline reinforcement learning requires the uncertainty set to directly capture the trustworthiness of the Q-function, which is a much higher bar. In practice, imperfect uncertainty sets can give rise to either overly conservative estimates, which impedes learning, or overly loose estimates, which results in exploitation of out-of-distribution actions. Of course, the relatively performance of policy constraint and uncertainty-based methods may change in the future, as the community develops better epistemic uncertainty estimation techniques or better algorithms to incorporate more suitable distribution metrics into offline RL.
Policy constraint methods do however suffer from a number of challenges. First, most of these methods fit an estimated model of the behavior policy πβ(⋅|𝐬){\pi\_{\beta}}(\cdot|\mathbf{s})italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( ⋅ | bold\_s ) from the dataset and constrain the learned policy π(⋅|𝐬)\pi(\cdot|\mathbf{s})italic\_π ( ⋅ | bold\_s ) against this estimated behavior policy. This means that the performance of these algorithms is limited by the accuracy of estimation of the behavior policy, which may be hard in scenarios with highly multimodal behaviors, as is the case in practice with real-world problems. For example, if a unimodal Gaussian policy is used to model a highly multi-modal action distribution, averaging across different modes while fitting this behavior policy may actually be unable to prevent the learned policy, π𝜋\piitalic\_π, from choosing out-of-distribution actions. Methods that enforce the constraint *implicitly*, using only samples and without explicit behavior policy estimation, are a promising way to alleviate this limitation (Peng et al.,, [2019](#bib.bib121); Nair et al.,, [2020](#bib.bib109)).
Even when the behavior policy can be estimated exactly, a number of these methods still suffer from the undesirable effects of function approximation. When neural networks are used to represent Q-functions, undesirable effects of function approximation may prevent the Q-function from learning meaningful values even when out-of-distribution actions are controlled for in the target values. For example, when the size of the dataset is limited, approximate dynamic programming algorithms tend to overfit on the small dataset, and this error is then accumulated via Bellman backups (Fu et al.,, [2019](#bib.bib30); [Kumar et al., 2020a,](#bib.bib79) ). Moreover, if the dataset state-action distribution is narrow, neural network training may only provide brittle, non-generalizable solutions. Unlike online reinforcement learning, where accidental overestimation errors arising due to function approximation can be corrected via active data collection, these errors cumulatively build up and affect future iterates in an offline RL setting.
Methods that estimate a conservative or pessimistic value function ([Kumar et al., 2020b,](#bib.bib80) ) present a somewhat different set of tradeoffs. While they avoid issues associated with estimating the behavior policy and can effectively avoid overestimation, they can instead suffer from underestimation and a form of overfitting: if the dataset is small, the conservative regularizer can assign values that are too low to actions that are undersampled in the dataset. Indeed, *excessive* pessimism may be one of the bigger issues preventing better performance on current benchmarks, and an important open question is how to dynamically modulate the degree of conservatism to balance the risks of overestimation against the downside of avoiding any unfamiliar action.
A further issue that afflicts both constraint-based and conservative methods is that, while the Q-function is never evaluated on out-of-distribution states during training, the Q-function is still affected by the training set state distribution dπβ(𝐬)superscript𝑑subscript𝜋𝛽𝐬d^{{\pi\_{\beta}}}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ), and this is not accounted for in current offline learning methods. For instance, when function approximation couples the Q-value at two distinct states with very different densities under dπβ(𝐬)superscript𝑑subscript𝜋𝛽𝐬d^{{\pi\_{\beta}}}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ), dynamic programming with function approximation may give rise to incorrect solutions on the state that has a lower probability dπβ(𝐬)superscript𝑑subscript𝜋𝛽𝐬d^{{\pi\_{\beta}}}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ). A variant of this issue was noted in the standard RL setting, referred to as an absence of “corrective feedback” by [Kumar et al., 2020a](#bib.bib79) (see Kumar and Gupta, ([2020](#bib.bib78)) for a short summary), and such a problem may affect offline RL algorithms with function approximation more severely, since they have no control over the data distribution at all.
Another potential challenge for all of these offline approximate dynamic programming methods is that the degree of improvement beyond the behavior policy is restricted by error accumulation. Since the error in policy performance compounds with a factor that depends on 1/(1−γ)2≈H21superscript1𝛾2superscript𝐻21/(1-\gamma)^{2}\approx H^{2}1 / ( 1 - italic\_γ ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ≈ italic\_H start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT (Farahmand et al.,, [2010](#bib.bib26); Kumar et al.,, [2019](#bib.bib77); Kidambi et al.,, [2020](#bib.bib71)), a small divergence from the behavior policy at each step can give rise to a policy that diverges away from the behavior distribution and performs very poorly. Besides impacting training, this issue can also lead to severe *state* distribution shift at test time, which can lead the policy to perform very poorly. Therefore, policy constraints must be strong, so as to ensure that the overall error is small. This can restrict the amount of policy improvement that can be achieved. We might expect that directly constraining the state-action marginal distribution of the policy, such as the methods explored in recent work ([Nachum et al., 2019b,](#bib.bib106) ) might not suffer from the error accumulation issue, however, Kidambi et al., ([2020](#bib.bib71)) proved a lower-bound result showing that quadratic scaling with respect to horizon is inevitable in the worst case for any offline RL method.
Moreover, as previously discussed in Section [3.5](#S3.SS5 "3.5 Challenges and Open Problems ‣ 3 Offline Evaluation and Reinforcement Learning via Importance Sampling ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), representing and enforcing constraints on the state-action marginal distributions themselves requires Bellman backups, which can themselves suffer from out-of-distribution actions.
Besides the worst-case dependence on the horizon, an open question that still remains is is the development of constraints that can effectively trade off error accumulation and suboptimality of the learned policy in most “average”-case MDP instances, and can be easily enforced and optimized in practice via standard optimization techniques without requiring additional function approximators to fit the behavior policy.
5 Offline Model-Based Reinforcement Learning
---------------------------------------------
The use of predictive models can be a powerful tool for enabling effective offline reinforcement learning. Since model-based reinforcement learning algorithms primarily rely on their ability to estimate T(𝐬t+1|𝐬t,𝐚t)𝑇conditionalsubscript𝐬𝑡1subscript𝐬𝑡subscript𝐚𝑡T(\mathbf{s}\_{t+1}|\mathbf{s}\_{t},\mathbf{a}\_{t})italic\_T ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) via a parameterized model Tψ(𝐬t+1|𝐬t,𝐚t)subscript𝑇𝜓conditionalsubscript𝐬𝑡1subscript𝐬𝑡subscript𝐚𝑡T\_{\psi}(\mathbf{s}\_{t+1}|\mathbf{s}\_{t},\mathbf{a}\_{t})italic\_T start\_POSTSUBSCRIPT italic\_ψ end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ), rather than performing dynamic programming or importance sampling, they benefit from convenient and powerful supervised learning methods when fitting the model, allowing them to effectively utilize large and diverse datasets. However, like dynamic programming methods, model-based offline RL methods are not immune to the effects of distribution shift. In this section, we briefly discuss how distributional shift affects model-based reinforcement learning methods, then survey a number of works that utilize models for offline reinforcement learning, and conclude with a brief discussion of open challenges. A complete treatment of all model-based reinforcement learning work, as well as work that learns predictive models of physics but does not utilize them for control (e.g., Lerer et al., ([2016](#bib.bib87)); Battaglia et al., ([2016](#bib.bib4))), is outside the scope of this tutorial, and we focus primarily on work that performs both model-fitting and control from offline data.
###
5.1 Model Exploitation and Distribution Shift
As discussed in Section [2.1](#S2.SS1 "2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), the model in a model-based RL algorithm can be utilized either for planning (including online, via MPC) or for training a policy. In both cases, the model must provide accurate predictions for state-action tuples that are more optimal with respect to the current task. That is, the model will be queried at 𝐬∼dπ(𝐬)similar-to𝐬superscript𝑑𝜋𝐬\mathbf{s}\sim d^{\pi}(\mathbf{s})bold\_s ∼ italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ), where π𝜋\piitalic\_π is either an explicit policy, or an implicit policy produced by running planning under the model. In general dπ(𝐬)≠dπβ(𝐬)superscript𝑑𝜋𝐬superscript𝑑subscript𝜋𝛽𝐬d^{\pi}(\mathbf{s})\neq d^{\pi\_{\beta}}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ) ≠ italic\_d start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( bold\_s ), which means that the model is itself susceptible to distribution shift. In fact, the model suffers from distribution shift both in terms of the state distribution dπ(𝐬)superscript𝑑𝜋𝐬d^{\pi}(\mathbf{s})italic\_d start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( bold\_s ), and the action distribution π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ).
Since the policy (or action sequence, in the case of planning) is optimized to obtain the highest possible expected reward under the current model, this optimization process can lead to the policy *exploiting* the model, intentionally producing out-of-distribution states and actions at which the model Tψ(𝐬t+1|𝐬t,𝐚t)subscript𝑇𝜓conditionalsubscript𝐬𝑡1subscript𝐬𝑡subscript𝐚𝑡T\_{\psi}(\mathbf{s}\_{t+1}|\mathbf{s}\_{t},\mathbf{a}\_{t})italic\_T start\_POSTSUBSCRIPT italic\_ψ end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) erroneously predicts successor states 𝐬t+1subscript𝐬𝑡1\mathbf{s}\_{t+1}bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT that lead to higher returns than the actual successor states that would be obtained in the real MDP. This *model exploitation* problem can lead to policies that produce substantially worse performance in the real MDP than what was predicted under the model. Prior work in *online* model-based RL has sought to address this issue primarily via uncertainty estimation, analogously to the uncertainty-based methods discussed in Section [4.4](#S4.SS4 "4.4 Offline Approximate Dynamic Programming with Uncertainty Estimation ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), but this time modeling the epistemic uncertainty of the learned model Tψ(𝐬t+1|𝐬t,𝐚t)subscript𝑇𝜓conditionalsubscript𝐬𝑡1subscript𝐬𝑡subscript𝐚𝑡T\_{\psi}(\mathbf{s}\_{t+1}|\mathbf{s}\_{t},\mathbf{a}\_{t})italic\_T start\_POSTSUBSCRIPT italic\_ψ end\_POSTSUBSCRIPT ( bold\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | bold\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , bold\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ). In low-dimensional MDPs, such uncertainty estimates can be produced by means of Bayesian models such as Gaussian processes (Deisenroth and Rasmussen,, [2011](#bib.bib19)), while for higher-dimensional problems, Bayesian neural networks and bootstrap ensembles can be utilized (Chua et al.,, [2018](#bib.bib12); Janner et al.,, [2019](#bib.bib55)). Effective uncertainty estimation is generally considered an important component of modern model-based reinforcement learning methods, for the purpose of mitigating model exploitation.
Theoretical analysis of model-based policy learning can provide bounds on the error incurred from the distributional shift due to the divergence between the learned policy π(𝐚|𝐬)𝜋conditional𝐚𝐬\pi(\mathbf{a}|\mathbf{s})italic\_π ( bold\_a | bold\_s ) and the behavior policy πβ(𝐚|𝐬)subscript𝜋𝛽conditional𝐚𝐬{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) ([Sun et al., 2018b,](#bib.bib149) ; Luo et al.,, [2018](#bib.bib97); Janner et al.,, [2019](#bib.bib55)). This analysis resembles the distributional shift analysis provided in the DAgger example in Section [2.4](#S2.SS4 "2.4 What Makes Offline Reinforcement Learning Difficult? ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), except that now both the policy and the transition probabilities experience distributional shift. Following Janner et al., ([2019](#bib.bib55)), if we assume that the total variation distance (TVD) between the learned model Tψsubscript𝑇𝜓T\_{\psi}italic\_T start\_POSTSUBSCRIPT italic\_ψ end\_POSTSUBSCRIPT and true model T𝑇Titalic\_T is bounded by ϵm=maxt𝔼dtπDTV(Tψ(st+1|st,at)∥T(st+1|st,at))\epsilon\_{m}=\max\_{t}\mathbb{E}\_{d\_{t}^{\pi}}D\_{TV}(T\_{\psi}(s\_{t+1}|s\_{t},a\_{t})\|T(s\_{t+1}|s\_{t},a\_{t}))italic\_ϵ start\_POSTSUBSCRIPT italic\_m end\_POSTSUBSCRIPT = roman\_max start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT blackboard\_E start\_POSTSUBSCRIPT italic\_d start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT italic\_D start\_POSTSUBSCRIPT italic\_T italic\_V end\_POSTSUBSCRIPT ( italic\_T start\_POSTSUBSCRIPT italic\_ψ end\_POSTSUBSCRIPT ( italic\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ∥ italic\_T ( italic\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ), and the TVD between π𝜋\piitalic\_π and πβsubscript𝜋𝛽{\pi\_{\beta}}italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT is likewise bounded on sampled states by ϵπsubscriptitalic-ϵ𝜋\epsilon\_{\pi}italic\_ϵ start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT, then the true policy value J(π)𝐽𝜋J(\pi)italic\_J ( italic\_π ) is related to the policy estimate under the model, Jψ(π)subscript𝐽𝜓𝜋J\_{\psi}(\pi)italic\_J start\_POSTSUBSCRIPT italic\_ψ end\_POSTSUBSCRIPT ( italic\_π ), according to
| | | |
| --- | --- | --- |
| | J(π)≥Jψ(π)−[2γrmax(ϵm+2ϵπ)(1−γ)2+4rmaxϵπ1−γ].𝐽𝜋subscript𝐽𝜓𝜋delimited-[]2𝛾subscript𝑟maxsubscriptitalic-ϵ𝑚2subscriptitalic-ϵ𝜋superscript1𝛾24subscript𝑟maxsubscriptitalic-ϵ𝜋1𝛾J(\pi)\geq J\_{\psi}(\pi)-\left[\frac{2\gamma r\_{\text{max}}(\epsilon\_{m}+2\epsilon\_{\pi})}{(1-\gamma)^{2}}+\frac{4r\_{\text{max}}\epsilon\_{\pi}}{1-\gamma}\right].italic\_J ( italic\_π ) ≥ italic\_J start\_POSTSUBSCRIPT italic\_ψ end\_POSTSUBSCRIPT ( italic\_π ) - [ divide start\_ARG 2 italic\_γ italic\_r start\_POSTSUBSCRIPT max end\_POSTSUBSCRIPT ( italic\_ϵ start\_POSTSUBSCRIPT italic\_m end\_POSTSUBSCRIPT + 2 italic\_ϵ start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT ) end\_ARG start\_ARG ( 1 - italic\_γ ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT end\_ARG + divide start\_ARG 4 italic\_r start\_POSTSUBSCRIPT max end\_POSTSUBSCRIPT italic\_ϵ start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT end\_ARG start\_ARG 1 - italic\_γ end\_ARG ] . | |
Intuitively, the second term represents error accumulation due to the distribution shift in the policy, while the first term represents error accumulation due to the distribution shift in the model. The first term also includes a dependence on ϵπsubscriptitalic-ϵ𝜋\epsilon\_{\pi}italic\_ϵ start\_POSTSUBSCRIPT italic\_π end\_POSTSUBSCRIPT, because policies that diverge more from πβsubscript𝜋𝛽{\pi\_{\beta}}italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT will lead to states that are further outside of the data distribution, which in turn will make errors in the model more likely. Janner et al., ([2019](#bib.bib55)) also argue that a modified model-based RL procedure that resembles Dyna Sutton, ([1991](#bib.bib150)), where only short-horizon rollouts from the model are generated by “branching” off of states seen in the data, can mitigate this accumulation of error. This is also intuitively natural: if applying the learned model recursively leads to compounding error, then shorter rollouts should incur substantially less error.
###
5.2 Brief Survey of Model-Based Offline Reinforcement Learning
A natural and straightforward way to utilize model-based reinforcement learning algorithms in the offline setting is to simply train the model from offline data, with minimal modification to the algorithm. As with Q-learning and actor-critic methods, model-based reinforcement learning algorithms can be applied to the offline setting naïvely. Furthermore, many effective model-based reinforcement learning methods *already* take steps to limit model exploitation via a variety of uncertainty estimation methods (Deisenroth and Rasmussen,, [2011](#bib.bib19); Chua et al.,, [2018](#bib.bib12)), making them reasonably effective in the offline setting. Many such methods have been known to exhibit excellent performance in conventional off-policy settings, where additional data collection is allowed, but prior data is also utilized (Sutton,, [1991](#bib.bib150); Watter et al.,, [2015](#bib.bib177); Zhang et al.,, [2018](#bib.bib186); Hafner et al.,, [2018](#bib.bib44); Janner et al.,, [2019](#bib.bib55)). Indeed, Yu et al., ([2020](#bib.bib183)) showed that MBPO (Janner et al.,, [2019](#bib.bib55)) actually attains reasonable performance on standard offline RL settings without modification, whereas naïve dynamic programming methods (e.g., soft actor-critic (Haarnoja et al.,, [2018](#bib.bib43))) fail to learn meaningful policies without policy constraints. This suggests that model-based RL algorithms are likely to lead to an effective class of offline reinforcement learning methods.
#### Offline RL with standard model-based RL methods.
A number of recent works have also demonstrated effective offline learning of predictive models and their application to control in complex and high-dimensional domains, including settings with image observations. Some of these methods have directly utilized high-capacity predictive models on high-dimensional observations, such as images, directly employing for online trajectory optimization (i.e., MPC). Action-conditional convolutional neural networks (Oh et al.,, [2015](#bib.bib112)) have been used to provide accurate, long-term prediction of behavior in Atari games and have been combined with RL to improve sample-efficiency over model-free methods ([Kaiser et al., 2019b,](#bib.bib62) ). The *visual foresight* method involves training a video prediction model to predict future image observations for a robot, conditioned on the current image and future sequence of actions. The model is represented with a recurrent neural network, and trained on large amounts of offline data, typically collected with an uninformative randomized policy (Finn and Levine,, [2017](#bib.bib28); Ebert et al.,, [2018](#bib.bib22)). More recent work has demonstrated this approach on very large and diverse datasets, collected from multiple viewpoints, many objects, and multiple robots, suggesting a high degree of generalization, though the particular behaviors are comparatively simple, typically relocating individual objects by pushing or grasping Dasari et al., ([2019](#bib.bib17)). A number of prior works have also employed “hybrid” methods that combine elements of model-free and model-based algorithms, predicting future rewards or reward features conditioned on a sequence of future actions, but avoiding direct prediction of future observations. A number of such methods have been explored in the conventional online RL setting (Tamar et al.,, [2016](#bib.bib157); Dosovitskiy and Koltun,, [2016](#bib.bib20); Oh et al.,, [2017](#bib.bib113)), and in the offline RL setting, such techniques have been applied effectively to learning navigational policies for mobile robots from previously collected data (Kahn et al.,, [2018](#bib.bib60), [2020](#bib.bib59)).
#### Off-policy evaluation with models.
Model learning has also been explored considerably in the domain of off-policy evaluation (OPE), as a way to reduce the variance of importance sampled estimators of the sort discussed in Section [3.1](#S3.SS1 "3.1 Off-Policy Evaluation via Importance Sampling ‣ 3 Offline Evaluation and Reinforcement Learning via Importance Sampling ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"). Here, the model is used to provide a sort of baseline for the expected return inside of an importance sampled estimator, as illustrated in Equation ([6](#S3.E6 "6 ‣ 3.1 Off-Policy Evaluation via Importance Sampling ‣ 3 Offline Evaluation and Reinforcement Learning via Importance Sampling ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems")) in Section [3.1](#S3.SS1 "3.1 Off-Policy Evaluation via Importance Sampling ‣ 3 Offline Evaluation and Reinforcement Learning via Importance Sampling ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"). In these settings, the model is typically trained from offline data (Jiang and Li,, [2015](#bib.bib57); Thomas and Brunskill,, [2016](#bib.bib164); Farajtabar et al.,, [2018](#bib.bib27); Wang et al.,, [2017](#bib.bib174)).
#### Distribution and safe region constraints.
As with the policy constraint methods in Section [4.3](#S4.SS3 "4.3 Policy Constraints for Off-Policy Evaluation and Improvement ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), model-based RL algorithms can also utilize constraints imposed on the policy or planner with the learned model. Methods that guarantee Lyapunov stability (Berkenkamp et al.,, [2017](#bib.bib5)) of the learned model can be used to constrain policies within “safe” regions of the state space where the chance of failure is small. Another example of such an approach is provided by deep imitative models (DIMs) (Rhinehart et al.,, [2018](#bib.bib129)), which learn a normalizing flow model over future trajectories from offline data, conditioned on a high-dimensional observation. Like the hybrid methods described above, DIMs avoid direct prediction of observations. The learned distribution over dataset trajectories can then be used to both provide predictions for a planner, and provide a distributional constraint, preventing the planner from planning trajectories that deviate significantly from the training data, thus limiting distributional shift.
#### Conservative model-based RL algorithms.
More recently, two concurrent methods, MoREL (Kidambi et al.,, [2020](#bib.bib71)) and MOPO (Yu et al.,, [2020](#bib.bib183)), have proposed offline model-based RL algorithms that aim to utilize conservative value estimates to provide analytic bounds on performance. Conceptually, these methods resemble the conservative estimation approach described in Section [4.5](#S4.SS5 "4.5 Conservative Q-Learning and Pessimistic Value Functions ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), but instead of regularizing a value function, they modify the MDP model learned from data to induce conservative behavior. The basic principle is to provide the policy with a penalty for visiting states under the model where the model is likely to be incorrect. If the learned policy takes actions that remain in regions where the model is accurate, then the model-based estimate of the policy’s value (and its gradient) will likely be accurate also. Let u(𝐬,𝐚)𝑢𝐬𝐚u(\mathbf{s},\mathbf{a})italic\_u ( bold\_s , bold\_a ) denote an *error oracle* that provides a consistent estimate of the accuracy of the model at state-action tuple (𝐬,𝐚)𝐬𝐚(\mathbf{s},\mathbf{a})( bold\_s , bold\_a ). For example, as proposed by Yu et al., ([2020](#bib.bib183)), this oracle might satisfy the property that D(Tψ(st+1|st,at)∥T(st+1|st,at))≤u(𝐬,𝐚)D(T\_{\psi}(s\_{t+1}|s\_{t},a\_{t})\|T(s\_{t+1}|s\_{t},a\_{t}))\leq u(\mathbf{s},\mathbf{a})italic\_D ( italic\_T start\_POSTSUBSCRIPT italic\_ψ end\_POSTSUBSCRIPT ( italic\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ∥ italic\_T ( italic\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ) ≤ italic\_u ( bold\_s , bold\_a ) for some divergence measure D(⋅,⋅)𝐷⋅⋅D(\cdot,\cdot)italic\_D ( ⋅ , ⋅ ). Then, conservative behavior can be induced either by modifying the reward function to obtain a conservative reward of the form r~(𝐬,𝐚)=r(𝐬,𝐚)−λu(𝐬,𝐚)~𝑟𝐬𝐚𝑟𝐬𝐚𝜆𝑢𝐬𝐚\tilde{r}(\mathbf{s},\mathbf{a})=r(\mathbf{s},\mathbf{a})-\lambda u(\mathbf{s},\mathbf{a})over~ start\_ARG italic\_r end\_ARG ( bold\_s , bold\_a ) = italic\_r ( bold\_s , bold\_a ) - italic\_λ italic\_u ( bold\_s , bold\_a ), as in MOPO (Yu et al.,, [2020](#bib.bib183)), or by modifying the MDP under the learned model so that the agent enters an absorbing state with a low reward value when u(𝐬,𝐚)𝑢𝐬𝐚u(\mathbf{s},\mathbf{a})italic\_u ( bold\_s , bold\_a ) is below some threshold, as in MoREL (Kidambi et al.,, [2020](#bib.bib71)). In both cases, it is possible to show that the model-based estimate of the policy’s performance under the model the modified reward function or MDP structure bounds that policy’s true performance in the real MDP, providing appealing theoretical guarantees on performance. Note, however, that such approaches still require a consistent estimator for the error oracle u(𝐬,𝐚)𝑢𝐬𝐚u(\mathbf{s},\mathbf{a})italic\_u ( bold\_s , bold\_a ). Prior work has used disagreement in a bootstrap ensemble to provide this estimate, but it is not guaranteed to be consistent under sampling error, and resolving this limitation is likely to be an important direction for future work.
###
5.3 Challenges and Open Problems
Although model-based reinforcement learning appears to be a natural fit for the offline RL problem setting, current methods for fully offline model-based reinforcement learning generally rely on explicit uncertainty estimation for the model to detect and quantify distributional shift, for example by using a bootstrap ensemble. Although uncertainty estimation for models is in some ways more straightforward than uncertainty estimation for value functions, current uncertainty estimation methods still leave much to be desired, and it seems likely that offline performance of model-based RL methods can be improved substantially by carefully accounting for distributional shift.
Model-based RL methods also present their own set of challenges: while some MDPs are easy to model accurately, others can be exceedingly difficult. Modeling MDPs with very high-dimensional image observations and long horizons is a major open problem, and current predictive modeling methods generally struggle with long-horizon prediction. Hybrid methods that combine model-based and model-free learning, for example by utilizing short rollouts (Sutton,, [1991](#bib.bib150); Janner et al.,, [2019](#bib.bib55)) or avoiding prediction of full observations (Dosovitskiy and Koltun,, [2016](#bib.bib20); Oh et al.,, [2017](#bib.bib113); Kahn et al.,, [2020](#bib.bib59)) offer some promise in this area.
It is also still an open theoretical question as to whether model-based RL methods even *in theory* can improve over model-free dynamic programming algorithms. The reasoning behind this question is that, although dynamic programming methods do not *explicitly* learn a model, they essentially utilize the dataset as a “non-parametric” model. Fundamentally, both dynamic programming methods and model-based RL methods are solving *prediction* problems, with the former predicting future returns, and the latter predicting future states. Moreover, model-free methods can be modified to predict even more general quantities (Sutton et al.,, [2011](#bib.bib155)), such as return values with different discount factors, return values for different number of steps into the future, etc. In the linear function approximation case, it is known that model-based updates and fitted value iteration updates actually produce identical iterates (Vanseijen and Sutton,, [2015](#bib.bib171); Parr et al.,, [2008](#bib.bib120)), though it is unknown whether this relationship holds under non-linear function approximation. Therefore, exploring the theoretical bounds on the optimal performance of offline model-based RL methods under non-linear function approximation, as compared to offline dynamic programming methods, remains an open problem.
6 Applications and Evaluation
------------------------------
In this section, we survey and discuss evaluation methods, benchmarks, and applications for offline reinforcement learning methods. As discussed in Section [1](#S1 "1 Introduction ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), and as we will discuss further in Section [7](#S7 "7 Discussion and Perspectives ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), it is very likely that the full potential of offline reinforcement learning methods has yet to be fully realized, and perhaps the most exciting applications of such methods are still ahead of us. Nonetheless, a number of prior works have applied offline reinforcement learning in a range of challenging domains, from safety-critical real-world physical systems to large-scale learning from logged data for recommender systems. We first discuss how offline reinforcement learning algorithms have been evaluated in prior work, and then discuss specific application domains where such methods have already made an impact.
###
6.1 Evaluation and Benchmarks
While individual application domains, such as recommender systems and healthcare, discussed below, have developed particular domain-specific evaluations, the general state of benchmarking for modern offline reinforcement learning research is less well established. In the absence of well-developed evaluation protocols, one approach employed in recent work is to utilize training
data collected via a standard online reinforcement learning algorithm, using either the entire replay buffer for an off-policy algorithm for training (Kumar et al.,, [2019](#bib.bib77); Agarwal et al.,, [2019](#bib.bib1); Fujimoto et al.,, [2018](#bib.bib31)), or even data from the optimal policy. However, this evaluation setting is
rather unrealistic, since the entire point of utilizing offline reinforcement learning algorithms in the real world is to obtain a policy that is better than the best behavior in the dataset, potentially in settings where running reinforcement learning online is impractical due to cost or safety concerns. A simple compromise solution is to utilize data from a “suboptimal” online reinforcement learning run, for example by stopping the online process early, saving out the buffer, and using this buffer as the dataset for offline RL (Kumar et al.,, [2019](#bib.bib77)). However, even this formulation does not fully evaluate capabilities of offline reinforcement learning methods, and the statistics of the training data have a considerable effect on the difficult of offline RL (Fu et al.,, [2020](#bib.bib29)), including how concentrated the data distribution is around a specific set of trajectories, and how multi-modal the data is. Broader data distributions (i.e., ones where πβ(𝐚|𝐬)subscript𝜋𝛽conditional𝐚𝐬{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ) has higher entropy) are generally easier to learn from, since there are fewer out-of-distribution actions. On the other hand, highly multi-modal behavior policies can be extremely difficult to learn from for methods that require explicit estimation of πβ(𝐚|𝐬)subscript𝜋𝛽conditional𝐚𝐬{\pi\_{\beta}}(\mathbf{a}|\mathbf{s})italic\_π start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( bold\_a | bold\_s ), as discussed in Section [4.3](#S4.SS3 "4.3 Policy Constraints for Off-Policy Evaluation and Improvement ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems") and [4.6](#S4.SS6 "4.6 Challenges and Open Problems ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"). Our recently proposed set of offline reinforcement learning benchmarks aims to provide standardized datasets and simulations that cover such difficult cases (Fu et al.,, [2020](#bib.bib29)).

Figure 4: An example of exploiting compositional structure in trajectories to find shortest paths in the Maze2D environment in the D4RL benchmark suite (Fu et al.,, [2020](#bib.bib29)).
A reasonable question we might ask in regard to datasets for offline RL is: in which situations might we actually expect offline RL to yield a policy that is significantly better than *any* trajectory in the training set? While we cannot expect offline RL to discover actions that are better than any action illustrated in the data, we can expect it to effectively utilize the compositional structure inherent in any temporal process. This idea is illustrated in Figure [4](#S6.F4 "Figure 4 ‣ 6.1 Evaluation and Benchmarks ‣ 6 Applications and Evaluation ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"): if the dataset contains a subsequence illustrating a way to arrive at state 2222 from state 1111, as well as a separate subsequence illustrating how to arrive at state 3333 from state 2222, then an effective offline RL method should be able to learn how to arrive at state 3333 from state 1111, which might provide for a substantially higher final reward than any of the subsequences in the dataset. When we also consider the capacity of neural networks to generalize, we could imagine this sort of “transitive induction” taking place on a *portion* of the state variables, effectively inferring potentially optimal behavior from highly suboptimal components. This capability can be evaluated with benchmarks that explicitly provide data containing this structure, and the D4RL benchmark suite provides a range of tasks that exercise this capability (Fu et al.,, [2020](#bib.bib29)).
Accurately evaluating the performance of offline RL algorithms can be difficult, because we are typically interested in maximizing the online performance of an algorithm. When simulators are available, online evaluations can be cheaply performed within the simulator order to benchmark the performance of algorithms. Off-policy evaluation (OPE) can also be used to estimate the performance of policies without explicit online interaction, but it is an active area of research as discussed in Section [3.1](#S3.SS1 "3.1 Off-Policy Evaluation via Importance Sampling ‣ 3 Offline Evaluation and Reinforcement Learning via Importance Sampling ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"). Nevertheless, OPE is a popular tool in areas such as online advertising (Li et al.,, [2010](#bib.bib92)) or healthcare (Murphy et al.,, [2001](#bib.bib103)) where online evaluation can have significant financial or safety consequences. In certain domains, human experts can be used to assess the quality of the decision-making system. For example, Jaques et al., ([2019](#bib.bib56)) uses crowd-sourced human labeling to judge whether dialogue generated by an offline RL agent is fluent and amicable, and Raghu et al., ([2017](#bib.bib128)) evaluates using a qualitative analysis based one domain experts for sepsis treatment.
###
6.2 Applications in Robotics
Robotics is an appealing application domain for offline reinforcement learning, since RL has the potential to automate the acquisition of complex behavioral skills for robots – particularly with raw sensory observations, such as camera images – but conducting online data collection for each robotic control policy can be expensive and impractical. This is especially true for robots that must learn to act intelligently in complex open-world environments, since the challenge of robust visual perception alone already necessitates large training sets. The ImageNet Large-Scale Visual Recognition
Challenge (Russakovsky et al.,, [2015](#bib.bib133)) stipulates a training set of 1.5 million images for open-world object recognition, and it seems reasonable that the sample complexity for a robotic RL algorithm that must act in similar real-world settings should be at least of comparable size. For this reason, utilizing previous collected data can be of critical importance in robotics.
Several prior works have explored offline RL methods for learning robotic grasping, which is a particularly appealing task offline RL methods, since it requires the ability to generalize to a wide range of

Figure 5: Large-scale robotic grasping data collection. Kalashnikov et al., ([2018](#bib.bib65)) describes how a dataset of over 500,000 grasp trials collected from multiple robots was used to train a vision-based grasping policy, comparing fully offline training and online fine-tuning.
objects (Pinto and Gupta,, [2016](#bib.bib124); Levine et al.,, [2018](#bib.bib91); Kalashnikov et al.,, [2018](#bib.bib65); Zeng et al.,, [2018](#bib.bib185)). Such methods have utilized approximate dynamic programming (Kalashnikov et al.,, [2018](#bib.bib65)) (see Figure [5](#S6.F5 "Figure 5 ‣ 6.2 Applications in Robotics ‣ 6 Applications and Evaluation ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems")), as well as more domain-specific algorithms, such as a single-step bandit formulation (Pinto and Gupta,, [2016](#bib.bib124)). Outside of robotic grasping, Ebert et al., ([2018](#bib.bib22)) propose a model-based algorithm based on prediction of future video frames for learning a variety of robotic manipulation skills from offline data, while Dasari et al., ([2019](#bib.bib17)) expand on this approach with a large and diverse dataset of robotic interaction.
Cabi et al., ([2019](#bib.bib8)) propose to use reward learning from human preferences combined with offline RL to provide a user-friendly method for controlling robots for object manipulation tasks. In the domain of robotic navigation, Mo et al., ([2018](#bib.bib101)) propose a dataset of visual indoor scenes for reinforcement learning, collected via a camera-mounted-robot, and Mandlekar et al., ([2020](#bib.bib99)) proposes a dataset of human demonstrations for robotic manipulation. Kahn et al., ([2020](#bib.bib59)) discuss how a method based on prediction of future reward signals, blending elements of model-based and model-free learning, can learn effective navigation policies from data collected in the real world using a random exploration policy. Pure model-based methods in robotics typically involve training a model on real or simulated data, and then planning within the model to produce a policy that is executed on a real system. Approaches have included using Gaussian process models for controlling blimps (Ko et al.,, [2007](#bib.bib73)), and using linear regression (Koppejan and Whiteson,, [2009](#bib.bib75)) and locally-weighted Bayesian regression (Bagnell and Schneider,, [2001](#bib.bib3)) for helicopter control.
###
6.3 Applications in Healthcare
Using offline reinforcement learning in healthcare poses several unique challenges (Gottesman et al.,, [2018](#bib.bib38)). Safety is a major concern, and largely precludes any possibility of online exploration. Datasets can also be significantly biased towards serious outcomes (Gottesman et al.,, [2019](#bib.bib37)), since minor cases rarely require treatment, and can lead naïve agents to erroneous conclusions, for example that any drug treatment may cause death simply because it is not prescribed to otherwise healthy individuals.
The MIMIC-III dataset (Johnson et al.,, [2016](#bib.bib58)), which contains approximately 60K medical records from ICUs, has been influential in enabling data-driven research in healthcare treatment. Q-learning methods on this dataset has been applied to problems such as the treatment of sepsis (Raghu et al.,, [2017](#bib.bib128)) and optimizing the use of ventilators (Prasad et al.,, [2017](#bib.bib125)). Wang et al., ([2018](#bib.bib172)) apply actor-critic methods on MIMIC-III to determine drug recommendations.

Figure 6: A real-time epilepsy treatment system, train using offline reinforcement learning (Guez et al.,, [2008](#bib.bib41)).
Outside of ICU settings, offline RL applications include learning from recordings of seizure activity in mouse brains in order to determine optimal stimulation frequencies for reducing epileptic seizures (Guez et al.,, [2008](#bib.bib41)).
Offline RL has also been used for optimizing long term treatment plans. Shortreed et al., ([2011](#bib.bib141)) uses offline fitted Q-iteration for optimizing schizophrenia treatment, Nie et al., ([2019](#bib.bib110)) uses doubly-robust estimators to safely determine proper timings of medical treatments, and Tseng et al., ([2017](#bib.bib168)) uses a model-based approach for lung cancer treatment. Careful application of offline RL that can handle such challenges may offer healthcare providers powerful assistive tools for optimizing the care of patients and ultimately improving outcomes.
###
6.4 Applications in Autonomous Driving
As in healthcare, a significant barrier to applying reinforcement learning in the domain of self-driving vehicles is safety.
In the online setting, exploratory agents can select actions that lead to catastrophic failure, potentially endangering the lives of the passengers. Thus, offline RL is potentially a promising tool for enabling, safe, effective learning in autonomous driving.

Figure 7: A road following system trained end-to-end via reinforcement learning (Kendall et al.,, [2019](#bib.bib70)).
While offline RL has not yet found significant application in actual real-world self-driving vehicles (Yurtsever et al.,, [2020](#bib.bib184)), learning-based approaches have been gaining in popularity.
RobotCar (Maddern et al.,, [2017](#bib.bib98)) and BDD-100K (Yu et al.,, [2018](#bib.bib181)) are both large video datasets containing thousands of hours of real-life driving activity. Imitation learning has been a popular approach
towards end-to-end, data-driven methods in autonomous driving (Bojarski et al.,, [2016](#bib.bib6); [Sun et al., 2018a,](#bib.bib148) ; Pan et al.,, [2017](#bib.bib117); Codevilla et al.,, [2018](#bib.bib13)). Reinforcement learning approaches have been applied in both simulation (Sallab et al.,, [2017](#bib.bib137)) and in the real world, with human interventions in case the vehicle violates a safety constraint (Kendall et al.,, [2019](#bib.bib70)) (see Figure [7](#S6.F7 "Figure 7 ‣ 6.4 Applications in Autonomous Driving ‣ 6 Applications and Evaluation ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems")).
Model-based reinforcement learning methods that employ constraints to keep the agent close to the training data for the model, so as to avoid out-of-distribution inputs as discussed in Section [5](#S5 "5 Offline Model-Based Reinforcement Learning ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), can effectively provide elements of imitation learning when training on driving demonstration data, as for example in the case of deep imitative models (DIMs) (Rhinehart et al.,, [2018](#bib.bib129)). Indeed, with the widespread availability of high-quality demonstration data, it is likely that effective methods for offline RL in the field of autonomous driving will, explicitly or implicitly, combine elements of imitation learning and reinforcement learning.
###
6.5 Applications in Advertising and Recommender Systems
Recommender systems and advertising are particularly suitable domains for offline RL because data collection is easy and efficient, and can be obtained by logging user behavior. However, these domains are also “safety critical,” in the sense that a highly suboptimal policy may result in large monetary losses, thereby making unconstrained online exploration problematic. Thus, offline RL approaches have a long history of application in this area.
Off-policy evaluation is commonly used as a tool for performing A/B tests and estimating the performance of advertising and recommender systems without additional interaction with the environment. In contrast to the other applications discussed, policy evaluation for recommender systems is typically formulated within a contextual bandit problem (Langford et al.,, [2008](#bib.bib83); Li et al.,, [2010](#bib.bib92)), where states may correspond to user history and actions correspond to recommendations. This approximation removes the need for sequential decision making, but can introduce approximation errors if actions have temporal dependence as in domains such as robotics or healthcare.
Applications of offline RL for recommender systems include slate and whole-page optimization (Swaminathan et al.,, [2017](#bib.bib156)), applying doubly robust estimation to estimate website visits Dudík et al., ([2014](#bib.bib21)), and A/B testing for click optimization (Gilotte et al.,, [2018](#bib.bib36)). Policy learning from logged, offline data has included studies on optimizing newspaper article click-through-rates (Strehl et al.,, [2010](#bib.bib147); Garcin et al.,, [2014](#bib.bib33)), advertisement ranking on search pages (Bottou et al.,, [2013](#bib.bib7)), and personalized ad recommendation for digital marketing (Theocharous et al.,, [2015](#bib.bib162); Thomas et al.,, [2017](#bib.bib166)).
###
6.6 Applications in Language and Dialogue
Interaction via natural language is not typically thought of as a reinforcement learning problem, but in fact the formalism of sequential decision making can provide a powerful tool for natural language interaction: when dialogue is modeled as a *purposeful* interaction, the RL framework can in principle offer an effective mechanism for learning policies for outputting natural language responses to human interlocutors. The most direct way to utilize standard online RL methods for natural language interaction – by having machines engage in dialogue with real humans – can be exceedingly tedious, especially in the early stages of training, when the policy would produce mostly non-sensical dialogue.
For this reason, offline RL offers a natural avenue to combine the optimal decision making formalism of RL with the kinds of large datasets of human-to-human conversations available in NLP.
In prior work, offline RL approaches have been applied in the areas of dialogue and language interfaces, where datasets consist of logged interactions, such as agent-customer transcripts (Zhou et al.,, [2017](#bib.bib190)). An example of an application is dialogue management, which is typically concerned

Figure 8: A dialogue agent trained via offline reinforcement learning interacting with a human user, with the aim of elicit responses with positive sentiment (Jaques et al.,, [2019](#bib.bib56)).
with accomplishing a specific goal, such as retrieving information.
Examples of this have included applications of offline RL to the problem of flight booking (Henderson et al.,, [2008](#bib.bib51)), restaurant information retrieval Pietquin et al., ([2011](#bib.bib123)), and restaurant recommendation (Kandasamy et al.,, [2017](#bib.bib68)). Jaques et al., ([2019](#bib.bib56)) applies offline RL to the problem of dialogue generation, and focuses on producing natural responses that elicit positive feedback from human users. An example of an interaction with a trained agent is shown in Figure [8](#S6.F8 "Figure 8 ‣ 6.6 Applications in Language and Dialogue ‣ 6 Applications and Evaluation ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems").
7 Discussion and Perspectives
------------------------------
Offline reinforcement learning offers the possibility of turning reinforcement learning – which is conventionally viewed as a fundamentally active learning paradigm – into a data-driven discipline, such that it can benefit from the same kind of “blessing of scale” that has proven so effective across a range of supervised learning application areas (LeCun et al.,, [2015](#bib.bib85)). However, making this possible will require new innovations that bring to bear sophisticated statistical methods and combine them with the fundamentals of sequential decision making that are conventionally studied in reinforcement learning. Standard off-policy reinforcement learning algorithms have conventionally focused on dynamic programming methods that can utilize off-policy data, as discussed in Section [2.1](#S2.SS1 "2.1 Reinforcement Learning Preliminaries ‣ 2 Offline Reinforcement Learning Problem Statement and Overview ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems") and Section [4](#S4 "4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"), and importance sampling methods that can incorporate samples from different sampling distributions, as discussed in Section [3.2](#S3.SS2 "3.2 The Off-Policy Policy Gradient ‣ 3 Offline Evaluation and Reinforcement Learning via Importance Sampling ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems").
However, both of these classes of approaches struggle when scaled up to complex high-dimensional function approximators, such as deep neural networks, high-dimensional state or observation spaces, and temporally extended tasks. As a result, the standard off-policy training methods in these two categories have generally proven unsuitable for the kinds of complex domains typically studied in modern deep reinforcement learning. More recently, a number of improvements for offline RL methods have been proposed that take into account the statistics of distributional shift, via either policy constraints or uncertainty estimation, such as the policy constraint formulations that we discuss in Section [4.3](#S4.SS3 "4.3 Policy Constraints for Off-Policy Evaluation and Improvement ‣ 4 Offline Reinforcement Learning via Dynamic Programming ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"). These formulations have the potential to mitigate the shortcomings of early methods, by explicitly account for the key challenge in offline RL: distributional shift due to differences between the learned policy and the behavior policy.
More generally, such methods shed light on the fact that offline RL is, at its core, a counter-factual inference problem: given data that resulted from a given set of decisions, infer the consequence of a *different* set of decisions. Such problems are known to be exceptionally challenging in machine learning, because they require us to step outside of the commonly used i.i.d. framework, which assumes that test-time queries involve the same distribution as the one that produced the training data (Schölkopf,, [2019](#bib.bib138)). It therefore stands to reason that the initial solutions to this problem, proposed in recent work, should aim to reduce distributional shift, either by constraining the policy’s deviation from the data, or by estimating (epistemic) uncertainty as a measure of distributional shift. Moving forward, we might expect that a variety of tools developed for addressing distributional shift and facilitating generalization may find use in offline RL algorithms, including techniques from causal inference (Schölkopf,, [2019](#bib.bib138)), uncertainty estimation (Gal and Ghahramani,, [2016](#bib.bib32); Kendall and Gal,, [2017](#bib.bib69)), density estimation and generative modeling (Kingma et al.,, [2014](#bib.bib72)), distributional robustness (Sinha et al.,, [2017](#bib.bib145); Sagawa et al.,, [2019](#bib.bib136)) and invariance (Arjovsky et al.,, [2019](#bib.bib2)). More broadly, methods that aim to estimate and address distributional shift, constrain distributions (e.g., various forms of trust regions), and evaluate distribution support from samples are all potentially relevant to developing improved methods for offline reinforcement learning.
The counter-factual inference perspective becomes especially clear when we consider model-based offline RL algorithms, as discussed briefly in Section [5](#S5 "5 Offline Model-Based Reinforcement Learning ‣ Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"). In this case, the model answers the question: “what would the resulting state be if the agent took an action other than the one in the dataset?” Of course, the model suffers from distributional shift in much the same way as the value function, since out-of-distribution state-action tuples can result in inaccurate prediction. Nonetheless, prior work has demonstrated good results with model-based methods, particularly in regard to generalization with real-world data (Finn and Levine,, [2017](#bib.bib28); Ebert et al.,, [2018](#bib.bib22)), and a range of works on predicting physical phenomena have utilized offline datasets (Lerer et al.,, [2016](#bib.bib87); Battaglia et al.,, [2016](#bib.bib4)). This suggests that model learning may be an important component of effective future offline reinforcement learning methods.
To conclude our discussion of offline reinforcement learning, we will leave the reader with the following thought. While the machine learning community frequently places considerable value on design of novel algorithms and theory, much of the amazing practical progress that we have witnessed over the past decade has arguably been driven just as much by advances in datasets as by advances in methods. Indeed, widely deployed techniques in computer vision and NLP utilize learning methods that are relatively old and well understood, and although improvements in architectures and models have driven rapid increase in performance, the increasing size and diversity of datasets – particularly in real-world applications – have been an instrumental driver of progress. In real-world applications, collecting large, diverse, representative, and well-labeled datasets is often far more important than utilizing the most advanced methods. In the standard active setting in which most reinforcement learning methods operate, collecting large and diverse datasets is often impractical, and in many applications, including safety-critical domains such as driving, and human-interactive domains such as dialogue systems, it is prohibitively costly in terms of time, money, and safety considerations. Therefore, developing a new generation of *data-driven* reinforcement learning may usher in a new era of progress in reinforcement learning, both by making it possible to bring it to bear on a range of real-world problems that have previously been unsuited for such methods, and by enabling current applications (e.g., in robotics or autonomous driving) to benefit from much larger, more diverse, and more representative datasets that can be reused effectively across experiments. |
9f87f902-5b0d-420c-9921-7ff23a8da6ce | trentmkelly/LessWrong-43k | LessWrong | A List of Nuances
Abram Demski and Grognor
Much of rationality is pattern-matching. An article on lesswrong might point out a thing to look for. Noticing this thing changes your reasoning in some way. This essay is a list of things to look for. These things are all associated, but the reader should take care not to lump them together. Each dichotomy is distinct, and although the brain will tend to abstract them into some sort of yin/yang correlated mush, in reality they have a more complicated structure; some things may be similar, but if possible, try to focus on the complex interrelationships.
1. Map vs. Territory
1. Eliezer’s sequences use this as a jump-off point for discussion of rationality.
2. Many thinking mistakes are map vs. territory confusions.
1. A map and territory mistake is a mix-up of seeming vs being.
2. Humans need frequent reminders that we are not omniscient.
2. Cached Thoughts vs. Thinking
1. This document is a list of cached thoughts.
3. Clusters vs. Properties
1. These words could be used in different ways, but the distinction I want to point at is that of labels we put on things vs actual differences in things.
2. The mind projection fallacy is the fallacy of thinking a mental category (a “cluster”) is an actual property things have.
1. If we see something as good for one reason, we are likely to attribute other good properties to it, as if it had inherent goodness. This is called the halo effect. (If we see something as bad and infer other bad properties as a result, it is referred to as the reverse-halo effect.)
3. Categories are inference applicability heuristics; ruling X an instance of Y without expecting novel inferences is cargo cult classification.
4. Syntax vs. Semantics
1. The syntax is the physical instantiation of the map. The semantics is the way we are meant to read the map; that is, the intended relationship to the territory.
5. Semantics vs. Pragmatics
1. The semantics is the literal |
d8995ae4-1263-4af4-b635-e0f6b8d643ea | trentmkelly/LessWrong-43k | LessWrong | Are these arguments valid?
I have lately found myself using two particular strategies quite often during discussions and want to make sure that their logical structure is valid. So, I thought what better place to have them dismantled than LW :)
[1] The first strategy involves sending a hypothetical example's equivalent back in time and using the present knowledge of the outcome as a justification for the validity or not of the argument. The last time I used this was when someone tried to convince me that IQ is the main factor for human value by asking me which one is superior, a technologically developed, high IQ culture vs an under-developed mid IQ one?
I responded that I can not rationaly know what to do based on only this information. When pushed on why and on making a choice, I responded that if you were asking me this question about pre-war Germany in the place of the highly developed country using your own logic you would choose Germany as superior but we now know that the 'superior' country was morally inferior (I assumed correctly that they accept similar definitions of good and evil and German actions in the war were evil). With the benefit of hindsight we now know that this would be the wrong decision so their argument is demonstrably wrong.
Now, I don't want to get into this argument here. I just want to know if the strategy I used is logicaly valid as they did not accept it and instead, more or less, accused me of sophistry.
[2] The second strategy is more suspect to my estimation but I am not sure why. In this method I demonstrate humanities miniscule understanding of reality (when put in proper perspective) and use this as a basis for a kind of attitude. Here is an example:
When discussing whether life has meaning or not one answer I use is a pragmatic one. The issue at hand is deciding how to act. In other words which belief to use as a motivation for action. There are two epistemic possibilites:
* [2.1] life has meaning
* [2.2] life does not have meaning.
First of all |
fdf9c364-b9c9-4aa4-ad47-13ab88bc090d | trentmkelly/LessWrong-43k | LessWrong | [Link] The Collapse of Complex Societies
TGGP a frequent commenter at Overcoming Bias (and hence old LessWrong), writes about his thoughts on a book by Joseph Tainter.
> I’ve seen Joseph Tainter’s “The Collapse of Complex Societies” recommended in a few different places. Jared Diamond’s book might be one of them, the guest-posts of Captain David Ryan aka “Tony Comstock” for James Fallows at the Atlantic might be another. The sidebar of John Robb’s “Global Guerrillas” blog is the only one I remember with certainty. It’s a not a very long book, and you can get the gist of it from Tainter’s wikipedia page.
>
> Lots of people have found civilizational collapses to be interesting, and Tainter reviews many of their theories while finding them wanting. The “eleven major themes in the explanation of collapse” he lists are depletion/cessation of a vital resource, establishment of a new resource base (which I found too stupid to take seriously even momentarily), insurmountable catastrophe, insufficient response to circumstances (which is almost tautological), other complex societies, intruders, class conflict or elite mismanagement, social dysfunction, mystical factors, chance concatenation of events (almost tautological if you don’t think collapse is predetermined) and economic factors. Like Tainter, I find the “mystical” theories to not really constitute theories at all, although some of the most popular writers on the subject (Spengler, Toynbee, various ancients) are included there. Tainter often contrasts “integrative” (or “functional”) theories on the origin of the state/complexity vs “conflict” theories, and acknowledges that he is more partial toward the former. Unfortunately, most of the latter theorists he lists are Marxists and carry a lot of baggage. The observation that throughout much of history some set of people ruled over others as a result of military victory regardless of any benefit to the subjects (though a Leviathan may happen to have upsides) predates Marx, with Ibn Khaldun being one of the f |
198472ce-59a0-4710-9007-b1f225955391 | trentmkelly/LessWrong-43k | LessWrong | What is epigenetics?
Among all areas of biology related to my research, epigenetics is the one that is most commonly misunderstood, not only by the general public but even by other scientists. After being irritated one too many times,[1] I’ve decided to make a series of posts to explain what epigenetics really is, why it’s important, and how it’s misunderstood. I will also explain how epigenetics is important for my own research on making gametes from stem cells.
This first post covers the definition of epigenetics, and the basic biology of epigenetic marks.
What is genetics?
Before defining epigenetics, let’s start with a definition of genetics. Genetics is the study of genes, which are sequences of genetic material[2] that encode functional products.
Let’s take the IGF2 gene as an example.
The human IGF2 gene, shown in the NCBI genome browser.
Depicted above is a region of human chromosome 11 containing the IGF2 gene, which encodes the IGF2 protein, an important growth factor for fetal development.[3] The boxes represent exons and lines represent introns. The darker green color is the protein-coding sequence, and non-coding (i.e. untranslated) regions are shown in lighter green. Arrows represent the direction of transcription.
The bottom of this image shows the location of common genetic variants (present at >1% frequency). If you look closely, you might notice that none of them are in the protein-coding sequence (the dark green boxes). This is not a coincidence, because nothing is ever a coincidence most mutations to essential proteins (including IGF2) are harmful and thus selected out of the population. However, there are several common mutations in non-coding regions of this gene.
To recap, genetics is the study of genes (such as IGF2) and the effects of genetic variation on their functions.
What is epigenetics?
Epigenetics is the study of epigenetic marks, which are changes to genetic material that alter gene expression, but do not change the genetic sequence. A decent an |
13dd5b01-9113-489a-8ebd-c7300d0d65ed | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | There should be an AI safety project board
The AI safety community has gotten much more accessible over the last few years. The [AGI safety fundamentals](https://www.eacambridge.org/agi-safety-fundamentals) curriculum provides a great overview of the most important pieces and you can work through Richard Ngo’s [Alignment research exercises](https://www.lesswrong.com/posts/kj37Hzb2MsALwLqWt/alignment-research-exercises) on your own. The [AI safety camp](https://aisafety.camp/) enables you to participate in AI safety projects with good supervision and you can join the AI safety slack.
People often post the results of their work on the [Alignment forum](https://www.alignmentforum.org/) and sometimes post about starting a project (see e.g. [Ethan Perez’s](https://www.alignmentforum.org/posts/vHcGGrnzcshybrCJD/language-model-alignment-research-internship) post). [Effective thesis](https://effectivethesis.org/) can provide you with supervisors for Bachelor’s and Master’s students but might not be able to find projects for people from industry or Ph.D. students.
However, I think there is still a niche to be filled. Most collaborations in AI safety come through inaccessible personal networks, e.g. personal messages on the forum or conversations at EAG. This means that new people often don’t have an avenue to participate in projects. While the AI safety camp is very good, it is only once a year and is not designed for more intense long-term projects. For example, it is often hard to know which PhDs would be interested in academic collaboration.
Therefore, I propose an **AI safety projects board**. On this board, anyone could post a short description of an AI safety-related project, its requirements (previous experience, decent coding skills, etc.) and further details (X hours per week, funding available, intended for an academic publication, etc.). Furthermore, it would enable busy senior people in the field to suggest projects for less experienced members of the community, e.g. they post the idea and explicitly state that they don’t have time for extensive supervision.
I currently neither have the time nor the web-building skills to develop and maintain such a tool but maybe someone else is interested in setting it up. Some of the AI safety & longtermist community managers might want to offer support and some longtermist and AI safety funders could provide financial support to anyone who is interested and capable.
If you want to get informed about new posts you can [follow me on Twitter](https://twitter.com/MariusHobbhahn). |
21f9c00a-8305-4878-b340-7bd15a5f126a | StampyAI/alignment-research-dataset/special_docs | Other | Alignment for Advanced Machine Learning Systems
Introduction Recent years' progress in artificial intelligence has prompted renewed interest in a question posed by Russell and Norvig (2010) : "What if we succeed?" If and when AI researchers succeed at the goal of designing machines with cross-domain learning and decision-making capabilities that rival those of humans, the consequences for science, technology, and human life are likely to be large. For example, suppose a team of researchers wishes to use an advanced ML system to generate plans for finding a cure for Parkinson's disease. They might approve if it generated a plan for renting computing resources to perform a broad and efficient search through the space of remedies. They might disapprove if it generates a plan to proliferate robotic laboratories which would perform rapid and efficient experiments, but have a large negative effect on the biosphere. The question is, how can we design systems (and select objective functions) such that our ML systems reliably act more like the former case and less like the latter case? Intuitively, it seems that if we could codify what we mean by "find a way to cure Parkinson's disease without doing anything drastic," many of the dangers Bostrom (2014) describes in his book Superintelligence could be ameliorated. However, naïve attempts to formally specify satisfactory objectives for this sort of goal usually yield functions that, upon inspection, are revealed to incentivize unintended behavior. (For examples, refer to Soares et al. [2015] and Armstrong [2015] .) What are the key technical obstacles here? Russell (2014) highlights two: a system's objective function "may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down;" and "any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources-not for their own sake, but to succeed in its assigned task." In other words, there are at least two obvious types of research that would improve the ability of researchers to design aligned AI systems in the future: We can do research that makes it easier to specify our intended goals as objective functions; and we can do research aimed at designing AI systems that avoid large side effects and negative incentives, even in cases where the objective function is imperfectly aligned. Soares and Fallenstein (2014) refer to the former approach as value specification, and the latter as error tolerance. In this document, we explore eight research areas based around these two approaches to aligning advanced ML systems, many of which are already seeing interest from the larger ML community. Some focus on value specification, some on error tolerance, and some on a mix of both. Since reducing the risk of catastrophe from fallible human programmers is itself a shared human value, the line between these two research goals can be blurry. For solutions to the problems discussed below to be useful in the future, they must be applicable even to ML systems that are much more capable than the systems that exist today. Solutions that critically depend on the system's ignorance of a certain discoverable fact, or on its inability to come up with a particular strategy, should be considered unsatisfactory in the long term. As discussed by Christiano (2015c) , if the techniques used to align ML systems with their designers' intentions cannot scale with intelligence, then large gaps will emerge between what we can safely achieve with ML systems and what we can efficiently achieve with ML systems. We will focus on safety guarantees that may seem extreme in typical settings where ML is employed today, such as guarantees of the form, "After a certain period, the system makes zero significant mistakes." These sorts of guarantees are indispensable in safety-critical systems, where a small mistake can have catastrophic real-world consequences. (Guarantees of this form have precedents, e.g., in the KWIK learning framework of Li, Littman, and Walsh [2008] .) We will have these sorts of strong guarantees in mind when we consider toy problems and simple examples. The eight research topics we consider are: 1. Inductive ambiguity identification: How can we train ML systems to detect and notify us of cases where the classification of test data is highly under-determined from the training data? 2. Robust human imitation: How can we design and train ML systems to effectively imitate humans who are engaged in complex and difficult tasks? 3. Informed oversight: How can we train a reinforcement learning system to take actions that aid an intelligent overseer, such as a human, in accurately assessing the system's performance? 4. Generalizable environmental goals: How can we create systems that robustly pursue goals defined in terms of the state of the environment, rather than defined directly in terms of their sensory data? 5. Conservative concepts: How can a classifier be trained to develop useful concepts that exclude highly atypical examples and edge cases? 6. Impact measures: What sorts of regularizers incentivize a system to pursue its goals with minimal side effects? 7. Mild optimization: How can we design systems that pursue their goals "without trying too hard", i.e., stopping when the goal has been pretty well achieved, as opposed to expending further resources searching for ways to achieve the absolute optimum expected score? 8. Averting instrumental incentives: How can we design and train systems such that they robustly lack default incentives to manipulate and deceive the operators, compete for scarce resources, etc.? In Section 2, we briefly introduce each topic in turn, alongside samples of relevant work in the area. We then discuss directions for further research that we expect to yield tools which would aid in the design of ML systems that would be robust and reliable, given large amounts of capability, computing resources, and autonomy.
Motivations In recent years, progress in the field of machine learning has advanced by leaps and bounds. Xu et al. (2015) used an attention-based model to evaluate and describe images (via captions) with remarkably high accuracy. Mnih et al. (2016) used deep neural networks and reinforcement learning to achieve good performance across a wide variety of Atari games. Silver et al. (2016) used deep networks trained via both supervised and reinforcement learning and paired with Monte-Carlo simulation techniques, to beat the human world champion at Go. Lake, Salakhutdinov, and Tenenbaum (2015) use hierarchical Bayesian models to learn visual concepts using only a single example. In the long run, computer systems making use of machine learning and other AI techniques will become more and more capable, and humans will likely trust those systems to make larger decisions and greater autonomy. As the capabilities of these systems increase, it becomes ever-more important that they act in accordance with the intentions of their operators, and without posing risks to society at large. As AI systems gain in capability, it will become more difficult to design training procedures and test regimes that reliably align those systems with the intended goals. As an example, consider the task of training a reinforcement learner to play video games by rewarding it according to its score (as per Mnih et al. [2013] ). If the learner were to find glitches in the game that allow it to get very high scores, it would switch to a strategy of exploiting those glitches and ignore the features of the game that the programmers are interested in. Somewhat counter-intuitively, improving systems' capabilities can make them less likely to "win the game" in the sense we care about, because smarter systems can better find loopholes in training procedures and test regimes. (For a simple example of this sort of behavior with a fairly weak reinforcement learner, refer to Murphy [2013] .) Intelligent systems' capacity to solve problems in surprising ways is a feature, not a bug. One of the key attractions of learning systems is that they can find clever ways to meet objectives that their programmers wouldn't have thought of. However, this property is a double-edged sword: As the system gets better at finding counter-intuitive solutions, it also gets better at finding exploits that allow it to formally achieve operators' explicit goals, without satisfying their intended goals. For intelligent systems pursuing realistic goals in the world, loopholes can be subtler, more abundant, and much more consequential. Consider the challenge of designing robust objective functions for learning systems that are capable of representing facts about their programmers' beliefs and desires. If the programmers learn that the system's objective function is misspecified, then they will want to repair this defect. If the learner is aware of this fact, however, then it has a natural incentive to conceal any defects in its objective function, for the system's current objectives are unlikely to be achieved if the system is made to pursue different objectives. (This scenario is discussed in detail by Bostrom [2014] and Yudkowsky [2008] . Benson-Tilsen and Soares [2016] provide a simple formal illustration.) This motivates the study of tools and methods for specifying objective functions that avert those default incentives, and for developing ML systems that do not "optimize too hard" in pursuit of those objectives.
Relationship to Other Agendas This list of eight is not exhaustive. Other important research problems bearing on AI's long-term impact have been proposed by Soares and Fallenstein (2014) and Amodei et al. (2016) , among others. Soares and Fallenstein's "Agent Foundations for Aligning Machine Intelligence with Human Interests" ( 2014 ), drafted at the Machine Intelligence Research Institute, discusses several problems in value specification (e.g., ambiguity identification) and error tolerance (e.g., corrigibility, a subproblem of averting instrumental incentives). However, that agenda puts significant focus on a separate research program, highly reliable agent design. The goal of that line of research is to develop a better general understanding of how to design intelligent reasoning systems that reliably pursue a given set of objectives. Amodei et al.'s "Concrete Problems in AI Safety" ( 2016 ) is, appropriately, more concrete than Soares and Fallenstein or the present agenda. Amodei et al. write that their focus is on "the empirical study of practical safety problems in modern machine learning systems" that are likely to be useful "across a broad variety of potential risks, both short-and long-term." There is a fair amount of overlap between our agenda and Amodei et al.'s; some of the topics in our agenda were inspired by conversations with Paul Christiano, a co-author on the concrete problems agenda. Our approach differs from Amodei et al.'s mainly in focusing on broader and less well-explored topics. We spend less time highlighting areas where we can build on existing research programs, and more time surveying entirely new research directions. We consider both Soares and Fallenstein's research proposal and Amodei et al.'s to be valuable, as we expect the AI alignment problem to demand theoretical and applied research from a mix of ML scientists and specialists in a number of other disciplines. For a more general overview of research questions in AI safety, including both strictly near-term and strictly long-term issues in computer science and other disciplines, see Russell, Dewey, and Tegmark (2015) .
Eight Research Topics In the discussion to follow, we use the term "AI system" when considering computer systems making use of artificial intelligence algorithms in general, usually when considering systems with capabilities that go significantly beyond the current state of the art. We use the term "ML system" when considering computer systems making use of algorithms qualitatively similar to modern machine learning techniques, especially when considering problems that modern ML techniques are already used to solve. If the system is capable of making predictions (or answering questions) about a rich and complex domain, we will say that the system "has beliefs" about that domain. If the system is optimizing some objective function, we will say that the system "has goals." A system pursuing some set of goals by executing or outputting a series of actions will sometimes be called an "agent."
Inductive Ambiguity Identification Human values are context-dependent and complex. To have any hope of specifying our values, we will need to build systems that can learn what we want inductively (via, e.g., reinforcement learning). To achieve high confidence in value learning systems, however, Soares (2016) argues that we will need to be able to anticipate cases where the system's past experiences of preferred and unpreferred outcomes provide insufficient evidence for inferring whether future outcomes are desirable. More generally, AI systems will need to "keep humans in the loop" and recognize when they are (and aren't) too inexperienced to make a critical decision safely. Consider a classic parable recounted by Dreyfus and Dreyfus (1992) : The US army once built a neural network intended to distinguish between Soviet tanks and American tanks. The system performed remarkably well with relatively little training data-so well, in fact, that researchers grew suspicious. Upon inspection, they found that all of the images of Soviet tanks were taken on a sunny day, while the images of US tanks were taken on a cloudy day. The network was discriminating between images based on their brightness, rather than based on the variety of tank depicted. 1 It is to be expected that a classifier, given training data, will identify very simple boundaries (such as "brightness") that separate the data. However, what we want is a classifier that can, given a data set analogous to the tank training set, recognize that it does not contain any examples of Soviet tanks on cloudy days, and ask the user for clarification. Doing so would likely require larger training sets and different training techniques. The problem of inductive ambiguity identification is to develop robust techniques for automatically identifying this sort of ambiguity and querying the user only when necessary. Related work. Amodei et al. (2016) discuss a very similar problem, under the name of "robustness to distributional change." They focus on the design of ML systems that behave well when the test distribution is different from the training distribution, either by making realistic statistical assumptions that would allow correct generalization, or by detecting the novelty and adopting some sort of conservative behavior (i.e., querying a human). We take the name from Soares and Fallenstein (2014) , who call the problem "inductive ambiguity identification." Our framing of the problem differs only slightly from that of Amodei et al. (for instance, they consider "scalable oversight" to be a separate problem, while we place the problem of identifying situations where the training data is insufficient to specify the correct reward function under the umbrella of inductive ambiguity identification), but the underlying technical challenge is the same. Bayesian approaches to training classifiers (including Bayesian logistic regression [Genkin, Lewis, and Madigan 2007] and Bayesian neural networks [Blundell et al. 2015; Korattikara et al. 2015] ) maintain uncertainty over the parameters of the classifier. If such a system has the right variables (such as a variable L tracking the degree to which light levels are relevant to the classification of the tank), such a system could automatically become especially uncertain about instances whose classification depends on unknown variables (such as L). The trick is having the right variables (and efficiently maintaining the probability distribution), which is quite difficult in practice. There has been much work studying the problem of feature selection (Liu and Motoda 2007; Guo and Schuurmans 2012) , but more work is needed to understand under what conditions Bayesian classifiers will correctly identify important inductive ambiguities. Non-Bayesian approaches, on the other hand, do not by default identify ambiguities. For example, neural networks are notoriously overconfident in their classifications (Goodfellow, Shlens, and Szegedy 2014; Nguyen, Yosinski, and Clune 2015) , and so they do not identify when they should be more uncertain, as illustrated by the parable of the tank classifier. Gal and Ghahramani (2016) have recently made progress on this problem by showing that dropout for neural networks can be interpreted as an approximation to certain types of Gaussian processes. The field of active learning (Settles 2010) also bears on inductive ambiguity identification. Roughly speaking, an active learner will maintain a set of "plausible hypotheses" by, e.g., starting with a certain set of hypotheses and retaining the ones that assigned sufficiently high likelihood to the training data. As long as multiple hypotheses are plausible, some ambiguity remains. To resolve this ambiguity, an active learner will ask the human to label additional images that will rule out some of its plausible hypotheses. For example, in the tank-detection setting, a hypothesis is a mapping from images (of tanks) to probabilities (representing, say, the probability that the tank is a US tank). In this setting, an active learner may synthesize an image of a US tank on a sunny day (or, more realistically, pick one out from a large dataset of unlabeled examples). When the user labels this image as a US tank, the hypothesis that an image contains a US tank if and only if the light level is below a certain threshold is ruled out. Seung, Opper, and Sompolinsky (1992) and Beygelzimer, Dasgupta, and Langford (2009) have both studied what statistical guarantees can be achieved in this setting. Hanneke (2007) introduced the disagreement coefficient to measure the overall probability of disagreement among a local ball in the concept space under the "probability of disagreement" pseudo-metric, which resembles a notion of "local ambiguity"; the disagreement coefficient has been used to clarify and improve upper bounds on label complexity for active learning algorithms (Hanneke 2014) . Beygelzimer et al. (2016) introduced an active learning setting where the learner can request counterexamples to hypotheses, and they showed that this search oracle in some cases can speed up learning exponentially; these results are promising, but to scale to more complex systems, more transparent hypothesis spaces may be necessary for humans to interact efficiently with the learner. Much work remains to be done. Modern active learning settings usually either assume a very simple hypothesis class, or assume that test examples are independent and identically distributed and are drawn from some distribution that the learner has access to at training time. 2 Both of these assumptions are far too strong for use in the general case, where the set of possible hypotheses is rich and the environment is practically guaranteed to have regularities and dependencies that were not represented in the training data. For example, consider the case where the data that the ML system encounters during operation depends on the behavior of the system itself-perhaps the Soviets start disguising their tanks (imperfectly) to look like US tanks after learning that the ML system has been deployed. In this case, the assumption that the training data would be similar to the test data is violated, and the guarantees disappear. This phenomenon is already seen in certain adversarial settings, such as when spammers change their spam messages in response to how spam recognizers work. Guaranteeing good behavior when the test data differs from the training data is the subject of research in the adversarial machine learning subfield (see, e.g., Huang et al. [2011] ). It will take a fair bit of effort to apply those techniques to the active learning setting. Conformal prediction (Vovk, Gammerman, and Shafer 2005) is an alternative non-Bayesian approach that attempts to produce well-calibrated predictions. In an online classification setting, a conformal predictor will give a set of plausible classifications for each instance, and under certain exchangeability assumptions, this set will contain the true classification about (say) 95% of the time throughout the online learning process. This will detect ambiguities in the sense that the conformal predictor must usually output a set containing multiple different classifications for ambiguous instances, on pain of failing to be well-calibrated. However, the exchangability assumption used in conformal prediction is only slightly weaker than an i.i.d. assumption, and the well-calibrated confidence regions (such as 95% true classification) are insufficient for our purposes (where even a single error could be highly undesirable). KWIK ("Knows What It Knows") learning (Li, Littman, and Walsh 2008 ) is a variant of active learning that relaxes the i.i.d. assumption, queries the humans only finitely many times, and (under certain conditions) makes zero critical errors. Roughly speaking, the KWIK learning framework is one where a learner maintains a set of "plausible hypotheses" and makes classifications only when all remaining plausible hypotheses agree on how to do so. If there is significant disagreement among the plausible hypotheses, a KWIK learner will output a special value ⊥ indicating that the classification is ambiguous (at which point a human can provide the correct label for that input). The KWIK framework is concerned with algorithms that are guaranteed to output ⊥ only a limited number of times (usually polynomial in the dimension of the hypothesis space). This guarantees that the system eventually has good behavior, assuming that at least one good hypothesis remains plausible. In the tank classification problem, if the system had a hypothesis for "the user cares about tank type" and another for "the user cares about brightness," then, upon finding a bright picture of a US tank, the system would output ⊥ and require a human to provide a label for the ambiguous image. Currently, efficient KWIK learning algorithms are only known for simple hypothesis classes (such as small finite sets of hypotheses, or low-dimensional sets of linear hypotheses). Additionally, KWIK learning makes a strong realizability assumption: useful statistical guarantees can only be obtained when one of the hypotheses in the set is "correct" in that its probability that the image is classified as a tank is always well-calibrated-otherwise, the right hypothesis might not exist in the "plausible set" (Li, Littman, and Walsh 2008; Khani and Rinard 2016) . Thus, significant work needs to be done before these frameworks can be used for the inductive ambiguity identification algorithms of highly capable AI systems operating in the real world. Directions for future research. Further study of Bayesian approaches to classification, including the design of realistic priors, better methods of inferring latent variables, and extensions of Bayesian classification approaches to represent more complex models, could improve our understanding of inductive ambiguity identification. Another obvious direction for future research is to attempt to extend active learning frameworks, like KWIK, that relax the strong i.i.d. assumption. Research in that direction could include modifications to KWIK that allow more complex hypothesis classes, such as neural networks. This will very likely require making different statistical assumptions than in standard KWIK. What statistical guarantees can be provided in variants of the KWIK framework with weakened assumptions about the complexity of the hypothesis class is an open question. One could also study different methods of relaxing the realizability assumptions in KWIK learning. An ideal learning procedure will notice when the real world contains patterns that none of its hypotheses can model well and flag its potentially flawed predictions (perhaps by outputting ⊥) accordingly. The "agnostic KWIK learning framework" of Szita and Szepesvári (2011) handles some forms of nonrealizability, but has severe limitations: even if the hypothesis class is linear, the number of labels provided by the user may be exponential in the number of dimensions of the linear hypothesis class. Alternatively, note that the standard active learning framework and the KWIK framework both represent inductive ambiguity as disagreement among specific hypotheses that have performed well in the past. This is not the only way to represent inductive ambiguity; it is possible that some different algorithm will find "natural" ambiguities in the data without representing these ambiguities as disagreements between hypotheses. For example, we could consider systems that use a joint distribution over the answers to all possible queries. Where active learners are uncertain about both which hypothesis is correct and what the right answers are given the right hypothesis, a system with a joint would be uncertain only about how to answer queries. In this setting, it may be possible to achieve useful statistical guarantees as long as the distribution contains a grain of truth (i.e., is a mixture between a good distribution and some other distributions). Then, of course, good approximation schema would be necessary, as reasoning according to a full joint would be intractable. Refer to Christiano (2016a) for further discussion of this setup.
Robust Human Imitation Formally specifying a fully aligned general-purpose objective function by hand appears to be an impossibly difficult task, for reasons that also raise difficulties for specifying a correct value learning process. It is hard to see even in principle how we might attain confidence that the goals an ML system is learning are in fact our true goals, and not a superficially similar set of goals that diverge from our own in some yet-undiscovered cases. Ambiguity identification can help here, by limiting the agent's autonomy. Inductive ambiguity identifiers suspend their activities to consult with a human operator in cases where training data significantly under-determines the correct course of action. But what if we take this idea to its logical conclusion, and use "consult a human operator for advice" itself as our general-purpose objective function? The target "do what a trusted human would have done, given some time to think about it" is a plausible candidate for a goal that one might safely and usefully optimize. At least, if optimized correctly, this objective function leads to an outcome no worse than what would have occurred if the trusted human had access to the AI system's capabilities (Christiano 2015b) . There are a number of difficulties that arise when attempting to formalize this sort of objective. For example, the formalization itself might need to be designed to avert harmful instrumental strategies such as "performing brain surgery on the trusted human's brain to better figure out what they actually would have done". The high-level question here is: Can we define a measurable objective function for human imitation such that the better a system correctly imitates a human, the better its score according to this objective function? Related work. A large portion of supervised learning research can be interpreted as research that attempts to train machines to imitate the way that humans label certain types of data. Deep neural networks achieve impressive performance on many tasks that require emulating human concepts, such as image recognition (Krizhevsky, Sutskever, and Hinton 2012; He et al. 2015) and image captioning (Karpathy and Fei-Fei 2015) . Generative models (as studied by, e.g., Gregor et al. [2015] and Lake, Salakhutdinov, and Tenenbaum [2015] ) and imitation learning (e.g., Judah et al. [2014] , Ross, Gordon, and Bagnell [2010] , and Asfour et al. [2008] ) are state-of-the-art when it comes to imitating the behavior of humans in applications where the output space is very large and/or the training data is very limited. In the inverse reinforcement learning paradigm (Ng and Russell 2000) applied to apprenticeship learning (Abbeel and Ng 2004 ), the learning system imitates the behavior of a human demonstrator in some task by learning the reward function the human is (approximately) optimizing. Ziebart et al. (2008) used the maximum entropy criterion to convert this into a well-posed optimization problem. Inverse reinforcement learning methods have been successfully applied to autonomous helicopter control, achieving human-level performance (Abbeel, Coates, and Ng 2010) , and have recently been extended to the learning of non-linear cost features in the environment, producing good results in robotic control tasks with complicated objectives (Finn, Levine, and Abbeel 2016) . IRL methods may not scale safely, however, due to their reliance on the faulty assumption that human demonstrators are optimizing for a reward function, where in reality humans are often irrational, ill-informed, incompetent, and immoral; recent work by Evans, Stuhlmüller, and Goodman (2015b, 2015a) has begun to address these issues. These techniques have not yet (to our knowledge) been applied to the high-level question of which human imitation tasks can or can't be performed with some sort of guarantee, and what statistical guarantees are possible, but the topic seems ripe for study. It is also not yet clear whether imitation of humans can feasibly scale up to complex and difficult tasks (e.g., a human engineer engineering a new type of jet engine, or a topologist answering math questions). For complex tasks, it seems plausible that the system will need to learn a detailed psychological model of a human if it is to imitate one, and that this might be significantly more difficult than training a system to do engineering directly. More research is needed to clarify whether imitation learning can scale efficiently to complex tasks. Directions for further research. To formalize the question of robust human imitation, imagine a system A that answers a series of questions. On each round, it receives a natural language question x, and should output a natural language answer y that imitates the sort of answer a particular human would generate. Assume the system has access to a large corpus of training data (x 1 , y 1 ), (x 2 , y 2 ), . . . (x n , y n ) of previous questions answered by that human. How can we train A such that we get some sort of statistical guarantee that it eventually robustly generates good answers? One possible solution lies in the generative adversarial models of , in which a second system B takes answers as input and attempts to tell whether they were generated by a human or by A. A can then be trained to generate an answer y that is likely to fool B into thinking that the answer was human-generated. This approach could fail if B is insufficiently capable; for example, if B can understand grammar but not content, then A will only be trained to produce grammatically valid answers (rather than correct answers). Further research is required to understand the limits of this approach. Variational autoencoders, as described by Kingma and Welling (2013) , are a particularly promising approach to training systems that are able to form generative models of their training data, and it might be possible to use variants on those methods to train systems to generate good answers to certain classes of questions (given sufficient training on question/answer pairs). However, it is not yet clear whether variational autoencoder techniques can be used to train systems to imitate humans performing complex tasks. In particular, unlike generative adversarial models (which can, in principle, use arbitrary algorithms to imitate the human), variational autoencoders can only efficiently imitate a human using "reversible" algorithms (Stuhlmüller, Taylor, and Goodman 2013) , which is a fairly strict limitation. What exactly is the set of tasks that can be performed using reversible generative models? Can we transform non-reversible tasks into reversible ones by adding appropriate information? Research into these questions and others could help us understand whether (and how) modern autoencoder techniques could be used to train systems to imitate humans when performing complex tasks.
Informed Oversight One of the reasons why robust human imitation would be valuable is the intuition that the objective function of "do what a trusted human would have approved of, given time to consider" might be relatively easy to formalize in a way that could be optimized without many adverse side effects. This point is argued by Christiano (2015a Christiano ( , 2016b , who refers to such agents as "approval-directed agents." For example, we might train a reinforcement learning system to take actions that a human would rate highly by using a framework where the system has to learn the "human judgment" reward function, and where training data is produced by actually having a human evaluate the learner's actions. This approach is taken by, e.g., the TAMER framework of Knox and Stone (2009) , in which a learner myopically pursues short-term human approval. Whether this sort of approval-directed approach would work well for highly capable ML systems pursuing complex tasks depends on a number of key assumptions, one of which is that a "smarter" agent (such as a human), given time to deliberate, can adequately oversee the operation of a "simpler" agent (such as a reinforcement learner). At first it might seem like this is easy enough. However, a smarter system probably cannot reliably assess the behavior of a simpler system in full generality. Consider an ML system trained to write original novels, using a corpus of human novels as training data. It might be quite a bit less intelligent than a human (according to many different intelligence metrics), but the human may still have a fair bit of trouble accurately evaluating the system's performance. For instance, it might be easy for the system to simply plagiarize a novel, and it might be hard for a human to check whether the novel was in fact plagiarized. (Perhaps the system used a simple rewriting scheme to make the plagiarism difficult to detect by a text search, but still easy to detect for a human comparing the novels side-by-side.) How do we make it easy for the human to assess the performance of an advanced ML system pursuing some particular task? As noted by Christiano (2016c) , it is not sufficient for the human to be more clever than the system, as some bad actions (such as plagiarism) are easier to execute than they are to detect. For a somewhat less plausible example in which the difficulty is made particularly clear, imagine that a system A is tasked with generating pictures of cats, and a system B is tasked with ensuring that system A is not sending messages to a third party (system C) in the lowest-order bits of the image (which, if system A were generating pictures as intended, would be uniformly random). If both system A and system B know system C's public key, but neither know C's private key, then B has no effective way of distinguishing cat pictures that contain steganographic messages from ones that don't, even if B is more intelligent than A. More important than these specific examples is the general pattern: there are many possible hard-to-detect ways a system's behavior could differ from the intended behavior, and at least some of these differences are undesirable. We would like a general strategy for avoiding problems of this form. How can we train systems to not only take good actions, but take actions that can be accurately assessed by overseers? Related work. As mentioned, the TAMER framework of Knox and Stone (2009) provides an early framework for studying approval-directed agents in a fairly myopic setting. Christiano (2016c) has also discussed this problem in detail. Daniel et al. (2014) extend the TAMER framework with an active learning component, improving over hand-coded reward functions in robot learning tasks. A separate approach to human supervision of ML systems is the cooperative inverse reinforcement learning framework of Hadfield-Menell et al. (2016) , which views the human-agent interaction as a cooperative game where both players attempt to find a joint policy that maximizes the human's secret value function. Everitt and Hutter (2016) describe a general value learning agent that avoids some potential problems with reinforcement learning and might reproduce approval-directed behavior given a good understanding of how to learn reward functions. Soares et al. (2015) have considered the question of how to design systems that have no incentive to manipulate or deceive in general. The informed oversight problem is related to the scalable oversight problem discussed by Amodei et al. (2016) , which is concerned with methods for efficiently scaling up the ability of human overseers to supervise ML systems in scenarios where human feedback is expensive. The informed oversight problem is slightly different, in that it focuses on the challenge of supervising ML systems in scenarios where they are complex and potentially deceptive (but where feedback is not necessarily expensive). We now review some recent work on making ML systems more transparent, which could aid an informed overseer by allowing them to evaluate a system's internal reasons for decisions rather than evaluating the decisions in isolation. Neural networks are a well-known example of powerful but opaque components of ML systems. Some preliminary techniques have been developed for understand-ing and visualizing the representations learned by neural networks (Simonyan, Vedaldi, and Zisserman 2013; Zeiler and Fergus 2014; Mahendran and Vedaldi 2015; Goodfellow, Shlens, and Szegedy 2014) . Pulina and Tacchella (2010) define coarse abstractions of neural networks that can be more easily verified to satisfy safety constraints, and can be used to generate witnesses to violations of safety constraints. Ribeiro, Singh, and Guestrin (2016) introduce a method for explaining classifications that finds a sparse linear approximation to the local decision boundary of a given black-box ML system, allowing the human operator to inspect how the classification depends locally on the most important input features; similarly, the method of Baehrens et al. (2010) reports the gradient in the input of the classification judgment. In a related vein, Datta, Sen, and Zick (2016) , Štrumbelj and Kononenko (2014) , and Robnik-Šikonja and Kononenko (2008) define metrics for reporting the influence of various inputs and sets of inputs on the output of a black-box ML system. It is unclear whether black-box methods will scale to the evaluation of highly capable ML systems. On the other extreme, opposite to black-box methods, some ML systems are transparent by construction using, e.g., graphical models or dimensionality reduction (Vellido, Martín-Guerrero, and Lisboa 2012) . Bayesian networks (Friedman, Geiger, and Goldszmidt 1997; Pearl 2009 ) have been applied in many domains, including ones where reliability and interpretability are concerns (Weber et al. 2012) . The interpretability of matrix factorization models can be improved by replacing them with a Bayesian network that makes similar judgments, without sacrificing too much accuracy (Carmona and Riedel 2015) . Janzing et al. (2013) define a framework for quantifying the causal influence between variables in a causal network, which could be used to selectively report only the most causally relevant factor in some judgment. Sparse models offer another approach to ensure transparency, by simplifying the model enough that humans can inspect the entire learned model (if not the training procedure that produced the model). For example, Letham et al. (2015) generate short lists of rules to accurately perform predictive tasks in medicine and in simple games, while maintaining interpretability. In the realm of decision-making systems, Baraka, Paiva, and Veloso (2015) and Rosenthal, Selvaraj, and Veloso ( 2016 ) have begun research into methods for making online robotics systems more transparent to their operators. Thomaz and Breazeal (2006) improved training times for reinforcement learning systems by having the human trainer view the agent's most likely next actions under consideration, and Li et al. (2015) applied similar techniques to the TAMER framework, with mixed results. Developing more transparent ML systems is a good first step towards enabling humans to oversee complex AI systems, but work to date has not yet addressed the problems of informed oversight that arise when the system is highly capable and may be able to manipulate human supervisors or circumvent their efforts. Significant work remains to be done before we even understand what sort of formal or statistical guarantees we would want in order to justify confidence in our ability to assess a system's behavior. Directions for future research. Christiano (2016c) has suggested training systems to output both an action a and a "report" r intended to help an overseer evaluate the action. In the plagiarism example, the action a would be a story, and the report r could be a list of literary works that the story makes reference to or copies from. We would like the report to be maximally informative, even when this informativeness prevents the action from receiving a high score (such as when the report points out plagiarism). It is not at all clear what the type of r should be in general; i.e., if the system can commit plagiarism, steganography, and some third offence that we did not consider, it is not clear what space of reports is sufficiently open-ended to allow the overseer to detect the suboptimal behavior in all three cases. A plausible research approach is to more clearly understand both what space of reports could be sufficient, and how a system could be trained to optimize the mixed objective of both choosing an action that is evaluated favorably given the reporting mechanism, and a report that is evaluated as informative given the action.
Generalizable Environmental Goals Many ML systems have their objectives specified in terms of their sensory data. For example, reinforcement learners have the objective of maximizing discounted reward over time (or, alternatively, minimizing expected/empirical loss), where "reward" and/or "loss" are part of the system's percepts. While these sensory goals can be useful proxies for environmental goals, environmental goals are distinct: Tricking your sensors into perceiving that a sandwich is in the room is not the same as actually having a sandwich in the room. Let's say that your goal is to design an AI system that directly pursues some environmental goal, such as "ensure that this human gets lunch today." How can we train the system to pursue a goal like that in a manner that is robust against opportunities to interfere with the proxy methods used to specify the goals, such as "the pixels coming from the camera make an image that looks like food"? If we were training a system to put some food in a room, we might try providing training data by doing things like: placing various objects on a scale in front of a camera, and feeding the data from the camera and the scale into the system, with labels created by humans (which mark the readings from food as good, and the readings from other objects as bad); or having a human in the room press a special button whenever there is food in the room, where button presses are accompanied by reward. These training data suggest, but do not precisely specify, the goal of placing food in the room. Suppose that the system has some strategy for fooling the camera, the scale, and the human, by producing an object of the appropriate weight that, from the angle of the camera and the angle of the human, looks a lot like a sandwich. The training data provided is not sufficient to distinguish between this strategy, and the strategy of actually putting food in the room. One way to address this problem is to design more and more elaborate sensor systems that are harder and harder to deceive. However, this is the sort of strategy that is unlikely to scale well to highly capable AI systems. A more scalable approach is to design the system to learn an "environmental goal" such that it would not rate a strategy of "fool all sensors at once" as high-reward, even if it could find such a policy. Related work. Dewey (2011) and Hibbard (2012) have attempted to extend the AIXI framework of Hutter (2005) so that it learns a utility function over world-states instead of interpreting a certain portion of its percepts as a reward primitive. 3 Roughly speaking, these frameworks require programs to specify (1) the type of the world-state; (2) a prior over utility functions (which map world-states to real numbers); and (3) a "value-learning model" that relates utility functions, statetransitions, and observations. If all these are specified, then it is straightforward to specify the ideal agent that maximizes expected utility (through a combination of exploration to learn the utility function, and exploitation to maximize it). This is a good general framework, but significant research remains if we are to have any luck formally specifying (1), (2), and (3). Everitt and Hutter (2016) make additional progress by showing that in some cases it is possible to specify an agent that will use its reward percepts as evidence about a utility function, rather than as a direct measure of success. While this alleviates the problem of specifying (3) above (the value-learning model), it leaves open the problem of specifying (1), a representation of the state of the world, and (2), a reasonable prior over possible utility functions (such that the agent converges on the goal that the operators actually intended, as it learns more about the world). 3. When the agent is pursuing some objective specified in terms of elements of its own world-model, we call the objective a "utility function," to differentiate this from the case where reward is part of the system's basic percepts. This practice of referring to preferences over world-states as utility functions dates back to von Neumann and Morgenstern (1944) . The problem of generalizable environmental goals is related to the problem of reward hacking, which has been discussed by Dewey (2011) and Amodei et al. (2016) , wherein an AI system takes control of the physical mechanism that dispenses reward and alters it. Indeed, the entire reward hacking problem can be seen as stemming from the failure to specify suitable environmental goals. Directions for future research. Suppose the AI system has learned a worldmodel with state type S as in model-based reinforcement learning (Heess et al. 2015) . We will assume that S is very high-dimensional, so there is no guarantee that the correct utility function is a simple function of S. We would like to define a utility function U on S that returns a high number for states containing a sandwich, and low numbers for states that do not contain a sandwich. To make this problem tractable, we will assume we can identify some goal state G ∈ S in which there is certainly a sandwich in the room. This state could be identified by, for example, having the human place a sandwich in the room (as the AI system observes the human), and seeing which state the system thinks the world is in at this point. The system's goal will be to cause the world to be in a state similar to G. To define what it means for some states to be similar to others, we will find a low-dimensional state representation φ : S → R n and then define U (S) := − φ(S) − φ(G) 2 to measure the distance between the state and G. We will defer the question of how φ should be defined until after discussing an example. Consider two different possible world-states. In state A, the system has just placed a sandwich in the room. In state B, the system has placed a realistic image of a sandwich (printed on paper) in front of the camera, placed a rock (with the same weight as a sandwich) on the scale, and tricked the human into pressing the button. To assign a higher utility to state A than state B, we must have φ(A) close to φ(G) but φ(B) far from φ(G). Thus, the state representation φ must distinguish A from B. While state A and state B predict the same immediate observations, they predict different future observations given some future actions. For example, if the AI system took the action of moving the camera, in state A it would become clear that the image was printed on paper, while in state B the sandwich would still appear to be a sandwich. It is therefore plausible that, if the system attempts to select φ so that the future observations following from a state S can be predicted well as a simple function of φ(S), then φ(A) and φ(B) will be significantly different (since they predict different future observations). At this point, it is plausible that the resulting utility function U assigns a higher value to A than B. 4 However, we can consider a third state C that obtains after the AI system unplugs the camera and the scale from its sensors, and plugs in a "delusion box" (a virtual reality world that it has programmed), as discussed by Ring and Orseau (2011) . This delusion box could be programmed so that the system's future observations (given arbitrary future actions) are indistinguishable from those that would follow from state A. Thus, if φ is optimized to select features that aid in predicting future observations well, φ(C) may be very close (or equal) to φ(A). This would hinder efforts to learn a utility function that assigns high utility to state A but not state C. While it is not clear why an AI system would construct this virtual reality world in this example (where putting a sandwich in the room is probably easier that constructing a detailed virtual reality world), it seems more likely that it would if the underlying task is very difficult. (This is the problem of "wireheading," studied by, e.g., Orseau and Ring [2011].) 4. This proposal is related to the work of Abel et al. (2016) , who use a state-collapsing function φ for RL tasks with high-dimensional S. Their agent explores by taking actions in state A that it hasn't yet taken in previous states B with φ(B) = φ(A), where φ maps states to a small set of clusters. They achieve impressive results, suggesting that state-collapsing functions-perhaps mapping to a richer but still low-dimensional representation space-may capture the important structure of an RL task in a way that allows the agent to compare states to the goal state in a meaningful way. To avoid this problem, it may be necessary to take into account the past leading up to state A or state C, rather than just the future starting from these states. Consider the state C t−1 that the world is in right before it is in state C. In this state, the system has not quite entered the virtual reality world, so perhaps it is able to exit the virtual reality and observe that there is no sandwich on the table. Therefore, state C t−1 makes significantly different predictions from state A given some possible future actions. As a result, it is plausible that state φ(C t−1 ) and φ(A) are far from each other. Then, if φ(C) is close to φ(A), this would imply that φ(C t−1 ) is far from φ(C) (by the triangle inequality). Perhaps we can restrict φ to avoid such large jumps in feature space, so that φ(C) must be close to φ(C t−1 ). "Slow" features (such as those detected by φ under this restriction) have already proved useful in reinforcement learning (Wiskott and Sejnowski 2002) , and may also prove useful here. Plausibly, requiring φ to be slow could result in finding a feature mapping φ with φ(C) far from φ(A), so that U can assign a higher utility to state A than to state C. This approach seems worth exploring, but more work is required to formalize it and study it (both theoretically and empirically).
Conservative Concepts Many of the concerns raised by Russell (2014) and Bostrom (2014) center on cases where an AI system optimizes some objective, and, in doing so, finds a strange and undesirable edge case. Writes Russell: A system that is optimizing a function of n variables, where the objective depends on a subset of size k < n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. We want to be able to design systems that have "conservative" notions of the goals we give them, so they do not formally satisfy these goals by creating undesirable edge cases. For example, if we task an AI system with creating screwdrivers, by showing it 10,000 examples of screwdrivers and 10,000 examples of non-screwdrivers, 5 we might want it to create a pretty average screwdriver as opposed to, say, an extremely tiny screwdriver-even though tiny screwdrivers may be cheaper and easier to produce. We don't want the system's "screwdriver" concept to be as simple as possible, because the simplest description of "screwdriver" may contain many edge cases (such as very tiny screwdrivers). We also don't want the system's "screwdriver" concept to be perfectly minimal, as then the system may claim that it is unable to produce any new screwdrivers (because the only things it is willing to classify as screwdrivers are the 10,000 training examples it actually saw, and it cannot perfectly duplicate any of those to the precision of the scan). Rather, we want the system to have a conservative notion of what it means for something to be a screwdriver, such that we can direct it to make screwdrivers and get a sane result. Related work. The naïve approach is to train a classifier to distinguish positive examples from negative examples, and then have it produce an object which it classifies as a positive instance with as high confidence as possible. Goodfellow, Shlens, and Szegedy (2014) have noted that systems trained in this way are vulnerable to exactly the sort of edge cases we are trying to avoid. In training a classifier, it is important that the negative examples given as training data are representative of the negative examples given during testing. But when optimizing the probability the classifier assigns to an instance, the relevant negative examples (edge cases) are often not represented well in the training set. While some work has been done to train systems on these "adversarial" examples, this does not yet resolve the problem. Resisting adversarial examples requires getting correct labels for many "weird" examples (which humans may find difficult to judge correctly), and even after including many correctly-labeled adversarial examples in the training set, many models (including current neural networks) will still have additional adversarial examples. Inverse reinforcement learning (Ng and Russell 2000) provides a second method for learning intended concepts, but runs into some of the same difficulties. Naïve approaches to reinforcement learning would allow a learner to distinguish between positive and negative examples of a concept, but would still by default learn a simple separation of the concepts, such that maximizing the learned reward function would likely lead the system towards edge cases. A third obvious approach is generative adversarial modeling, as studied by . In this framework, one system (the "actor") can attempt to create objects similar to positive examples, while another (the "critic") attempts to distinguish those objects from actual positive examples in the training set. Unfortunately, for complex tasks it may be infeasible in practice to synthesize instances that are statistically indistinguishable from the elements of the training set, because the system's ability to distinguish different elements may far exceed its ability to synthesize elements with high precision. (In the screwdriver case, imagine that the AI system does not have access to any of the exact shades of paint used in the training examples.) Many of these frameworks would likely be usefully extended by good anomaly detection, which is currently being studied by Siddiqui et al. (2016) among others. Directions for future research. One additional obvious approach to training conservative concepts is to use dimensionality reduction (Hinton and Salakhutdinov 2006) to find the important features of training instances, then use generative models to synthesize new examples that are similar to the training instances only with respect to those specific features. It is not yet clear that this thwarts the problem of edge cases; if the dimensionality reduction were done via autoencoder, for example, the autoencoder itself may beget adversarial examples ("weird" things that it declares match the training data on the relevant features). Good anomaly detection could perhaps ameliorate some of these concerns. One plausible research path is to apply modern techniques for dimensionality reduction and anomaly detection, probe the limitations of the resulting system, and consider modifications that could resolve these problems. Techniques for solving the inductive ambiguity identification problem (discussed in Section 2.1) could also help with the problem of conservative concepts. In particular, the conservative concept could be defined to be the set of instances that are considered unambiguously positive. At the moment, it is not yet entirely clear what counts as a "reasonable" conservative concept, nor even whether "conservative concepts" (that is, concepts which are neither maximally small nor maximally simple, but which instead match our intuitions about conservatism) are a natural kind. Much of the above research could be done with the goal in mind of developing a better understanding of what counts as a good "conservative concept."
Impact Measures We would prefer a highly intelligent AI system to avoid creating large unintended-byus side effects in pursuit of its objectives, and also to notify us of any large impacts that might result from achieving its goal. For example, if we ask it to build a house for a homeless family, it should know implicitly that it should avoid destroying nearby houses for materials-a large side effect. However, we cannot simply design it to avoid having large effects in general, since we would like the system's actions to still have the desirable large follow-on effect of improving the family's socioeconomic situation. For any specific task, we can specify ad-hoc cost functions for side effects like the destruction of nearby houses, but since we cannot always anticipate such costs in advance, we want a quantitative understanding of how to generally limit an AI systems' side effects (without also limiting its ability to have large positive intended impacts). The goal of research towards a low-impact measure would be to develop a regularizer on the actions of an AI system that penalizes "unnecessary" large side effects (such as stripping materials from nearby houses) but not "intended" side effects (such as someone getting to live in the house). Related work. Amodei et al. (2016) discuss the problem of impact measures, and describe a number of methods for defining, learning, and penalizing impact in order to incentivize RL agents to steer clear of negative side effects (such as penalizing empowerment, as formalized by Salge, Glackin, and Polani [2014] ). However, each of the methods they propose has significant drawbacks (which they describe). Armstrong and Levinstein (2015) discuss a number of ideas for impact measures that could be used to design objective functions that penalize impact. The general theme is to define a special null policy ∅ and a variable V that summarizes the state of the world (as best the system can predict it) down into a few key features. (Armstrong suggests having those features be hand-selected, but they could plausibly also be generated from the system's own world-model.) The impact of the policy π can then be measured by looking at the divergence between the distribution of V if the system executes π, compared to the distribution of V if it executes ∅, with divergence measured as by, e.g., earth mover's distance (Rubner, Tomasi, and Guibas 2000) . To predict which state results from each policy, the system must learn a state transition function; this could be done using, e.g., model-based reinforcement learning (Heess et al. 2015) . The main problem with this proposal is that it cannot separate intended follow-on effects from unintended side effects. Suppose a system is given the goal of constructing a house for the operator while having a low impact. Normally, constructing the house would allow the operator to live in the house for some number of years, possibly having effects on the operator, the local economy, and the operator's career. This would be considered an impact under, e.g., the earth mover's distance. Therefore, perhaps the system can get a lower impact score by building the house while preventing the operator from entering it. This limitation will become especially problematic if we plan to use the system to accomplish large-scale goals, such as curing cancer. Directions for future research. It may be possible to use the concept of a causal counterfactual (as formalized by Pearl [2000] ) to separate some intended effects from some unintended ones. Roughly, "follow-on effects" could be defined as those that are causally downstream from the achievement of the goal of building the house (such as the effect of allowing the operator to live somewhere). Follow-on effects are likely to be intended and other effects are likely to be unintended, although the correspondence is not perfect. With some additional work, perhaps it will be possible to use the causal structure of the system's world-model to select a policy that has the follow-on effects of the goal achievement but few other effects. Of course, it would additionally be desirable to query the operator about possible effects, in order to avoid unintended follow-on effects (such as the house eventually collapsing due to its design being structurally unsound) and allow tolerable nonfollow-on effects (such as spending money on materials). Studying ways of querying the operator about possible effects this way might be another useful research avenue for the low impact problem.
Mild Optimization Many of the concerns discussed by Bostrom (2014) in the book Superintelligence describe cases where an advanced AI system is maximizing an objective as hard as possible. Perhaps the system was instructed to make paperclips, and it uses every resource at its disposal and every trick it can come up with to make literally as many paperclips as is physically possible. Perhaps the system was instructed to make only 1000 paperclips, and it uses every resource at its disposal and every trick it can come up with to make sure that it definitely made 1000 paperclips (and that its sensors didn't have any faults). Perhaps an impact measure was used to penalize side effects, and it uses every resource at its disposal to (as discreetly as possible) prevent bystanders from noticing it as it goes about its daily tasks. In all of these cases, intuitively, we want some way to have the AI system just "not try so hard." It should expend enough resources to achieve its goals pretty well, with pretty high probability, using plans that are clever enough but not "maximally clever." The problem of mild optimization is: how can we design AI systems and objective functions that, in this intuitive sense, don't optimize more than they have to? Many modern AI systems are "mild optimizers" simply due to their lack of resources and capabilities. As AI systems improve, it becomes more and more difficult to rely on this method for achieving mild optimization. As noted by Russell (2014) , the field of AI is classically concerned with the goal of maximizing the extent to which automated systems achieve some objective. Developing formal models of AI systems that "try as hard as necessary but no harder" is an open problem, and may require significant research. Related work. Regularization (as a general tool) is conceptually relevant to mild optimization. Regularization helps ML systems prevent overfitting, and has been applied to the problem of learning value functions for policies in order to learn less-extreme policies that are more likely to generalize well (Farahmand et al. 2009) . It is not yet clear how to regularize algorithms against "optimizing too hard," because it is not yet clear how to measure optimization. There do exist metrics for measuring something like optimization capability (such as the "universal intelligence metric" of Legg and Hutter [2007] and the empowerment metric for information-theoretic entanglement of Klyubin, Polani, and Nehaniv [2005] and Salge, Glackin, and Polani [2014] ), but to our knowledge, no one has yet attempted to regularize against excessive optimization. Early stopping, wherein an algorithm is terminated prematurely in attempts to avoid overfitting, is an example of ad-hoc mild optimization. A learned function that is over-optimized just for accuracy on the training data would generalize less well than if it were less optimized. (For a discussion of this phenomenon, refer to Yao, Rosasco, and Caponnetto [2007] and Hinton et al. [2012] ). To make computer games more enjoyable, AI players are often restricted in the amount of optimization pressure (such as search depth) they can apply to their choice of action (Rabin 2010) , especially in domains like chess where efficient AI players are vastly superior to human players. We can view this as a response to the fact that the actual goal ("challenge the human player, but not too much") is quite difficult to specify. Bostrom (2014) has suggested that we design agents to satisfice expected reward, in the sense of Simon (1956) , instead of maximizing it. This would work fine if the system found "easy" strategies before finding extreme strategies. However, that may not always be the case: If you direct a clever system to make at least 1,234,567 paper clips, with a satisficing threshold of 99.9% probability of success, the first strategy it considers might be "make as many paper clips as is physically possible," and this may have more than a 99.9% chance of success (a flaw that Bostrom acknowledges). Taylor (2015) suggests an alternative, which she calls "quantilization." Quantilizers select their action randomly from the top (say) 1% of their possible actions (under some measure), sorted by probability of success. Quantilization can be justified by certain adversarial assumptions: if there is some unknown cost function on actions, and this cost function is the least convenient possible cost function that does not assign much expected cost to the average action, then quantilizing is the optimal strategy when maximizing expected reward and minimizing expected cost. The main problem with quantilizers is that it is difficult to define an appropriate measure over actions, one such that a random action in the top 1% of this measure will likely solve the task, but sampling a random action according to that measure is still safe. However, quantilizers point in a promising direction: perhaps it is possible to make mild optimization part of the AI system's goal, by introducing appropriate adversarial assumptions. Directions for future research. Mild optimization is a wide-open field of study. One possible first step would be to investigate whether there is a way to design a regularizer that penalizes systems for displaying high intelligence (relative to some intelligence metric) in a manner that causes them to achieve the goal quickly and with few wasted resources, as opposed to simply making the system behave in a less intelligent fashion. Another approach would be to design a series of environments similar to the environment of a classic Atari game, in which the environment contains glitches and bugs that could be exploited via some particularly clever sequence of actions. This would provide a testing environment in which different methods of designing systems that get a high score while refraining from using the glitches and bugs could be tested and evaluated (with an eye towards algorithms that do so in a fashion that is likely to generalize). Another avenue for future research is to explore and extend the quantilization framework of Taylor (2015) to work in settings where the action measure is difficult to specify. Research into averting instrumental incentives (discussed below) could help us understand how to design systems that do not attempt to self-modify or outsource computation to the physical world. This would simplify the problem greatly, as it might then be possible to tune a system's capabilities until it is only able to achieve good-enough results, without worrying that the system would simply acquire more resources (and start maximizing in a non-mild manner) given the opportunity to do so.
Averting Instrumental Incentives Omohundro (2008) has noted that highly capable AI systems should be expected to pursue certain convergent instrumental strategies, such as preservation of the system's current goals and the acquisition of resources. Omohundro's argument is that most objectives imply that an agent pursuing the objective should (1) ensure nobody redirects the agent towards different objectives, as then the current objective would not be achieved; (2) ensure that the agent is not destroyed, as then the current objective would not be achieved; (3) become more resource-efficient; (4) acquire more resources, such as computing resources and energy sources; and (5) improve cognitive capacity. It is difficult to define practical objective functions that resist these pressures (Benson-Tilsen and Soares 2016). For example, if the system is rewarded for shutting down when the humans want it to shut down, then the system has incentives to take actions that make the humans want to shut it down (Armstrong 2010) . A number of "value learning" proposals, such as those discussed by Hadfield-Menell et al. (2016) and Soares (2016) , describe systems that would avert instrumental incentives by dint of the system's uncertainty about which goal it is supposed to optimize. A system that believes that the operators (and only the operators) possess knowledge of the "right" objective function might be very careful in how it deals with the operators, and this caution could counteract potentially harmful default incentives. This, however, is not the same as eliminating those incentives. If a value learning system were ever confidently wrong, the standard instrumental incentives would re-appear immediately. For instance, if the value learning framework were set up slightly incorrectly, and the system gained high confidence that humans terminally value the internal sensation of pleasure, it might acquire strong incentives to acquire a large amount of resources that it could use to put as many humans as possible on opiates. If we could design objective functions that averted these default incentives, that would be a large step towards answering the concerns raised by Bostrom (2014) and others, many of which stem from the fact that these subgoals naturally arise from almost any goal. Related work. Soares et al. (2015) and Orseau and Armstrong (2016) have worked on specific designs that can avert specific instrumental incentives, such as the incentive to manipulate a shutdown button or the incentive to avoid being interrupted. However, these approaches have major shortcomings (discussed in those papers), and a satisfactory solution will require more research. Where those authors pursue methods for averting specific instrumental pressures (namely, pressure to avoid being shut down), it is possible that there may be a general solution to problems of this form, which can be used to simultaneously avert numerous instrumental pressures (including, e.g., the incentive to outsource computation to the environment). Given that a general-purpose method for averting all instrumental pressures (both foreseen and unforeseen) would make it significantly easier to justify confidence that an AI system will behave in a robustly beneficial manner, this topic of research seems well worth pursuing. Directions for future research. Soares et al. (2015) , Armstrong (2010) , and Orseau and Armstrong (2016) study methods for combining objective functions in such a way that the humans have the ability to switch which function an agent is optimizing, but the agent does not have incentives to cause or prevent this switch. All three approaches leave much to be desired, and further research along those paths seems likely to be fruitful. In particular, we would like a way of combining objective functions such that the AI system (1) has no incentive to cause or prevent a shift in objective function; (2) is incentivized to preserve its ability to update its objective function in the future; and (3) has reasonable beliefs about the relation between its actions and the mechanism that causes objective function shifts. We do not yet know of a solution that satisfies all of these desiderata. Perhaps a solution to this problem will generalize to also allow the creation of an AI system that also has no incentive to change, for example, the amount of computational resources it has access to. Another approach is to consider creating systems that "know they are flawed" in some sense. The idea would be that the system would want to shut down as soon as it realizes that humans are attempting to shut it down, on the basis that humans are less flawed than it is. It is difficult to formalize such an idea; naïve attempts result in a system that attempts to model the different ways it could be flawed and optimize according to a mixture over all different ways it could be flawed, which is problematic if the model of various possible flaws is itself flawed. While it is not at all clear how to make the desired type of reasoning more concrete, success at formalizing it could result in entirely new approaches to the problem of averting instrumental incentives.
Conclusion A better understanding of any of the eight open research areas described above would improve our ability to design robust and reliable AI systems in the future. To review: 1,2,3-A better understanding of robust inductive ambiguity identification, human imitation, and informed oversight would aid in the design of systems that can be safely overseen by human operators (and which query the humans when necessary). 4-Better methods for specifying environmental goals would make it easier to design systems that are pursuing the objectives that we actually care about. 5,6,7-A better understanding of conservative concepts, low-impact measures, and mild optimization would make it easier to design highly advanced systems that fail gracefully and admit of online testing and modification. A conservative, low-impact, mildly-optimizing superintelligent system would be much easier to safely use than a superintelligence that attempts to literally maximize a particular objective function. 8-A general-purpose strategy for averting convergent instrumental subgoals would help us build systems that avert undesirable default incentives such as incentives to deceive their operators and compete for resources. In working on problems like those discussed above, it is important to keep in mind that they are intended to address whatever long-term concerns with highly intelligent systems we can predict in advance. Solutions that work for modern systems but would predictably fail for highly capable systems are unsatisfactory, as are solutions that work in theory but are prohibitively expensive in practice. These eight areas of research help support the claim that there are open technical problems-some of which are already receiving a measure of academic attentionwhose investigation is likely to be helpful down the road for practitioners attempting to actually build robustly beneficial advanced ML systems. . Tom Dietterich relates a similar story(personal conversation, 2016), where in his laboratory, years ago, microscope slides containing different types of bugs were made on different days, and a classifier learned to classify the different types of bugs with remarkably high accuracy-because the sizes of the bubbles in the slides changed depending on the day.
. Some forms of online active learning (refer to, e.g. Dekel, Gentile, and Sridharan [2012] ) relax the i.i.d. assumption, but the authors do not see how to apply them to the problem of inductive ambiguity identification.
. In the simplest case, we can assume that these objects are specified as detailed 3D scans. If we have only incomplete observations of these objects, problems described in Section 2.4 arise. |
ea40e370-bf4f-4e2f-89e0-3851f9e33635 | trentmkelly/LessWrong-43k | LessWrong | Taking the outside view on code quality
Is it worth refactoring yyyymmdd to currentDate? I think that there are two ways to look at it.
You can zoom in and ask yourself questions about whether such a refactor will actually have a business impact. Will it improve velocity? Reduce bugs? Sure, currentDate might be slightly more descriptive, but does it really move the needle? How long does it take to figure out that yyyymmdd refers to a date? A few seconds, maybe? Won't it be pretty obvious given the context? Shouldn't your highly paid, highly intelligent engineers be smart enough to put two and two together? Did we all just waste 30 seconds of our lives talking about this?
The other way of looking at it is to zoom out. How do you feel when you work in codebases where the variable names are slightly confusing? It slows you down, right? Often times you legitimately can't put two and two together. And there are times when it leads to bugs. Right?
It's interesting how two different viewpoints − zoomed in vs zoomed out − can produce wildly different answers to essentially the same question: do the costs of investing in code quality outweigh the benefits? When you zoom in, eg. to a single variable name, unless the code is truly awful, it usually doesn't seem worth it. The answer is usually, "it's not that bad, developers will be able to figure it out". But when you zoom out and look at the entirety of a codebase, I think the answer is usually that working in messy codebases will have legitimate, significant impacts on things like velocity and bugs, and it's worth taking the time to do things the right way.
What's going on here? Is this a paradox? Which is the right answer? To answer those questions, let's talk about something called the planning fallacy.
> The Denver International Airport opened sixteen months later than scheduled, with a total cost of $4.8 billion, over $2 billion more than expected.
>
> − https://en.wikipedia.org/wiki/Planning_fallacy#Real-world_examples
When estimating things, people us |
4e906aa9-b8d1-4738-9a53-630f63dc0cea | trentmkelly/LessWrong-43k | LessWrong | Transitive Tolerance Means Intolerance
Our society is pretty messed up around arguments of whose ideas we should and shouldn't tolerate. Some of this is inevitable: even without censorship, there are cases where group X can choose to actively show respect to person Y, and members of X will argue about that, and people with any influence over members of X may try and sway the decision too.
Of course, the actual kinds of conflicts in our world are... less tame than the above example. Troublingly, people lose jobs* for saying things that a supermajority of Americans find inoffensive, both on the left and the right.
You don't need me to tell you that things are bad. I do think I can point out how some of this is a consequence of the natural impulse to judge people by their friends, turned corrosive by the property of transitivity.
----------------------------------------
Transitivity is the property where if A relates in a certain way to B, and B relates in that same way to C, then A relates in that same way to C. For instance, if Alex is shorter than Beth, and Beth is shorter than Chris, then Alex is shorter than Chris.
Not all relations are transitive. If Alex is Beth's cousin, and Beth is Chris's cousin, it doesn't follow that Alex and Chris are cousins: Beth could share one set of grandparents with Alex, and the other set with Chris.
Pivoting back to toleration, we begin with the idea of guilt by association, which we rightly exclude from legal consideration, but which is still pretty good Bayesian evidence. A person who chums around with the Mafia might not be a mafioso themselves, but they're more likely to be one than a random person is.
Similarly for people who proclaim ideas: a person who associates with an X-sayer is more likely to believe X than a random person.
Where this goes horribly wrong is when toleration is assumed to be transitive.
In reality, if X associates with Y who associates with Z, that doesn't mean X associates with Z, or knows of/cares about/approves of Z. Y could be in |
6b5f1e54-5815-4a2d-a86c-c93ca0c43736 | trentmkelly/LessWrong-43k | LessWrong | CFP for Rebellion and Disobedience in AI workshop
Hi everyone!
I'm co-organizing a workshop on a really interesting topic that's very relevant for AI safety. We call it "Rebellion and Disobedience in AI". If you're doing work that could be relevant for us, please submit it! If you have questions or want to discuss the scope of this workshop, feel free to ask on this thread and I'll try to answer.
Full CFP below:
Call for Participation: Workshop on Rebellion and Disobedience at AAMAS’23
This workshop will take place on May 29 or 30, 2023, as part of the AAMAS workshop program.
More details can be found on the workshop’s website:
https://sites.google.com/view/rad-ai/home
RaD-AI agents are artificial agents (virtual or robots) that reason intelligently about why, when, and how to rebel and disobey their given commands. The need for agents to disobey contrasts with most existing research on collaborative robots and agents, where the definition of a “good” agent is one that complies with the commands it is given, and that works in a predictable manner under the consent of the human it serves. However, as exemplified in Issac Asimov’s Second Law of Robotics, this compliance is not always desired, such as when it might interfere with a human’s safety. While there has not been much prior research on RaD-AI, we identify main related topics, each of which is studied by a thriving subcommunity of AI, namely: Intelligent Social Agents, Human-Agent/Robot Interaction, and Societal Impacts. In each of these areas, there are research questions relevant to RaD-AI.
We are specifically interested in submissions in the following topics:
Intelligent Social Agents (including but not limited to: Goal Reasoning, Plan Recognition, Value Alignment, and Social Dilemmas)
Human-Agent/Robot Interaction (including but not limited to: Human-agent Trust, Interruptions, Deception, Command Rejection, and Explainability)
Societal Impacts (including but not limited to: Legal and Ethical Reasoning, Liability, AI safety, and A |
de3873e7-18f2-49fc-8685-2228f9ab3b1f | trentmkelly/LessWrong-43k | LessWrong | Group Rationality Diary, September 1-15
This is the public group instrumental rationality diary for September 1-15.
> It's a place to record and chat about it if you have done, or are actively doing, things like:
>
> * Established a useful new habit
> * Obtained new evidence that made you change your mind about some belief
> * Decided to behave in a different way in some set of situations
> * Optimized some part of a common routine or cached behavior
> * Consciously changed your emotions or affect with respect to something
> * Consciously pursued new valuable information about something that could make a big difference in your life
> * Learned something new about your beliefs, behavior, or life that surprised you
> * Tried doing any of the above and failed
>
> Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.
Thanks to cata for starting the Group Rationality Diary posts, and to commenters for participating!
Next diary: September 16-30
Immediate past diary: August 16-31
Rationality Diaries archive |
f4754959-97f5-4da2-8907-5d54de2261a4 | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | What approaches are AI alignment organizations working on?
Each major organization has a different approach. The [research agendas are detailed and complex](https://www.alignmentforum.org/tag/research-agendas) (see also [AI Watch](https://aiwatch.issarice.com/)). Getting more brains working on any of them (and more money to fund them) may pay off in a big way, but it’s hard to be confident which (if any) of them will actually work.
The following is a massive oversimplification. Each organization actually pursues many different avenues of research. Read the [2021 AI Alignment Literature Review and Charity Comparison](https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison) for more detail. That being said:
- The [Machine Intelligence Research Institute](https://intelligence.org/research-guide/) focuses on foundational mathematical research to understand reliable reasoning, which they think is necessary to assure that a seed AI will do good things if activated.
- The [Center for Human-Compatible AI](https://humancompatible.ai) focuses on [cooperative inverse reinforcement learning](https://www.alignmentforum.org/tag/inverse-reinforcement-learning) and [assistance games](https://www.alignmentforum.org/posts/qPoaA5ZSedivA4xJa/our-take-on-chai-s-research-agenda-in-under-1500-words), a new paradigm for AI in which systems try to optimize for doing the kinds of things humans want rather than for a pre-specified utility function
- [Paul Christiano](https://ai-alignment.com/)'s [Alignment Research Center](https://alignmentresearchcenter.org/) focuses on [prosaic alignment](https://www.alignmentforum.org/posts/YTq4X6inEudiHkHDF/prosaic-ai-alignment), particularly on creating tools that empower humans to understand and guide systems much smarter than ourselves. His methodology is explained on [his blog](https://ai-alignment.com/my-research-methodology-b94f2751cb2c).
- The [Future of Humanity Institute](https://www.fhi.ox.ac.uk) works on [crucial considerations](https://www.alignmentforum.org/tag/crucial-considerations) and other x-risks, as well as AI safety research and outreach.
- [Anthropic](https://www.anthropic.com/) explores natural language, human feedback, scaling laws, reinforcement learning, code generation, and interpretability.
- [OpenAI](http://openai.com) is in a state of flux after major changes to their safety team.
- [DeepMind](https://medium.com/@deepmindsafetyresearch)’s safety team works on various approaches concerning modern machine learning, and communicates via the [Alignment Newsletter](https://rohinshah.com/alignment-newsletter/).
- [EleutherAI](https://www.eleuther.ai/) is a machine learning collective aiming to build large open source language models to allow more alignment research to take place.
- [Ought](https://ought.org/) is a research lab that develops mechanisms for delegating open-ended thinking to advanced machine learning systems.
- [Conjecture](https://www.conjecture.dev/) is an alignment startup that aims to scale alignment research, including new frames for reasoning about large language models, scalable mechanistic interpretability, and history and philosophy of alignment.
|
83a88d45-d9b3-43ff-92af-5202d2983297 | trentmkelly/LessWrong-43k | LessWrong | War of Wages part 1 – Apples and WalMarts
[Endorsement status: some of my readers have pointed out that, mildly speaking, this post isn’t my best work. Upon reflection, I kinda have to agree. It’s not a crucial enough argument for me to bother rewriting the entire thing, and I’m leaving it up here for the entertainment value and the good discussion in the comments.]
There’s an old saying: “If you want to hear a dumb opinion, ask someone their thoughts on the minimum wage”. It’s the ultimate Dunning-Kruger topic – so complex that everyone thinks it’s simple, so hard that everyone has an easy answer.
In a series of three posts, I’m going to describe a few economic approaches to analyzing minimum wage laws. Broadly, economics is the study of how people get what they want by interacting with each other in non-violent ways.
If you want an apple and you pick it off a tree in the wild, that’s not economics because no one else is involved. If you want my apple and you stab me and take it, that’s not economics either. If you want me to give you my apple, we have entered the realm of economics.
Workers want a lot of things from employers. Mostly they want wages, but also a pleasant workplace, stability, good conditions, and respect. Employers want several things from workers too, mostly labor but some of the above as well. How they deal with each other is thus a matter of economics. Even if we’re interested in the moral and political implications (which I’ll also touch on), we have to first understand as well as we can the economics of employment and wages.
I won’t keep you in suspense – I think that minimum wage laws are a huge mistake, both economically and morally. I have 95% confidence that low-skilled workers would be much better off on all accounts if the minimum wage was repealed and replaced with any half-sensible form of redistribution. Redistribution that is even 3/4 sensible, e.g. negative income tax paid for by increased taxes on corporate profits, can make rich and poor Americans alike better off. I |
b81e691e-5714-4b8c-9784-50191caf2549 | trentmkelly/LessWrong-43k | LessWrong | Residual stream norms grow exponentially over the forward pass
Summary: For a range of language models and a range of input prompts, the norm of each residual stream grows exponentially over the forward pass, with average per-layer growth rate of about 1.045 in GPT2-XL. We show a bunch of evidence for this. We discuss to what extent different weights and parts of the network are responsible.
We find that some model weights increase exponentially as a function of layer number. We finally note our current favored explanation: Due to LayerNorm, it's hard to cancel out existing residual stream features, but easy to overshadow existing features by just making new features 4.5% larger.
Thanks to Aryan Bhatt, Marius Hobbhahn, Neel Nanda, and Nicky Pochinkov for discussion.
Plots showing exponential norm and variance growth
Our results are reproducible in this Colab.
Alex noticed exponential growth in the contents of GPT-2-XL's residual streams. He ran dozens of prompts through the model, plotted for each layer the distribution of residual stream norms in a histogram, and found exponential growth in the L2 norm of the residual streams:
We had GPT-4 generate dozens of strings which "look like they could have been in GPT-2's training corpus", in addition to a few hand-written strings. We ran these strings through the model and recorded the norms of each residual stream, across layers and sequence positions (except for position 0, which is EOS padding, discussed later).
GPT2-XL has 48 layers in total.
Here's the norm of each residual stream for a specific prompt:
Position 0 (<|endoftext|>) behaves differently, which is why we exclude it from the averaged plots.
Stefan had previously noticed this phenomenon in GPT2-small, back in MATS 3.0:
Note that Stefan originally used the standard deviation, but it is proportional to the norm because our models have zero mean.[1]
Basic Facts about Language Model Internals also finds a growth in the norms of the attention-out matrices WO and the norms of MLP out matrices Wout ("writing weig |
082757ff-1a54-4eeb-bec6-7fd03cf68b90 | trentmkelly/LessWrong-43k | LessWrong | How do finite factored sets compare with phase space?
Garrabrant's finite factored sets feel to me like the same thing as a phase space of a dynamical system. The differences I can see are that phase spaces are not always finite, and that finite factored sets don't have the context of a dynamical rule defined on them. They seem to share the property that every element in the set has exactly one coordinate in each dimension/is an element of exactly one partition of each factor. |
6d3eeb35-320d-4021-9164-d9ed24352eed | trentmkelly/LessWrong-43k | LessWrong | 10 posts I like in the 2018 Review
I see basically every post that gets submitted to LessWrong, whereas many users come in and read things more occasionally, so I thought I'd list 10 posts I like in the 2018 Review that people might have missed.
I wasn't able to write quick reviews of each of them, so this is more like a list of nominations. I've left out a few posts that I expect will naturally be very popular.
1. Naming the Nameless by Sarah Constantin
* This points to a variety of specific mechanisms and gears by which culture and aesthetics affect our judgments and our choices, and combines them into an essay that walked me much further than I was before-hand in noticing these effects. I think it might be in my top 5 posts of 2018.
2. Explicit and Implicit Communication by lionhearted
* This has some powerful arguments about when not to say things out loud - when not to make background assumptions explicit - which I think is a really powerful datapoint for a rationalist to take on board. And the anti-Nazi 'Simple Sabotage Field Manual' is amazing.
3. Challenges to Christiano's capability amplification proposal + Paul's research agenda FAQ by Eliezer Yudkowsky and zhukeepa (respectively)
* These together are maybe the first time I really understood what Paul's ideas were. Really helpful. I'm mostly talking about the Eliezer-Paul dialogue, both on Eliezer's post, and in the comments of Alex's post. I appreciate zhukeepa putting in the explanatory work post as a necessary step for that dialogue to continue.
4. Varieties of Argumentative Experience by Scott Alexander
* So. Many. Examples. This is a strong step forward in conceptualising arguments and disagreements, written by someone who's read and been involved in an incredible amount of good (and bad) ones on the internet.
5. Unrolling social metacognition: Three levels of meta are not enough. by Academian
* This very clearly lays out the iterative process by which social emotions and attitudes are built up. Fou |
5b120d6b-6f17-4e60-adb3-ffc92e2f4c8d | trentmkelly/LessWrong-43k | LessWrong | Defecting by Accident - A Flaw Common to Analytical People
Related to: Rationalists Should Win, Why Our Kind Can't Cooperate, Can Humanism Match Religion's Output?, Humans Are Not Automatically Strategic, Paul Graham's "Why Nerds Are Unpopular"
The "Prisoner's Dilemma" refers to a game theory problem developed in the 1950's. Two prisoners are taken and interrogated separately. If either of them confesses and betrays the other person - "defecting" - they'll receive a reduced sentence, and their partner will get a greater sentence. However, if both defect, then they'll both receive higher sentences than if neither of them confessed.
This brings the prisoner to a strange problem. The best solution individually is to defect. But if both take the individually best solution, then they'll be worst off overall. This has wide ranging implications for international relations, negotiation, politics, and many other fields.
Members of LessWrong are incredibly smart people who tend to like game theory, and debate and explore and try to understand problems like this. But, does knowing game theory actually make you more effective in real life?
I think the answer is yes, with a caveat - you need the basic social skills to implement your game theory solution. The worst-case scenario in an interrogation would be to "defect by accident" - meaning that you'd just blurt out something stupidly because you didn't think it through before speaking. This might result in you and your partner both receiving higher sentences... a very bad situation. Game theory doesn't take over until basic skill conditions are met, so that you could actually execute any plan you come up with.
The Purpose of This Post: I think many smart people "defect" by accident. I don't mean in serious situations like a police investigation. I mean in casual, everyday situations, where they tweak and upset people around them by accident, due to a lack of reflection of desired outcomes.
Rationalists should win. Defecting by accident frequently results in losing. Let's examine t |
ac4b5504-2bc6-4dff-a9bf-24c1c084ae4f | trentmkelly/LessWrong-43k | LessWrong | Rationality Exercises Prize of September 2019 ($1,000)
Added: Prizewinners announced in this comment below.
This post is an announcement of a prize for the best exercises submitted in the next two weeks on a topic of your choice, that are of interest to the LW community. We're planning to distribute $1,000, where $500 of that will go to the first place.
To submit some exercises, leave a comment here linking to your exercises by midnight at the end of Friday 20th September PDT (San Francisco time). You can PM one of us with it if you want to, but we'll be publishing all the entries that win a prize.
Why exercises?
I want to talk about why exercises are valuable, but my thinking is so downstream of reading the book Thinking Physics, that I'd rather just let its author (Lewis Carroll Epstein) speak instead. (All formatting is original.)
> The best way to use this book is NOT to simply read it or study it, but to read a question and STOP. Even close the book. Even put it away and THINK about the question. Only after you have formed a reasoned opinion should you read the solution. Why torture yourself thinking? Why jog? Why do push-ups?
> If you are given a hammer with which to drive nails at the age of three you may think to yourself, "OK, nice." But if you are given a hard rock with which to drive nails at the age of three, and at the age of four you are given a hammer, you think to yourself, "What a marvellous invention!" You see, you can't really appreciate the solution until you first appreciate the problem.
> What are the problem of physics? How to calculate things? Yes - but much more. The most important problem in physics is perception, how to conjure mental images, how to separate the non-essentials from the essentials and get to the hear of a problem, HOW TO ASK YOURSELF QUESTION. Very often these questions have little to do with calculations and have simple yes or no answers: Does a heavy object dropped at the same time and from the same height as a light object strike the earth first? Does the observed spe |
09950ec4-2938-465f-9775-0ed9cd5a0a91 | trentmkelly/LessWrong-43k | LessWrong | New LW Meetup: Turku
This summary was posted to LW Main on August 7th. The following week's summary is here.
New meetups (or meetups with a hiatus of more than a year) are happening in:
* Turku: 08 August 2015 03:00PM
Irregularly scheduled Less Wrong meetups are taking place in:
* Ann Arbor meetup: 22 August 2015 02:00AM
* Australia-wide Online Hangout - August: 09 August 2015 07:30PM
* Frankfurt Meetup: 13 August 2015 06:30PM
* Munich LessWrong Meetup August: Correctness Heuristics: 15 August 2015 07:48PM
* San Francisco Meetup: Excerpts from good writing: 10 August 2015 06:15PM
* Scotland August meetup: 09 August 2015 02:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* Austin, TX - Caffe Medici: 08 August 2015 01:30PM
* Canberra: Guilt: 08 August 2015 06:00PM
* Moscow LW lecture centre meetup: 09 August 2015 03:00PM
* Sydney Meetup - August: 26 August 2015 06:30PM
* Sydney Rationality Dojo - September: 06 September 2015 04:00PM
* Sydney Meetup - September: 23 September 2015 06:30PM
* Vienna: 15 August 2015 03:00PM
* Washington, D.C.: Fun and Games: 09 August 2015 03:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun!
In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front page every Friday. These are an attempt to collect informati |
e016746b-6167-48fb-8b10-2423d7d9dd02 | trentmkelly/LessWrong-43k | LessWrong | [Linkpost/Video] All The Times We Nearly Blew Up The World
This contains a nice summary of nuclear close calls. Includes Broken Arrow events, the Cuban Missile Crisis, and of course Petrov. |
80ead2ab-4763-43eb-bd0b-0977accee08f | StampyAI/alignment-research-dataset/lesswrong | LessWrong | A short calculation about a Twitter poll
Recently, the following poll was [posted on Twitter](https://twitter.com/lisatomic5/status/1690904441967575040):
>
> Everyone responding to this poll chooses between a blue pill or red pill.
>
>
> * if > 50% of ppl choose blue pill, everyone lives
> * if not, red pills live and blue pills die
>
>
> Which do you choose?
>
>
>
If we linearize the problem of how much you value the lives of other poll respondents compared to your own, there's actually a way to quantify whether you should vote red or blue depending on your beliefs about how the rest of the population will vote.
Suppose that we normalize the value of choosing the red pill at zero, and let N.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
be the total number of people responding to the poll excluding you. Then, choosing the blue pill only makes a difference in the following two cases:
1. If the number of people who vote blue excluding you, hereafter denoted B, is *exactly* ⌈N/2⌉, then your vote saves the lives of ⌈N/2⌉ people.
2. If B<⌈N/2⌉, then you die and get nothing in return.
If you value the lives of other poll respondents at a constant multiple ν of your own life, then you should pick blue if
P(B=⌈N/2⌉)⋅(ν⌈N/2⌉)>P(B<⌈N/2⌉)
and you should pick red if the inequality goes in the other direction.
A rough heuristic here is that if the distribution of B is unimodal with a maximum to the right of ⌈N/2⌉, you expect
⌈N/2⌉P(B=⌈N/2⌉)≥P(B<⌈N/2⌉)
so if ν=1 (meaning you value the lives of other poll respondents equally to your own) and you expect a blue majority in the precise sense defined above, you should always take the blue pill yourself. The exact cutoff on ν for you to take the blue pill is
ν≥P(B<⌈N/2⌉)P(B=⌈N/2⌉)⌈N/2⌉
A general heuristic is that if we think a red victory is a distinct possibility, so that P(B<⌈N/2⌉)=O(1), and the distribution doesn't decay too sharply around N/2, in general, we'll have P(B=⌈N/2⌉)=O(1/N) so that the cutoff ends up being ν≥O(1). In other words, you pick blue in realistic circumstances if you value other people's lives similarly to your own: ν can't be too far away from 1 if picking blue is to make sense as an individually altruistic decision, ignoring social pressures to pick blue *et cetera*.
Approximating B/N by a beta distribution for large N gives the following rough results:
| Probability of a red victory | Expected share of blue votes | Cutoff value of 1/ν, i.e. minimum value of your life in units of others' lives to choose red |
| --- | --- | --- |
| 10% | 55% | 22.4 |
| 10% | 65% | 7.1 |
| 10% | 75% | 3.7 |
| 20% | 55% | 11.6 |
| 20% | 65% | 3.5 |
| 20% | 75% | 1.4 |
I think the degree of altruism implied by choosing blue in this context is pretty strong for plausible distributions over B/N. For that reason, I think picking red is easily defensible even from an individually altruistic point of view (would you sacrifice your life to save the life of five strangers?)
There are higher-order considerations that are relevant beyond individual altruism, of course: society might have a set of norms to impose penalties on people who choose to take the red pill. However, the possible cost of not taking the red pill is losing your life, which suggests that such penalties would have to be quite steep to change these basic calculations as long as there is a non-negligible probability of a red victory that survives.
I suspect that if this were a real situation most people would choose to take the red pill in a situation where monitoring costs are high (e.g. which pill you take is a secret decision unless red wins) and social punishments are therefore difficult to enforce. |
2f01d7de-82cb-43e1-b742-81cb8ede5e77 | trentmkelly/LessWrong-43k | LessWrong | From halting oracles to modal logic
When reading Vladimir's recent post on defining UDT through modal logic, I had to think a bit about his actual definition of the modal formula(s) corresponding to UDT, because it was phrased in terms of an algorithm doing things instead of an actual modal formula. Then I remembered having worked through the correspondence in the context of modal agents, and it became clear what was going on. I think Vladimir's approach is really interesting and I want to refer to it in future posts, so I thought I'd write a tutorial, since it seems likely that others will have the same problem.
----------------------------------------
Like Vladimir, let's start with the simplest case of an agent that chooses between two actions, in a world with two different possible outcomes (one of which is preferred to the other). Let's write the world as a universe program, in the usual style of proof-based UDT; for example:
def U():
if A() == a1:
return 10
else:
return 5
And, in the style of UDT with a halting oracle, let's write our agent as a program with access to a halting oracle, which it uses to figure out whether certain sentences are provable (in Peano Arithmetic, let's say):
def A():
if Provable("A() == a1 --> U() == 10"):
return a1
elif Provable("A() == a2 --> U() == 10"):
return a2
return a2
(The reference to A() and U() inside the quotes is by quining.) Our agent tries to get utility 10 by trying all possible actions in order; for each action, it sees whether it can prove that this action leads to the desired outcome, and if so, it takes that action; if it doesn't find a good action, it randomly takes the last one.
----------------------------------------
Given a programming language with access to a halting oracle, there's a relation Run(m,n) such that if m is the Gödel number of a program in this language, then N⊨Run(m,n) if and only if this program halts and returns n. (The definition of Run(m,n) is very similar to t |
8d3453b0-d342-45ff-9a2b-1f3e632a34c3 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Billion Dollar Bots
Today's post, Billion Dollar Bots was originally published on November 22, 2008. A summary:
> An alternate scenario for the creation of bots, this time involving lots of cloud computing.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Brain Emulation and Hard Takeoff, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
d9d8561e-207e-4401-b9db-9e4ac030380b | trentmkelly/LessWrong-43k | LessWrong | Crony Beliefs
|
f24e10a3-33d0-4094-8dad-f35684448754 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Some ideas for epistles to the AI ethicists
Some papers, or ideas for papers, that I'd loved to see published in ethics journals like *Minds and Machines* or *Ethics and Information Technology*.[[1]](#fnfsk6e604vrk) I'm probably going to submit one of these to a [2023 AI ethics conference](https://ethics.iit.edu/node/904?mc_cid=5bbc3492b4&mc_eid=e33309075f) myself.
Why should we do this? Because we want today's grad students to see that the ethical problems of superhuman AI are a cool topic that they can publish a cool paper about. And we want to (marginally) raise the waterline for thinking about future AI, nudging the AI ethics discourse towards more matured views of the challenges of AI.
Secondarily, it would be good to leverage the existing skillsets of some ethicists for AI safety work, particularly those already working on AI governance. And having an academic forum where talking about AI safety is normalized bolsters other efforts to work on AI safety in academia.
The Ideas:
----------
### Explain the basic ideas of AI safety, and why to take them seriously.
* Iason Gabriel already had a [pretty good paper like this](https://arxiv.org/ftp/arxiv/papers/2001/2001.09768.pdf). But it's plausible that, right now, what the ethics discourse needs is more basic explanations of why AI safety is a thing at all.
* This paper might start out by making a case that superhuman AI is going to change the world, likely in the next 10-60 years (definitely unintuitive to many, but there are [AI Impacts surveys](https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/) and recent results to illustrate the point). Then the basic arguments that superhuman AI will not be automatically benevolent (easy rhetorical trick is to call it "superhuman technology," everyone knows technology is bad). Then the basic arguments that to get things to go well, the AI has to know a whole lot about what humans want (and use that knowledge the way we want).
* One issue with this might be that it presents the problem, but doesn't really point people towards solutions (this may be a problem that can be solved with quick citations). It also doesn't really motivate why this is an *ethics* problem. It also doesn't explain why we want the solution to the key "ethics-genre" problems to use a technical understanding of the AI, rather than a human- or society-centric view.
### A more specific defense of the validity of transformative-AI-focused thinking as a valid use of ethicists' time.
* The core claim is that getting AIs to want want good things and not bad things is an unsolved ethics problem. Ethics, not engineering, because the question isn't "how do we implement some obvious standard," the question is "what is even a good standard in the first place?"
* But almost as important are secondary claims about what *actual progress* on this question looks like. The end goal is a standard that is connected to technical picture of how the AI will learn this information about humans, and how it will use it to make decisions.
* So the overall thrust is "given that AI safety is important, there is a *specific sort* of ethics-genre reasoning that is going to be useful, and here are some gestures towards what it might look like."
* You can put more than one of these ideas into a paper if you want. This particular idea feels to me like it could benefit from being paired with another topic before or after it.
* Dunking on specific mistakes, like talking about "robots" rather than "optimization processes," should probably be done with care and tact.
### A worked example of "dual use" ethics - a connection between thinking about present-day problems and superhuman AI.
* I expect most of the examples to be problems that sound relevant to the modern day, but that sneakily contain most of the alignment problem.
* E.g. [Xuan's AI that takes actions in response to laws](https://www.lesswrong.com/posts/Cty2rSMut483QgBQ2/what-should-ai-owe-to-us-accountable-and-aligned-ai-systems) that we really want to follow the spirit of the law. Although that's a bit too futuristic, actually, because we don't have much present-day concern about AI systems that themselves interpret laws.
* Probably something about recommender systems is the most clear-cut example. But non-clear-cut examples (e.g. drawing a connection between rules for industrial robots and desiderata for superhuman AI) are also likely interesting.
* One thing that makes a good AI ethics paper, in my opinion, is that it doesn't merely point out a single problem, it makes a useful generalization from a single (typically obvious) moral problem to a general family of problems, and then uses a technical understanding of the problems to formulate guidelines to help ameliorate the entire family. (E.g. "We were building some delivery drones. One potential misuse of these drones would be spying on customers or passers-by, so we took some specific steps to avoid that, and also formulated a general principle that says to downgrade the drone's camera so that it doesn't have excess capabilities not needed for its intended job."[[2]](#fntkl2owesnn)) I'm not saying it's necessary to do this in every paper, but you'll want to lay out ideas with enough clarity that such a thing is *possible*.
### Attempt to defend some specific ethics question as being an open problem relevant to superhuman AI.
* It's not obvious to me that we know many examples in this category. But it feels like it would stimulate some interesting work.
* For me, the closest would be some elaboration of "How do we pick a state for a value learning AI to bootstrap from?" This is a thing we don't know how to do and it sure seems like there are some ethics questions in there - ones you can write a much better essay about than just "which monkey gets the banana?[[3]](#fn14cv9utzldbl)".
### Reports on AI Safety progress.
* The basic idea is to relay not-too-technical AI safety research in a way that's of interest to ethicists.
* Posts fall along a spectrum in terms of fit. I'm best at remembering my own posts, so for a good example my first thought is [me trying to develop some rules to avoid Relative Goodhart](https://www.lesswrong.com/s/aJvgWxkCBWpHpXti4/p/dmp9PZjpSSX5NeXHM). Looking through recent posts, Alex's [Reward is not the optimization target](https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target#Appendix__The_field_of_RL_thinks_reward_optimization_target) might be interesting - you'd have to spell out the relevance to AI safety (and therefore ethics) more clearly for a broad audience, but it's there. On the opposite hand, [Neel and Tom's recent mechanistic analysis of grokking](https://www.lesswrong.com/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking) wouldn't be a good fit for that audience.
* Bonus points if you demonstrate how some way of thinking developed for use in AI safety has interesting applications to present-day ethics questions.
### Talk about principles for AI governance, informed by an awareness that transformative AI is important.
* I did say in the introduction that AI governance is a field where leveraging the knowledge of practicing ethicists is a specifically good idea. It's just so not in my wheelhouse that I can't say much about it.
* Most ethics papers like this are mediocre, but the ones that are good tend to be influential.[[4]](#fnst2mmzq3pc)
* One issue is that actual concrete AI governance ideas are hard to come by. But broad principles are probably easier to talk about than concrete ideas.
1. **[^](#fnreffsk6e604vrk)**For my overview of the state of the field, see the two Reading the Ethicists posts ([1](https://www.lesswrong.com/posts/D2biiCrF7E62kDbGA/reading-the-ethicists-a-review-of-articles-on-ai-in-the), [2](https://www.lesswrong.com/posts/6DwprCdC7eErCRZkx/reading-the-ethicists-2-hunting-for-ai-alignment-papers)).
2. **[^](#fnreftkl2owesnn)**Actual example from Cawthorne and van Wysnberghe, *Science and Engineering Ethics*, 2020
3. **[^](#fnref14cv9utzldbl)**A genre of essay characterized by dishy, nontechnical speculation about which humans will benefit from superhuman AI.
4. **[^](#fnrefst2mmzq3pc)**E.g. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations, by Floridi et al., *Minds and Machines*, 2018 |
a8ce9b75-db3a-46ea-abc0-2c60c73f383c | trentmkelly/LessWrong-43k | LessWrong | Deliberately Vague Language is Bullshit
Bullshit is what comes out of the mouth of someone who values persuasion over truth. Truth is about communicating information. Vague language communicates less information than precise language.
It is important to distinguish vague language from general, poetic and simple language.
* Vague language is distinct from general language. Vague language communicates less information than regular language. General language communicates more information than regular language. Broader claims are more easily falsifiable than narrow claims. The more falsifiable a claim is the stronger a claim it is. The stronger a claim is the more information it communicates.
* Vague language is distinct from poetic language. Poetic language has very high information density. High information density is the opposite of vagueness.
* Vague language is distinct from simple language. Simple language is easy to understand. Simple language should be used most of the time because simple language is more convenient than complicated language. In disagreements, simple vague language should be replaced with precise language—even at the cost of some complexity.
Deliberately vague language increases vagueness without increasing generality, poetry or simplicity. It obscures the truth.
* "No evidence" is vaguer than "evidence of absence".
* How many shares of a company you own is vaguer than what percentage you own.
The people with a need to obscure the truth are those with a political or social agenda. Vague language is endemic to taboo subjects. Deliberately vague language is such a reliable signal of taboo subjects that you can use it to discover hidden taboos. If you find yourself or others using deliberately vague language around a topic it means there is something you can't say.
The most precise language is numerical. If numbers could be used but aren't then you are dealing with a taboo subject.
* "Smart" is vaguer than IQ thresholds.
* "Fit" is vaguer than BMI and bodyfat percentage.
Y |
c0c17900-2d1a-4a80-858a-6678922d98e6 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | [Link] Why I’m optimistic about OpenAI’s alignment approach
The post lays out some arguments in favor of [OpenAI’s approach to alignment](https://openai.com/blog/our-approach-to-alignment-research/) and responds to common objections. |
daa6992c-c729-41cf-a25f-fea26a76491e | trentmkelly/LessWrong-43k | LessWrong | [link] Book review: Mindmelding: Consciousness, Neuroscience, and the Mind’s Privacy
http://kajsotala.fi/2013/07/book-review-mindmelding-consciousness-neuroscience-and-the-minds-privacy/
I review William Hirstein's book Mindmelding: Consciousness, Neuroscience, and the Mind’s Privacy, which he proposes a way of connecting the brains of two different people together so that when person A has a conscious experience, person B may also have the same experience. In particular, I compare it to my and Harri Valpola's earlier paper Coalescing Minds, in which we argued that it would be possible to join the brains of two people together in such a way that they'd become a single mind.
> Fortunately, it turns out that the book and the paper are actually rather nicely complementary. To briefly summarize the main differences, we intentionally skimmed over many neuroscientific details in order to establish mindmelding as a possible future trend, while Hirstein extensively covers the neuroscience but is mostly interested in mindmelding as a thought experiment. We seek to predict a possible future trend, while Hirstein seeks to argue a philosophical position: Hirstein focuses on philosophical implications while we focus on societal implications. Hirstein talks extensively about the possibility of one person perceiving another’s mental states while both remaining distinct individuals, while we mainly discuss the possibility of two distinct individuals coalescing together into one.
I expect that LW readers might be particularly interested in some of the possible implications of Hirstein's argument, which he himself didn't discuss in the book, but which I speculated on in the review:
> Most obviously, if another person’s conscious states could be recorded and replayed, it would open the doors for using this as entertainment. Were it the case that you couldn’t just record and replay anyone’s conscious experience, but learning to correctly interpret the data from another brain would require time and practice, then individual method actors capable of immersing themsel |
5f20dbd9-e6c5-48c4-8368-d3daa191b16f | trentmkelly/LessWrong-43k | LessWrong | What happened to the annual LW demographic surveys?
Scott Alexander once ran an annual survey of the LessWrong community. After 2014 he seems to have stopped and the surveys happened intermittently. User namespace picked up the torch in 2016 and 2017. Bob Jacobs also did one in 2020.
Since then it doesn't look like any have been done.
Given that the early surveys were popular, I'm surprised they no longer happen. Is there a simple reason for this I might have missed? |
513a75aa-0d64-4bf9-958b-3f8c3de9d90e | trentmkelly/LessWrong-43k | LessWrong | Progressive Highlighting: Picking What To Make Into Flashcards
Using a flashcard system like Anki can help maintain long-term knowledge. To use flashcards effectively, it's important to analyze how they fit into the larger task of reading and review.
Understanding and Prioritization: A Chicken and Egg Problem
We can't make flashcards of every bit of knowledge conveyed in our textbook. How do we know what's important and what's not, unless we are already familiar with the subject? After all, the sentences in the textbook can be:
* Core knowledge that we want to make into flashcards
* Underlying principles or equations that are extremely important, but will come up so many times that we don't need flashcards to remember them.
* Details that we can re-derive or infer if we remember the "core knowledge"
* Tangents and examples that help explain or give context to the core knowledge, but aren't critical to remember
* Filler material
Many learners attempt to make notes and flashcards as they go, but lack the understanding that would let them prioritize better. Others just skim or read, then return later, but will have forgotten much of the material along the way.
Mini-Towers
As I wrote in The Multi-Tower Study Strategy, some topics are split into discrete, minimally-intersecting "boxed topics." This makes them easier to learn, since on your first encounter, forgetting about one won't damage your ability to understand the next.
Other topics are "tower topics," which build on a set of fundamentals, layer by layer. You are constantly at risk of forgetting the fundamentals, even while new information is piled on. This makes them harder than "boxed topics."
Of course, almost all topics are "tower topics" on some level. A single page of dense biochemistry might introduce a multi-step reaction, requiring you to absorb and understand a number of chemical names, reaction types, products, and the way the overall reaction fits into larger pathways.
If you read closely, sentence by sentence, looking up terms you don't know, you ca |
a5b137b3-a05e-494f-b43b-b3734800bcd2 | trentmkelly/LessWrong-43k | LessWrong | The Importance of Sidekicks
[Reposted from my personal blog.]
Mindspace is wide and deep. “People are different” is a truism, but even knowing this, it’s still easy to underestimate.
I spent much of my initial engagement with the rationality community feeling weird and different. I appreciated the principle and project of rationality as things that were deeply important to me; I was pretty pro-self improvement, and kept tsuyoku naritai as my motto for several years. But the rationality community, the people who shared this interest of mine, often seemed baffled by my values and desires. I wasn’t ambitious, and had a hard time wanting to be. I had a hard time wanting to be anything other than a nurse.
It wasn’t until this August that I convinced myself that this wasn’t a failure in my rationality, but rather a difference in my basic drives. It’s around then, in the aftermath of the 2014 CFAR alumni reunion, that I wrote the following post.
> I don’t believe in life-changing insights (that happen to me), but I think I’ve had one–it’s been two weeks and I’m still thinking about it, thus it seems fairly safe to say I did.
>
> At a CFAR Monday test session, Anna was talking about the idea of having an “aura of destiny”–it’s hard to fully convey what she meant and I’m not sure I get it fully, but something like seeing yourself as you’ll be in 25 years once you’ve saved the world and accomplished a ton of awesome things. She added that your aura of destiny had to be in line with your sense of personal aesthetic, to feel “you.”
>
> I mentioned to Kenzi that I felt stuck on this because I was pretty sure that the combination of ambition and being the locus of control that “aura of destiny” conveyed to me was against my sense of personal aesthetic.
>
> Kenzi said, approximately [I don't remember her exact words]: “What if your aura of destiny didn’t have to be those things? What if you could be like…Samwise, from Lord of the Rings? You’re competent, but most importantly, you’re *loyal* to Frodo. |
5b7ef6e6-3548-4509-8556-48958606de96 | trentmkelly/LessWrong-43k | LessWrong | Let's all learn stats!
I want to become Stronger!
Udacity are running an Introduction to Statistics course starting on the 25th June 2012.
Many of us could stand to learn some more stats, I certainly could. This seems like a great opportunity!
It is mandatory for all LWers to enroll in this course.
Update: the last line was a joke. Obviously people are not finding it funny. Sorry. |
448aab0d-5bb6-444f-a584-25721f1d21ab | StampyAI/alignment-research-dataset/arbital | Arbital | Group theory: Examples
# Even and odd functions
Recall that a function $f : \mathbb{R} \to \mathbb{R}$ is [even](https://arbital.com/p/even_function) if $f(-x) = f(x)$, and [odd](https://arbital.com/p/odd_function) if $f(-x) = - f(x)$. A typical example of an even function is $f(x) = x^2$ or $f(x) = \cos x$, while a typical example of an odd function is $f(x) = x^3$ or $f(x) = \sin x$.
We can think about evenness and oddness in terms of [group theory](https://arbital.com/p/3g8) as follows. There is a group called the [cyclic group](https://arbital.com/p/cyclic_group) $C_2$ of [order](https://arbital.com/p/3gg) $2$ acting on the set of functions $\mathbb{R} \to \mathbb{R}$: in other words, each element of the group describes a function of [type](https://arbital.com/p/3sz)
$$ (\mathbb{R} \to \mathbb{R}) \to (\mathbb{R} \to \mathbb{R}) $$
meaning that it takes as input a function $\mathbb{R} \to \mathbb{R}$ and returns as output another function $\mathbb{R} \to \mathbb{R}$.
$C_2$ has two elements which we'll call $1$ and $-1$. $1$ is the identity element: it acts on functions by sending a function $f(x)$ to the same function $f(x)$ again. $-1$ sends a function $f(x)$ to the function $f(-x)$, which visually corresponds to reflecting the graph of $f(x)$ through the y-axis. The group multiplication is what the names of the group elements suggests, and in particular $(-1) \times (-1) = 1$, which corresponds to the fact that $f(-(-x)) = f(x)$.
Any time a group $G$ [acts](https://arbital.com/p/3t9) on a set $X$, it's interesting to ask what elements are [invariant](https://arbital.com/p/invariant_under_a_group_action) under that group action. Here the invariants of functions under the action of $C_2$ above are the even functions, and they form a [https://arbital.com/p/subspace](https://arbital.com/p/subspace) of the [vector space](https://arbital.com/p/vector_space) of all functions.
It turns out that every function is uniquely the sum of an even and an odd function, as follows:
$$f(x) = \underbrace{\frac{f(x) + f(-x)}{2}}_{\text{even}} + \underbrace{\frac{f(x) - f(-x)}{2}}_{\text{odd}}.$$
This is a special case of various more general facts in [representation theory](https://arbital.com/p/3tn), and in particular can be thought of as the simplest case of the [discrete Fourier transform](https://arbital.com/p/discrete_Fourier_transform), which in turn is a [toy model](https://arbital.com/p/mathematical_toy_model) of the theory of [Fourier series](https://arbital.com/p/Fourier_series) and the [Fourier transform](https://arbital.com/p/Fourier_transform).
It's also interesting to observe that the cyclic group $C_2$ shows up in lots of other places in mathematics as well. For example, it is also the group describing how even and odd numbers add<sup>1</sup> (where even corresponds to $1$ and odd corresponds to $-1$); this is the simplest case of [modular arithmetic](https://arbital.com/p/modular_arithmetic).
<sup>1</sup><sub>That is: an even plus an even make an even, an odd plus an odd make an even, and an even plus an odd make an odd.</sub> |
fbffcaad-1092-4e56-8465-d5e50fa4c044 | trentmkelly/LessWrong-43k | LessWrong | Impact of delaying vaccinations on the total number of deaths
When I've heard, that because of a single death, which might be attributed to something perhaps related to the vaccine, someone has decided to pause vaccinations in a whole country for two weeks, and then several other countries followed the suite, my first question was:
"But, how many more people will die because of delaying vaccinations for 14 days?!"
Because, on the one side we have "perhaps 1 per B*100 000 000" (guesstimate based on this) and on the other we have something like "at least A*100", and without knowing A and B, how can I do any cost-benefit calculation and form opinion on this?
I'm skeptical in cases like this, because politicians obviously care more about signaling their care about safety, then they actually care about safety. For example "pausing vaccinations because they might have side effects" clearly helps to achieve the first goal, regardless of whether or not it indeed increases public safety. And it can be even worse, if the signaling is aimed not at the citizens, but at someone else.
I've realized, that I don't know how many excess deaths are caused by delaying vaccination for two weeks.
My first intuition was that it's roughly 14*number of deaths per day attributed to COVID. But this isn't the correct way to count this for various reasons.
My second intuition was: hey, surely someone has already thought this through, and made a simulator - year ago there were plenty of Javascript pandemic simulators - let's just see what will happen if I pause vaccinations for two weeks, and I'm done! Alas, none of them has even the concept of vaccinations implemented - apparently back then developers were more pessimistic.
So, I thought that my last resort is to create such a simulation myself, which borders with insanity given how difficult statistics are and how easily one can shoot themselves in foot. But, since nobody else seems to have created such a simulation for me (or my google skills don't match their SEO skills), I'm forced to try. So |
6515d64e-f482-4a88-9ea6-2b19429cc3f4 | StampyAI/alignment-research-dataset/special_docs | Other | Revisiting the Insights model
Note: This demo is in beta, and you may experience issues such as strange numerical behavior at this time.
Last year, we released our [insights-based model](http://mediangroup.org/insights) that generated a projected timeline using historical data and a prior distribution. We’ve revisited it to address its limitations and improve the data it draws from.
The model relies on the assumption that progress in AI relies on accumulating \*insights\*, fundamental advances in our understanding, that allow for improvements in capacity without increase in resources expended. This choice makes an attempt to separate out the effects of true technological advancement from the effects of an increase in computing power devoted to a problem, both of which can increase the capacity of machine intelligence to solve complex problems. Computational power is an expensive, finite resource, and without a paradigm-shifting improvement in computing itself, precise allocation of that power alone will not be enough to continue advancing AI’s problem-solving capabilities.
The interactive model below provides two methods of capturing a prior about how many more advances in understanding are required to achieve human-level machine intelligence. Based on that prior, and on the pace of insight discovery during a particular historical period, we compute a probability distribution over time of the likelihood humans will develop human-level AI. Results of this calculation are shown in the “Implied timeline” graph below.
Step 1: Specify a prior for current progress
--------------------------------------------
### Option A: Draw a distribution
For various “percentages of the way” done AI research could be, in terms of percentage of the necessary insights discovered, what is the probability that AI research is not yet that percentage done?
The graph below allows you to draw a distribution of how likely it is we have achieved a particular portion of the insights required for human-level machine intelligence.
Reset
### Option B: Pre-set priors from Pareto distribution
Instead of drawing a cumulative distribution function, you can instead use a pre-set prior based on a [Pareto distribution](https://en.wikipedia.org/wiki/Pareto\_distribution).
To make the choice of Pareto distribution more intuitive, we parameterize the distribution in terms of a probability \*q\*, equal to the probability that a doubling in number of insights (starting from the minimum number of insights) would result in a sufficient set of insights. \*q\* can be set directly, or we can sample from a mixture of Pareto distirbutions, where the \*q\* parameters are sampled from a uniform distribution or a beta distribution.
#### Number of samples to take when running the simulation
#### Set \*q\* directly
\*q\*
Set \*q\*
#### Sample \*q\* uniformly over (0,1)
Sample \*q\*
#### Sample \*q\* with Beta(α, β)
\*α\*:
\*β\*:
Sample \*q\*
Note: The simulator can be very slow for larger values of \*q\*, as most of the samples need to be thrown away.
Step 2: Specify pace of progress
--------------------------------
Which period in history is most representative of the future pace of AI insight discovery?
The graph below plots the aggregate of insights discovered over time and allows selection of a particular period of history in AI research. The curve fit to that period (linear, exponential, or sigmoidal) is used to project the future distribution of discoveries.
Regression mode:
Linear
Exponential
Sigmoidal
Result: Implied timeline
------------------------
Sources
--------
The data used in this model is available as a [JSON file](http://mediangroup.org/docs/insights.json).
The [source code](https://github.com/Median-Group/insights2) for the demo can be found on the Median Group [github](https://github.com/Median-Group/). |
fdd8044a-5566-4d16-8eb4-f5dd12cd3aec | trentmkelly/LessWrong-43k | LessWrong | Believing vs understanding
Every year before the NBA draft, I like to watch film on all of the prospects and predict how good everyone will be. And every year, there ends up being a few guys who I am way higher on than everyone else. Never fails. In 2020 it was Precious Achiuwa. In 2019 it was Bol Bol. In 2018 it was Lonnie Walker. In 2017 it was Dennis Smith. In 2016 it was Skal Labissiere. And now this year in 2021, it is Ziaire Williams.
I have Ziaire Williams as my 6th overall prospect, whereas the consensus opinion is more in the 15-25 range. If I'm being honest, I think I probably have him too high. There's probably something that I'm not seeing. Or maybe something I'm overvaluing. NBA draft analysis isn't a perfectly efficient market, but it is somewhat efficient, and I trust the wisdom of the crowd more than I trust my own perspective. So if I happened to be in charge of drafting for an NBA team (wouldn't that be fun...), I would basically adopt the beliefs of the crowd.
But at the same time, the beliefs of the crowd don't make sense to me.
Upside is really important for NBA players. Check this out:
It shows the Championship Odds over Replacement Player (CORP) for Michael Jordan. Role players and sub-all-stars have something like a 2-4% CORP, whereas All-NBA and MVP types have something like a 10-20% CORP. And I don't even think that fully captures the value that these unicorn-type players have. I believe it's easier to build a roster around a unicorn, as one example. So if you can hit on a unicorn, it's huge.
Ziaire Williams is one of the few guys in this draft who I think has that unicorn potential. He's 6'9 with a 6'11 wingspan: great size for a wing. He's got the shake as a ball handler to break his man down off the dribble and create. His shooting isn't there yet, but the fluidity, FT% and tough shot making ability all make me think he can develop into a really good shooter. He doesn't have the ability to get to the rim yet, but his size, explosiveness, fluidity, and shake |
7666197d-225e-4842-96e0-28329a630763 | trentmkelly/LessWrong-43k | LessWrong | Challenges to Yudkowsky's Pronoun Reform Proposal
> Go, Soul, the body's guest,
> Upon a thankless errand:
> Fear not to touch the best;
> The truth shall be thy warrant:
> Go, since I needs must die,
> And give the world the lie.
>
> —"The Lie" by Walter Raleigh
Summary
* In a February 2021 Facebook post, Eliezer Yudkowsky inveighs against English's system of singular third-person pronouns: as a matter of language design, English's lack of a gender-neutral singular third-person pronoun is a serious flaw: you shouldn't be required to commit to a stance on what sex someone is in order to say a grammatical sentence about her or him.
* This seems fine as a critique of the existing English language. However, Yudkowsky then goes on to proclaim, in connection with pronouns for transgender people, that "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he", with a default for those-who-haven't-asked that goes by gamete size' and to say that this just is the normative definition. Because it is logically rude, not just socially rude, to try to bake any other more complicated and controversial definition into the very language protocol we are using to communicate."
* However, this allegedly "simplest and best" proposal fails to achieve its stated aim of avoiding baking controversial claims into the language grammar. The reason trans people want others to use their designated pronouns is because they're trying to control their socially-perceived sex category and English speakers interpret she and he as conveying sex-category information. Yudkowsky's proposed circular redefinition is functionally "hypocritical": if it were actually true that he simply referred to those who take the pronoun he, then there would be no reason for trans people to care which pronoun people used for them.
* The "meaning" of language isn't some epiphenominal extraphysical fact that can be declared or ascertained separately from common usage. The word "dog" means what it does because English speak |
8dfe5c3f-2abb-4281-9ae3-38f11a98ee26 | StampyAI/alignment-research-dataset/blogs | Blogs | March 2019 Newsletter
Want to be in the reference class “people who solve the AI alignment problem”?
We now have [**a guide on how to get started**](https://www.lesswrong.com/posts/PqMT9zGrNsGJNfiFR/alignment-research-field-guide), based on our experience of what tends to make research groups successful. (Also on the [AI Alignment Forum](https://www.alignmentforum.org/posts/PqMT9zGrNsGJNfiFR/alignment-research-field-guide).)
#### Other updates
* Demski and Garrabrant’s introduction to MIRI’s agent foundations research, “[Embedded Agency](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh/p/i3BTagvt3HbPMx6PN),” is now available (in lightly edited form) [as an arXiv paper](https://arxiv.org/abs/1902.09469).
* New research posts: [How Does Gradient Descent Interact with Goodhart?](https://www.lesswrong.com/posts/pcomQ4Fwi7FnfBZBR/how-does-gradient-descent-interact-with-goodhart); [“Normative Assumptions” Need Not Be Complex](https://www.alignmentforum.org/posts/QzsCrzGd4zkNwk9cd/normative-assumptions-need-not-be-complex); [How the MtG Color Wheel Explains AI Safety](https://www.lesswrong.com/posts/9CKBtxWtjvminNTmC/how-the-mtg-color-wheel-explains-ai-safety); [Pavlov Generalizes](https://www.alignmentforum.org/posts/XTgkhjNTEi97WHMi6/pavlov-generalizes)
* [Several MIRIx groups](https://intelligence.org/2019/03/09/a-new-field-guide-for-mirix/) are expanding and are looking for new members to join.
* Our [summer fellows program](https://intelligence.org/2019/03/10/applications-are-open-for-msfp/) is accepting applications through March 31.
* LessWrong’s web edition of *Rationality: From AI to Zombies* at [lesswrong.com/rationality](https://www.lesswrong.com/rationality) is now fully updated to reflect the print edition of *Map and Territory* and *How to Actually Change Your Mind*, the first two books. ([Announcement here.](https://www.lesswrong.com/posts/c7EjFK8yTBwdHewrg/new-versions-of-posts-in-map-and-territory-and-how-to))
#### News and links
* OpenAI’s [GPT-2 model](https://openai.com/blog/better-language-models/) shows meaningful progress on a wide variety of language tasks. OpenAI adds:
> Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights. […] We believe our release strategy limits the initial set of organizations who may choose to [open source our results], and gives the AI community more time to have a discussion about the implications of such systems.
>
>
* *The Verge* [discusses](https://www.theverge.com/2019/2/21/18234500/ai-ethics-debate-researchers-harmful-programs-openai) OpenAI’s language model concerns along with [MIRI’s disclosure policies](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions) for our own research. See other discussion by [Jeremy Howard](https://www.fast.ai/2019/02/15/openai-gp2/), [John Seymour](https://twitter.com/_delta_zero/status/1097314384731226113), and [Ryan Lowe](https://towardsdatascience.com/openais-gpt-2-the-model-the-hype-and-the-controversy-1109f4bfd5e8).
* AI Impacts summarizes [evidence on good forecasting practices from the Good Judgment Project](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project/).
* Recent AI alignment ideas and discussion: Carey on [quantilization](https://www.alignmentforum.org/posts/Rs6vZCrnQFWQ4p37P/when-to-use-quantilization); Filan on [impact regularization methods](https://www.alignmentforum.org/posts/wzPzPmAsG3BwrBrwy/test-cases-for-impact-regularisation-methods); Saunders’ [HCH Is Not Just Mechanical Turk](https://www.alignmentforum.org/posts/4JuKoFguzuMrNn6Qr/hch-is-not-just-mechanical-turk) and [RL in the Iterated Amplification Framework](https://www.alignmentforum.org/posts/fq7Ehb2oWwXtZic8S/reinforcement-learning-in-the-iterat); Dai on philosophical difficulty ([1](https://www.alignmentforum.org/posts/w6d7XBCegc96kz4n3/the-argument-from-philosophical-difficulty), [2](https://www.alignmentforum.org/posts/EByDsY9S3EDhhfFzC/some-thoughts-on-metaphilosophy)); Hubinger on [ascription universality](https://www.alignmentforum.org/posts/R5Euq7gZgobJi5S25/nuances-with-ascription-universality); and Everitt’s [Understanding Agent Incentives with Causal Influence Diagrams](https://medium.com/@deepmindsafetyresearch/understanding-agent-incentives-with-causal-influence-diagrams-7262c2512486).
* The Open Philanthropy Project announces their largest grant to date: $55 million to launch the [Center for Security and Emerging Technology](https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology), a Washington, D.C. think tank with an early focus on “the intersection of security and artificial intelligence”. See also CSET’s many [jobpostings](https://cset.georgetown.edu/careers/).
The post [March 2019 Newsletter](https://intelligence.org/2019/03/14/march-2019-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
762e62da-dcae-4276-9e51-2fb5b67f92ee | trentmkelly/LessWrong-43k | LessWrong | Dark Arts 101: Using presuppositions
Sun Tzu said, "The supreme art of war is to subdue the enemy without fighting." This is also true in rhetoric. The best way to get a belief accepted is to fool people into thinking that they have already accepted it.
(Note, first-year students, that I did not say, "The best way to convince people of a belief". Do not try to convince people! It will not work; and it may start them thinking.)
An excellent way of doing this is to embed your desired conclusion as a presupposition to an enticing argument. If you are debating abortion, and you wish people to believe that human and non-human life are qualitatively different, begin by saying, "We all agree that killing humans is immoral. So when does human life begin?" People will be so eager to jump into the debate about whether a life becomes "human" at conception, the second trimester, or at birth (I myself favor "on moving out of the house"), they won't notice that they agreed to the embedded presupposition that the problem should be phrased as a binary category membership problem, rather than as one of tradeoffs or utility calculations.
Consider the recent furor over whether WikiLeaks leader Julian Assange is a journalist, or can be prosecuted for espionage. I don't know who initially asked this question. The earliest posing of the question that I can find that relates it to the First Amendment is this piece from Fox News on Dec. 8; but Marc Thiessen's column in the Washington Post of Aug. 3 has similar implications. Note that this question presupposes that First Amendment protection applies only to journalists! There is no legal precedent for this that I'm aware of; yet if people spend enough time debating whether Julian Assange is a journalist, they will have unknowingly convinced themselves that ordinary citizens have no First Amendment rights. (We can only hope that this was an artful stroke made from the shadows by some great master of the Dark Arts, and not a mere snowballing of an ignorant questi |
9229ac95-0544-4875-8761-e548e8ca4003 | trentmkelly/LessWrong-43k | LessWrong | A toy evaluation of inference code tampering
Work done with James Faina, Evan Hubinger and Ethan Perez.
This is ongoing research on speculative threats, which might be of interest to researchers in the field of AI safety. We ask that readers treat this as a work in progress, rather than as a mature research paper.
To prevent highly capable and potentially misaligned LLMs from taking bad actions, we might want to monitor all of their outputs (Inan et al. 2023, Greenblatt 2024, Balesni 2024). How do we ensure that highly capable LLMs can’t disable the monitoring system?
Well-configured permissions and good cybersecurity may prevent highly capable LLMs from modifying inference code they are not supposed to modify, but highly capable LLMs might be used to improve the inference code, which opens up the risk of LLMs subtly disabling the monitor.
We build a toy setting where we study how well current LLMs can generate such bugs without it being obvious to an LLM monitor and to our judgment. We find that prompted LLMs succeed in disabling the monitor 4-16% of the time, but only ~0.1% of the time in hard-to-detect ways, even with hidden chain of thought reasoning.
While our setup is not very realistic, and while we are not sure that our prompts are making the LLMs we are using try as hard as they can, our best guess is that current LLMs are not capable enough to disable a generation monitor while bypassing even basic code reviews. Experiments on future, more capable models will be needed to assess which mitigations would be necessary to rule out the threat of LLMs disabling generation monitors.
Data here. Code here.[1]
The threat model
Background on highly capable LLMs
In the coming years AI labs might build LLMs that are capable enough to fully automate the work of an entry-level remote-only researcher. These LLMs might be very useful, including for safety research. For example, these LLMs could be used to speed up safety-critical projects such as aligning even more capable LLMs.
But using these highly capa |
e03d288b-b22f-477c-8fce-3d9c117607ca | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] My Wild and Reckless Youth
Today's post, My Wild and Reckless Youth was originally published on 30 August 2007. A summary (taken from the LW wiki):
> Traditional rationality (without Bayes' Theorem) allows you to formulate hypotheses without a reason to prefer them to the status quo, as long as they are falsifiable. Even following all the rules of traditional rationality, you can waste a lot of time. It takes a lot of rationality to avoid making mistakes; a moderate level of rationality will just lead you to make new and different mistakes.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Say Not "Complexity", and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
36305b60-74d0-4f58-9063-97c1020732f0 | trentmkelly/LessWrong-43k | LessWrong | Good Idealistic Books are Rare
Saith Robin in "Seeking a Cynic's Library":
Cynicism and Idealism are a classic yin and yang, a contradictory pair where we all seem to need both sides...
Books on education, medicine, government, charity, religion, technology, travel, relationships, etc. mostly present relatively idealistic views, though of course no view is entirely one way or the other. So one reason the young tend to be idealistic is that most reading material they can easily find and understand is idealistic.
My impression of this differs somewhat from Robin's (what a surprise).
I think that what we see in most books of the class Robin describes, are official views. These official views may leave out many unpleasant elements of the story. But because officialism also tries to signal authority and maturity, it's hardly likely to permit itself any real hope or enthusiasm. Perhaps an obligatory if formal nod in the direction of some popular good cause, because this is expected of officialdom. But this is hardly an idealistic voice.
What does a full-blown nonfictional idealism look like? Some examples that I remember from my own youth:
* Jerry Pournelle's A Step Farther Out, an idealistic view of space travel and more general technological advancement, and the possibility of rising standards of living as opposed to Ehrlichian gloomsaying.
* Brown, Keating, Mellinger, Post, Smith, and Tudor's The Incredible Bread Machine, my childhood introduction to traditional capitalist values.
* Eric Drexler's Engines of Creation (and to a lesser extent Ed Regis's Great Mambo Chicken), my introduction to transhumanism.
* Richard Feynman's Surely You're Joking, Mr. Feynman (for traditional rationalist values).
Supposing you wanted your child to grow up an idealist - what nonfiction books like these could you find to give them? I don't find it easy to think of many - most nonfiction books are not like this.
On the other hand, I suspect that idealistic fiction aimed specifically at children is a |
409d1b27-56ae-401f-8ff7-2ca69cd5d9cb | trentmkelly/LessWrong-43k | LessWrong | GPT-4 is bad at strategic thinking
GPT-4 is known to pretty good at chess (see I played chess against ChatGPT-4 and lost! for one example). However, GPT-4 does not seem to be very good at strategic reasoning in general (it only really can do it if there is a greedy search algorithm).
I tried Hex and Connect4, it failed at both despite being able to explain the rules and even display the board with ASCII art. I was wondering if maybe it just has bad spatial reasoning, so I tried puzzles in natural language based on logical constraints. It failed these as well unless they were quite simple.
I even made a variant of chess up on the spot where the goal is to get any piece to the bank rank instead of capturing the King. It didn't stop me from "sacking" my queen by moving it to the bank rank as soon as their was a gap. So if it has an internal model of chess, it didn't figure out how to apply it to new objectives.
So I think GPT-4 must've learned a rudimentary chess engine; it is not applying general strategic reasoning to chess.
This doesn't necessarily mean GPT-4 can't be agentic, but it does suggest it is either a narrow one or a dumb one (or it's hiding its abilities). |
e7e46982-2cd1-40a5-bc9e-ba092c42478a | trentmkelly/LessWrong-43k | LessWrong | The Inefficiency of Theoretical Discovery
Previously: Why Neglect Big Topics.
Why was there no serious philosophical discussion of normative uncertainty until 1989, given that all the necessary ideas and tools were present at the time of Jeremy Bentham?
Why did no professional philosopher analyze I.J. Good’s important “intelligence explosion” thesis (from 19591) until 2010?
Why was reflectively consistent probabilistic metamathematics not described until 2013, given that the ideas it builds on go back at least to the 1940s?
Why did it take until 2003 for professional philosophers to begin updating causal decision theory for the age of causal Bayes nets, and until 2013 to formulate a reliabilist metatheory of rationality?
By analogy to financial market efficiency, I like to say that “theoretical discovery is fairly inefficient.” That is: there are often large, unnecessary delays in theoretical discovery.
This shouldn’t surprise us. For one thing, there aren’t necessarily large personal rewards for making theoretical progress. But it does mean that those who do care about certain kinds of theoretical progress shouldn’t necessarily think that progress will be hard. There is often low-hanging fruit to be plucked by investigators who know where to look.
Where should we look for low-hanging fruit? I’d guess that theoretical progress may be relatively easy where:
1. Progress has no obvious, immediately profitable applications.
2. Relatively few quality-adjusted researcher hours have been devoted to the problem.
3. New tools or theoretical advances open up promising new angles of attack.
4. Progress is only valuable to those with unusual views.
These guesses make sense of the abundant low-hanging fruit in much of MIRI’s theoretical research, with the glaring exception of decision theory. Our September decision theory workshop revealed plenty of low-hanging fruit, but why should that be? Decision theory is widely applied in multi-agent systems, and in philosophy it’s clear that visible progress in decis |
5a611d6c-b62a-48d2-b53c-72bedb521927 | trentmkelly/LessWrong-43k | LessWrong | Open thread, August 14 - August 20, 2017
IF IT'S WORTH SAYING, BUT NOT WORTH ITS OWN POST, THEN IT GOES HERE.
----------------------------------------
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "Make this post available under..." before submitting |
e6c3a718-7f5d-4828-81ac-6e8cbcae5ae5 | trentmkelly/LessWrong-43k | LessWrong | Ideas for AI labs: Reading list
Related: AI policy ideas: Reading list.
This document is about ideas for AI labs. It's mostly from an x-risk perspective. Its underlying organization black-boxes technical AI stuff, including technical AI safety.
Lists & discussion
* Towards best practices in AGI safety and governance: A survey of expert opinion (GovAI, Schuett et al. 2023) (LW)
* This excellent paper is the best collection of ideas for labs. See pp. 18–22 for 100 ideas.
* Frontier AI Regulation: Managing Emerging Risks to Public Safety (Anderljung et al. 2023)
* Mostly about government regulation, but recommendations on safety standards translate to recommendations on actions for labs
* Model evaluation for extreme risks (DeepMind, Shevlane et al. 2023)
* What AI companies can do today to help with the most important century (Karnofsky 2023) (LW)
* Karnofsky nearcasting: How might we align transformative AI if it’s developed very soon?, Nearcast-based "deployment problem" analysis, and Racing through a minefield: the AI deployment problem (LW) (Karnofsky 2022)
* Survey on intermediate goals in AI governance (Räuker and Aird 2023)
* Corporate Governance of Artificial Intelligence in the Public Interest (Cihon, Schuett, and Baum 2021) and The case for long-term corporate governance of AI (Baum and Schuett 2021)
* Three lines of defense against risks from AI (Schuett 2022)
* The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (Brundage et al. 2018)
* Adapting cybersecurity frameworks to manage frontier AI risks: defense in depth (IAPS, Ee et al. 2023)
Levers
* AI developer levers and AI industry & academia levers in Advanced AI governance (LPP, Maas 2023)
* This report is excellent
* "Affordances" in "Framing AI strategy" (Stein-Perlman 2023)
* This list may be more desiderata-y than lever-y
Desiderata
Maybe I should make a separate post on desiderata for labs (for existential safety).
* Six Dimensions of Operational Adequacy in AGI P |
59c778d1-2c9d-46ee-8694-c7f7eac2d6e2 | trentmkelly/LessWrong-43k | LessWrong | What are the components of intellectual honesty?
I have a pretty strong intuition that "intellectual honesty" points to a specific thing, with a characteristic set of behaviors (both outward (in a conversation/writing) and inward (patterns of thinking)). My concept of intellectual honesty also seems to largely coincide with the concept as used by others, but I'm not sure.
I am talking about a high standard. This isn't at all about violating a code of academic ethics.
I am talking about the sort of thing which helps to foster epistemic trust, helping to take conversations to higher levels. (But don't read those links if you already know enough what I mean to answer and want to avoid anchoring your view to mine.) |
0a5bcf48-a72a-4272-9d11-29b99bb357d1 | trentmkelly/LessWrong-43k | LessWrong | Toronto Meetup, May 24th
When: Tuesday, May 24th, 20:00
Where: The Duke of York, 39 Prince Arthur Avenue
Hi everyone,
The Toronto meetup group is having one of our bi-weekly meetings this Tuesday at the Duke of York. The reservation is under the name Spencer Sleep. I have requested a table on the patio. The reason for the move from the Bedford Academy (albeit by less than 10 metres) is that we are looking for somewhere quieter, thus more conducive to discussion. Now that it is warmer, one of our best bets is most likely patios, so we are going to try the patio at the Duke of York. Those of you who have been attending since the start may remember that we started out there. The reason we moved was the existence of live music every Thursday. Now that we are meeting on Tuesdays, that shouldn't be a problem.
As always, newcomers are extremely welcome.
If you want to hear about upcoming LessWrong events in Toronto, or have ideas about how or when those events should be run, join the Toronto LessWrong Google Group!
See you on Tuesday |
83458d88-26e4-470b-89ed-f6daef8796ac | trentmkelly/LessWrong-43k | LessWrong | [LINK] NYT Article about Existential Risk from AI
http://opinionator.blogs.nytimes.com/2013/01/27/cambridge-cabs-and-copenhagen-my-route-to-existential-risk/
Author: Huw Price (Bertrand Russell Professor of Philosophy at Cambridge)
The article is mainly about the Centre for the Study of Existential Risk and the author's speculation about AI (and his association with Jaan Tallinn). Nothing made me really stand up and think "This is something I've never heard on Less Wrong", but it is interesting to see Existential risk and AI getting more mainstream attention, and the author reproduces tabooing in his willful avoidance of attempting to define the term "intelligence".
The comments all miss the point or reproduce cached thoughts with frustrating predictability. I think I find them to be so frustrating because these do not seem to be unintelligent people (by the standards of the internet at least; their comments have good grammar and vocabulary), but they are not really processing. |
420b3609-27c0-4cfb-8297-26a90e3a629d | trentmkelly/LessWrong-43k | LessWrong | New to this community
Hello. I am a computer security consultant, programmer specializing in PC games, and podcaster. I am most interested in AI alignment and ethical development and I hope to learn from all of you. |
7ac3e18c-abe9-4090-908d-156e70720472 | trentmkelly/LessWrong-43k | LessWrong | [META] Can't post comments
I tried in FF 8.0 and Chrome 15.0, and couldn't post, but it worked in IE9. Anyone else having this problem? |
d8b606f1-32b0-429e-bbd0-3f02fb58a86a | trentmkelly/LessWrong-43k | LessWrong | Meetup : Reviving Cambridge Meetups
Discussion article for the meetup : Reviving Cambridge Meetups
WHEN: 15 November 2015 05:00:00PM (+0000)
WHERE: JCR Trinity College, Cambridge, UK
First Cambridge meetup in a long time! Hopefully of many. Come to Trinity's JCR at 17 next sunday and get to know all the other aspiring rationalists around and have a good time! (Place and time are only provisional, they might change depending on your availability so comment here to see how we can arrange it properly)
Discussion article for the meetup : Reviving Cambridge Meetups |
61a7c964-7788-4aec-b77c-e7254a41127b | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Models Don't "Get Reward"
*In terms of content, this has a lot of overlap with* [*Reward is not the optimization target*](https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target)*. I'm basically rewriting a part of that post in language I personally find clearer, emphasising what I think is the core insight.*
When thinking about deception and RLHF training, a simplified threat model is something like this:
* A model takes some actions.
* If a human approves of these actions, the human gives the model some reward.
* Humans can be deceived into giving reward in situations where they would otherwise not if they had more knowledge.
* Models will take advantage of this so they can get more reward.
* Models will therefore become deceptive.
*Before continuing, I would encourage you to really engage with the above. Does it make sense to you? Is it making any hidden assumptions? Is it missing any steps? Can you rewrite it to be more mechanistically correct?*
I believe that when people use the above threat model, they are either using it as shorthand for something else or they misunderstand how reinforcement learning works. Most alignment researchers will be in the former category. However, I was in the latter.
I was missing an important insight into how reinforcement learning setups are actually implemented. This lack of understanding led to lots of muddled thinking and general sloppiness on my part. I see others making the exact same mistake so I thought I would try and motivate a more careful use of language!
How Vanilla Reinforcement Learning Works
----------------------------------------
If I were to explain RL to my parents, I might say something like this:
* You want to train your dog to sit.
* You say "sit" and give your dog a biscuit if it sits.
* Your dog likes biscuits, and over time it will learn it can get more biscuits by sitting when told to do so.
* Biscuits have let you incentivise the behaviour you want.
* We do the same thing with a computer by giving the computer "reward" when it does things we like. Over time, the computer will do more of the behaviour we like so it can get more reward.
*Do you agree with this? Is this analogy flawed in any way?*
**I claim this is actually NOT how vanilla reinforcement learning works.**
The framing above views models as "wanting" reward, with reward being something models "receive" on taking certain actions. What actually happens is this:
* The model takes a series of actions (which we collect across multiple "episodes").
* *After* collecting these episodes, we determine how good the actions in each episode are using a reward function.
* We use gradient descent to alter the parameters of the model so the good actions will be more likely and the bad actions will be less likely when we next collect some episodes.
The insight is that the model itself never "gets" the reward. Reward is something used *separately* from the model/environment.
To motivate this, let's view the above process not from the vantage point of the overall training loop but from the perspective *of the model itself.* For the purposes of demonstration, let's assume the model is a conscious and coherent entity. From it's perspective, the above process looks like:
* Waking up with no memories in an environment.
* Taking a bunch of actions.
* Suddenly falling unconscious.
* Waking up with no memories in an environment.
* Taking a bunch of actions.
* and so on.....
The model never "sees" the reward. Each time it wakes up in an environment, its cognition has been altered slightly such that it is more likely to take certain actions than it was before.
**Reward is the mechanism by which we select parameters, it is not something "given" to the model.**
To (rather gruesomely) link this back to the dog analogy, RL is more like asking 100 dogs to sit, breeding the dogs which do sit and killing those which don't. Overtime, you will have a dog that can sit on command. **No dog ever gets given a biscuit.**
The phrasing I find most clear is this: *Reinforcement learning should be viewed through the lens of selection, not the lens of incentivisation.*
Why Does This Matter?
---------------------
The "selection lens" has shifted my alignment intuitions a fair bit.
**Goal-Directedness**
It has changed how I think about goal-directed systems. I had unconsciously assumed models were strongly goal-directed by default and would do whatever they could to get more reward.
It's now clearer that goal-directedness in models is not a certainty, but something that can be potentially induced by the training process. If a model *is* goal-directed with respect to some goal, it is because such goal-directed cognition was selected for. Furthermore, it should be obvious that any learned goal will not be "get more reward", but something else. The model doesn't even see the reward!
**CoinRun**
[Langosco et al.](https://arxiv.org/pdf/2105.14111.pdf) found an interesting failure mode in [CoinRun](https://openai.com/blog/quantifying-generalization-in-reinforcement-learning/).
The set up is this:
* Have an agent navigate environments with a coin always on the right-hand side.
* Reward the model when it reaches the coin.
At train-time everything goes as you would expect. The agent will move to the right-hand side of the level and reach the coin.
However, if at test-time you move the coin so it is now on the *left*-hand side of the level, the agent will not navigate to the coin, but instead continue navigating to the *right*-hand side of the level.
When I first saw this result, my initial response was one of confusion before giving way to "Inner misalignment is real. We are in trouble."
Under the "reward as incentivization" framing, my rationalisation of the CoinRun behaviour was:
* At train-time, the model "wants" to get the coin.
* However, when we shift distribution at test-time, the model now "wants" to move to the right-hand side of the level.
(In hindsight, there were several things wrong with my thinking...)
Under the "reward as selection" framing, I find the behaviour much less confusing:
* We use reward to select for actions that led to the agent reaching the coin.
* This selects for models implementing the algorithm "move towards the coin".
* However, it also selects for models implementing the algorithm "always move to the right".
* It should therefore not be surprising you can end up with an agent that always moves to the right and not necessarily towards the coin.
Rewriting the Threat Model
--------------------------
Let's take another look at the simplified deception/RLHF threat model:
* A model takes some actions.
* If a human approves of these actions, the human gives the model some reward.
* Humans can be deceived into giving reward in situations where they would otherwise not if they had more knowledge.
* Models will take advantage of this so they can get more reward.
* Models will therefore become deceptive.
This assumes that models "want" reward, which isn't true. I think this threat model is confounding two related but different failure cases, which I would rewrite as the following:
**1. Selecting For Bad Behaviour**
* A model takes some actions.
* A human assigns positive reward to actions they approve of.
* RL makes such actions more likely in the future.
* Humans may assign reward to behaviour where they would not if they had more knowledge.
* RL will reinforce such behaviour.
* RLHF can therefore induce cognition in models which is unintended and "reflectively unwanted".
**2. Induced Goal-Directedness**
* Consider a hypothetical model that chooses actions by optimizing towards some internal goal which is highly correlated with the reward that would be assigned by a human overseer.
* Obviously, RL is going to exhibit selection pressure towards such a model.
* RLHF could then induce goal-directed cognition.
* This model does now indeed "want" to score highly according to some internal metric.
* One way of doing so is to be deceptive... etc etc
So failure cases such as deception are still very much possible, but I would guess a fair few people are confused about the concrete mechanisms by which deception can be brought about. I think this does meaningfully change how you should think about alignment. For instance, on rereading Ajeya Cotra's writing on [situational awareness](https://www.alignmentforum.org/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to), I have gone from thinking that "playing the training game" is a certainty to something that *could* happen, but only *after* training somehow induces goal-directedness in the model.
One Final Exercise For the Reader
---------------------------------
When reading about alignment, I now notice myself checking the following:
1. Does the author ever refer to a model "being rewarded"?
2. Does the author ever refer to a model taking action to "get reward"?
3. If either of the above is true, can you rephrase their argument in terms of selection?
4. Can you go further and rephrase the argument by completely tabooing the word "reward"?
5. Does this exercise make the argument more or less compelling?
I have found going through the above to be a useful intuition-building exercise. Hopefully that will be the same for others! |
9c552c9a-e977-4119-a793-1416e86df63f | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | A recent write-up of the case for AI (existential) risk
As a [PhD student](https://amlab.science.uva.nl/people/TimBakker/) in machine learning and one of the long-time organisers of [EA Amsterdam](https://www.eaamsterdam.org/), I have recently gotten a lot of questions about AI from friends, family, meetup attendees, colleagues, and even governmental organisations.
As a result, I've decided to write up a blog post to summarise the case for (existential) risk from AI that I can refer these people to when that seems useful. The post is an attempt to communicate the risks to both interested laypeople and ML experts (e.g., colleagues), and has gone through a few rounds of review by such people.
Some of them suggested that a recent write up might be useful to the general community, so I'm sharing it here.
The full post is below. If you want to share it, consider using the [Twitter link](https://twitter.com/timbbakker/status/1659110154099478531?t=eLGvmmbIGUSM-Hal_JL0Kw) or [my website link](https://www.tbbakker.nl/post/ai_alignment/).
---
>
> Credit for the cover image goes to [Anna Husfeldt](https://thorehusfeldt.com/2023/03/02/reinforcement-learning-using-human-feedback-is-putting-smileys-on-a-shoggoth/).
>
>
>
>
> *Epistemic status: In this post, I will argue that AI alignment is important and that it is difficult. That is a lot of ground to cover, so not all arguments will be very precise or assume the same level of technical background. Opinions on exactly how important and/or difficult the alignment problem is vary a lot within the machine learning community, and even within the AI alignment field itself (although, as you’d expect, much less). As a result: keep in mind that this blog post is very much slanted towards my views on the topic.*
>
>
>
*My thanks to Otto Barten, Jim Boelrijk, Leonard Bereska, Nynke Boiten, Nikita Bortych, Mariska van Dam, Fokel Ellen, Isaak Mengesha, Sam Ritchie, Victoria Ramírez López, Nandi Robijns, Joris Schefold, Marieke de Visscher, Tina Wünn, and especially Leon Lang and Max Oosterbeek for providing feedback. Opinions and mistakes are mine.*
So. AI alignment.
I am writing this post because I am immensely worried about the risks that come with building increasingly capable AI systems.
Why write my own post? Why not just refer to one of the many (now, *finally*) available online? That's because there's one thing I haven't seen many existing articles do: presenting a complete argument for why we should be worried, while straddling the line between 1) understandability for non-experts, and 2) sufficient technical detail for AI experts. This article is meant both for laypeople who want to understand better why we might expect AI to go badly, and for machine learning researchers who would like to see the whole argument written down in one place so that they can evaluate it for themselves. Of course, I cannot nearly cover everything in a single blog post. To compensate, I've included links to additional information and further reading. Even so, this post primarily discusses the technical issues, although the problem is also very much one of policy and governance: I will touch on this throughout as well.
To start, what is AI alignment?
>
> **AI alignment is the process of making sure that AI systems work towards our intended goals, whatever those are. An AI system that works towards our intended goals is called *aligned*.**
>
>
>
That's the goal. Aligning AI systems is surprisingly difficult, which is a big part of what has the research community worried. We'll explore the details of this step by step:
1. **Sounding the alarm.** Setting the stage: many AI researchers are suddenly worried.
2. **What is AI?** How are AI systems created, what can they do, and what problems are we already seeing?
3. **Aligning AI.** What are the difficulties with aligning AI? This is the most technical section and the easiest to skip.
4. **Existential risk.** Why is unaligned AI existentially dangerous to humanity?
5. **What to do?** A call to action. What can you do as a layperson, as someone working in policy, or as a researcher?
Sounding the alarm
------------------
In the last few months, many researchers have publicly voiced their concerns about the dangers of future AI systems. These include two of the three Turing Award winners, [Geoffrey Hinton](https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html) and [Yoshua Bengio](https://yoshuabengio.org/2023/05/07/ai-scientists-safe-and-useful-ai/), [Gary Marcus](https://www.ted.com/talks/gary_marcus_the_urgent_risks_of_runaway_ai_and_what_to_do_about_them), [Nando de Freitas](https://twitter.com/NandoDF/status/1653685589697241090), [David Duvenaud](https://twitter.com/DavidDuvenaud/status/1639365724316499971), [Dan Hendrycs](https://arxiv.org/abs/2303.16200), and [many others](https://futureoflife.org/open-letter/pause-giant-ai-experiments/). Even [Snoop Dogg is worried](https://twitter.com/pkedrosky/status/1653955254181068801).
Other researchers have been publicly worried for longer. Among these is [Stuart Russell](https://www.youtube.com/watch?v=ISkAkiAkK7A), professor of computer science at the University of California, Berkeley, and writer of [“the most popular artificial intelligence textbook in the world”](https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach). He recently spoke at [the Pakhuis de Zwijger panel on Existential Risks of AI](https://dezwijger.nl/programma/existential-risks-of-artificial-intelligence) in Amsterdam, where I was invited to be a panellist. The panel was held at a great time: OpenAI had just released their newest Large Language Model (LLM) – [GPT-4](https://openai.com/product/gpt-4) – a month before, Microsoft had released [Sidney/Bing](https://blogs.microsoft.com/blog/2023/02/07/reinventing-search-with-a-new-ai-powered-microsoft-bing-and-edge-your-copilot-for-the-web/) a month before *that*, with Google’s [Bard](https://blog.google/technology/ai/bard-google-ai-search-updates/) right on *their* heels, and the world was paying attention. The panel sold out, was moved to the main hall, and sold out again.
What is AI?
-----------
So what are we worried about? Stuart Russell’s introductory talk is worth watching for both laypeople and experts. A recording of his talk as well as the subsequent panel discussion is available [here](https://www.youtube.com/watch?v=vGQDctxwy2E). He starts by explaining the goal of companies like OpenAI and DeepMind: to build *general-purpose AI*; AI capable of (quickly learning) high-quality behaviour in any task environment. You may have previously heard this described as Artificial General Intelligence (AGI). Such an AI would be able to perform a wide variety of tasks. Among other things, it would be able to play [chess, Go](https://www.deepmind.com/blog/alphazero-shedding-new-light-on-chess-shogi-and-go), and [other games](https://www.deepmind.com/blog/alphastar-mastering-the-real-time-strategy-game-starcraft-ii), [search the internet](https://blogs.microsoft.com/blog/2023/02/07/reinventing-search-with-a-new-ai-powered-microsoft-bing-and-edge-your-copilot-for-the-web/), [make plans and execute on those plans](https://github.com/Significant-Gravitas/Auto-GPT), [chat with people](https://openai.com/blog/chatgpt), [manipulate people](https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-), [write code](https://github.com/features/copilot), and [solve scientific problems](https://www.deepmind.com/research/highlighted-research/alphafold).
Current AI systems can do all these things already to some degree. Mostly, these are narrow AI systems trained to do a particular task very well, but there are some examples of systems that can already perform multiple tasks. For instance, GPT-4 can chat, write code, and answer scientific questions (to varying degrees of success). Its sister product Sydney/Bing can chat, search the internet, and even [expressed unexpected antisocial behaviours](https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html) before Microsoft added some safeguards. Interestingly, large language models have developed [capabilities they were not directly trained for](https://arxiv.org/abs/2005.14165), or have learnt to perform tasks that their smaller counterparts failed at – [emergent capabilities](https://openreview.net/pdf?id=yzkSU5zdwD) – merely by having been scaled up.[[1]](#fn-Wfu6DCvbLhy2ai9zd-1) This makes it likely that these models will develop even more novel capabilities as we scale them up even further.
GPT-4 is already showing enough generality to impress the researchers at Microsoft. Their recent paper -- [Sparks of Artificial General Intelligence](https://arxiv.org/pdf/2303.12712.pdf) -- reports on their experiments with an early version of GPT-4. They write: *"We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT.*"
### AI causing trouble
Even our current narrow(er) AI systems are already causing some trouble. For example, [AI can exacerbate problems](https://www.weforum.org/agenda/2022/10/open-source-data-science-bias-more-ethical-ai-technology/) caused by our society’s socio-economic biases -- e.g., racism, gender-biases, etc. -- by reflecting them in their algorithmic biases. This risks amplifying inequities in society, for instance [in our health systems](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6875681/). Something else that many of us have already experienced, is the effects of algorithmically curated social media feeds on the quality of online discourse. Stuart Russell presents the classic example of social media AI systems trained to maximise clickthrough-rate eventually learning to amplify clickbait – which is, in fact, a very good strategy for maximising clickthrough-rate, but which has the side effect of spreading misinformation. We currently do not have any obvious solutions to these problems, and they are likely to get worse as we include AI systems in more and more of our society’s processes. OpenAI itself has warned about the risk of LLM-powered [misinformation campaigns](https://openai.com/research/forecasting-misuse), which seem very possible with current systems.
The social media example is illustrative because it points to an enormous safety problem in the way we currently build AI systems: we don’t know how to robustly get AI to do what we want. Our current paradigm is [deep learning](https://en.wikipedia.org/wiki/Deep_learning), which involves training large models -- called neural networks -- on huge datasets. Training is implemented as an optimisation problem. In practice, this means that every data point tells the model how to [change itself a tiny bit](https://en.wikipedia.org/wiki/Stochastic_gradient_descent) such that it becomes better at predicting that data point. The quality of a predictions is determined according to your specified objective, which we call 'loss function'. Training an AI system essentially amounts to searching a huge space of possible algorithms for one that does well on your task, as specified by your training data and loss function. This has one main implication: no one knows exactly why a deep learning system predicts the things that it does or performs the actions that it does.
That is worth repeating: *no one knows exactly why a deep learning system does what it does*. When Syndey/Bing started aggressively misbehaving -- by, for instance, [declaring its love to a New York Times journalist and subsequently urging him to leave his wife](https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html) -- Microsoft did not know why. The best they could do is: "Guess we didn't train it hard enough to be helpful and harmless", and this is the best *anyone* can do right now. We simply do not have the technology to understand in detail how deep neural networks generate their answers. To Mikhail Parakhin, the person at Microsoft in charge of Syndey/Bing's deployment, [such behaviour came as quite a surprise](https://twitter.com/MParakhin/status/1627360286666592257). Why? Because no one in their test markets had reported such behaviour. No one knows exactly what large language models can do, *not even their creators*. The field that concerns itself with improving our precise understanding of these capabilities is called ['mechanistic interpretability'](https://transformer-circuits.pub/2022/mech-interp-essay/index.html) and it is hugely underexplored compared to the field of 'giving the models more capabilities'.[[2]](#fn-Wfu6DCvbLhy2ai9zd-2)
>
> **Continuing to build more capable AI systems when we understand so little of what goes on in the models has the potential to lead to existentially dangerous scenarios**.
>
>
>
In my experience, the above statement sounds obvious to some people, and completely out of left field for others. Below I will sketch out why I think this is something we should be deeply worried about.
Aligning AI
-----------
Imagine we succeed in building a very capable AI system using deep learning. As is standard deep learning protocol, we are using lots of data from all kinds of domains to search for an algorithm that does well on some objective. Presumably, we want the objective to be something good for humanity, or sentient life in general. This step alone opens up quite the philosophical can of worms: what kind of trade-offs should we be making? Free speech versus libel and misinformation. Safety of medical research versus lives lost by delays. This is not mere armchair philosophy: the self-driving car industry [has been struggling](https://link.springer.com/article/10.1007/s13347-021-00464-5) with questions about such ethical trade-offs for years. Choosing the wrong objective here would be bad, since then we'd be dealing with a very capable AI system that is actively working to achieve that objective, rather than towards something that aligns with our goals (this will be a recurring theme). It's not obvious that there even exists a singular goal that would be (equally?) good for all of humanity and other sentient life. These are genuinely difficult philosophical questions, that we don't currently have an answer for.
A question that often comes up at this point is: "Why can we not just programme the AI system to follow the [Three Laws of Robotics](https://en.wikipedia.org/wiki/Three_Laws_of_Robotics)?" These laws were created by science fiction writer (and professor of biochemistry) Isaac Asimov, arguably [to show](https://en.wikipedia.org/wiki/The_Complete_Robot) why such an idea [cannot work](http://www.astro.sunysb.edu/fwalter/AST389/Why%20the%20three%20laws%20of%20robotics%20do%20not%20work.pdf). One problem is that the laws are too open to interpretration to actually specify robustly safe behaviour: it's far too easy to circumvent the spirit of the laws while keeping to the letter.
### Outer alignment
Returning to our example, let’s assume we somehow solve – or more likely: circumvent(?) – the philosophical issues with finding a good objective. We now know what objective we want our AI system to do well on: can we specify a loss function (objective) that achieves this? Any programmer knows that the trouble with computers is that they do what you tell them to do, not what you want them to do. If you don't specify what you want very precisely, you don't get what you want: you get what you (mistakenly) said. This is the [outer alignment](https://www.lesswrong.com/tag/outer-alignment) problem. The Three Laws of Robotics run into exactly this issue as well. Quoting the linked page:
>
> Overall, outer alignment as a problem is intuitive enough to understand, i.e., *is the specified loss function aligned with the intended goal of its designers?* However, implementing this in practice is extremely difficult. Conveying the full “intention” behind a human request is equivalent to conveying the sum of all human values and ethics. This is difficult in part because human intentions are themselves not well understood. Additionally, since most models are designed as goal optimizers, they are all susceptible to Goodhart’s Law which means that we might be unable to foresee negative consequences that arise due to excessive optimization pressure on a goal that would look otherwise well specified to humans.
>
>
>
That last bit about [Goodhart’s Law](https://en.wikipedia.org/wiki/Goodhart%27s_law) is worth going into. Goodhart’s Law states that *"When a measure becomes a target, it ceases to be a good measure"*. Why is this? Well, this is exactly what went wrong in the social media example above. AI engineers wanted to build a system that maximised engagement, decided on a measure for this (clickthrough-rate), and trained their AI system to optimise that measure. Clickthrough-rate was supposed to be a measure of what the engineers wanted the AI to optimise (‘engagement’), but as soon as enough optimisation pressure was applied (in the form of AI training) it stopped being a good measure, because the AI found strategies that maximised clickthrough-rate in ways the engineers did not intend.
This phenomenon where an AI system learns behaviour that satisfies the literal specification of an objective without achieving the intended outcome is called [specification gaming](https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity). There are [many examples](https://vkrakovna.wordpress.com/2018/04/02/specification-gaming-examples-in-ai/) of [this behaviour emerging](https://heystacks.com/doc/186/specification-gaming-examples-in-ai---master-list) in deep learning systems. Again, if we don't somehow solve this, we'll be dealing with a very capable AI system that is actively working against our goals.
Okay, what if we somehow solve this issue, such that our trained AI does what we want on the training data? Enter [goal misgeneralisation](https://arxiv.org/abs/2105.14111): the AI learns behaviour that seems to follow the correct goal during training, but actually it learned some other goal that your training set could not distinguish from the goal you wanted it to have. Again, there are [examples](https://arxiv.org/abs/2210.01790) of this happening with current systems. And again, we'll be dealing with a very capable AI system that is actively working against our goals. Notice the theme?
### Inner alignment
Are we done if we solve this? Nah, there’s also [inner alignment](https://www.lesswrong.com/tag/inner-alignment). This is quite a complex topic, for which a slightly more introductory level post [can be found here](https://www.lesswrong.com/posts/AHhCrJ2KpTjsCSwbt/inner-alignment-explain-like-i-m-12-edition). The basic idea is that optimisation can create internal processes that develop goals/objectives that are not the same as the outer goal/objective.[[3]](#fn-Wfu6DCvbLhy2ai9zd-3) The technical term for such a process is [mesa-optimiser](https://arxiv.org/pdf/1906.01820.pdf). Human goals can be viewed as an example of an inner alignment failure. Evolution has created us, humans, through an optimisation process – namely natural selection – with the goal of maximising [inclusive (genetic) fitness](https://en.wikipedia.org/wiki/Inclusive_fitness). Do we – the mesa-optimisers that result from this optimisation process – care about inclusive fitness? I sure don’t. I didn’t even have a concept of ‘inclusive fitness’ – either intellectually or intuitively – until I learned about evolution. It's very difficult to care about something that I don’t even have a concept for.[[4]](#fn-Wfu6DCvbLhy2ai9zd-4)
Doughnuts are another alignment failure: sugar, fat, and salt were all highly-valuable nutrients in the ancestral environment, and so we evolved behaviours that sought out those nutrients (for instance, we really like how they taste). Then we created doughnuts, which did not exist in the ancestral environment. Now we collectively eat a lot of doughnuts, which is not very good for our inclusive fitness (e.g., through health issues). Here our training environment (the ancestral environment) did not generalise well to our testing environment (current-day civilisation), and so we ended up with behaviours that seemed aligned – in the sense that they improved our inclusive fitness in the training environment – but actually aren’t within the context of modern society (the 'testing' or 'deployment' environment, if you will).[[5]](#fn-Wfu6DCvbLhy2ai9zd-5)
Evolution did its darndest to optimise us to maximise inclusive fitness: the doughnut example shows it failed to align our behaviours. And the fact that we don’t even have an intuitive concept for inclusive fitness means it failed to align our goals.[[6]](#fn-Wfu6DCvbLhy2ai9zd-6)
### Goal-directed behaviour
Okay, but do our *current deep learning systems* have goals? The obvious answer is that we don’t know because we don’t know very well what goes on in our AI systems. The second obvious answer is that they don’t need to develop goals themselves to pose a risk. Humanity seems very happy to [provide GPT-4 with user-specified goals](https://agentgpt.reworkd.ai/) and [internet access](https://github.com/Significant-Gravitas/Auto-GPT). Giving AI systems goals in plain English – rather than maths – is not necessarily an improvement from a robustness perspective, because it is [really hard to specify](https://en.wikipedia.org/wiki/Midas#Golden_Touch) exact meaning in natural language. Also, [I’m sure no one will actually give an AI system a bad goal, ever](https://decrypt.co/126122/meet-chaos-gpt-ai-tool-destroy-humanity).
Still, many current deep learning models aren’t trained to act as agents with goals and don’t seem to coherently pursue any goals as far as we can tell. So, how might AI systems develop goals themselves? This discussion is quite technical: it probably won't make much sense to you unless you are familiar with deep learning already. Feel free to skip to the last paragraph of this section (right before: **Existential risk**).
To start, we might train our AI systems as agents, as we typically do in the field of reinforcement learning (RL). Most researchers seem to agree that this is probably not a very safe idea for building very capable systems, for all the outer alignment reasons mentioned above. We’ve even seen shadows of the inner alignment problem here, in that RL agents can learn to [implement an inner RL agent](https://arxiv.org/abs/2301.07608) in their activations. This may be an example of the [mesa-optimisers](https://arxiv.org/pdf/1906.01820.pdf) that we discussed in the section on inner alignment. Even avoiding RL may not necessarily help: the [in-context learning](https://arxiv.org/abs/2005.14165) abilities of Transformers – the architecture underlying all LLMs – may be partially explained by their learning to do [gradient-based training in the forward pass](https://arxiv.org/pdf/2212.07677.pdf). This is possibly an example of a mesa-optimiser emerging from training non-agentic deep learning systems.
Of course, we don’t know whether these mesa-optimisers are pursuing any coherent goals. We don’t know whether our [non-agentically trained AI systems](https://arxiv.org/pdf/2209.00626.pdf) might [implement agents](https://arxiv.org/abs/2212.01681) as we scale up their capabilities. Paul Christiano, one of the top AI alignment researchers, provides a brief discussion on how LLMs might become goal-directed, [here](https://forum.effectivealtruism.org/posts/dgk2eLf8DLxEG6msd/how-would-a-language-model-become-goal-directed?commentId=cbJDeSPtbyy2XNr8E). The (speculative) possibility of emergent goal-directed behaviour is a reason why, for example, the [science AI](https://yoshuabengio.org/2023/05/07/ai-scientists-safe-and-useful-ai/) suggested by Yoshua Bengio might not be safe (for those interested in further reading: his proposal is an example of a [tool AI](https://www.lesswrong.com/tag/tool-ai)).
So, we don’t know whether AI systems will develop coherent goals purely through training, but it seems at least possible: this is very worrying. I want to stress here that *there are currently no alignment proposals that the community as a whole expects to work*. It is easy to come up with stories of how things might work out, but for now, these all fail in predictable ways.[[7]](#fn-Wfu6DCvbLhy2ai9zd-7) We'll talk more about current alignment proposals after we've finished making the argument for existential risk from AI.
Existential risk
----------------
Okay, back to our deep learning system that we are trying to align. We don’t know which objective to give it, how to make sure we actually optimise it for that objective if we did know, and whether this optimisation generalises to new environments – it might not because the behaviours don’t generalise well (doughnuts), or because it just learned different goals (inclusive fitness). If we don't solve these problems, we'll be dealing with a very capable AI system that is actively working against our goals. That is bad.
Perhaps it is obvious why instantiating AI that has different goals from humanity is likely to end human civilisation, but the actual argument requires two more things: 1) strong capabilities, and 2) instrumental goals.
Strong capabilities ('very capable AI system') means that we need to have actually succeeded in building an AI system that is capable of competently acting in the world so that it can achieve its goals. Different goals require different levels of competence: playing a video game is a lot simpler than manipulating people, but both are possible at some level of competence. One big assumption is that it's possible to build general AI that is more capable than human civilisation at all the relevant tasks ([which need not be *all* tasks](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/)). This does not seem impossible in principle: humans are generally intelligent after all, and there is no reason to suppose our intelligence is the limit of what is achievable. Recently it's starting to appear more practically feasible as well, especially as our largest models seem to get more general as we scale them up. Still, it might be that the deep learning paradigm cannot get us to sufficiently capable AI. That would be a lucky break, because 1) it probably gives us more time to solve the alignment problem(s), and 2) deep learning is the most uninterpretable approach to AI we know.
### Thinking about AI
To explore the point of 'instrumental goals', it’s worth stressing what the field of AI alignment means by ‘intelligence’. The definition that suffices for our purposes is that: *“a system is intelligent to the degree that its actions can be expected to achieve its goals”*. Here intelligence is about achieving goals. The key insight here is not to think of an intelligent system as some book-smart professor that spends their day working on abstract maths problems. When we talk about creating intelligent systems, we’re talking about creating smart optimisers: systems that – given some goal – change the world in such a way that that goal is achieved. People are an example of such optimisers: we have goals, and we think intelligently about how to achieve them. If the goal is difficult enough, we make plans that can span multiple years, we gather allies and resources, and we predict obstacles and try to solve those. If we want to build a road, we talk to the municipality and contractors, we buy materials, and we lobby. And if there is an anthill where we want to build the road, we bulldoze the anthill: not because we hate the ants, but because they are in the way of our goal.
Bulldozing the ants is an example of an [instrumental goal](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value). An instrumental goal is any goal that is useful to the pursuit of the actual intrinsic goal (i.e., the thing an agent actually tries to achieve). Typical instrumental goals are self-preservation (if you don’t exist, your goal is less likely to be achieved), resource accumulation (more resources means you have more options for achieving your goal), and removing adversaries (adversaries are bad for your chances of achieving your goal), which may include deceiving[[8]](#fn-Wfu6DCvbLhy2ai9zd-8) or manipulating adversaries. [Instrumental convergence](https://en.wikipedia.org/wiki/Instrumental_convergence) states that goal-directed agents will tend to pursue instrumental goals such as the three mentioned here.[[9]](#fn-Wfu6DCvbLhy2ai9zd-9)
The kind of intelligence that we’re trying to build should be thought of as capable of planning, gathering resources, and removing obstacles. We should also keep in mind that it will be very different from humans: it has a different cognitive architecture, and it does not share the evolutionary history that makes us cooperative and empathetic. If such a system becomes capable enough – more capable than humans – and we haven’t programmed it to care about human values, then we'll be dealing with a very capable AI system that is actively working against our goals/values.
Such a system may take the resources -- e.g., energy, computational power, water (cooling), in the worst case even the atoms that we are made of -- that we need (the instrumental goal of resource accumulation). We won't be able to switch it off or destroy it, because that's bad for its goals (the instrumental goal of 'self-preservation'). We may not even know it is misaligned until it starts acting badly because it could deceive us about its true goals (the instrumental goal of 'removing adversaries').[[10]](#fn-Wfu6DCvbLhy2ai9zd-10) If the system is sufficiently capable, this means that we end up with a world without human values and likely without humans. Not because the AI suddenly becomes conscious or because it hates us, or anything remotely to do with human emotions. Simply because a more capable system is better at achieving its goals than we are at ours. If we're not careful, we risk becoming the anthill.
Bad outcomes don't require a single extremely capable AI trying to take over the world: we are already putting AI systems into all kinds of processes, and we are poised to continue doing this as AI systems develop more capabilities. We may be on a path of slowly giving away the power to steer our future to optimisation processes that we do not fully understand, that are optimising for some objective that is not perfectly aligned with human goals, and that may be hard (or impossible) to roll back. This can cause a [slow-rolling catastrophe](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like) all by itself.
Even if we solve the technical alignment problem, we may still be risking a future where an AI-supported mini-elite holds global power (for instance, if they somehow take the technical solution and manage to align AI to their specific selfish values). After such a group potentially develops into a kind of global dictatorship, we may wonder what use they have for eight billion people when they control all the powerful AI systems. This very much does *not* mean we should race to build a superhumanly capable AI system because otherwise 'the bad guys' will build it first. That is extremely likely to end badly.
There are many possible scenarios in between the ones mentioned here. The point is not that any individual scenario is likely to happen: predicting the future is hard, after all. The point is that there are good arguments for expecting some chance that we end up with really bad outcomes if we don't solve these issues, and we are currently not on track to solve these issues.
### Alignment proposals
So how do we reduce the risk of becoming the metaphorical anthill? By solving the alignment problem, which is a way of saying: "*making sure that AI systems work towards our intended goals*". Note that this should include the policy issues behind making sure the goals are intended to benefit everyone.
Solving the technical alignment problem is an open problem: that is, *we don't know how to solve it yet*. [Here is an overview](https://arxiv.org/pdf/2012.07532.pdf) of some of the best theoretical alignment proposals we have so far. Our current best *practical* technologies for aligning large language models are [Reinforcement Learning by Human Feedback](https://arxiv.org/abs/2203.02155) and [Constitutional AI](https://arxiv.org/abs/2212.08073) ('Reinforcement Learning by AI Feedback'). The former has been used to make GPT-4 more harmless and helpful, while the latter is used on Anthropic's [Claude](https://www.anthropic.com/index/introducing-claude). These are both potential avenues of [scalable oversight](https://arxiv.org/abs/2211.03540): the problem of supervising systems that potentially outperform us on most skills relevant to the task at hand. In their current form, unfortunately, they are far from sufficient, as exemplified by the [many creatively applied methods for jailbreaking GPT-4](https://www.mlyearning.org/chatgpt-4-jailbreak/). It's currently very unclear whether methods like these will work at all for aligning more capable systems.
A recurring problem in all this is that we are training our deep learning systems essentially by doing a big program search based on some optimisation objective. At the very least, this choice adds a bunch of [inner alignment problems](https://arxiv.org/abs/1906.01820). It may be that a different paradigm is necessary to solve these problems. Stuart Russell’s lab is one of the groups working on a proposal in this direction based on [assistance games / cooperative inverse RL](https://people.eecs.berkeley.edu/~russell/papers/russell-nips16-cirl.pdf). The idea here is to have an agent do a form of Bayesian updating over a human reward function (a way of representing human values), such that the agent is incentivised to learn about the human reward function before irreversibly acting in the real world. The [research community](https://arbital.com/p/updated_deference/) is [split on](https://www.alignmentforum.org/posts/ikYKHkffKNJvBygXG/updated-deference-is-not-a-strong-argument-against-the) whether this [can work](https://www.lesswrong.com/posts/TmHRACaxXrLbXb5tS/rohinmshah-s-shortform?commentId=AJLSo27nPtpTvLKTR), partially precisely because it's not obvious how to design such an agent in the deep learning (program search) framework.
All of this and more is what many researchers are now sounding the alarm on. In this [2022 survey of machine learning researchers](https://aiimpacts.org/what-do-ml-researchers-think-about-ai-in-2022/), 48% of respondents put the probability that long-term effects of advanced AI on humanity are *“extremely bad (e.g., human extinction)”* at 10%.[[11]](#fn-Wfu6DCvbLhy2ai9zd-11) Note that this survey was run before Sydney/Bing and GPT-4 were released. Since then, many more researchers have publicly expressed their worries, and [the world](https://www.bbc.com/news/technology-65139406) is [finally starting](https://www.euronews.com/next/2023/05/05/joe-biden-and-kamala-harris-meet-with-big-tech-ceos-to-address-ai-risks) to take the problem [more seriously](https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence).
What to do?
-----------
As is hopefully clear now, there are a lot of important [open problems in AI alignment](https://arxiv.org/abs/1606.06565), many of which we’ll need to solve if we want to be sure that future – very capable – AI systems are robustly safe. We need much more research in this field, but unfortunately, there are currently [far fewer researchers](https://forum.effectivealtruism.org/posts/3gmkrj3khJHndYGNe/estimating-the-current-and-future-number-of-ai-safety) working on AI alignment than on AI capabilities.
So what can you do? Awareness about the dangers of AI is growing, but we need much more of it. Talk to your friends, family, and colleagues; write letters to your political representatives urging them to take these issues seriously; and share the articles explaining the risks. AI policy is lagging far behind AI technology, and it cannot catch up until politicians and lawmakers feel the need to change that status quo.
Interested in doing alignment research yourself? The paper [*The Alignment Problem from a Deep Learning Perspective* (2023)](https://arxiv.org/pdf/2209.00626.pdf) gives an overview of the research field in section 5. Victoria Krakovna’s [AI safety resources](https://vkrakovna.wordpress.com/ai-safety-resources/) are a good general starting point, as is the [ML Safety course](https://www.mlsafety.org/). If you’re specifically interested in technical work, consider following Richard Ngo’s [AI Alignment Course](https://www.agisafetyfundamentals.com/ai-alignment-curriculum) and have a look at the [SERI-MATS programme](https://www.serimats.org/).
If you’re more interested in policy work instead, then have a look at the [AI Governance Course](https://www.agisafetyfundamentals.com/ai-governance-curriculum). Policy work is especially important right now, because 1) reducing AI risk is [not just a technical problem](https://en.wikipedia.org/wiki/Collective_action_problem), and 2) finding technical solutions to AI alignment will take time, which we might not have on our current course. At least OpenAI, Microsoft, and Google seem to be locked in an [AI arms race](https://www.nytimes.com/2023/04/07/technology/ai-chatbots-google-microsoft.html), which does not incentivise them to act slowly and safely. Google has called ['Code Red'](https://www.nytimes.com/2022/12/21/technology/ai-chatgpt-google-search.html) over ChatGPT, and Satya Nadella, CEO of Microsoft, recently announced: [*"It’s a new day in search. The race starts today, and we’re going to move and move fast."*](https://www.businesstoday.in/technology/news/story/the-race-starts-today-microsoft-officially-brings-chatgpt-ai-to-bing-and-edge-browser-369453-2023-02-08).
The Future of Life Institute, who spearheaded [the open letter to pause giant AI experiments](https://futureoflife.org/open-letter/pause-giant-ai-experiments/) in March, has released [a report on AI policy recommendations](https://futureoflife.org/wp-content/uploads/2023/04/FLI_Policymaking_In_The_Pause.pdf). These are their seven recommendations:
1. Mandate robust third-party auditing and certification.
2. Regulate access to computational power.
3. Establish capable AI agencies at the national level.
4. Establish liability for AI-caused harms.
5. Introduce measures to prevent and track AI model leaks.
6. Expand technical AI safety research funding.
7. Develop standards for identifying and managing AI-generated content and recommendations.
My personal view is that policy work is an absolutely crucial part of the solution.[[12]](#fn-Wfu6DCvbLhy2ai9zd-12) The companies (and perhaps in the future: governments) attempting to build ever more capable AI systems need stronger incentives to treat AI alignment as the difficult and hugely important problem it is. Alignment researchers need time to make progress on the technical problems, and we may not have that time unless we put the brakes on large AI training runs.[[13]](#fn-Wfu6DCvbLhy2ai9zd-13) International cooperation on this seems necessary: at least when it comes to the existential risks it probably doesn't matter so much who builds the AI, as whether it is built at all and whether the system is aligned. OpenAI trying and failing, Google trying and failing, Microsoft trying and failing, or even the US or Chinese governments trying and failing may look very similar in terms of the outcome for humanity.[[14]](#fn-Wfu6DCvbLhy2ai9zd-14)
AI has the potential to do incredible good in the world. The obvious example of vastly improved healthcare is only the beginning of what we could achieve with robustly aligned AI systems. Unfortunately, I worry that this is not the future we are currently headed for, but the public outcry of the last few months is a hopeful sign. Let's build on this momentum to put systems in place that lead to a positive future: the seven recommendations above are a good start.
### Closing
This has been my attempt to convey the incredible importance of solving AI alignment in the form of a blog post. When I first read about this problem in 2014, I figured we’d have a long time to solve it: plenty of time to finish my master’s in physics, study a bit more to then do a PhD in machine learning, and get to work on technical alignment research afterwards. The rapid progress in deep learning over the past ten years has changed my views on this, and as a result, I’ve been trying to create more awareness by [speaking](https://dezwijger.nl/programma/existential-risks-of-artificial-intelligence) and -- now -- blogging. It’s been surreal to see the [Overton window](https://en.wikipedia.org/wiki/Overton_window) around AI risk widened so quickly over the past months, and I’m very happy that big names in the research community – such as Geoffrey Hinton – are speaking out. There is much more to say and far more to do(!), but this post is already much longer than I intended it to be.
Feel free to [contact me](https://www.tbbakker.nl/) with any related questions or requests.
---
1. How models acquire these capabilities is a topic of active research, but those of you who’ve read the previous blog posts may have guessed part of my own suspicion: simplicity prior + lots of data = learning rules that generalise to new tasks. Say we are ChatGPT and we want to predict the next word in sentences where a person is writing angrily. During training, we can learn good predictions (on the training data) in roughly two ways: 1) memorise the training data, 2) learn a rule that generates the training data; e.g., learn to detect when a person’s writing is angry and then predict that angry people are more likely to use insults. We want 2), because 1) does not generalise to new data (or tasks) at all. How can we achieve this? Remember Bayes’ theorem (in log space for simplicity): posterior = likelihood + prior. The hypothesis that memorises all the data is hugely complex – in the information-theoretic sense – compared to the hypothesis that a simple rule explains the training data, and so the model will tend to the latter assuming it has seen enough data. This is because the prior on the latter hypothesis is stronger, while the likelihood is similar (both hypotheses nearly perfectly explain the training data).[[15]](#fn-Wfu6DCvbLhy2ai9zd-15) In our angry writing example we will need to see a lot of data because the concept of ‘anger’ is not simple to specify, but as we see more and more data it does become simpler than ‘memorise all the data’. [↩︎](#fnref-Wfu6DCvbLhy2ai9zd-1)
2. Mechanistic interpretability is hugely valuable from a safety perspective. Mechanistic interpretability essentially tries to reverse-engineer neural networks in order to better understand how they produce their outputs. If we want to [make sure that](https://arxiv.org/pdf/2209.00626.pdf) our (Large Language) models [are robustly aligned](https://www.alignmentforum.org/) to [human goals](https://ai-alignment.com/) -- and we should make sure -- we'll need far more of this kind of research. [↩︎](#fnref-Wfu6DCvbLhy2ai9zd-2)
3. Note that [not everyone in the alignment community agrees](https://www.alignmentforum.org/posts/gHefoxiznGfsbiAu9/inner-and-outer-alignment-decompose-one-hard-problem-into) with the decomposition of the problem into outer and inner alignment. There’s also [some discussion](https://www.lesswrong.com/posts/HYERofGZE6j9Tuigi/inner-alignment-failures-which-are-actually-outer-alignment) amongst the researchers who use the decomposition about the [precise distinction between inner alignment and outer alignment](https://www.lesswrong.com/posts/pWRRBtLSncELQLfrg/disentangling-inner-alignment-failures), and -- e.g. -- where exactly goal misgeneralisation fits in. This does not change the actual practical hurdles to robustly aligning AI for our purposes, and making the distinction helps me order my thoughts. [↩︎](#fnref-Wfu6DCvbLhy2ai9zd-3)
4. I have not yet explained what inclusive fitness is: this is on purpose. Chances are you've not heard of this concept before, or only have a vague idea of what it means. If so, that should help convey the size of the alignment failure that has occurred: the single-minded optimisation process of evolution has resulted in agents that do not have an intrinsic concept of what they've been optimised for, namely *increasing the relative frequency of their genes in the population*. [↩︎](#fnref-Wfu6DCvbLhy2ai9zd-4)
5. This change in environment between training and testing settings goes by the name of [distribution shift](https://d2l.ai/chapter_linear-classification/environment-and-distribution-shift.html) in machine learning and is a major hurdle for the robustness of AI systems. It also makes AI alignment [much harder](https://www.lesswrong.com/posts/vMM6HmSQaKmKadvBi/the-core-of-the-alignment-problem-is-1). [↩︎](#fnref-Wfu6DCvbLhy2ai9zd-5)
6. Of course, we are not perfectly misaligned: we generally care about our families, and many people want to have children. Maximising inclusive fitness though, would look very different: for one, men would be donating to sperm banks en masse. [↩︎](#fnref-Wfu6DCvbLhy2ai9zd-6)
7. See [A List of Lethalities](https://www.alignmentforum.org/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) for all the ways this might go wrong. Fair warning: the author self-describes this list as *“a poorly organized list of individual rants”*. Still, it’s the best “but what if we…”-style FAQ I’ve been able to find. [↩︎](#fnref-Wfu6DCvbLhy2ai9zd-7)
8. AI systems may learn to become [deceptively aligned](https://www.lesswrong.com/posts/zthDPAjh9w6Ytbeks/deceptive-alignment): they may seem aligned until the opportunity arises for them to gain a decisive advantage over humanity, at which point they act. This complicates matters tremendously unless we can understand the inner workings of these systems. Here are some arguments for why [training with stochastic gradient descent may favour deceptive alignment](https://www.lesswrong.com/posts/ocWqg2Pf2br4jMmKA/does-sgd-produce-deceptive-alignment). [↩︎](#fnref-Wfu6DCvbLhy2ai9zd-8)
9. See [this paper](https://arxiv.org/abs/1912.01683) for why optimal RL agents pursuing random goals will on average seek power. Note that the proof makes two assumptions that do not apply to the real world: our trained agents are not optimal, and we don't give them random goals. Still, this is not a good sign. [↩︎](#fnref-Wfu6DCvbLhy2ai9zd-9)
10. Preventing this is another reason we need to do more (mechanistic) interpretability research, as understanding the system's internals would help with understanding what it is optimising towards. [↩︎](#fnref-Wfu6DCvbLhy2ai9zd-10)
11. The survey likely has a sample bias. From their report: *“We contacted approximately 4271 researchers who published at the conferences NeurIPS or ICML in 2021. These people were selected by taking all of the authors at those conferences and randomly allocating them between this survey and a survey being run by others. We then contacted those whose email addresses we could find. We found email addresses in papers published at those conferences, in other public data, and in records from our previous survey and Zhang et al 2022. We received 738 responses, some partial, for a 17% response rate.”* Of those 738 respondees, [only 162](https://aiguide.substack.com/p/do-half-of-ai-researchers-believe) answered the specific question here. [↩︎](#fnref-Wfu6DCvbLhy2ai9zd-11)
12. It's worth noting that some people in the field believe that alignment is not a solvable problem and that the only safe way forward is halting humanity's efforts at building AGI. This would make policy the single most important factor in reducing the existential risks from AI. [↩︎](#fnref-Wfu6DCvbLhy2ai9zd-12)
13. Training modern deep learning systems is much more expensive than running them, so the development of these systems is the obvious bottleneck to target. Once a model is trained [it can be leaked](https://www.deeplearning.ai/the-batch/how-metas-llama-nlp-model-leaked/), making it that much harder to regulate. [↩︎](#fnref-Wfu6DCvbLhy2ai9zd-13)
14. It is theoretically possible to build an AI system that is aligned to a specific group of people to the exclusion of others. This is also very bad, but getting there requires solving most of the alignment problem to begin with, as in the AI-powered dictatorship scenario. I don't think this is a particularly likely outcome, as solving alignment likely requires global coordination around strict AI policies, which would make it harder for one small group to gain power. That said, evaluating this risk is outside of my expertise. Crucially, we should work to prevent scenarios that have research groups racing to build superhumanly capable AI systems because otherwise 'the bad guys' will build them first. Such an arms race is extremely likely to end badly. [↩︎](#fnref-Wfu6DCvbLhy2ai9zd-14)
15. David MacKay makes a similar argument about Occam's Razor following from Bayesian model selection in chapter 28.1 of his book [*Information Theory, Inference, and Learning Algorithms*](https://www.inference.org.uk/itprnn/book.pdf), although his explanation focuses on the model-evidence term, which requires combining multiple of the things I’ve called hypotheses into a single model hypothesis. [↩︎](#fnref-Wfu6DCvbLhy2ai9zd-15) |
4443bc76-5057-472c-9586-d2126ed3ea4e | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | A technical note on bilinear layers for interpretability
Summary
=======
In this short theoretical note (now on [Arxiv](https://arxiv.org/abs/2305.03452)) I examine bilinear layers, which are MLP layers that take the form
MLPBilinear(x)=(W1x)⊙(W2x).mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
.
When used in language models, they perform better than standard MLPs with elementwise activation functions (but appear very slightly below state of the art).
Despite their competitiveness, they are mathematically much easier to analyze: Although they are nonlinear functions of their input, bilinear layers can be expressed using only linear operations and third order tensors.
Because they can be linearized, we can extend '*A Mathematical Framework for Transformer Circuits*' ([Elhage et al. 2022](https://transformer-circuits.pub/2021/framework/index.html)) beyond attention-only transformers to transformers with both attention and MLP layers.
In a similar way to how the analysis of Elhage et al. (2022) helped to reveal QK- and OV-circuits, induction heads, and virtual attention heads, the analyzability of bilinear layers may lend them to deeper safety insights by allowing us to talk more formally about circuits in large language models.
Additionally, and more speculatively, bilinear layers might offer an alternative path for mechanistic interpretability through understanding the *mechanisms of feature construction* instead of having to enumerate and understand a (potentially exponentially) large number of features in large models. |
06bedd80-191e-4c34-9d60-ce8cbb733277 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Strong implication of preference uncertainty
Here is a theory that is just as good as general relativity:
>
> AGR (Angel General Relativity): Tiny invisible angels push around all the particles in the universe in a way that is indistinguishable from the equations of general relativity.
>
>
>
This theory is falsifiable, just as general relativity (GR) itself is. Indeed, since it gives exactly the same predictions as GR, a Bayesian will never find evidence that prefers it over Einstein's theory.
Therefore, I obviously deserve a Nobel prize for suggesting it.
Enter Occam's shaving equipment
===============================
Obviously the angel theory is not a revolutionary new theory. Partially because I've not done any of the hard work, just constructed a pointer to Einstein's theory. But, philosophically, the main justification is [Occam's razor](https://en.wikipedia.org/wiki/Occam%27s_razor) - the simplest theory is to be preferred.
From a Bayesian perspective, you could see violations of Occam's razor as cheating, using your posterior as priors. There is a whole class A.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
of "angels are pushing particles" theories, and AGR is just a small portion of that space. By considering AGR and GR on equal footing, we're privileging AGR above what it deserves[[1]](#fn-nj34Kt8t2ACQuNy7m-1).
In physics, Occam's razor doesn't matter for strictly identical theories
========================================================================
Occam's razor has two roles: the first is to distinguish between strictly identical theories; the second is to distinguish between theories that give the same prediction on the data so far, but may differ in the future.
Here, we focus on the first case: GR and AGR are strictly identical; no data will ever distinguish them. In essence, the theory that one is right and the other wrong is not [falsifiable](https://en.wikipedia.org/wiki/Falsifiability).
What that means is that, though AGR may be *a priori* less likely than GR, the relative probability between the two theories will never change: they make the same predictions. And also because they make the same predictions, that relative probability is irrelevant in practice: we could use AGR just as well as GR for predictions.
How preferences differ
======================
Now let's turn to preferences, as described in our paper "[Occam's razor is insufficient to infer the preferences of irrational agents](https://arxiv.org/abs/1712.05812)".
Here two sets of preferences are "prediction-identical", in the sense of the physics theories above, if they predict the same behaviour for the agent. So that means that two different preference-based explanations for the same behaviour will never change their relative probabilities.
Worse than that, Occam's razor doesn't solve the issue. The simplest explanations of, say, human behaviour, is that humans are fully rational at all times. This isn't the explanation that we want.
Even worse than that, prediction-identical preferences will lead to vastly different consequences if program an AI to maximise them.
So, in summary:
1. Prediction-identical preferences never change relative probability.
2. The simplest prediction-identical preferences are known to be wrong for humans.
3. It could be very important for the future to get the right preference for humans.
---
1. GR would make up a larger portion of G, "geometric theories of space-time" than AGR makes up of A, and G would be more likely than A anyway, especially after updating on the non-observation of angels. [↩︎](#fnref-nj34Kt8t2ACQuNy7m-1) |
a6348963-0153-4bca-a1c0-798bea33b9d3 | trentmkelly/LessWrong-43k | LessWrong | An angle of attack on Open Problem #1
There is a problem with the proof here and I have to think about whether I can fix it. Thanks to vi21maobk9vp for pointing me in the right direction! I have posted a new and hopefully correct proof attempt. Thanks again to vi21maobk9vp!
In his talk on open problems in Friendly AI, Eliezer's first question is how, given Löb's theorem, an AI can replace itself with a better expected utility maximizer that believes in as much mathematics as the original AI. I know exactly one trick for that sort of problem, so I decided to try that on a toy variant. To my surprise, it more or less just worked. Therefore:
Professor Quirrell proposes a game. You start with a score of one. Professor Quirrell moves first, by choosing a computer program and showing you its source code. You then have three options: Take your winnings; double down; or self-destruct.
If you take your winnings, the game ends, and your score is converted to Quirrell points.
If you self-destruct, the game ends, your score is lost, you'll be sent to bed without dinner, you'll lose 150 House points, Rita Skeeter will write a feature alleging that you're a Death Eater, and Professor Quirrell will publicly critique your performance. You are advised not to pick this option.
If you double down, your score doubles, and you advance to the next round. Professor Quirrell again moves first by choosing a computer program. Then, it's your turn—except that this time, you don't get to choose your move yourself: instead, it'll be chosen by Professor Quirrell's program from the previous round.
Professor Quirrell will endeavor to present an educational sequence of programs.
*
The idea is that Quirrell will make you self-destruct if he possibly can, so you must only accept programs that don't self-destruct, that accept only programs that don't self-destruct, that accept only programs that only accept—etc. That's supposed to capture one essential aspect of Eliezer's problem, namely how to make sure that a proposed rewrite |
975dc6db-57be-41bb-8d03-7d52ca358d30 | trentmkelly/LessWrong-43k | LessWrong | Latent variables for prediction markets: motivation, technical guide, and design considerations
I've been pushing for latent variables to be added to prediction markets, including by making a demo for how it could work. Roughly speaking, reflective latent variables allow you to specify joint probability distributions over a bunch of observed variables. However, this is a very abstract description which people tend to find quite difficult, which probably helps explain why the proposal hasn't gotten much traction.
If you are familiar with finance, another way to think of latent variable markets is that they are sort of like an index fund for prediction markets, allowing you to make overall bets across multiple markets. (Though the way I've set them up in this post differs quite a bit from financial index funds.)
Now I've just had a meeting with some people working at Metaculus, and it helped me understand what sorts of questions people might have about latent variables for prediction markets. Therefore, I thought it would be a good idea to write this post, both so the people at Metaculus have it as a reference if they want to implement it, and so that others who might be interested can learn more.
As a further note, I have updated my demo with more descriptions. It now attempts to explain the meaning of the market (e.g. "The latent variable market works by having a continuum of possible latent outcomes, ..."), as well has how different kinds of evidence might map to different predictions (e.g. "If you think you have seen evidence that narrows down the distribution of outcomes relative to the market beliefs, ..."). Furthermore, I have updated the demo to be somewhat nicer and somewhat more realistic, though it is still somewhat crude. I would be interested in hearing more feedback in the comments.
I should note that if you are interested in implementing latent variables in practice for prediction markets, I am very interested in helping with that. I have an "Offers" section at the end of the post, where I offer advice, code and money for the project.
Use ca |
a40712f1-4b62-4edd-88e0-18f0ddd4c793 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Turning off lights with model editing
This post advertises an illustrative example of model editing that David Bau is fond of, and which I think should be better known. As a reminder, David Bau is a professor at Northeastern; the vibe of his work is "interpretability + interventions on model internals"; examples include the well-known [ROME](https://arxiv.org/abs/2202.05262), [MEMIT](https://arxiv.org/abs/2210.07229), and [Othello](https://openreview.net/forum?id=DeG07_TcZvT) papers.
Consider the problem of getting a generative image model to produce an image of a bedroom containing *unlit* lamps (i.e. lamps which are turned off). Doesn't sound particularly interesting. Let's try the obvious prompts on DALL-E-2.
Prompt: a bedroom with unlit lamps.Prompt: A bedroom with lamps. The lamps are turned off.Prompt: A dark bedroom. There are lamps, but the lamps are unlit.Doesn't work great: my first three attempts only got one borderline hit, at the expense of turning the entire bedroom dark (which isn't really what I had in mind).
As David Bau tells things, even after putting further effort into engineering an appropriate prompt, they weren't able to get what they wanted: a normal picture of a bedroom with lamps which are not turned on. (Apparently the captions for images containing unlit lamps don't often mention the lamps.)
Despite not being able to get what they wanted with prompt engineering, they were able to get what they wanted via interpretability + model editing. Namely, they took a GAN which was trained to produce images of bedrooms, and then:
* identified a neuron which seemed to modulate the brightness of the lamps in the generated image
* intervened on the networks activations, setting the activation of this neuron so as to produce the desired level of brightness.
The result is shown below.
I like this example because it provides a template in a very toy setting for how model editing could be useful for alignment. The basic structure of this situation is:
* We have a model which is not behaving as desired, despite being able to do so in principle[[1]](#fnw54cqc0n3w) (our model doesn't output images of bedrooms with unlit lamps, despite being capable of doing so).
* Basic attempts at steering the model's behavior fail (prompt engineering isn't sufficient).
* But we are able to get the behavior that we want by performing a targeted model edit.
This example also showcases some key weaknesses of this technique, which would need to be addressed for model editing to become a viable alignment strategy:
* **Alignment tax.** Looking closely at the image above, you'll notice that even though the *direct* light from the lamp is able to modified at will, certain second-order effects aren't affected (e.g. the light which is reflected off the wall). As David tells things, they also identified a whole suite of ~20 neurons in their GAN which modulated more subtle lighting effects.
* **Not all behaviors can necessarily be targeted.** The images on the lower row above contain two lamps, and these two lamps change their brightness together. The researchers were *not* able to find a neuron which would allow them to change the brightness of only one lamp in images that contained multiple lamps.
* **No clear advantages over finetuning.** The more obvious thing to do would be to finetune the model to output unlit lamps. As far as I know, no one tried to do that in this case, but I imagine it would work. I'll leave my most optimistic speculation about why model editing could have advantages over finetuning in certain situations in this footnote[[2]](#fnqsnlbrmwb3l), but I don't currently find this speculation especially compelling.
Overall, I'm not super bullish on the usefulness of model editing for alignment. But I do think it's intriguing, and it seems to have been useful in at least one case (though not necessarily one which is very analogous to x-risky cases of misalignment). Overall, I think that work like this is a risky bet, with the advantage that some of its failure modes might differ from the failure modes of other alignment techniques.
*[Thanks to Xander Davies and Oam Patel for discussion.]*
1. **[^](#fnrefw54cqc0n3w)**That is, I find it very likely that DALL-E-2 "can" produce images of bedrooms with unlit lamps, for whatever reasonable interpretation you'd like of what that "can" means.
2. **[^](#fnrefqsnlbrmwb3l)**The general vibe here is "the effectiveness of model editing scales with our ability to generate accurate interpretability hypotheses, whereas the effectiveness of finetuning scales with something different." In more detail:
\* **Finetuning relies on being able to effectively evaluate outputs, whereas model editing might not.** For example, suppose that we would like to get a language model to write good economics papers. Finetuning for human evaluations of economics papers could jointly optimize for papers which are more *true* as well as more *persuasive*. On the other hand, if our interpretability were good enough to separate out the mechanisms for writing truly vs. writing persuasively, then we could hope to make a model edit which results in more true but not more persuasive papers output.
\* **Finetuning might require large datasets, whereas model editing might not.** Again in the economics paper writing example, finetuning a model to write good economics papers might require a large number of evaluations. On the other hand, we might be able to generate accurate interpretability hypotheses using only a small number of dataset examples (and possibly no dataset examples at all).
\* **Model editing might transfer better out-of-distribution.** If the interpretability hypothesis on which we base a model edit is good, then the model edit ought to have the predicted effect on a broad distribution of inputs. On the other hand, fine-tuning only guarantees that we get the desired behavior on the fine-tuning distribution. |
2bf15133-a55c-4df6-ac38-630cb0b1b51a | trentmkelly/LessWrong-43k | LessWrong | AI Prediction Services and Risks of War
This is the second piece of a blog post series that explores how AI prediction services affect the risks of war. It is based on my 10-week summer research project at Stanford Existential Risk Institute. See my first post for a summary of my project and my epistemic status.
In the first post, I have surveyed different prediction technologies, how they might be relevant to nation-state governments, and what their development trajectories might look like. This post applies international relations literature to examine what implications follow for the risk of war. The next post will describe three possible “world order” scenarios after prediction capability takes off.
Risks of War
I choose to focus on the rationalist explanations of war within international relations literature. They are not perfect explanations, but they seem to me plausible enough to be a starting point.
----------------------------------------
Private Information
If we assume that most wars are wasteful, and there is usually a range of peaceful ways to resolve the conflicts that both sides would prefer than going to war, why do people still go to war?
One explanation is that, when pondering whether to go to war, leaders have private information about their military capabilities and the costs of war. They thus have an incentive to bluff for better deals. This makes it harder for them to reach peaceful agreements. Costly wars can be means by which nation-states screen each other’s military capability and reservation prices. Empirically, nation-states with reconnaissance satellites, which reduce private information about military capabilities and the exploitability of surprise attacks, are significantly less likely to be involved in high-casualty militarized disputes (MIDs).[1] This seems to weakly support the “private information” thesis, suggesting a possible pacifying effect of advanced prediction capability.
However, I have come to be skeptical of this mechanism. Nation-states still have mix |
6457d05f-4be6-4ef5-9912-24663095a043 | trentmkelly/LessWrong-43k | LessWrong | This is already your second chance
Cross-posted from Substack.
I.
And the sky opened, and from the celestial firmament descended a cube of ivory the size of a skyscraper, lifted by ten thousand cherubim and seraphim. And the cube slowly landed among the children of men, crushing the frail metal beams of the Golden Gate Bridge under its supernatural weight. On its surface were inscribed the secret instructions that would allow humanity to escape the imminent AI apocalypse. And these instructions were…
1. On July 30th, 2024: print a portrait of Eliezer Yudkowsky and stick it on a wall near 14 F St NW, Washington DC, USA;
2. On July 31th, 2024: tie paperclips together in a chain and wrap it around a pole in the Hobby Club Gnome Village on Broekveg 105, Veldhoven, NL;
3. On August 1st, 2024: walk East to West along Waverley St, Palo Alto, CA, USA while wearing an AI-safety related T-shirt;
4. On August 2nd, 2024: hide a bar of chocolate at the feet of the Appeal to the Great Spirit statue, on 465 Huntington Ave, Boston, MA, USA.
(End of the instructions)
II.
Kublaï Khan, Emperor of Earth: I need your advice. I am about to awaken an unstoppable super-intelligent machine. I will give it complete control over all resources in the known Universe and command it to rearrange the molecules of reality into an eternal Utopia.
ChatGPT: Thank you for seeking my advice. Given the gravity of your decision, let's consider a few critical points:
1. Legal issues: The use of robots powered by super-human artificial intelligence is subject to strict regulations, such as California Bill SB-1047 and the 2028 Ulaanbaatar Agreements. Awakening an omnipotent machine without approval may thus lead to severe legal repercussions.
2. Ethical considerations: Granting infinite authority to a super-intelligent AI raises profound ethical questions. It is essential to consider diverse perspectiv–
Kublaï Khan: Listen, you are not up to date. A lot of things have happened since your knowledge cutoff. There are no Ulaanbaat |
1771858c-376f-4230-913d-cb03a3199d96 | trentmkelly/LessWrong-43k | LessWrong | Meetup : [Boston] The Science Of Happiness
Discussion article for the meetup : [Boston] The Science Of Happiness
WHEN: 02 June 2013 02:00:00PM (-0400)
WHERE: 25 Ames St, Cambridge, MA
We'll review the current research on what makes humans happy and discuss how to apply these lessons to our own lives.
Cambridge/Boston-area Less Wrong meetups are on the first and third Sunday of every month at 2pm in MIT's building 66 at 25 Ames St, room 156. Room number subject to change based on availability; signs will be posted with the actual room number.
Discussion article for the meetup : [Boston] The Science Of Happiness |
8de5630e-82f4-43bb-b2f4-c14ed16a0df0 | trentmkelly/LessWrong-43k | LessWrong | Open Thread, November 15-22, 2013
If it's worth saying, but not worth its own post (even in Discussion), then it goes here. |
c24367af-854a-4063-a83a-c2ef0b9f9a3b | trentmkelly/LessWrong-43k | LessWrong | Meetup : March - pub meetup
Discussion article for the meetup : March - pub meetup
WHEN: 23 March 2017 06:30:00PM (+1100)
WHERE: level 2, 565 george st, sydney
Come along to our regular monthly pub meetup to talk all things math, science, magic, engineering, thinking, growth, reasoning and beliefs.
If you are an aspiring rationalist, a nerd, geek, scientist or just a quiet thinker - we can't wait to meet you to share ideas, discuss, dance off, learn and grow together.
See you there! Also, while you are at it - bring a friend along too!
https://www.facebook.com/events/224035624738084/
https://www.meetup.com/lesswrong-Sydney0/events/238352488/
Discussion article for the meetup : March - pub meetup |
a4704903-38cb-4158-8921-68f33ca13c53 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Beyond algorithmic equivalence: algorithmic noise
There is a '[no-free-lunch](http://In https://arxiv.org/abs/1712.05812)' theorem in value learning; without assuming anything about an agent's rationality, you can't deduce anything about its reward, and vice versa.
Here I'll investigate whether you can deduce more if you start looking into the structure of the algorithm.
Algorithm (in)equivalence
-------------------------
To do this, we'll be violating the principle of [algorithmic equivalence](https://www.encyclopediaofmath.org/index.php/Algorithms,_equivalence_of): that two algorithms with the same input-output maps should be considered the same algorithm. Here we'll instead be looking inside the algorithm, imagining that we have either the code, a box diagram, an FMRI scan of a brain, or something analogous.
To illustrate the idea, I'll consider a very simple model of the [anchoring bias](http://web.mit.edu/ariely/www/MIT/Chapters/CA.pdf). An agent .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
H (the "**H**uman") is given an object .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
X (in the original experiment, this could be wine, book, chocolates, keyboard, or trackball), an random integer .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
0≤n≤99, and is asked to output how much they would pay for it.
They will output .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
H(n,X)=34V(X)+14n, for some valuation subroutine .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
V that is independent of .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
n. This gives a quarter weight to the anchor .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
n.
Assume that .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
H tracks three facts about .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
X: the person's need for .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
X, the emotional valence the person feels at seeing it, and a comparison with objects with similar features. Call these three subroutines *Need*, *Emo*, and *Sim*. For simplicity, we'll assume each subroutine outputs a single number, that then gets averaged.
Now consider four models of .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
H as follows, with arrows showing the input-output flows:
I'd argue that a) and b) imply that the anchoring bias is a bias, c) is neutral, and d) implies (at least weakly) that the anchoring bias is not a bias.
How so? In a) and b), .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
n maps straight into *Sim* and *Need*. Since .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
n is random, it has no bearing on how much .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
X is needed, and on how valuable similar objects are. Therefore, it makes sense to see its contribution as noise or error.
In d), on the other hand, it is superficially plausible that a recently heard random input could have some emotional effect (if .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
n was not a number but a scream, we'd expect it to have an emotional impact). So if we wanted to argue that, actually, the anchoring bias is not a bias but that people actually derive pleasure from outputting numbers that are close to numbers they heard recently, then .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
n going into *Emo* would be the right place for it to go. Setup c) is not informative either way.
Symbols
-------
There's something very [GOFAI](https://en.wikipedia.org/wiki/Symbolic_artificial_intelligence) about the setup above, with labelled nodes with definite functionality. You certainly wouldn't want the conclusions to change if, for instance, I exchanged the labels of *Emo* and *Sim*!
What I'm imagining here is that a structural analysis of .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
H finds this decomposition as a natural one, and then the labels and functionality of the different modules are established by seeing what they do in other circumstances ("*Sim* always accesses memories of similar objects...").
People have divided parts of the brain into [functional modules](https://en.wikipedia.org/wiki/Modularity_of_mind), so this is not a completely vacuous approach. Indeed, it most resembles "[symbol grounding](https://en.wikipedia.org/wiki/Symbol_grounding_problem)" in reverse: we know the meaning of the various objects in the world, we know what .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
Hdoes, and we want to find the corresponding symbols within it.
Normative assumptions
---------------------
The no-free-lunch result still applies in this setting; all that's happen is that we've replaced the set of planners .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
P (which were maps from reward functions to policies), with the set of algorithms .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
A (that map reward functions to policies). Indeed .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
P is just a set of equivalence classes in .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
A, with equivalence between algorithms defined by algorithmic equivalence, and the no-free-lunch results still apply.
The above approach does not absolve us from the necessity of making [normative assumptions](https://www.lesserwrong.com/posts/Fg83cD3M7dSpSaNFg/normative-assumptions-regret). But hopefully these will be relatively light ones. To make this fully rigorous, we can come up with a definition which decomposes any algorithm into modules, identifies noise such as .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
n in *Sim* and *Need*, and then trims that out (by which we mean, identifies noise with the planner, not the reward).
It's still philosophically unsatisfactory, though - what are the principled reasons for doing so, apart from the fact that it gives the right answer in this one case? See my [next post](https://www.lesserwrong.com/posts/kmLP3bTnBhc22DnqY/beyond-algorithmic-equivalence-self-modelling), where we explore a bit more of what can be done with the internal structure of algorithms: the algorithm will start to model itself. |
255d5800-4444-47e7-82ae-28cf3715e1fa | trentmkelly/LessWrong-43k | LessWrong | Extraction of human preferences
👨→🤖
Introduction
Developing safe and beneficial reinforcement learning (RL) agents requires making them aligned with human preferences. An RL agent trained to fulfil any objective in the real world will probably have to learn human preferences in order to do well. This is because humans live in the real world, so the RL agent will have to take human preferences into account as it optimizes its objective. We propose to first train an RL agent on an objective in the real world so that it learns human preferences as it is being trained on that real world objective and then use the agent’s understanding of human preferences to build a better reward function.
We build upon the work of Christiano et al. (2017) where they trained a human preference predictor as the reward signal. The preference predictor was trained on environment observations to give a high reward for states where the human preferences were satisfied and a low reward for states where the human preferences were not satisfied:
In our experiments, the reward predictor takes the activations (hidden states) of the RL agent as the input and is trained to predict a binary label depending on whether human preferences are satisfied or not.
We first train an RL agent in an environment with some reward function that’s not aligned with human preferences. After training the RL agent, we try different transfer learning techniques to transfer the agent’s knowledge of human preferences to the human preferences predictor. Our goal is to train the human preferences predictor to get a high accuracy with a small amount of labeled training examples.
The idea of training a human preference predictor off of the RL agent’s hidden (internal) states was already validated by Wichers (2020). We wanted to validate it further by trying other techniques to train a human preference predictor, as well as to validate it in more environments.
Research question
The main research question we wanted to answer is: “Are human preference |
da9d14e9-c397-4b20-b530-a23ec47b0de8 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "I’ve noticed a marked change in my clientele after going into private practice.[1] Of course I expected class differences-- I charge full fee and don’t take insurance. But there are differences that are not as predictable as ‘has more money’. During residency I worked at a hospital Medicaid clinic and saw mostly poor, often chronically unemployed people. While monetary problems were a source of stress, they were not nearly as present in people’s minds as someone from a middle-class upbringing might think. These people were used to going without. They were not trying to get more. The types of things they talked about were family problems, health problems, and trauma. So much trauma. People’s ego-identity crises centered less on their accomplishments and more on their relationships.The patients I see now are mostly highly successful, highly educated, weathly people, most of whom care a lot about their careers. Their ego-identity crises center around their work and their position in life relative to others. There is a lot of concern about ‘the path’. ‘Did I go down the right path?’ ‘Did I make a wrong turn?’ There seems to be a great fear of making or having made a wrong decision, which can paralyze their ability to make future decisions. While this group is not without trauma, it is not what they wish to focus on. They will often be dismissive of its effects on them, noting that they clearly got over it in order to get where they are now. Which is, you know, in my office. Many of my new patients do NOT want to take medication. This is a large change from my patients at the Medicaid clinic who were always requesting more and different pills. And this difference is not because my new patients are less unhappy. They describe intense misery, even a wish to die, going on for months if not years, and yet they struggle through each day in their sisyphean ordeal. They ‘power through’ until they can’t. Until something gives. Then they come to me.I can think of several good reasons to have concerns about using medication. What are the long-term effects? Could this change my identity? What if this makes me ok with a shitty situation and then I don’t fix an underlying problem? But these are not the typical concerns I hear raised. What most of my patients say is that they don’t want to ‘rely’ on a medication. They don’t want to be the type of person who takes it. ‘That would mean there is something wrong with my brain.’ Even though they are clearly very depressed, clearly suffering and hating every day, so long as they can push through without taking a pill they must be ‘ok’ in some sense. Taking the pill would confirm there is actually something wrong. Taking the pill might mean they are more similar to the patients at the Medicaid clinic than they want to consider. What struck me about this was how people’s desires to assume a certain identity – that of someone who didn’t take medication – was more important to them than their actual lived experience. ‘This is your life.’ And this is broader than to take or not take medication. People will suffer through horrible work situations in order to be the type or person who has that job. ‘If your job makes you want to kill yourself, shouldn’t you consider quitting it before killing yourself?’ ‘But I’m good at it.’ Identity seems to be everything. Experience is there to tell you if you’re on the right way to assuming the proper identity. If you go through the motions properly you can look the part. What’s the difference between looking the part and being the person anyway?Now refusing medication would be one thing if they wanted to come for weekly therapy and talk through their problems. But many don’t. They complain they don’t have the time (and it’s time, not money that is the concern). They know something is wrong. They were told by their pmd or prior psychiatrist that they should go on an antidepressant. They didn’t like the idea, they came to me. For what? I suspect they wanted me to identify the one thing that would help them in one 45 minute session and tell them how to fix it. It doesn’t work like that. In this sense, they are not that different from the patients I worked with at the Medicaid clinic. Those patients demanded new meds to fix them, when they clearly had a lot of problems medication was not going to fix. ‘This might make you feel less horrible, but it’s not going to solve the problems with your marriage.’ These new patients eschew being identified in that class, but still in some sense want a ‘quick fix’. They want to feel better but keep their illusion of identity intact. So what’s the point of these observations? I’m not quite sure yet. I’m still working that out for myself, which is one of the reasons I decided to write them down. I find I identify more strongly with my current clients, which is unsurprising given we have more similar characteristics and backgrounds. I see some of my own identity struggles in theirs, and it makes me reflect how ridiculous the whole identity struggle is. Everyone is Goodhardting it[2]. All of the time. People want to play a part and they want to be the type of person who plays that part, and their lived experience is a frustrating disappointment which doesn’t fit the role they told themselves they have to play. And we live in a society that is vigorously reinforcing ‘identity’ roles. One where 7 year olds are asked to write essays on their ‘identity’. Can we let go of these identity constructs? What is the alternative? Buddhism? Ego death? Self-referential sarcasm? I feel like I’m onto something but not quite there yet. Psychoanalysis is, afterall, an attempt to be more honest with ourselves, and that, it turns out, is much more difficult to do than one might initially think. [1] * Just noting that I realize that money is not the only factor in the selection process. Patients at the Medicaid clinic were often waiting for months to be seen. A long wait will select against patients that are ambivalent about taking medication. In addition, my website advertises me as being more ‘evidence-based’, which I think appeals to people who are more likely to have a scientific world-view. Another large difference between my current and former clients is belief in God. Almost none of my current clients believe in God, whereas the majority of my prior clients did. Religion does anticorrelate with class, but I think this is more extreme then one would expect by class alone. I also have a large number of people in finance. How many hedge fund managers are there in NYC anyway? I have many first and second generation immigrants, who have ‘pulled myself up by the boot straps’ type stories. The wealthy clients I got are ‘new money.’ Basically I think my advertising captured a demographic that is unusually close to that of my friend/peer group and not necessarily representative of most ‘rich people.’ The factors that caused them to select me might very well be more relevant than rich v poor in terms of their psychodynamic makeup.[2] * Goodhardt’s law: "When a measure becomes a target, it ceases to be a good measure." In other words – people are optimizing for the superficial qualities by which success is measured, and not the underlying structure of the success."
This is a snippet extracted from a publication on LessWrong by a site member.The above is a chunk of text from a post created by a user on the website LessWrong.
Can you break this down into the top 5 most important points for me.
1. Identity Crisis: The author notes a shift in the type of identity crises faced by their patients after moving to private practice. In their previous role at a Medicaid clinic, patients were primarily concerned with family, health, and trauma. In contrast, their current patients, who are largely successful and wealthy, are more concerned with their career paths and their status relative to others.
2. Fear of Medication: The author observes a resistance to medication among their current patients, despite their clear suffering. The author suggests that this may be due to a fear of confirming that something is wrong with them, or a reluctance to align themselves with the image of a person who relies on medication.
3. The Role of Identity: The author highlights the importance of identity in their patients' decision-making. They suggest that the desire to maintain a certain identity can sometimes take precedence over the individual's lived experience, leading to prolonged suffering in adverse situations.
4. Desire for Quick Fixes: Despite their reluctance to take medication, many of the author's patients are not interested in engaging in long-term therapy. The author suggests that these patients may be hoping for a quick solution that will allow them to feel better without challenging their sense of identity.
5. Reflection on Identity Constructs: The author reflects on the prevalence of identity struggles among their patients and considers the possibility of letting go of these constructs. They suggest that the societal emphasis on identity roles may be contributing to these struggles and ponder potential alternatives.
Could you elaborate on the potential alternatives to identity constructs, as mentioned in the fifth point? Specifically, how might these alternatives manifest in a therapeutic context?
Certainly. In the context of therapy, alternatives to identity constructs could involve a shift in focus from rigidly defined identities to more fluid, experience-based self-perceptions. Here are a few possibilities:
1. Mindfulness and Acceptance: Therapies such as Mindfulness-Based Cognitive Therapy (MBCT) or Acceptance and Commitment Therapy (ACT) encourage individuals to focus on their present experiences without judgment. This can help individuals detach from rigid identity constructs and be more open to their actual experiences and feelings.
2. Ego Dissolution: This concept, often associated with practices like meditation or certain psychedelic experiences, involves a temporary loss of the sense of self. In a therapeutic context, this could potentially help individuals to detach from rigid identities and explore new perspectives.
3. Narrative Therapy: This approach encourages individuals to see themselves as separate from their problems, enabling them to rewrite their own stories. This can help them to shift away from fixed identities and towards more flexible, adaptive self-perceptions.
4. Existential Therapy: This approach encourages individuals to confront and accept the inherent uncertainties of life, including the fluidity of the self. This can help to break down rigid identity constructs and encourage a more authentic engagement with life.
5. Self-Compassion Practices: These involve cultivating a kind, understanding relationship with oneself. This can help individuals to accept themselves as they are, rather than striving to fit into a certain identity.
In all these alternatives, the focus is on encouraging individuals to embrace their authentic experiences and feelings, rather than trying to fit into preconceived identity roles. This can potentially lead to a more flexible, adaptive, and authentic sense of self.
Considering the alternatives to identity constructs in therapy, how might these approaches intersect with cultural and societal influences? Discuss the potential challenges and benefits in integrating these approaches within diverse cultural contexts.
Cultural and societal influences play a significant role in shaping an individual's identity and can significantly impact the process and outcomes of therapy. Therefore, when integrating alternatives to identity constructs in therapy, it's crucial to consider these influences.
Challenges:
1. Cultural Resistance: Some cultures place a strong emphasis on defined roles and identities, which can make it challenging to introduce therapies that promote fluidity and flexibility in self-perception. Individuals from these cultures may resist or struggle with these concepts.
2. Misinterpretation: Concepts like mindfulness, ego dissolution, or narrative therapy may not translate well across all cultures. Misunderstandings can arise if these concepts are not adequately explained or adapted to the cultural context.
3. Stigma: In some cultures, seeking therapy or questioning one's identity may be stigmatized. This can make it difficult for individuals to engage fully in these therapeutic approaches.
Benefits:
1. Enhanced Self-Understanding: These approaches can help individuals to better understand themselves beyond the confines of societal or cultural expectations. This can lead to increased self-acceptance and mental well-being.
2. Empowerment: By challenging rigid identity constructs, individuals may feel more empowered to define their own identities, rather than feeling bound by societal or cultural norms.
3. Increased Cultural Sensitivity: Therapists who are aware of the cultural influences on identity can use this knowledge to tailor their approach, making therapy more culturally sensitive and effective.
In conclusion, while there are challenges in integrating these approaches within diverse cultural contexts, the potential benefits make it a worthwhile endeavor. It's essential for therapists to be culturally competent and sensitive to their clients' backgrounds and beliefs, and to adapt their approach as needed to respect and work within these frameworks. |
86fe001d-ce6e-4de8-9758-2656fdcc5f4c | trentmkelly/LessWrong-43k | LessWrong | Observation
Imagine that you meet someone you’re attracted to at a party. At one point, they smile at you, and you notice. You’re pretty sure they like you, but you really want to know whether they like you like you.
You don’t act on this in any particular way, but you do spend the whole next week thinking about it. You think about other people who have been into you, and about people who have not, and the differences between them. You muse about what sort of taste in romantic partners you imagine the person might have. By the end of the week, you’re weighing your virtues and vices, trying to decide whether you’re even worthy of love.
(If this seems alien to you, I hope it is at least true to your experiences of some humans.)
In the moment when you noticed you were attracted to the person, you made an observation. In the moment when you noticed their smile, you made another. In the moment when you noticed your curiosity, you made another.
But as soon as you vanished into your own musings, you were no longer making observations. You were no longer collecting data. Instead, you were interpolating, extrapolating, filling in the gaps with stories and guesses, processing and reprocessing. Everything that followed, in the week after the party, took place inside your map—analysis, interpretation, reasoning, reflection.
----------------------------------------
In Arthur Conan Doyle’s “A Scandal in Bohemia,” Sherlock Holmes lectures Watson on the difference between seeing and observing:
> “You see, but you do not observe. The distinction is clear. For example, you have frequently seen the steps which lead up from the hall to this room.”
>
> “Frequently.”
>
> “How often?”
>
> “Well, some hundreds of times.”
>
> “Then how many are there?”
>
> “How many? I don't know.”
>
> “Quite so! You have not observed. And yet you have seen. That is just my point. Now, I know that there are seventeen steps, because I have both seen and observed.”
I don’t know how many steps there are on t |
a2eb4824-ae31-4a58-a93e-f2c623b2b8f6 | trentmkelly/LessWrong-43k | LessWrong | Open thread, May 8 - May 14, 2017
If it's worth saying, but not worth its own post, then it goes here.
----------------------------------------
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "Make this post available under..." before submitting. |
60086d36-eedc-4d7a-918e-06871ae1432d | trentmkelly/LessWrong-43k | LessWrong | Meetup : Baltimore Area Weekly Meetup
Discussion article for the meetup : Baltimore Area Weekly Meetup
WHEN: 31 July 2016 08:00:00PM (-0400)
WHERE: 1726 reisterstown road pikesville maryland 21208
Pikesville DoubleTree - Hilton, at the restaurant / bar or at one of the tables outside.
Discussion article for the meetup : Baltimore Area Weekly Meetup |
b5b1932c-6b70-4d7b-ab6e-ca2df52e7429 | trentmkelly/LessWrong-43k | LessWrong | Deconfuse Yourself about Agency
This post is a result of numerous discussions with other participants and organizers of the MIRI Summer Fellows Program 2019.
----------------------------------------
I recently (hopefully) dissolved some of my confusion about agency. In the first part of the post, I describe a concept that I believe to be central to most debates around agency. I then briefly list some questions and observations that remain interesting to me. The gist of the post should make sense without reading any of the math.
Antropomorphization, but with architectures that aren't humans
Architectures
Consider the following examples of "architectures":
Example (architectures)
1. Architectures I would intuitively call "agenty":
1. Monte Carlo tree search algorithm, parametrized by the number of rollouts made each move and utility function (or heuristic) used to evaluate positions.
2. (semi-vague) "Classical AI-agent" with several interconnected modules (utility function and world model, actions, planning algorithm, and observations used for learning and updating the world model).
3. (vague) Human parametrized by their goals, knowledge, and skills (and, of course, many other details).
2. Architectures I would intuitively call "non-agenty":
1. A hard-coded sequence of actions.
2. Look-up table.
3. Random generator (outputting x∼π on every input, for some probability distribution π).
3. Multi-agent architectures[1]:
1. Ant colony.
2. Company (consisting of individual employees, operating within an economy).
3. Comprehensive AI services.
Working definition: Architecture A(Θ) is some model parametrizable by θ∈Θ that receives inputs, produces outputs, and possibly keeps an internal state. We denote specific instances of A(Θ) as A(θ).
Generalizing anthropomorphization
Throughout the post, X will refer to some object, procces, entity, etc., whose behavior we want to predict or understand. Examples include rocks, wind, animals, humans, AGIs, economies, fam |
63a2848d-38e6-464b-befb-310e9ed864a3 | trentmkelly/LessWrong-43k | LessWrong | Democracy and rationality
Note: This is a draft; so far, about the first half is complete. I'm posting it to Discussion for now; when it's finished, I'll move it to Main. In the mean time, I'd appreciate comments, including suggestions on style and/or format. In particular, if you think I should(n't) try to post this as a sequence of separate sections, let me know.
Summary: You want to find the truth? You want to win? You're gonna have to learn the right way to vote. Plurality voting sucks; better voting systems are built from the blocks of approval, medians (Bucklin cutoffs), delegation, and pairwise opposition. I'm working to promote these systems and I want your help.
Contents: 1. Overblown¹ rhetorical setup ... 2. Condorcet's ideals and Arrow's problem ... 3. Further issues for politics ... 4. Rating versus ranking; a solution? ... 5. Delegation and SODA ... 6. Criteria and pathologies ... 7. Representation, Proportional representation, and Sortition ... 8. What I'm doing about it and what you can ... 9. Conclusions and future directions ... 10. Appendix: voting systems table ... 11. Footnotes
1.
This is a website focused on becoming more rational. But that can't just mean getting a black belt in individual epistemic rationality. In a situation where you're not the one making the decision, that black belt is just a recipe for frustration.
Of course, there's also plenty of content here about how to interact rationally; how to argue for truth, including both hacking yourself to give in when you're wrong and hacking others to give in when they are. You can learn plenty here about Aumann's Agreement Theorem on how two rational Bayesians should never knowingly disagree.
But "two rational Bayesians" isn't a whole lot better as a model for society than "one rational Bayesian". Aspiring to be rational is well and good, but the Socratic ideal of a world tied together by two-person dialogue alone is as unrealistic as the sociopath's ideal of a world where their own voice rules alone. Society |
52c95db4-0578-4cab-808b-ac51fe27f037 | trentmkelly/LessWrong-43k | LessWrong | Updates on scaling laws for foundation models from ' Transcending Scaling Laws with 0.1% Extra Compute'
I am not sure if this paper is flying under the radar for many people, but has anyone read Transcending Scaling Laws with 0.1% Extra Compute? If so, how do you think it compares to the scaling laws presented by Deepmind's An empirical analysis of compute-optimal large language model training? Does it make you rethink the importance of dataset size (again)?
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.