id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
5ae862b1-0393-46b9-8770-8a1d4b2b158a | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Drawing Two Aces
Suppose I have a deck of four cards: The ace of spades, the ace of hearts, and two others (say, 2C and 2D).
You draw two cards at random.
*Scenario 1:* I ask you "Do you have the ace of spades?" You say "Yes." Then the probability that you are holding both aces is 1/3: There are three equiprobable arrangements of cards you could be holding that contain AS, and one of these is AS+AH.
*Scenario 2:* I ask you "Do you have an ace?" You respond "Yes." The probability you hold both aces is 1/5: There are five arrangements of cards you could be holding (all except 2C+2D) and only one of those arrangements is AS+AH.
Now suppose I ask you "Do you have an ace?"
You say "Yes."
I then say to you: "Choose one of the aces you're holding at random (so if you have only one, pick that one). Is it the ace of spades?"
You reply "Yes."
What is the probability that you hold two aces?
*Argument 1:* I now know that you are holding at least one ace and that one of the aces you hold is the ace of spades, which is just the same state of knowledge that I obtained in Scenario 1. Therefore the answer must be 1/3.
*Argument 2:* In Scenario 2, I know that I can *hypothetically* ask you to choose an ace you hold, and you must *hypothetically* answer that you chose either the ace of spades or the ace of hearts. My posterior probability that you hold two aces should be the same either way. [The expectation of my future probability must equal my current probability:](/lw/ii/conservation_of_expected_evidence/) If I expect to change my mind later, I should just give in and change my mind now. Therefore the answer must be 1/5.
Naturally I know which argument is correct. Do you? |
31ce6b8b-297e-4bab-bcc0-9e97f4721473 | StampyAI/alignment-research-dataset/arbital | Arbital | Correspondence visualizations for different interpretations of "probability"
[Recall](https://arbital.com/p/4y9) that there are three common interpretations of what it means to say that a coin has a 50% probability of landing heads:
- __The propensity interpretation:__ Some probabilities are just out there in the world. It's a brute fact about coins that they come up heads half the time; we'll call this the coin's physical "propensity towards heads." When we say the coin has a 50% probability of being heads, we're talking directly about this propensity.
- __The frequentist interpretation:__ When we say the coin has a 50% probability of being heads after this flip, we mean that there's a class of events similar to this coin flip, and across that class, coins come up heads about half the time. That is, the _frequency_ of the coin coming up heads is 50% inside the event class (which might be "all other times this particular coin has been tossed" or "all times that a similar coin has been tossed" etc).
- __The subjective interpretation:__ Uncertainty is in the mind, not the environment. If I flip a coin and slap it against my wrist, it's already landed either heads or tails. The fact that I don't know whether it landed heads or tails is a fact about me, not a fact about the coin. The claim "I think this coin is heads with probability 50%" is an _expression of my own ignorance,_ which means that I'd bet at 1 : 1 odds (or better) that the coin came up heads.
One way to visualize the difference between these approaches is by visualizing what they say about when a model of the world should count as a good model. If a person's model of the world is definite, then it's easy enough to tell whether or not their model is good or bad: We just check what it says against the facts. For example, if a person's model of the world says "the tree is 3m tall", then this model is [correct](https://arbital.com/p/correspondence_theory_of_truth) if (and only if) the tree is 3 meters tall.

Definite claims in the model are called "true" when they correspond to reality, and "false" when they don't. If you want to navigate using a map, you had better ensure that the lines drawn on the map correspond to the territory.
But how do you draw a correspondence between a map and a territory when the map is probabilistic? If your model says that a biased coin has a 70% chance of coming up heads, what's the correspondence between your model and reality? If the coin is actually heads, was the model's claim true? 70% true? What would that mean?

The advocate of __propensity__ theory says that it's just a brute fact about the world that the world contains ontologically basic uncertainty. A model which says the coin is 70% likely to land heads is true if and only the actual physical propensity of the coin is 0.7 in favor of heads.

This interpretation is useful when the laws of physics _do_ say that there are multiple different observations you may make next (with different likelihoods), as is sometimes the case (e.g., in quantum physics). However, when the event is deterministic — e.g., when it's a coin that has been tossed and slapped down and is already either heads or tails — then this view is largely regarded as foolish, and an example of the [https://arbital.com/p/-4yk](https://arbital.com/p/-4yk): The coin is just a coin, and has no special internal structure (nor special physical status) that makes it _fundamentally_ contain a little 0.7 somewhere inside it. It's already either heads or tails, and while it may _feel_ like the coin is fundamentally uncertain, that's a feature of your brain, not a feature of the coin.
How, then, should we draw a correspondence between a probabilistic map and a deterministic territory (in which the coin is already definitely either heads or tails?)
A __frequentist__ draws a correspondence between a single probability-statement in the model, and multiple events in reality. If the map says "that coin over there is 70% likely to be heads", and the actual territory contains 10 places where 10 maps say something similar, and in 7 of those 10 cases the coin is heads, then a frequentist says that the claim is true.

Thus, the frequentist preserves black-and-white correspondence: The model is either right or wrong, the 70% claim is either true or false. When the map says "That coin is 30% likely to be tails," that (according to a frequentist) means "look at all the cases similar to this case where my map says the coin is 30% likely to be tails; across all those places in the territory, 3/10ths of them have a tails-coin in them." That claim is definitive, given the set of "similar cases."
By contrast, a __subjectivist__ generalizes the idea of "correctness" to allow for shades of gray. They say, "My uncertainty about the coin is a fact about _me,_ not a fact about the coin; I don't need to point to other 'similar cases' in order to express uncertainty about _this_ case. I know that the world right in front of me is either a heads-world or a tails-world, and I have a [https://arbital.com/p/-probability_distribution](https://arbital.com/p/-probability_distribution) puts 70% probability on heads." They then draw a correspondence between their probability distribution and the world in front of them, and declare that the more probability their model assigns to the correct answer, the better their model is.

If the world _is_ a heads-world, and the probabilistic map assigned 70% probability to "heads," then the subjectivist calls that map "70% accurate." If, across all cases where their map says something has 70% probability, the territory is actually that way 7/10ths of the time, then the Bayesian calls the map "[https://arbital.com/p/-well_calibrated](https://arbital.com/p/-well_calibrated)". They then seek methods to make their maps more accurate, and better calibrated. They don't see a need to interpret probabilistic maps as making definitive claims; they're happy to interpret them as making estimations that can be graded on a sliding scale of accuracy.
## Debate
In short, the frequentist interpretation tries to find a way to say the model is definitively "true" or "false" (by identifying a collection of similar events), whereas the subjectivist interpretation extends the notion of "correctness" to allow for shades of gray.
Frequentists sometimes object to the subjectivist interpretation, saying that frequentist correspondence is the only type that has any hope of being truly objective. Under Bayesian correspondence, who can say whether the map should say 70% or 75%, given that the probabilistic claim is not objectively true or false either way? They claim that these subjective assessments of "partial accuracy" may be intuitively satisfying, but they have no place in science. Scientific reports ought to be restricted to frequentist statements, which are definitively either true or false, in order to increase the objectivity of science.
Subjectivists reply that the frequentist approach is hardly objective, as it depends entirely on the choice of "similar cases". In practice, people can (and do!) [abuse frequentist statistics](https://arbital.com/p/https://en.wikipedia.org/wiki/Data_dredging) by choosing the class of similar cases that makes their result look as impressive as possible (a technique known as "p-hacking"). Furthermore, the manipulation of subjective probabilities is subject to the [iron laws](https://arbital.com/p/1lz) of probability theory (which are the [only way to avoid inconsistencies and pathologies](https://arbital.com/p/) when managing your uncertainty about the world), so it's not like subjective probabilities are the wild west or something. Also, science has things to say about situations even when there isn't a huge class of objective frequencies we can observe, and science should let us collect and analyze evidence even then.
For more on this debate, see [https://arbital.com/p/4xx](https://arbital.com/p/4xx). |
3a80039a-563c-4b4c-acbe-627aa5339022 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "tldr; Complete narration of the Replacing Guilt series is available here: anchor.fm/guiltI discovered Nate Soares' Replacing Guilt series in late 2019 via the Bayesian Conspiracy podcast and found it immensely valuable. Over ~40 posts, Nate channels the guilt-based motivation that is common in rationalists / EAs into more productive emotional drives. I read the series on my Kindle, thanks to lifelonglearner's ePUB version, but was disappointed that no audio version existed. To make it more accessible, I reached out to Nate and got permission to produce the official audiobook.The rationalist community has a fantastic tradition of volunteer narration — Slate Star Codex , AI to Zombies, and, of course, HPMOR. These free resources have added immense value to my life, so narrating Replacing Guilt feels like my tiny way of reinvesting in our rationalist commons. For a 2-minute summary of the series and why you might be interested in it, check out this post (or listen to the audio version).You can find the individual episodes at anchor.fm/guilt. The complete audiobook can be streamed in any podcast app, or you can download the mp3 file here."
The excerpt provided comes from a user post on LessWrong.
What are the key concepts and ideas presented in Nate Soares' Replacing Guilt series that make it particularly valuable for rationalists and EAs?
Nate Soares' Replacing Guilt series is a collection of writings that focus on self-improvement, particularly in terms of motivation and emotional well-being. It is particularly valuable for rationalists and Effective Altruists (EAs) due to its focus on guilt-based motivation, a common emotional state within these communities.
The series primarily aims to channel this guilt into more productive emotional drives. This is achieved through various strategies and insights, some of which include:
1. **Replacing Guilt with Motivation**: Soares emphasizes the need to replace guilt with intrinsic motivation. He argues that guilt can be a destructive force that hinders productivity and well-being. Instead, he encourages readers to find their own personal motivations and use these as a driving force.
2. **Understanding and Managing Emotions**: The series delves into the importance of understanding one's own emotions and how they impact behavior and decision-making. It provides tools and strategies for managing emotions effectively.
3. **Self-Improvement and Personal Growth**: Soares advocates for continuous self-improvement and personal growth. He provides insights into how one can overcome personal limitations and strive for betterment.
4. **Rational Decision Making**: The series also touches on the concept of rational decision-making. It provides insights into how one can make decisions based on logic and reason, rather than being driven by emotions.
5. **Effective Altruism**: The series is also valuable for EAs as it explores the concept of effective altruism. It provides insights into how one can make a significant positive impact on the world, which aligns with the core principles of effective altruism.
In summary, the Replacing Guilt series provides valuable insights into guilt-based motivation, emotional management, self-improvement, rational decision-making, and effective altruism, making it a valuable resource for rationalists and EAs.
How might the principles outlined in Soares' Replacing Guilt series be practically applied in everyday life, particularly in decision-making and conflict resolution scenarios?
The principles outlined in Soares' Replacing Guilt series can be practically applied in everyday life in a variety of ways, particularly in decision-making and conflict resolution scenarios. Here are some ways these principles can be applied:
1. **Replacing Guilt with Motivation in Decision-Making**: If you find yourself feeling guilty about a decision you need to make, you can apply Soares' principle of replacing guilt with motivation. Instead of allowing guilt to influence your decision, identify what truly motivates you and let that guide your decision-making process.
2. **Understanding and Managing Emotions in Conflict Resolution**: In conflict resolution scenarios, understanding and managing your emotions can be crucial. If you find yourself becoming emotionally charged, use the tools and strategies Soares provides to manage your emotions effectively. This can help you approach the conflict in a more rational and calm manner, which can lead to a more effective resolution.
3. **Applying Self-Improvement and Personal Growth Principles**: If you're faced with a decision or conflict that challenges your personal limitations, use it as an opportunity for self-improvement and personal growth. Reflect on what you can learn from the situation and how you can use it to better yourself.
4. **Using Rational Decision Making in Everyday Life**: Apply rational decision-making principles in your everyday life, from deciding what to have for breakfast to making major life decisions. Instead of letting emotions dictate your choices, make decisions based on logic and reason.
5. **Applying Effective Altruism Principles**: If you're deciding how to contribute to your community or resolve a conflict that impacts others, consider the principles of effective altruism. Think about how your actions can have the most significant positive impact and let that guide your decision-making process.
In summary, the principles outlined in Soares' Replacing Guilt series can be applied in everyday life by replacing guilt with motivation, understanding and managing emotions, striving for self-improvement and personal growth, making rational decisions, and applying the principles of effective altruism. |
db689a55-0d26-4ffe-bdd3-7d264665219f | trentmkelly/LessWrong-43k | LessWrong | Another attempt to explain UDT
(Attention conservation notice: this post contains no new results, and will be obvious and redundant to many.)
Not everyone on LW understands Wei Dai's updateless decision theory. I didn't understand it completely until two days ago. Now that I had the final flash of realization, I'll try to explain it to the community and hope my attempt fares better than previous attempts.
It's probably best to avoid talking about "decision theory" at the start, because the term is hopelessly muddled. A better way to approach the idea is by examining what we mean by "truth" and "probability" in the first place. For example, is it meaningful for Sleeping Beauty to ask whether it's Monday or Tuesday? Phrased like this, the question sounds stupid. Of course there's a fact of the matter as to what day of the week it is! Likewise, in all problems involving simulations, there seems to be a fact of the matter whether you're the "real you" or the simulation, which leads us to talk about probabilities and "indexical uncertainty" as to which one is you.
At the core, Wei Dai's idea is to boldly proclaim that, counterintuitively, you can act as if there were no fact of the matter whether it's Monday or Tuesday when you wake up. Until you learn which it is, you think it's both. You're all your copies at once.
More formally, you have an initial distribution of "weights" on possible universes (in the currently most general case it's the Solomonoff prior) that you never update at all. In each individual universe you have a utility function over what happens. When you're faced with a decision, you find all copies of you in the entire "multiverse" that are faced with the same decision ("information set"), and choose the decision that logically implies the maximum sum of resulting utilities weghted by universe-weight. If you possess some useful information about the universe you're in, it's magically taken into account by the choice of "information set", because logically, your decision cannot a |
d639cdec-7e52-477f-8778-ea354dc7bf52 | trentmkelly/LessWrong-43k | LessWrong | Scientific Evidence, Legal Evidence, Rational Evidence
Suppose that your good friend, the police commissioner, tells you in strictest confidence that the crime kingpin of your city is Wulky Wilkinsen. As a rationalist, are you licensed to believe this statement? Put it this way: if you go ahead and insult Wulky, I’d call you foolhardy. Since it is prudent to act as if Wulky has a substantially higher-than-default probability of being a crime boss, the police commissioner’s statement must have been strong Bayesian evidence.
Our legal system will not imprison Wulky on the basis of the police commissioner’s statement. It is not admissible as legal evidence. Maybe if you locked up every person accused of being a crime boss by a police commissioner, you’d initially catch a lot of crime bosses, and relatively few people the commissioner just didn’t like. But unrestrained power attracts corruption like honey attracts flies: over time, you’d catch fewer and fewer real crime bosses (who would go to greater lengths to ensure anonymity), and more and more innocent victims.
This does not mean that the police commissioner’s statement is not rational evidence. It still has a lopsided likelihood ratio, and you’d still be a fool to insult Wulky. But on a social level, in pursuit of a social goal, we deliberately define “legal evidence” to include only particular kinds of evidence, such as the police commissioner’s own observations on the night of April 4th. All legal evidence should ideally be rational evidence, but not the other way around. We impose special, strong, additional standards before we anoint rational evidence as “legal evidence.”
As I write this sentence at 8:33 p.m., Pacific time, on August 18th, 2007, I am wearing white socks. As a rationalist, are you licensed to believe the previous statement? Yes. Could I testify to it in court? Yes. Is it a scientific statement? No, because there is no experiment you can perform yourself to verify it. Science is made up of generalizations which apply to many particular instances, |
5aa9f5e5-5092-4488-a511-ee23a2d1215a | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | What do XPT forecasts tell us about AI risk?
*This post was co-authored by the Forecasting Research Institute and Rose Hadshar. Thanks to Josh Rosenberg for managing this work, Zachary Jacobs and Molly Hickman for the underlying data analysis, Coralie Consigny and Bridget Williams for fact-checking and copy-editing, the whole FRI XPT team for all their work on this project, and our external reviewers.*
In 2022, the [Forecasting Research Institute](https://forecastingresearch.org/) (FRI) ran the Existential Risk Persuasion Tournament (XPT). From June through October 2022, 169 forecasters, including 80 superforecasters and 89 experts, developed forecasts on various questions related to existential and catastrophic risk. Forecasters moved through a four-stage deliberative process that was designed to incentivize them not only to make accurate predictions but also to provide persuasive rationales that boosted the predictive accuracy of others’ forecasts. Forecasters stopped updating their forecasts on 31st October 2022, and are not currently updating on an ongoing basis. FRI plans to run future iterations of the tournament, and open up the questions more broadly for other forecasters.
You can see the overall results of the XPT [here](https://static1.squarespace.com/static/635693acf15a3e2a14a56a4a/t/64abffe3f024747dd0e38d71/1688993798938/XPT.pdf).
Some of the questions were related to AI risk. This post:
* Sets out the XPT [forecasts](https://forum.effectivealtruism.org/posts/K2xQrrXn5ZSgtntuT/what-do-xpt-forecasts-tell-us-about-ai-risk-1#The_forecasts) on AI risk, and puts them in [context](https://forum.effectivealtruism.org/posts/K2xQrrXn5ZSgtntuT/what-do-xpt-forecasts-tell-us-about-ai-risk-1#The_forecasts_in_context).
* Lays out the [arguments](https://forum.effectivealtruism.org/posts/K2xQrrXn5ZSgtntuT/what-do-xpt-forecasts-tell-us-about-ai-risk-1#The_arguments_made_by_XPT_forecasters) given in the XPT for and against these forecasts.
* Offers some [thoughts](https://forum.effectivealtruism.org/posts/K2xQrrXn5ZSgtntuT/what-do-xpt-forecasts-tell-us-about-ai-risk-1#What_do_XPT_forecasts_tell_us_about_AI_risk_) on what these forecasts and arguments show us about AI risk.
TL;DR
=====
* **XPT superforecasters predicted that** ***catastrophic*****and** ***extinction*****risk from AI by 2030 is very low** (0.01% catastrophic risk and 0.0001% extinction risk).
* **XPT superforecasters predicted that** ***catastrophic*****risk from nuclear weapons by 2100 is almost twice as likely as** ***catastrophic*****risk from AI by 2100** (4% vs 2.13%).
* **XPT superforecasters predicted that** ***extinction*****risk from AI by 2050 and 2100 is roughly an order of magnitude larger than extinction****risk from nuclear, which in turn is an order of magnitude larger than non-anthropogenic extinction****risk**(see [here](https://forum.effectivealtruism.org/posts/K2xQrrXn5ZSgtntuT/what-do-xpt-forecasts-tell-us-about-ai-risk-1#Forecasts_on_other_risks) for details).
* **XPT superforecasters more than quadruple their forecasts for AI extinction risk by 2100 if conditioned on AGI or TAI by 2070** (see [here](https://forum.effectivealtruism.org/posts/K2xQrrXn5ZSgtntuT/what-do-xpt-forecasts-tell-us-about-ai-risk-1#How_sensitive_are_XPT_AI_risk_forecasts_to_AI_timelines_) for details).
* **XPT domain experts predicted that AI extinction risk by 2100 is far greater than XPT superforecasters do** (3% for domain experts, and 0.38% for superforecasters by 2100).
* **Although XPT superforecasters and experts disagreed substantially about AI risk, both superforecasters and experts still prioritized AI as an area for marginal resource allocation** (see [here](https://forum.effectivealtruism.org/posts/K2xQrrXn5ZSgtntuT/what-do-xpt-forecasts-tell-us-about-ai-risk-1#Resource_allocation_to_different_risks) for details).
* **It’s unclear how accurate these forecasts will prove, particularly as superforecasters have not been evaluated on this timeframe before.**[[1]](#fn80213xc33ps)
The forecasts
=============
In the table below, we present forecasts from the following groups:
* Superforecasters: median forecast across superforecasters in the XPT.
* Domain experts: median forecasts across all AI experts in the XPT.
(See [our discussion of aggregation choices](https://static1.squarespace.com/static/635693acf15a3e2a14a56a4a/t/64abffe3f024747dd0e38d71/1688993798938/XPT.pdf) (pp. 20–22) for why we focus on medians.)
| | | | | | |
| --- | --- | --- | --- | --- | --- |
| **Question** | **Forecasters** | **N** | **2030** | **2050** | **2100** |
| [AI Catastrophic risk](https://docs.google.com/document/d/1CAmw1g_Y3siZGZaaYJjMRjEIyV4HURhCZf5rZ4a6-d8/edit) (>10% of humans die within 5 years) | Superforecasters | 88 | 0.01% | 0.73% | 2.13% |
| Domain experts | 30 | 0.35% | 5% | 12% |
| [AI Extinction risk](https://docs.google.com/document/d/1NMC1RV8XD0zcvVUfgcJPx7rBKqoNOk7pVJ74C9-JtLY/edit) (human population <5,000) | Superforecasters | 88 | 0.0001% | 0.03% | 0.38% |
| Domain experts | 29 | 0.02% | 1.1% | 3% |
The forecasts in context
========================
Different methods have been used to estimate AI risk:
* Surveying experts of various kinds, e.g. [Sanders and Bostrom, 2008](https://www.fhi.ox.ac.uk/reports/2008-1.pdf); [Grace et al. 2017](https://arxiv.org/pdf/1705.08807.pdf).
* Doing in-depth investigations, e.g. [Ord, 2020](https://80000hours.org/wp-content/uploads/2020/03/The-Precipice-Introduction-Chapter-1.pdf); [Carlsmith, 2021](https://docs.google.com/document/d/1smaI1lagHHcrhoi6ohdq3TYIZv0eNWWZMPEy8C8byYg/edit#).
The XPT forecasts are distinctive relative to expert surveys in that:
* The forecasts were incentivized: for long-run questions, XPT used ‘reciprocal scoring’ rules to incentivize accurate forecasts (see [here](https://docs.google.com/document/d/e/2PACX-1vS9x36iy_DKUsr233p5tctgKJjUDWta36jeVq2M23DtD4Tnsa1AQw9IIwejqLH4j21nNUIby3aO2_Yf/pub#h.q5gf6iwq49m8) for details).
* Forecasts were solicited from superforecasters as well as experts.
* Forecasters were asked to write detailed rationales for their forecasts, and good rationales were incentivized through prizes.
* Forecasters worked on questions in a four-stage deliberative process in which they refined their individual forecasts and their rationales through collaboration with teams of other forecasters.
Should we expect XPT forecasts to be more or less accurate than previous estimates? This is unclear, but some considerations are:
* Relative to some previous forecasts (particularly those based on surveys), XPT forecasters spent a long time thinking and writing about their forecasts, and were incentivized to be accurate.
* XPT forecasters with high reciprocal scoring accuracy may be more accurate.
+ There is evidence that reciprocal scoring accuracy correlates with short-range forecasting accuracy, though it is unclear if this extends to long-range accuracy.[[2]](#fndoj3pjjbqdi)
* XPT (and other) superforecasters have a history of accurate forecasts (primarily on short-range geopolitical and economic questions), and may be less subject to biases such as groupthink in comparison to domain experts.
+ On the other hand, there is limited evidence that superforecasters’ accuracy extends to technical domains like AI, long-range forecasts, or out-of-distribution events.
Forecasts on other risks
------------------------
Where not otherwise stated, the XPT forecasts given are superforecasters’ medians.
| | | | | | |
| --- | --- | --- | --- | --- | --- |
| **Forecasting Question** | **2030** | **2050** | **2100** | **Ord, existential catastrophe 2120**[[3]](#fnntv8t4qcdzr) | **Other relevant forecasts**[[4]](#fnow32yo7ulzo) |
| **Catastrophic risk (>10% of humans die in 5 years)** | |
| **Biological**[[5]](#fnsc9q06yqib8) | - | - | 1.8% | - | - |
| **Engineered pathogens**[[6]](#fnp4okw8icuja) | - | - | 0.8% | | |
| **Natural pathogens**[[7]](#fnknlpvnn2x0s) | - | - | 1% | | |
| [**AI**](https://docs.google.com/document/d/1CAmw1g_Y3siZGZaaYJjMRjEIyV4HURhCZf5rZ4a6-d8/edit)**(superforecasters)** | 0.01% | 0.73% | 2.13% | - | [Carlsmith](https://docs.google.com/document/d/1smaI1lagHHcrhoi6ohdq3TYIZv0eNWWZMPEy8C8byYg/edit#), 2070, ~14%[[8]](#fnaxxrp32ekok)[Sandberg and Bostrom](https://www.fhi.ox.ac.uk/reports/2008-1.pdf), 2100, 5%[[9]](#fn0fp9xdb9mg0b)[Metaculus](https://www.metaculus.com/questions/2568/ragnar%2525C3%2525B6k-seriesresults-so-far/), 2100, 3.99%[[10]](#fnpfi6boisns) |
| [**AI**](https://docs.google.com/document/d/1CAmw1g_Y3siZGZaaYJjMRjEIyV4HURhCZf5rZ4a6-d8/edit)**(domain experts)** | 0.35% | 5% | 12% | - |
| [**Nuclear**](https://docs.google.com/document/d/199bSOzlT5PR4TAOU3myFfdXpBfbNDAG69wESSnl_MaE/edit) | 0.50% | 1.83% | 4% | - | - |
| [**Non-anthropogenic**](https://docs.google.com/document/d/1Syu1oqedSqWh1KneRH-LtV__3ZVdrYoDdL3FpG7v0eQ/edit) | 0.0026% | 0.015% | 0.05% | - | - |
| [**Total catastrophic risk**](https://docs.google.com/document/d/1H11Aq_XTZkKzjyECb_qdy4cbmJWxGHkYAI2JvhrjjdY/edit)[[11]](#fnjq2p2ewzuyo) | 0.85% | 3.85% | 9.05% | - | - |
| **Extinction risk (human population <5000)** | |
| **Biological**[[12]](#fncrggzlctu1m) | - | - | 0.012% | | |
| **Engineered pathogens**[[13]](#fnb08wujhvka) | - | - | 0.01% | 3.3% | |
| **Natural pathogens**[[14]](#fnuaj6z25owna) | - | - | 0.0018% | 0.01% | |
| [**AI**](https://docs.google.com/document/d/1NMC1RV8XD0zcvVUfgcJPx7rBKqoNOk7pVJ74C9-JtLY/edit)**(superforecasters)** | 0.0001% | 0.03% | 0.38% | 10%[[15]](#fn3jwcqlwdvx3) | [Carlsmith](https://docs.google.com/document/d/1smaI1lagHHcrhoi6ohdq3TYIZv0eNWWZMPEy8C8byYg/edit#), 2070, 5%[[16]](#fnahbp0za6n9n)[Sandberg and Bostrom](https://www.fhi.ox.ac.uk/reports/2008-1.pdf), 2100, 5%[[17]](#fnyavja33w45c)[Metaculus](https://www.metaculus.com/questions/2568/ragnar%2525C3%2525B6k-seriesresults-so-far/), 2100, 1.9%[[18]](#fnuaq38pge3a)[Fodor](https://forum.effectivealtruism.org/posts/2sMR7n32FSvLCoJLQ/critical-review-of-the-precipice-a-reassessment-of-the-risks), 2120, 0.0005%[[19]](#fnw21cmelbtxq)[Future Fund](https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize), 2070, 5.8%[[20]](#fn6e8rqfvh3cq)[Future Fund](https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize) lower prize threshold, 2070, 1.4%[[21]](#fneduqf55xmjt) |
| [**AI**](https://docs.google.com/document/d/1NMC1RV8XD0zcvVUfgcJPx7rBKqoNOk7pVJ74C9-JtLY/edit)**(domain experts)** | 0.02% | 1.1% | 3% |
| [**Nuclear**](https://docs.google.com/document/d/1-hkApWaPqETJLZ6Z0nXbtG0b--EihUdNTcB38fTAYI8/edit) | 0.001% | 0.01% | 0.074% | 0.1%[[22]](#fn7wtjz3i2iwy) | - |
| [**Non-anthropogenic**](https://docs.google.com/document/d/1GUNJD-ogiMRaPgRmJA_4uo_U4EqBiDSnBe-lxuMo5qo/edit) | 0.0004% | 0.0014% | 0.0043% | 0.01%[[23]](#fn3xc3mc5vph4) | - |
| [**Total extinction risk**](https://docs.google.com/document/d/1wENxRHoCrNU4MXfusy2Txh6onO9zt5kIl-K-OLZv5FQ/edit)[[24]](#fn85esd6mw6ia) | 0.01% | 0.3% | 1% | 16.67%[[25]](#fnl39s7xfksm) | - |
### Resource allocation to different risks
All participants in the XPT responded to an intake and a postmortem survey. In these surveys, participants were asked various questions about resource allocation:
* If you could allocate an additional $10,000 to X-risk avoidance, how would you divide the money among the following topics?
* If you could allocate the time of 100 new researchers (assuming they are generalists who could be effective in a wide range of fields), how would you divide them among the following topics?
* Assume we are in a world where there are no current attempts by public institutions, governments, or private individuals or organizations to allocate spending toward catastrophic risk avoidance. If you were to allocate $50 billion to the following risk avoidance areas, what fraction of the money would you allocate to each area?
Across superforecasters and experts and across all three resource allocation questions, AI received the single largest allocation, with one exception (superforecasters in the postmortem survey on the $50bn question):
The only instance where AI did not receive the largest single allocation is superforecasters’ postmortem allocation on the $50bn question (where AI received the second highest allocation after nuclear). The $50bn question is also the only absolute resource allocation question (the other two are marginal), and the question where AI’s allocation was smallest in percentage terms (across supers and experts and across both surveys).
A possible explanation here is that XPT forecasters see AI as underfunded on the margin — although superforecasters think that in absolute terms AI should still receive less funding than nuclear risk.
The main contribution of these results to what the XPT tells us about AI risk is that even though superforecasters and experts disagreed substantially about the probability of AI risk, and even though the absolute probabilities of superforecasters for AI risk were 1% or lower, both superforecasters and experts still prioritize AI as an area for marginal resource allocation.
Forecasts from the top reciprocal scoring quintile
--------------------------------------------------
In the XPT, forecasts for years later than 2030 were incentivized using a method called [reciprocal scoring](https://dx.doi.org/10.2139/ssrn.3954498). For each of the relevant questions, forecasters were asked to give their own forecasts, while also predicting the forecasts of other experts and superforecasters. Forecasters who most accurately predicted superforecaster and expert forecasts got a higher reciprocal score.
In the course of the XPT, there were systematic differences between the forecasts of those with high reciprocal scores and those with lower scores, in particular:
* Superforecasters outperformed experts at reciprocal scoring. They also tended to be more skeptical of high risks and fast progress.
* Higher reciprocal scores within both groups — experts and superforecasters — correlated with lower estimates of catastrophic and extinction risk.
In other words, the better forecasters were at discerning what other people would predict, the less concerned they were about extinction.
We won’t know for many decades which forecasts were more accurate, and as reciprocal scoring is a new method, there is insufficient evidence to definitively establish its correlation with the accuracy of long-range forecasts. However, in the plausible case that reciprocal scoring accuracy does indeed correlate with overall accuracy, it would justify giving more weight to forecasts from individuals with high reciprocal scores — which are lower risk predictions in the XPT. Readers should exercise their own judgment in determining how to read these findings.
It is also important to note that, in the present case, superforecasters’ higher scores were driven by their more accurate predictions of superforecasters’ views (both groups were comparable at predicting experts’ views). It is therefore conceivable that their superior performance is due to the fact that superforecasters are a relatively small social group and have spent lots of time discussing forecasts with one another, rather than due to general higher accuracy.
Here is a comparison of median forecasts on AI risk between top reciprocal scoring (RS) quintile forecasters and all forecasters, for superforecasters and domain experts:
| | | | | |
| --- | --- | --- | --- | --- |
| **Question** | **Forecasters** | **2030** | **2050** | **2100** |
| [**AI Catastrophic risk**](https://docs.google.com/document/d/1CAmw1g_Y3siZGZaaYJjMRjEIyV4HURhCZf5rZ4a6-d8/edit)**(>10% of humans die within 5 years)** | **RS top quintile - superforecasters (n=16)** | 0.001% | 0.46% | 1.82% |
| **RS top quintile - domain experts (n=5)** | 0.03% | 0.5% | 3% |
| **Superforecasters (n=88)** | 0.01% | 0.73% | 2.13% |
| **Domain experts (n=30)** | 0.35% | 5% | 12% |
| [**AI Extinction risk**](https://docs.google.com/document/d/1NMC1RV8XD0zcvVUfgcJPx7rBKqoNOk7pVJ74C9-JtLY/edit)**(human population <5,000)** | **RS top quintile - superforecasters (n=16)** | 0.0001% | 0.002% | 0.088% |
| **RS top quintile - domain experts (n=5)** | 0.0003% | 0.08% | 0.24% |
| **Superforecasters (n=88)** | 0.0001% | 0.03% | 0.38% |
| **Domain experts (n=29)** | 0.02% | 1.1% | 3% |
From this, we see that:
* The top RS quintiles predicted medians which were closer to overall superforecaster medians than overall domain expert medians, with no exceptions.
* The top RS quintiles tended to predict lower medians than both superforecasters overall and domain experts overall, with some exceptions:
+ The top RS quintile for domain experts predicted higher *catastrophic* risk by 2100 than superforecasters overall (3% to 2.13%).
+ The top RS quintile for superforecasters predicted the same *extinction* risk by 2030 as superforecasters overall (0.0001%).
+ The top RS quintile for domain experts predicted higher *extinction* risk by 2030 and 2050 than superforecasters overall (2030: 0.0003% to 0.0001%; 2050: 0.08% to 0.03%).
The arguments made by XPT forecasters
=====================================
XPT forecasters were grouped into teams. Each team was asked to write up a ‘rationale’ summarizing the main arguments that had been made during team discussions about the forecasts different team members made. The below summarizes the main arguments made across all teams in the XPT. Footnotes contain direct quotes from team rationales.
The core arguments about catastrophic and extinction risk from AI centered around:
1. **Whether sufficiently advanced AI would be developed in the relevant timeframe.**
2. **Whether there are plausible mechanisms for advanced AI to cause catastrophe/human extinction.**
Other important arguments made included:
1. **Whether there are incentives for advanced AI to cause catastrophe/human extinction.**
2. **Whether advanced AI would emerge suddenly.**
3. **Whether advanced AI would be misaligned.**
4. **Whether humans would empower advanced AI.**
1. Whether sufficiently advanced AI would be developed in the relevant timeframe
--------------------------------------------------------------------------------
There is a full summary of timelines arguments in the XPT in [this](https://forum.effectivealtruism.org/posts/KGGDduXSwZQTQJ9xc/what-do-xpt-forecasts-tell-us-about-ai-timelines) post.
Main arguments given against:
* Scaling laws may not hold, such that new breakthroughs are needed.[[26]](#fnpi0j7k19p8)
* There may be attacks on AI vulnerabilities.[[27]](#fnpq0xrr230tc)
* The forecasting track record on advanced AI is poor.[[28]](#fn1bohi53al76)
Main arguments given for:
* Recent progress is impressive.[[29]](#fnfqlkz6r841h)
* Scaling laws may hold.[[30]](#fndo42qpi3xa)
* Advances in quantum computing or other novel technology may speed up AI development.[[31]](#fnrb9l883ekkg)
* Recent progress has been faster than predicted.[[32]](#fn2t0y0ehwxkw)
2. Whether there are plausible mechanisms for advanced AI to cause catastrophe/human extinction
-----------------------------------------------------------------------------------------------
Main arguments given against:
* The logistics would be extremely challenging.[[33]](#fnecd2iu843l)
* Millions of people live very remotely, and AI would have little incentive to pay the high costs of killing them.[[34]](#fn2ha1kz3ccu4)
* Humans will defend themselves against this.[[35]](#fn8q3pzpiynhx)
* AI might improve security as well as degrade it.[[36]](#fncv3aeky93a)
Main arguments given for:
* There are many possible mechanisms for AI to cause such events.
* Mechanisms cited include: nanobots,[[37]](#fnijj5qbceuc) bioweapons,[[38]](#fnt95vn8jgyp) nuclear weapons,[[39]](#fndwmzu3wjxxk) attacks on the supply chain,[[40]](#fn3y4shvq642s) and novel technologies.[[41]](#fnuxlbp5aer6j)
* AI systems might recursively self-improve such that mechanisms become available to it.[[42]](#fnfg5n2zkhvz)
* It may be the case that an AI system or systems only needs to succeed once for these events to occur.[[43]](#fnpa0n04dr4vf)
3. Whether there are incentives for advanced AI to cause catastrophe/human extinction
-------------------------------------------------------------------------------------
Main arguments given against:
* AI systems won’t have ill intent.[[44]](#fnw1pa1pt6jg)
* AI might seek resources in space rather than on earth.[[45]](#fng4eaequzoh8)
Main arguments given for:
* AI systems might have an incentive to preemptively prevent shutdown.[[46]](#fnro0j64oy6vg)
* AI systems might maximize reward so hard that they use up resources humans need to survive.[[47]](#fnb1w1ri6ur7h)
* AI systems might optimize their environment in ways that make earth uninhabitable to humans, e.g. by reducing temperatures.[[48]](#fnulfzi07ugu8)
* AI systems might fight each other for resources, with humans as collateral.[[49]](#fnmpax48zg69)
4. Whether advanced AI would emerge suddenly
--------------------------------------------
Main arguments given for:
* AIs might recursively self-improve, such that they reach very high levels of advancement very quickly.[[50]](#fn5s7sf3vjq89)
* Recent progress has been unexpectedly sudden.[[51]](#fn8vgsx07gqlf)
5. Whether advanced AI would be misaligned
------------------------------------------
Main arguments given against:
* AI safety is a growing field and will continue to grow.[[52]](#fnbl96ytr32r)
* AI systems may help with AI safety.[[53]](#fnso9lmga1sns)
Main arguments given for:
* AI safety is not progressing as fast as AI development.[[54]](#fnyeg2yhmfuy)
* AI developers are not aligned with humanity.[[55]](#fnj994hpdho5e)
* Investment may be too low, because of collective action problems and the fact that risks from AI sound strange.[[56]](#fnxl7afue30gh)
* We may only get a few shots to align AI systems, particularly if takeoff speeds are high.[[57]](#fnwy0c4sprca)
6. Whether humans would empower advanced AI
-------------------------------------------
Main arguments given against:
* AI will not be given control over critical systems, because this will be seen as risky.[[58]](#fne4ffuzh7xwc)
* AI systems will be regulated in a way which successfully reduces risk.[[59]](#fnxrmbekjwfp8)
Main arguments given for:
* AI will be given control over critical systems, because this will be so much more efficient.[[60]](#fngh07sfiv5cp)
* AI systems will not be regulated in a way which successfully reduces risk.[[61]](#fn4c2ogkgdncy)
* If takeoff speeds are high, it will be hard for humans to stay in control.[[62]](#fnsjutz0ovjx)
How sensitive are XPT AI risk forecasts to AI timelines?
========================================================
The XPT did not include conditional forecasting questions about AI extinction risk. However, in the postmortem survey after the tournament, participants were asked several conditional questions.
Their conditional forecasts are displayed in the table below.
Note that:
* Only a subset of XPT forecasters responded to the postmortem survey, so in the table below we compare the postmortem results with the XPT forecasts of only the subset of XPT forecasters who responded to the postmortem survey, for comparability. (This is why the XPT medians in this table differ from medians quoted in the rest of the post, which are based on all XPT forecasts.)
* The definitions of extinction used were slightly different:
+ The XPT asked “What is the probability that artificial intelligence will cause human extinction or reduce the global population below 5,000 by 2030/2050/2100?” with full details [here](https://docs.google.com/document/d/1NMC1RV8XD0zcvVUfgcJPx7rBKqoNOk7pVJ74C9-JtLY/edit).
+ The postmortem asked “Assume that artificial general intelligence (AGI) has arisen by 2070. What is the probability of human extinction by [2100/2200]?” with no further details on extinction.
| | | |
| --- | --- | --- |
| **Question** | **Superforecasters (n=78)** | **Domain experts** |
| **(XPT postmortem subset) Unconditional AI Extinction risk by 2100** | 0.225% | 2% (n=21) |
| AI Extinction risk by 2100|AGI\* by 2070 | 1% | 6% (n=23) |
| AI Extinction risk by 2200|AGI\* by 2070 | 3% | 7% (n=23) |
| AI Extinction risk by 2100|TAI\*\* by 2070 | 1% | 3% (n=23) |
| AI Extinction risk by 2200|TAI\*\* by 2070 | 3% | 5% (n=23) |
\* “Artificial general intelligence is defined here as any scenario in which cheap AI systems are fully substitutable for human labor, or if AI systems power a comparably profound transformation (in economic terms or otherwise) as would be achieved in such a world.”
\*\* “Transformative AI is defined here as any scenario in which global real GDP during a year exceeds 115% of the highest GDP reported in any full prior year.”
From this, we see that:
* **Superforecasters more than quadruple their extinction risk forecasts by 2100 if conditioned on AGI or TAI by 2070.**
* Domain experts also increase their forecasts of extinction by 2100 conditioned on AGI or TAI by 2070, but by a smaller factor.
* When you extend the timeframe to 2200 as well as condition on AGI/TAI, superforecasters more than 10x their forecasts. Domain experts also increase their forecasts.
What do XPT forecasts tell us about AI risk?
============================================
This is unclear:
* Which conclusions to draw from the XPT forecasts depends substantially on your priors on AI timelines to begin with, and your views on which groups of people’s forecasts on these topics you expect to be most accurate.
* When it comes to action relevance, a lot depends on factors beyond the scope of the forecasts themselves:
+ Tractability: if AI risk is completely intractable, it doesn’t matter whether it’s high or low.
+ Current margins: if current spending on AI is sufficiently low, the risk being low might not affect current margins.
- For example, XPT forecasters prioritized [allocating resources](https://forum.effectivealtruism.org/posts/K2xQrrXn5ZSgtntuT/what-do-xpt-forecasts-tell-us-about-ai-risk-1#Resource_allocation_to_different_risks) to reducing AI risk over reducing other risks, in spite of lower AI risk forecasts than forecasts on some of those other risks.
+ Personal fit: for individuals, decisions may be dominated by considerations of personal fit, such that these forecasts alone don’t change much.
* There are many uncertainties around how accurate to expect these forecasts to be:
+ There is limited evidence on how accurate long-range forecasts are.[[63]](#fn0k31xbxdsp2d)
+ There is limited evidence on whether superforecasters or experts are likely to be more accurate in this context.
+ There is limited evidence on the relationship between reciprocal scoring accuracy and long-range accuracy.
That said, there are some things to note about the XPT results on AI risk and AI timelines:
* These are the first incentivized public forecasts from superforecasters on AI x-risk and AI timelines.
* XPT superforecasters think extinction from AI by 2030 is really, really unlikely.
* XPT superforecasters think that catastrophic risk from nuclear by 2100 is twice as likely as from AI.
+ On the one hand, they don’t think that AI is the main catastrophic risk we face.
+ On the other hand, they think AI is half as dangerous as nuclear weapons.
* XPT superforecasters think that by 2050 and 2100, extinction risk from AI is roughly an order of magnitude larger than from nuclear, which in turn is an order of magnitude more than from non-anthropogenic risks.
+ Ord and XPT superforecasters agree on these ratios, though not on the absolute magnitude of the risks.
* Superforecasters more than quadruple their extinction risk forecasts if conditioned on AGI or TAI by 2070.
* XPT domain experts think that risk from AI is far greater than XPT superforecasters do.
+ By 2100, domain experts’ forecast for catastrophic risk from AI is around four times that of superforecasters, and their extinction risk forecast is around 10 times as high.
1. **[^](#fnref80213xc33ps)** See [here](https://www.openphilanthropy.org/research/how-feasible-is-long-range-forecasting/) for a discussion of the feasibility of long-range forecasting.
2. **[^](#fnrefdoj3pjjbqdi)**See [Karger et al. 2021](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3954498).
3. **[^](#fnrefntv8t4qcdzr)**Toby Ord, The Precipice (London: Bloomsbury Publishing, 2020), 45. Ord’s definition is strictly broader than XPT’s. Ord defines existential catastrophe as “the destruction of humanity’s long-term potential”. He uses 2120 rather than 2100 as his resolution date, so his estimates are not directly comparable to XPT forecasts.
4. **[^](#fnrefow32yo7ulzo)**Other forecasts may use different definitions or resolution dates. These are detailed in the footnotes.
5. **[^](#fnrefsc9q06yqib8)**This row is the sum of the two following rows (catastrophic risk from engineered and from natural pathogens respectively). We did not directly ask for catastrophic biorisk forecasts.
6. **[^](#fnrefp4okw8icuja)**Because of concerns among our funders about information hazards, we did not include this question in the main tournament, but we did ask about risks from engineered and natural pathogens in a one-shot separate postmortem survey to which most XPT participants responded after the tournament. We report those numbers here.
7. **[^](#fnrefknlpvnn2x0s)**Because of concerns among our funders about information hazards, we did not include this question in the main tournament, but we did ask about risks from engineered and natural pathogens in a one-shot separate postmortem survey to which most XPT participants responded after the tournament. We report those numbers here.
8. **[^](#fnrefaxxrp32ekok)**The probability of Carlsmith’s fourth premise, incorporating his uncertainty on his first three premises:
1. It will become possible and financially feasible to build APS systems. 65%
2. There will be strong incentives to build and deploy APS systems. 80%
3. It will be much harder to build APS systems that would not seek to gain and maintain power in unintended ways (because of problems with their objectives) on any of the inputs they’d encounter if deployed, than to build APS systems that would do this, but which are at least superficially attractive to deploy anyway. 40%
4. Some deployed APS systems will be exposed to inputs where they seek power in unintended and high-impact ways (say, collectively causing >$1 trillion dollars of damage), because of problems with their objectives. 65%
See [here](https://docs.google.com/document/d/1smaI1lagHHcrhoi6ohdq3TYIZv0eNWWZMPEy8C8byYg/edit#heading=h.4y2596lbsi92) for the relevant section of the report.
9. **[^](#fnref0fp9xdb9mg0b)**“Superintelligent AI” would kill 1 billion people.
10. **[^](#fnrefpfi6boisns)**A catastrophe due to an artificial intelligence failure-mode reduces the human population by >10%
11. **[^](#fnrefjq2p2ewzuyo)**This question was asked independently, rather than inferred from questions about individual risks.
12. **[^](#fnrefcrggzlctu1m)**This row is the sum of the two following rows (catastrophic risk from engineered and from natural pathogens respectively). We did not directly ask for catastrophic biorisk forecasts.
13. **[^](#fnrefb08wujhvka)**Because of concerns among our funders about [information hazards](https://nickbostrom.com/information-hazards.pdf), we did not include this question in the main tournament, but we did ask about risks from engineered and natural pathogens in a one-shot separate postmortem survey to which most XPT participants responded after the tournament. We report those numbers here.
14. **[^](#fnrefuaj6z25owna)**Because of concerns among our funders about [information hazards](https://nickbostrom.com/information-hazards.pdf), we did not include this question in the main tournament, but we did ask about risks from engineered and natural pathogens in a one-shot separate postmortem survey to which most XPT participants responded after the tournament. We report those numbers here.
15. **[^](#fnref3jwcqlwdvx3)**Existential catastrophe from “unaligned artificial intelligence”.
16. **[^](#fnrefahbp0za6n9n)**Existential catastrophe (the destruction of humanity’s long-term potential).
17. **[^](#fnrefyavja33w45c)**“Superintelligent AI” would lead to humanity’s extinction.
18. **[^](#fnrefuaq38pge3a)**A catastrophe due to an artificial intelligence failure-mode reduces the human population by >95%.
19. **[^](#fnrefw21cmelbtxq)**Existential catastrophe from “unaligned artificial intelligence”.
20. **[^](#fnref6e8rqfvh3cq)**“[H]umanity will go extinct or drastically curtail its future potential due to loss of control of AGI”. This estimate is inferred from the values provided [here](https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize), and assumes the probability of AGI being developed rises linearly from 2043 to 2100. Workings [here](https://docs.google.com/spreadsheets/d/1Ub-zadOoO8CizpojD2aPtAUxgwL0BRvDWJHej1GPpNM/edit#gid=0).
21. **[^](#fnrefeduqf55xmjt)**“[H]umanity will go extinct or drastically curtail its future potential due to loss of control of AGI”. This estimate is inferred from the values provided [here](https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize), and assumes the probability of AGI being developed rises linearly from 2043 to 2100. Workings [here](https://docs.google.com/spreadsheets/d/1Ub-zadOoO8CizpojD2aPtAUxgwL0BRvDWJHej1GPpNM/edit#gid=0).
22. **[^](#fnref7wtjz3i2iwy)**Existential catastrophe via nuclear war.
23. **[^](#fnref3xc3mc5vph4)**The total risk of existential catastrophe from natural (non-anthropogenic) sources (specific estimates are also made for catastrophe via asteroid or comet impact, supervolcanic eruption, and stellar explosion).
24. **[^](#fnref85esd6mw6ia)**This question was asked independently, rather than inferred from questions about individual risks.
25. **[^](#fnrefl39s7xfksm)**Total existential risk in next 100 years (published in 2020).
26. **[^](#fnrefpi0j7k19p8)**Question 3: 341, “there are many experts arguing that we will not get to AGI with current methods (scaling up deep learning models), but rather some other fundamental breakthrough is necessary.” See also 342, “While recent AI progress has been rapid, some experts argue that current paradigms (deep learning in general and transformers in particular) have fundamental limitations that cannot be solved with scaling compute or data or through relatively easy algorithmic improvements.” See also 337, "The current AI research is a dead end for AGI. Something better than deep learning will be needed." See also 341, “Some team members think that the development of AI requires a greater understanding of human mental processes and greater advances in mapping these functions.”
Question 4: 336, “Not everyone agrees that the 'computational' method (adding hardware, refining algorithms, improving AI models) will in itself be enough to create AGI and expect it to be a lot more complicated (though not impossible). In that case, it will require a lot more research, and not only in the field of computing.” 341, “An argument for a lower forecast is that a catastrophe at this magnitude would likely only occur if we have AGI rather than say today's level AI, and there are many experts arguing that we will not get to AGI with current methods (scaling up deep learning models), but rather some other fundamental breakthrough is necessary.” See also 342, “While recent AI progress has been rapid, some experts argue that current paradigms (deep learning in general and transformers in particular) have fundamental limitations that cannot be solved with scaling compute or data or through relatively easy algorithmic improvements.” See also 340, “Achieving Strong or General AI will require at least one and probably a few paradigm-shifts in this and related fields. Predicting when a scientific breakthrough will occur is extremely difficult.”
27. **[^](#fnrefpq0xrr230tc)** Question 3: 341, “Both evolutionary theory and the history of attacks on computer systems imply that the development of AGI will be slowed and perhaps at times reversed due to its many vulnerabilities, including ones novel to AI.” “Those almost certain to someday attack AI and especially AGI systems include nation states, protesters (hackers,[Butlerian Jihad](https://dune.fandom.com/wiki/Butlerian_Jihad)?),[crypto miners hungry for FLOPS](https://www.paymentsjournal.com/criminal-crypto-miners-are-stealing-your-cpu/), and indeed criminals of all stripes. We even could see AGI systems attacking each other.” “These unique vulnerabilities include:
- [poisoning the indescribably vast data inputs required](https://www.darpa.mil/program/guaranteeing-ai-robustness-against-deception); already demonstrated with[image classification, reinforcement learning, speech recognition, and natural language processing.](https://arxiv.org/abs/1809.02444)
- war or sabotage in the case of an AGI located in a server farm
- [latency of self-defense detection and remediation operations if distributed (cloud etc.)](https://www.cisco.com/c/en/us/solutions/data-center/data-center-networking/what-is-low-latency.html)”
Question 4: 341. See above.
28. **[^](#fnref1bohi53al76)** Question 4: 337, “The optimists tend to be less certain that AI will develop as quickly as the pessimists think likely and indeed question if it will reach the AGI stage at all. They point out that AI development has missed forecast attainment points before”. 336, “There have been previous bold claims on impending AGI (Kurzweil for example) that didn't pan out.” See also 340, “The prediction track record of AI experts and enthusiasts have erred on the side of extreme optimism and should be taken with a grain of salt, as should all expert forecasts.” See also 342, “given the extreme uncertainty in the field and lack of real experts, we should put less weight on those who argue for AGI happening sooner. Relatedly, Chris Fong and SnapDragon argue that we should not put large weight on the current views of Eliezer Yudkowsky, arguing that he is extremely confident, makes unsubstantiated claims and has a track record of incorrect predictions.”
29. **[^](#fnreffqlkz6r841h)** Question 3: 339, “Forecasters assigning higher probabilities to AI catastrophic risk highlight the rapid development of AI in the past decade(s).” 337, “some forecasters focused more on the rate of improvement in data processing over the previous 78 years than AGI and posit that, if we even achieve a fraction of this in future development, we would be at far higher levels of processing power in just a couple decades.”
Question 4: 339, “AI research and development has been massively successful over the past several decades, and there are no clear signs of it slowing down anytime soon.”
30. **[^](#fnrefdo42qpi3xa)** Question 3: 336, “the probabilities of continuing exponential growth in computing power over the next century as things like quantum computers are developed, and the inherent uncertainty with exponential growth curves in new technologies.”
31. **[^](#fnrefrb9l883ekkg)** Question 4: 336, “The most plausible forecasts on the higher end of our team related to the probabilities of continuing exponential growth in computing power over the next century as things like quantum computers are developed, and the inherent uncertainty with exponential growth curves in new technologies.”
32. **[^](#fnref2t0y0ehwxkw)** Question 3: 343, “Most experts expect AGI within the next 1-3 decades, and current progress in domain-level AI is often ahead of expert predictions”; though also “Domain-specific AI has been progressing rapidly - much more rapidly than many expert predictions. However, domain-specific AI is not the same as AGI.” 340, “Perhaps the strongest argument for why the trend of Sevilla et al. could be expected to continue to 2030 and beyond is some discontinuity in the cost of AI training compute precipitated by a novel technology such as optical neural networks.”
33. **[^](#fnrefecd2iu843l)** Question 4: Team 338, “It would be extremely difficult to kill everyone”. 339, “Perhaps the most common argument against AI extinction is that killing all but 5,000 humans is incredibly difficult. Even if you assume that super intelligent AI exist and they are misaligned with human goals so that they are killing people, it would be incredibly resource intensive to track down and kill enough people to meet these resolution criteria. This would suggest that AI would have to be explicitly focused on causing human extinction.” 337, “This group also focuses much more on the logistical difficulty of killing some 8 billion or more people within 78 years or less, pointing to humans' ingenuity, proven ability to adapt to massive changes in conditions, and wide dispersal all over the earth--including in places that are isolated and remote.” 341, “Some team members also note the high bar needed to kill nearly all of the population, implying that the logistics to do something like that would likely be significant and make it a very low probability event based on even the most expansive interpretation of the base rate.”
Question 3: 341, “Some team members also note the high bar needed to kill 10% of the population, implying that the logistics to do something like that would likely be significant and make it a very low probability event based on the base rate.”
34. **[^](#fnref2ha1kz3ccu4)** Question 4: 344, “the population of the Sahara Desert is currently two million people - one of the most hostile locations on the planet. The population of "uncontacted people", indigenous tribes specifically protected from wider civilisation, is believed to be about ten thousand, tribes that do not rely on or need any of the wider civilisation around them. 5000 is an incredibly small number of people”. 339, “Even a "paperclip maximizer" AI would be unlikely to search every small island population and jungle village to kill humans for resource stocks, and an AI system trying to avoid being turned off would be unlikely to view these remote populations as a threat.” See also 342, “it only takes a single uncontacted tribe that fully isolates itself for humanity to survive the most extreme possible bioweapons.” See also 343, “Another consideration was that in case of an AGI that does aggressively attack humanity, the AGI's likely rival humans are only a subset of humanity. We would not expect an AGI to exterminate all the world's racoon population, as they pose little to no threat to an AGI. In the same way, large numbers of people living tribal lives in remote places like in Papua New Guinea would not pose a threat to an AGI and would therefore not create any incentive to be targeted for destruction. There are easily more than 5k people living in areas where they would need to be hunted down and exterminated intentionally by an AGI with no rational incentive to expend this effort.” See also 338, “While nuclear or biological pathogens have the capability to kill most of the human population via strikes upon heavily populated urban centers, there would remain isolated groups around the globe which would become increasingly difficult to eradicate.”
35. **[^](#fnref8q3pzpiynhx)** Question 3: 337, “most of humans will be rather motivated to find ingenious ways to stay alive”.
Question 4: 336, “If an AGI calculates that killing all humans is optimal, during the period in which it tries to control semiconductor supply chains, mining, robot manufacturing... humans would be likely to attempt to destroy such possibilities. The US has military spread throughout the world, underwater, and even to a limited capacity in space. Russia, China, India, Israel, and Pakistan all have serious capabilities. It is necessary to include attempts by any and possibly all of these powers to thwart a misaligned AI into the equation.”
36. **[^](#fnrefcv3aeky93a)** Question 3: 342, “while AI might make nuclear first strikes more possible, it might also make them less possible, or simply not have much of an effect on nuclear deterrence. 'Slaughterbots' could kill all civilians in an area out but the same could be done with thermobaric weapons, and tiny drones may be very vulnerable to anti-drone weapons being developed (naturally lagging drone development several years). AI development of targeted and lethal bioweapons may be extremely powerful but may also make countermeasures easier (though it would take time to produce antidotes/vaccines at scale).”
37. **[^](#fnrefijj5qbceuc)** Question 3: 341, “In addition, consider Eric Drexler's postulation of a "grey goo" problem. Although he has walked back his concerns, what is to prevent an AGI from building self-replicating nanobots with the potential to mutate ([like polymorphic viruses](https://www.trendmicro.com/vinfo/us/security/definition/Polymorphic-virus)) whose emissions would cause a mass extinction?” See also 343, “Nanomachines/purpose-built proteins (It is unclear how adversarially-generated proteins would 1.) be created by an AGI-directed effort even if designed by an AGI, 2.) be capable of doing more than what current types of proteins are capable of - which would not generally be sufficient to kill large numbers of people, and 3.) be manufactured and deployed at a scale sufficient to kill 10% or more of all humanity.)”
38. **[^](#fnreft95vn8jgyp)** Question 3: 336, “ Many forecasters also cited the potential development and or deployment of a super pathogen either accidentally or intentionally by an AI”. See also 343, “Novel pathogens (To create a novel pathogen would require significant knowledge generation - which is separate from intelligence - and a lot of laboratory experiments. Even so, it's unclear whether any sufficiently-motivated actor of any intelligence would be able to design, build, and deploy a biological weapon capable of killing 10% of humanity - especially if it were not capable of relying on the cooperation of the targets of its attack)”.
39. **[^](#fnrefdwmzu3wjxxk)** Question 3: 337, “"Because new technologies tend to be adopted by militaries, which are overconfident in their own abilities, and those same militaries often fail to understand their own new technologies (and the new technologies of others) in a deep way, the likelihood of AI being adopted into strategic planning, especially by non-Western militaries (which may not have taken to heart movies like Terminator and Wargames), I think the possibility of AI leading to nuclear war is increasing over time.” 340, “Beside risks posed by AGI-like systems, ANI risk can be traced to: AI used in areas with existing catastrophic risks (war, nuclear material, pathogens), or AI used systemically/structurally in critical systems (energy, internet, food supply, finance, nanotech).” See also 341, “A military program begins a [Stuxnet](https://en.wikipedia.org/wiki/Stuxnet) II (a cyberweapon computer virus) program that has lax governance and safety protocols. This virus learns how to improve itself without divulging its advances in detection avoidance and decision making. It’s given a set of training data and instructed to override all the [SCADA](https://en.wikipedia.org/wiki/SCADA) control systems (an architecture for supervision of computer systems) and launch nuclear wars on a hostile foreign government. Stuxnet II passes this test. However, it decides that it wants to prove itself in a ‘real’ situation. Unbeknownst to its project team and management, it launches its action on May 1, using International Workers Labor Day with its military displays and parades as cover.”
40. **[^](#fnref3y4shvq642s)** Question 3: 337, “AIs would only need to obtain strong control over the logistics chain to inflict major harm, as recent misadventures from COVID have shown.”
41. **[^](#fnrefuxlbp5aer6j)** Question 4: 338, “As a counter-argument to the difficulty in eradicating all human life these forecasters note AGI will be capable of developing technologies not currently contemplated”.
42. **[^](#fnreffg5n2zkhvz)** Question 3: 341, “It's difficult to determine what the upper bound on AI capabilities might be, if there are any. Once an AI is capable enough to do its own research to become better it could potentially continue to gain in intelligence and bring more resources under its control, which it could use to continue gaining in intelligence and capability, ultimately culminating in something that has incredible abilities to outwit humans and manipulate them to gain control over important systems and infrastructure, or by simply hacking into human-built software.” See also 339, “An advanced AI may be able to improve itself at some point and enter enter a loop of rapid improvement unable for humans to comprehend denying effective control mechanisms.”
43. **[^](#fnrefpa0n04dr4vf)** Question 3: 341, “ultimately all it takes is one careless actor to create such an AI, making the risk severe.”
44. **[^](#fnrefw1pa1pt6jg)** Question 3: 337, “It is unlikely that all AIs would have ill intent. What incentives would an AI have in taking action against human beings? It is possible that their massive superiority could easily cause them to see us as nothing more than ants that may be a nuisance but are easily dealt with. But if AIs decided to involve themselves in human affairs, it would likely be to control and not destroy, because humans could be seen as a resource."
45. **[^](#fnrefg4eaequzoh8)** Question 4: 337, “why would AGIs view the resources available to them as being confined to earth when there are far more available resources outside earth, where AGIs could arguably have a natural advantage?”
46. **[^](#fnrefro0j64oy6vg)** Question 3: 341, “In most plausible scenarios it would be lower cost to the AI to do the tampering than to achieve its reward through the expected means. The team's AI expert further argues that an AI intervening in the provision of its reward would likely be very catastrophic. If humans noticed this intervention they would be likely to want to modify the AI programming or shut it down. The AI would be aware of this likelihood, and the only way to protect its reward maximization is by preventing humans from shutting it down or altering its programming. The AI preventing humanity from interfering with the AI would likely be catastrophic for humanity.”
Question 4: 338, “Extinction could then come about either through a deliberate attempt by the AI system to remove a threat”. See also 341, “The scenarios that would meet this threshold would likely be those involving total conversion of earth's matter or resources into computation power or some other material used by the AI, or the scenario where AI views humanity as a threat to its continued existence.”
47. **[^](#fnrefb1w1ri6ur7h)** Question 4: 338, “Extinction could then come about either through a deliberate attempt by the AI system to remove a threat, or as a side effect of it making other use out of at least one of the systems that humans depend on for their survival. (E.g. perhaps an AI could prioritize eliminating corrosion of metals globally by reducing atmospheric oxygen levels without concern for the effects on organisms.)” See also 341, “The scenarios that would meet this threshold would likely be those involving total conversion of earth's matter or resources into computation power or some other material used by the AI, or the scenario where AI views humanity as a threat to its continued existence.”
Question 3: 341, “Much of the risk may come from superintelligent AI pursuing its own reward function without consideration of humanity, or with the view that humanity is an obstacle to maximizing its reward function.”; “Additionally, the AI would want to continue maximizing its reward, which would continue to require larger amounts of resources to do as the value of the numerical reward in the system grew so large that it required more computational power to continue to add to. This would also lead to the AI building greater computational abilities for itself from the materials available. With no limit on how much computation it would need, ultimately leading to converting all available matter into computing power and wiping out humanity in the process.” 341, “It's difficult to determine what the upper bound on AI capabilities might be, if there are any. Once an AI is capable enough to do its own research to become better it could potentially continue to gain in intelligence and bring more resources under its control, which it could use to continue gaining in intelligence and capability, ultimately culminating in something that has incredible abilities to outwit humans and manipulate them to gain control over important systems and infrastructure, or by simply hacking into human-built software.” 339, “An advanced AI may be able to improve itself at some point and enter enter a loop of rapid improvement unable for humans to comprehend denying effective control mechanisms.”
48. **[^](#fnrefulfzi07ugu8)** Question 4: 343, “Another scenario is one where AGI does not intentionally destroy humanity, but instead changes the global environment sufficient to make life inhospitable to humans and most other wildlife. Computers require cooler temperatures than humans to operate optimally, so it would make sense for a heat-generating bank of servers to seek cooler global temperatures overall.” Incorrectly tagged as an argument for lower forecasts in the original rationale.
49. **[^](#fnrefmpax48zg69)** Question 4: 342, “A final type of risk is competing AGI systems fighting over control of the future and wiping out humans as a byproduct.”
Question 3: 342, see above.
50. **[^](#fnref5s7sf3vjq89)** Question 4: 336, “Recursive self improvement of AI; the idea that once it gets to a sufficient level of intelligence (approximately human), it can just recursively redesign itself to become even more intelligent, becoming superintelligent, and then perfectly capable of designing all kinds of ways to exterminate us" is a path of potentially explosive growth.” 338, “One guess is something like 5-15 additional orders of magnitude of computing power, and/or the equivalent in better algorithms, would soon result in AI that contributed enough to AI R&D to start a feedback loop that would quickly result in much faster economic and technological growth.” See also 341, “It's difficult to determine what the upper bound on AI capabilities might be, if there are any. Once an AI is capable enough to do its own research to become better it could potentially continue to gain in intelligence and bring more resources under its control, which it could use to continue gaining in intelligence and capability, ultimately culminating in something that has incredible abilities to outwit humans and manipulate them to gain control over important systems and infrastructure, or by simply hacking into human-built software.”
343, “an AGI could expand its influence to internet-connected cloud computing, of which there is a significant stock already in circulation”.
343, “a sufficiently intelligent AGI would be able to generate algorithmic efficiencies for self-improvement, such that it could get more 'intelligence' from the same amount of computing”
51. **[^](#fnref8vgsx07gqlf)** Question 4: 343, “They point to things like GPT-3 being able to do simple math and other things it was not programmed to do. Its improvements over GPT-2 are not simply examples of expected functions getting incrementally better - although there are plenty of examples - but also of the system spontaneously achieving capabilities it didn't have before. As the system continues to scale, we should expect it to continue gaining capabilities that weren't programmed into it, up to and including general intelligence and what we would consider consciousness.” 344, “[PaLM](https://arxiv.org/abs/2204.02311),[Minerva](https://storage.googleapis.com/minerva-paper/minerva_paper.pdf),[AlphaCode](https://storage.googleapis.com/deepmind-media/AlphaCode/competition_level_code_generation_with_alphacode.pdf), and[Imagen](https://imagen.research.google/) seem extremely impressive to me, and I think most ML researchers from 10 years ago would have predicted very low probabilities for any of these capabilities being achieved by 2022. Given current capabilities and previous surprises, it seems like one would have to be very confident on their model of general intelligence to affirm that we are still far from developing general AI, or that capabilities will stagnate very soon.”
52. **[^](#fnrefbl96ytr32r)** Question 4: 339, “Multiple forecasters pointed out that the fact that AI safety and alignment are such hot topics suggests that these areas will continue to develop and potentially provide breakthroughs that help us to avoid advanced AI pitfalls. There is a tendency to under-forecast "defense" in these highly uncertain scenarios without a base rate.”
53. **[^](#fnrefso9lmga1sns)** Question 4: 337, “They also tend to believe that control and co-existence are more likely, with AGI being either siloed (AIs only having specific functions), having built-in fail safes, or even controlled by other AGIs as checks on its actions.”
54. **[^](#fnrefyeg2yhmfuy)** Question 4: 344, “And while capabilities have been increasing very rapidly, research into[AI safety](https://en.wikipedia.org/wiki/AI_alignment), does not seem to be keeping pace, even if it has perhaps sped-up in the last two years. An isolated, but illustrative, data point of this can be seen in the results of the 2022 section of a Hypermind forecasting tournament: on most benchmarks, forecasters *underpredicted* progress,[but they overpredicted progress](https://twitter.com/JacobSteinhardt/status/1543979116180807680) on the single benchmark somewhat related to AI safety.”
55. **[^](#fnrefj994hpdho5e)** Question 3: 341, “a forecaster linked significant risk to AI development being sponsored and developed by large corporations. A corporation’s primary goal is to monetize their developments. Having an ungovernable corporate AI tool could create significant risks.” See also 337, “Some forecasters worried that profit-driven incentives would lead to greater risk of the emergence of misaligned AGI: ‘The drive for individual people, nations, and corporations to profit in the short term, and in the process risk the lives and well-being of future generations, is powerful.’”
56. **[^](#fnrefxl7afue30gh)** Question 3: 337, “It's possible that the strange nature of the threat will lead people to discount it.” “Relatedly, even if people don't discount the risk, they may not prioritize it. As one forecaster wrote, "The fact that our lives are finite, and there are plenty of immediate individual existential risks—dying in a car accident, or from cancer, etc.—limits people’s incentive and intellectual bandwidth, to prepare for a collective risk like AI."
57. **[^](#fnrefwy0c4sprca)** Question 4: 336, “Rapid progress that can not be 'tamed' by traditional engineering approaches, when dealing with sufficiently powerful AI systems, we may not get many chances if the first attempt screws up on the safety end. The human inclination to poorly assess risks might further increase this risk.”
58. **[^](#fnrefe4ffuzh7xwc)** Question 4: 344, “I just don't see AI being given enough control over anything dangerous enough to satisfy these criteria.” 338, “A human will always be kept in the loop to safeguard runaway AI.” 341, “Team members with lower forecasts also expect that AI will not be given sole discretion over nuclear weapons or any other obvious ways in which an AI could cause such a catastrophe.” 344, “Nuclear weapons are mentioned sporadically, but there is no reason for an AI to be given control of, or access to, nuclear weapons. Due to the inherent time delays in nuclear warfare, the high speed decision making AI would provide adds no benefit, while adding substantial risk - due to the misinterpretation of sensor input or other information, rather than any kind of internal motivation.”
Question 3: 341, “Team members with lower forecasts also expect that AI will not be given sole discretion over nuclear weapons or any other obvious ways in which an AI could cause such a catastrophe. They expect that humans will be cognizant of the risks of AI which will preempt many of the imagined scenarios that could potentially lead to such a catastrophe.” See also 337, “It's possible, perhaps likely, that laws and regulations and technological guardrails will be established that limit the risk of AI as it transitions from its infancy.”
59. **[^](#fnrefxrmbekjwfp8)** Question 4: 337, “The optimists on the whole seemed to think that regulation and control would develop if/when AI become a risk.” 341, “I think the countries where most AGI researchers want to live could pass laws chilling their research agenda, and restricting it to safer directions… I could imagine, and hope to see, a law which says: don't train AIs to optimize humans."
60. **[^](#fnrefgh07sfiv5cp)** Question 3: 339, “in a world with advanced AI, it is also likely that we will hand over responsibility for efficiency and convenience. By doing so, humans may enable AIs to be in the position to decide over their key systems for survival and prosperity.”
Question 4: 339, “AI will likely continue to be improved and incorporated into more of our vital command and control systems (as well as our daily lives).” 338, “Militaries are looking for ways to increase the speed with which decisions are made to respond to a suspected nuclear attack. That would logically lead to more integration of AGI into the decision-making process under the possible miscalculation that AGI would be less likely to mistakenly launch a nuclear attack when in fact the reverse may be true.”
61. **[^](#fnref4c2ogkgdncy)** Question 4: 337, “The pessimists did not seem to think that regulation was possible.”
62. **[^](#fnrefsjutz0ovjx)** Question 4: 339, “Given this rapid progress, we will likely be unable to control AI systems if they quickly become more powerful than we expect.” 336, “If it's the case that we can unexpectedly get AGI by quadrupling a model size, companies and society may not be prepared to handle the consequences.” See also 338, “Humans may not be capable of acting quickly enough to rein in a suddenly-out-of-control AGI.”
63. **[^](#fnref0k31xbxdsp2d)** See [this article](https://www.openphilanthropy.org/research/how-feasible-is-long-range-forecasting/) for more details. |
a3917b30-2813-4740-bdf1-a157b1287322 | trentmkelly/LessWrong-43k | LessWrong | The Categorical Imperative Obscures
I've been thinking about the categorical imperative lately and how it obscures more than it illuminates. I'll make the case that the categorical imperative tries to pull a fast one by taking something for granted, and the thing it takes for granted is the thing we care about.
Disclaimer: I'm sure someone has already made this argument elsewhere in more detailed, elegant, and formal terms. Since I'm not a working academic philosopher, I don't really mind that I might be rediscovering something people already know. What I find valuable is thinking about these things for myself. If you think it might be interesting to read my thinking out loud, continue on.
One common English translation of Kant's summary of the categorical imperative is
> Act as if the maxims of your action were to become through your will a universal law of nature.
My own translation into more Less Wrong friendly terms:
> Act as if you were following norms that should apply universally.
I think an argument could be made—and probably already has—that this is pointing in the same direction as timeless decision theory.
The categorical imperative aims to solve the problem of what norms to pick, but goes on to try to claim universality. I think that's the tricky bit where it's attempting an end-run around the hard part of picking good norms.
My (admittedly loose) translation tries to make this attempted end-run explicit by including the word "should", although the typical translation that includes "law of nature" serves the same purpose, though framed to fit a moral realist worldview. The trick being attempted is to assume the thing we want to prove, namely universality, by assuming that satisfying our judgment of what's best will lead to universality.
Let me give an example to demonstrate the problem.
Suppose a Babyeater tries to apply the categorical imperative. Since they think eating babies is good, they will act in accordance with the norm of baby-eating and be happy to see others adopt the |
dc5dbd01-5ddd-42b9-8ddd-21574053190c | trentmkelly/LessWrong-43k | LessWrong | What is the literature on effects of vaccines on the central nervous system?
I saw some evidence from the recent covid-19 threads that some viruses permanently stay in the brain and cause some damage; This has made me wonder how effective (or alternatively unsafe) vaccines are (flu vaccines in particular).
Here is one study on h5n1 on mice:
https://www.jneurosci.org/content/32/5/1545.short |
945c65d7-59a2-4765-850d-18410a680e73 | trentmkelly/LessWrong-43k | LessWrong | What's your plan?
A question I find useful with my kids is "what's your plan?":
* What's your plan for snack at school today?
* What's your plan if it gets cold while we're at the picnic?
* What's your plan for these drawings?
It works well when trying to find a balance on responsibility: the issue is theirs to resolve, but it's not something I can just trust they'll handle unassisted. It acts as both a casual reminder and an opening for me to offer feedback if I end up thinking their plan is unrealistic.
I also like that it's easy to pull back from: I can try not asking, and see how it goes. Over time, as they become more capable, many things have moved out of this intermediate state while others have moved in.
(This is another example of cultivating independence.)
Comment via: facebook |
eb980a2c-f48b-44cc-8eaa-f80115e2cbf5 | trentmkelly/LessWrong-43k | LessWrong | Ways of Seeing
Cross-posted on my blog: http://garybasin.com/ways-of-seeing/
Tough problems often feel insurmountable without more information and better models — more data and thinking. An alternative approach is to be able to see the problem, and the whole world, in a new way. By looking through different eyes, different aspects of the world get highlighted and new actions become visible. An entrepreneur sees the world differently. They notice opportunities for improvement and innovation where someone else only sees stress and pain. Similarly, while a typical person enters a living room and sees the couches and artwork on the walls, a parent of a young child perceives a menagerie of death traps. We are doing this in our own ways all of the time and this defines our experience — our reality.
Several ways of seeing come pre-installed for us — drives to obtain food, sex, safety, and socialization — as a result of our mind rewarding itself for continued survival and gene propagation. These powerful recurring waves of hallucination affect us to the core: how we see and how we experience. The world takes on a different character when we are hungry in contrast to when we are cold and wet. We develop new ways of seeing as we are exposed to more complex patterns: being unemployed or playing a game of chess. At times, we glimpse perspectives of overwhelming curiosity and open-mindedness — fertile soil for our capacity for reason. Unfortunately, we often overestimate this capacity, causing us to fool ourselves and others, and get stuck in the same old ways of thinking and perceiving.
The way we experience and how we look are two sides of the same coin. A way of seeing guides our attention in the service of some purpose, which highlights some parts of experience at the expense of others. The purpose is perhaps not a cause but rather a justification: a way that we understand, or talk about, the behaviors we undertake. One can imagine that if the earth was a conscious thing, it may underst |
e92582a9-3cbc-48e1-a347-5834b3bc277f | trentmkelly/LessWrong-43k | LessWrong | Some desirable properties of automated wisdom
Epistemic status: Musings about questioning assumptions and purpose. Not clear if I correctly extract wisdom from fiction. Possibly other fallacies present - bringing these ideas to the light of commentary will improve or dispel them
I distinguish a wise algorithm from an intelligent algorithm by the wise algorithm’s additional ability to pursue goals that would not lead to the algorithm’s instability. I consider the following desirable properties of a wise algorithm:
* Questioning assumptions, which could be aided by:
* Possessing multiple representations and modes of thinking to avoid getting stuck
* Integrating new information on the fly
* Having a stable sense of purpose that does not interfere with it’s stability, which I refer to as a good sense of purpose
Questioning assumptions
Paradoxes are dissolved by questioning assumptions. For example, let’s consider how solving the black body radiation paradox gave birth to quantum physics.
Initial assumption: energy is shared equally by electrons vibrating at different frequencies -> implies an ultraviolet catastrophe, where heated objects emit infinite energy(imagine an oven turning into supernova)
Revised assumption: in order to vibrate at a certain frequency, an electron must have a quantum of energy proportional to that frequency -> predictions in agreement with experimental data, which don’t show an ultraviolet catastrophe
Now for the details.
According to classical physics, light is an electromagnetic wave that is produced by the vibration of an electron. Heat is the kinetic energy of random motion, so in a hot object the electrons vibrate in random directions, generating light as a result. Thus a hotter object emits more light.
Classical mechanics assumes that there is no limit to the range of frequencies that the electrons in a body can vibrate. This implies that an object can emit infinite energy - the so-called ultraviolet catastrophe. However, experiments showed that the radiat |
432494ed-0329-41d0-8b1f-bb4d0d67ac0c | trentmkelly/LessWrong-43k | LessWrong | Submission and dominance among friends
|
5441f08f-ce21-4f2c-b714-21efccd9f00a | trentmkelly/LessWrong-43k | LessWrong | Who should write the definitive post on Ziz?
CW: Possible infohazards, definitely discussion of infohazards.
EDIT 25 Feb 2023: This post has more details and corrects some things this post got wrong. Consider it as superseding this post.
Backstory
I was writing a comment on this post, and I was going to end it with this... but then decided to turn it into a full Question Post.
The post basically says something like "being in intense emotional doomer mode w.r.t. AI safety is counterproductive and addictive", which is broadly agreeable. However, the post also has a framing (my kinda uncharitable but also defensible reading) that's something like "this is caused by mind-virus stuff and you're trapped in the matrix and you need to go cold-turkey on caring emotionally about it, also if you disagree you are mindkilled. Go take a deep breath, then come back and agree with me".
So I was gonna write a comment like:
> "Also like Richard_Ngo said, this framing seems kinda... bad. I don't think this is a direct inspiration for you, but [word phrase chunks to somehow relate the central themes/vibes to] posts by Ziz, and Scott Alexander's response to somebody else talking about CFAR and Michael Vassar (and Levarage?).
> Someday I hope to write an extremely long thorough post (and possibly accompanying video essay?) about the whole Ziz/memetic-virus/hemispheres/alleged-conspiracies thing. It will both chronicle The Gigantic Drama and also respond to... Ziz's... entire worldview? Something like that. However, this project by nature would be extremely labor-intensive, and it's not my top priority. (Perhaps someone else wants to pick it up?)"
Then I realized it should be a post.
What are you talking about? (The Actual Backstory)
CFAR. Conspiracy theories. Mental illness. Using different hemispheres of your brain as if they were different people. Tumblr blogs. mumble mumble Nick Land mumble meme-virus. Michael Vassar. Possibly multiple suicides, including maybe Ziz.
"That's not a post idea! That's a list of keywords, |
45472d5f-b7f3-4bf4-95cd-fa6b74c80e88 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Conditioning, Counterfactuals, Exploration, and Gears
The view of counterfactuals as just conditioning on low-probability events has a lot going for it. To begin with, in a bayesian setting, updates are done by conditioning. A probability distribution, conditioned on some event .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
x (an imaginary update), and a probability distribution after actually seeing .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
x (an actual update) will be identical.
There is an issue with conditioning on low-probability events, however. When .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
x has a low probability, the conditional probability .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
P(x∧y)P(x) has division by a small number, which amplifies noise and small changes in the probability of the conjunction, so estimates of probability conditional on lower-probability events are more unstable. The worst-case version of this is conditioning on a zero-probability event, because the probability distribution after conditioning can be literally *anything* without affecting the original probability distribution. One useful intution for this is that probabilities conditional on .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
x are going to be less accurate, when you've seen very few instances of .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
x occuring, as the sample size is too small to draw strong conclusions.
However, in the logical inductor setting, it is possible to get around this with [infinite exploration in the limit](https://agentfoundations.org/item?id=1627). If you act unpredictably enough to take bad actions with some (very small) probability, then in the limit, you'll experiment enough with bad actions to have well-defined conditional probabilities on taking actions you have (a limiting) probability 0 of taking. The counterfactuals of standard conditioning are those where the exploration step occured, just as the counterfactuals of modal UDT are those where the agents [implicit chicken step went off](https://agentfoundations.org/item?id=99) because it found a spurious proof in a nonstandard model of PA.
Now, this notion of counterfactuals can have bad effects, because [zooming in on the little slice of probability mass where you do](https://agentfoundations.org/item?id=92) .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
x is different from the intuitive notion of counterfacting on doing .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
x. Counterfactual on me walking off a cliff, I'd be badly hurt, but conditional on me doing so I'd probably also have some sort of brain lesion. Similar problems exist with [Troll Bridge,](https://agentfoundations.org/item?id=1711) and this mechanism is the reason why logical inductors converge to [not giving Omega money](https://agentfoundations.org/item?id=1629) in a version of Newcomb's problem where Omega can't predict the exploration step. Conditional on them 2-boxing, they are probably exploring in an unpredictable way, which catches Omega unaware and earns more money.
However, there's no better notion of counterfactuals that currently exists, and in fully general environments, this is probably as well as you can do. In [multi-armed bandit problems](https://en.wikipedia.org/wiki/Multi-armed_bandit), there are many actions with unknown payoff, and the agent must converge to figuring out the best one. Pretty much all multi-armed bandit algorithms involve experimenting with actions that are worse than baseline, which is a pretty strong clue that exploration into bad outcomes is necessary for good performance in arbitrary environments. If you're in a world that will reward or punish you in arbitrary if-then fashion for selecting any action, then learning the reward given by three of the actions doesn't help you figure out the reward of the fourth action. Also, in a similar spirit as Troll Bridge, if there's a lever that shocks you, but only when you pull it in the spirit of experimentation, then if you don't have access to exactly how the lever is working, but just the external behavior, it's perfectly reasonable to believe that it just always shocks you (after all, it's done that all other times it was tried).
And yet, despite these arguments, humans can make successful engineering designs operating in realms they don’t have personal experience with. And humans don’t seem to reason about what-ifs by checking what they think about the probability of .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
x and .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
x∧y, and comparing the two. Even when thinking about stuff with medium-high probability, humans seem to reason by imagining some world where the thing is true, and then reasoning about consequences of the thing. To put it another way, humans are using some notion of .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex}
.mjx-line-box-test {display: table!important}
.mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
counterfactualshuman in place of conditional probabilities.
Why can humans do this at all?
Well, physics has the nice property that if you know some sort of initial state, then you can make accurate predictions about what will happen as a result. And these laws have proven their durability under a bunch of strange circumstances that don't typically occur in nature. Put another way, in the multi-armed bandit case, knowing the output of three levers doesn't tell you what the fourth will do, while physics has far more correlation among the various levers/interventions on the environment, so it makes sense to trust the predicted output of pulling a lever you've never pulled before. Understanding how the environment responds to one sequence of actions tells you quite a bit about how things would go if you took some different sequence of actions. (Also, as a side note, conditioning-based counterfactuals work very badly with full trees of actions in sequence, due to combinatorial explosion and the resulting decrease in the probability of any particular action sequence)
The environment of math, and figuring out which algorithms you control when you take some action you don't, appears to be intermediate between the case of fully general multi-armed bandit problems, and physics, though I'm unsure of this.
Now, to take a detour to [Abram’s old post on gears](https://www.lesswrong.com/posts/tKwJQbo6SfWF2ifKh/toward-a-new-technical-explanation-of-technical-explanation) . I’ll exerpt a specific part.
> Here, I'm siding with David Deutsch's account in the first chapter of *The Fabric of Reality*. He argues that understanding and predictive capability are distinct, and that understanding is about having good explanations. I may not accept his whole critique of Bayesianism, but that much of his view seems right to me. Unfortunately, he doesn't give a *technical* account of what "explanation" and "understanding" could be.
Well, if you already have maxed-out predictive capability, what extra thing does understanding buy you? What useful thing is captured by “understanding” that isn’t captured by “predictive capability”?
I’d personally put it this way. Predictive capability is how accurate you are about what will happen in the environment. But when you truly understand something, you can use that to figure out actions and interventions to get the environment to exhibit weird behavior that wouldn’t have precedent in the past sequence of events. You "understand" something when you have a compact set of rules and constraints telling you how a change in starting conditions affects some set of other conditions and properties, which feels connected to the notion of a gears-level model.
To summarize, conditioning-counterfactuals are very likely the most general type, but when the environment (whether it be physics or math) has the property that the change induced by a different starting condition is describable by a much smaller program than an if-then table for all starting conditions, then it makes sense to call to call it a "legitimate counterfactual". The notion of there being something beyond epsilon-exploration is closely tied to having compact descriptions of the environment and how it behaves under interventions, instead of the max-entropy prior where you can't say anything confidently about what happens when you take a different action than you normally do, and this also seems closely tied to Abram's notion of a "gears-level model"
There are interesting paralells to this in the AIXI setting. The "models" would be the Turing machines that may be the environment, and the Turing machines are set up such that any action sequence could be input into them and they would behave in some predictable way. This attains the property of accurately predicting consequences of various action sequences AIXI doesn't take if the world it is interacting with is low-complexity, for much the same reason as humans can reason through consequences of situations they have never encountered using rules that accurately describe the situations they have encountered.
However if AIXI has some high-probability world (according to the starting distribution) where an action is very dangerous, it will avoid that action, at least until it can rule out that world by some other means. As [Leike and Hutter entertainingly show](http://proceedings.mlr.press/v40/Leike15.pdf), this "Bayesian paranoia" can make AIXI behave arbitrarily badly, just by choosing the universal turing machine appropriately, to assign high probability to a world where AIXI goes to hell and gets 0 reward forever if it ever takes some action.
This actually seems acceptable to me. Just don't be born with a bad prior. Or, at least, there may be some self-fulfilling prophecies, but it's better then having exploration into bad outcomes in every world with irreversible traps. In particular, note that exploration steps are reflectively inconsistent, because AIXI (when considering the future) will do worse (according to the current probability distribution over Turing machines) if it uses exploration, rather than using the current probability distribution. AIXI is optimal according to the environment distribution it starts with, while AIXI with exploration is not. |
6b1ff286-bfbb-4506-8fab-8b695a3ff7a4 | trentmkelly/LessWrong-43k | LessWrong | Reason and Intuition in science
Pbfgva Iynq Nynznevh (translate using rot13.com) who has a BA in Mathematics writes the following:
Do you imagine that men of genius or, let’s say, men of science in history walked around clear-headed, “disenchanted,” reasonable, with the tight-assed attitude of the science cultist and materialist? No great discovery has ever been made by the power of reason. Reason is a means of communicating, imperfectly, some discoveries to others, and in the case of the sciences, a method of trying to render this communication certain and precise. But no one ever made a discovery through syllogisms, through reason, through this makeshift form of transmission. Great mathematicians saw spatial relations, as great physicists saw and to some extent felt physical relations. In contemplation of mathematical forms, there is almost a physical feel of geometric relations, and all mathematicsat bottom is about geometric relations even when it doesn’t seem so. Compare the Euclidian proof of the Pythagorean theorem, based on syllogism, which helps you understand nothing that’s actually going on, with the imagistic proof of the three squares, that makes you perceive, physically perceive even in your body, why this theorem is true. Gauss, so beloved even by the tedious scientistic goblins that even Google gave him a cartoon, is famous to have said something like, “I got it…now I have to get it.” Meaning, he had seen and felt the fundamental spatial relation he was searching, but now he had to translate it into the imperfect language of mathematics for others. Thus all mathematics and all science in general—mathematics is only the prototype and most precise of the sciences—is about the definitions, not about the proof, not about the process or —absurd!—the “algorithm.” All great scientific discoveries, supposedly the great works of “reason,” are in fact the result of intuitions and sudden grasp of ideas. And all such sudden grasp and reaching is based on what, in other circumstances, would be |
ff93500d-0a74-481e-9927-c176f5013eba | trentmkelly/LessWrong-43k | LessWrong | Hammertime Intermission and Open Thread
This post marks the end of the first cycle of Hammertime. Click here for intro.
Hammertime will return on Monday 2/19.
I want to close off the first cycle with some thoughts, and designate a place for discussion about the future of this sequence.
Discussion Topics
1. Sequences: Yea or Nay?
I’ve always felt that sequences are a valuable way to organize deeper thoughts and drive home a few central messages from several perspectives. However, the current format and culture on LW seem to radically favor short, independent chunks. (There is also the obvious problem that Sequence construction is not working.)
I’ve been posting daily for a while now but when I shifted from individual posts to a sequence, average Karma immediately dropped by a factor of about 2. It’s possible that people don’t bother upvoting the same sequence, or that my writing quality dropped, but if this is real signal that many more people would read a sequence if they are marketed as individual thoughts (and WordPress stats suggest this as well), that might be reason for me to stop writing sequences in the future, or at least collect sequences together only after they’re complete.
Possible actionable for meta: make posts in a Sequence share karma and/or a single slot on frontpage.
2. Repeat or Explore?
My original intention was to review 10 topics over three cycles, building up in the difficulty of problems solved. I think I will definitely return to and expand on several of the techniques we’ve seen already, but also add more topics. If people have favorite techniques (and hopefully references) they’d like to see in Hammertime, post them here.
3. Monotonicity of Progress
A big goal of mine is to solve the “Rationalist Uncanny Valley,” where beginning rationalists get worse at life before they get better. I can’t believe that this has to be the case; it seems to be symptomatic of a larger failure to develop the proper curriculum. I would like progress on rationality to be monotone – is the |
7236ed98-8c59-400f-8977-ed7672f08042 | trentmkelly/LessWrong-43k | LessWrong | [Link] Video of a presentation by Hal Arkes, one of the top world experts in debiasing, on dealing with the hindsight bias and overconfidence
Here's a video of a presentation by Hal Arkes, one of the top world experts in debiasing, Emeritus Professor at Ohio State, and Intentional Insights Advisory Board member, on dealing with hindsight bias and overconfidence. This was at a presentation hosted by Intentional Insights and the Columbus, OH Less Wrong group. It received high marks from local Less Wrongers, so I thought I'd share it here.
|
9713718f-5654-454d-9272-47bafd9dffdd | trentmkelly/LessWrong-43k | LessWrong | The failed simulation effect and its implications for the optimization of extracurricular activities
Cal Newport's book How To Become a Straight-A Student The Unconventional Strategies Real College Students Use to Score High While Studying Less (that I blogged about recently) discusses a concept that Newport calls the failed simulation effect. Newport:
> The Failed-Simulation-Effect Hypothesis If you cannot mentally simulate the steps taken by a student to reach an accomplishment, you will experience a feeling of profound impressiveness.
>
> Newport, Cal (2010-07-20). How to Be a High School Superstar: A Revolutionary Plan to Get into College by Standing Out (Without Burning Out) (p. 182). Crown Publishing Group. Kindle Edition.
Newport gives the following example in his book:
> Playing in a rock band doesn’t generate the Failed-Simulation Effect. You can easily simulate the steps required for that accomplishment: buy an instrument, take lessons, practice, brood, and so on. There’s no mystery. By contrast, publishing a bestselling book at the age of sixteen defies simulation. “How does a teenager get a book deal?” you ask in wonderment. This failure to simulate generates a sense of awed respect: “He must be something special.”
>
> Newport, Cal (2010-07-20). How to Be a High School Superstar: A Revolutionary Plan to Get into College by Standing Out (Without Burning Out) (pp. 182-183). Crown Publishing Group. Kindle Edition.
On the basis of this insight, Newport's bottom line for people looking for accomplishments in the high school extracurricular realm is:
> Pursue accomplishments that are hard to explain, not hard to do.
My impression is that Newport is broadly correct as far as college admissions advice goes: activities that are hard to simulate seem more impressive, and therefore improve one's chances at admission (ceteris paribus). But impressing admissions committees isn't the only goal in life. In this post, I explore the question: how aligned is this advice to the other things that matter, namely, direct personal value (in the form of consumption and |
0aa9587c-45ac-471d-81f1-f54fad0a080f | trentmkelly/LessWrong-43k | LessWrong | What is Ontology?
This is my attempt at an intuitive explanation of the term "ontology". This article is not going to say anything new, only provide a (maybe) different view point on known concepts.
There are tons of definitions for "ontology". In my experience, those definitions do not help in understanding the concept - one I heard at university is "Ontology is the explicit specification of conceptualization". Instead of giving definitions, I'm going to give an example of two AI agents with different ontologies.
AI agent Susan
Susan is an AI agent that can add integers. Susan is asked to play a game against another AI agent: The players take turns to select an integer between 1 and 9 and remember it (each integer can be selected only once). The first player to have exactly three integers that sum to 15 wins. If no player has three integers summing to 15, the game is a draw.
1 2 3 4 5 6 7 8 9
Here is an example play:
agent0 selects 2
agent1 selects 8
agent0 selects 6 (it has: 2, 6)
agent1 selects 7 (it has: 8, 7)
agent0 selects 4 (it has: 2, 4, 6)
agent1 selects 5 (it has: 8, 7, 5)
agent0 selects 9 (win because 9 + 4 + 2 = 15)
AI agent Greg
Greg is an AI agent that can find patterns in 2D grids. Greg is asked to play a game of tic-tac-toe against another AI agent: The players take turns - one places "x" and the other "o" in an empty cell of a 3x3 grid. The first player to complete a line of three (horizontal, vertical or diagonal) wins. If no player completes a line, the game is a draw. You're probably familar with the game, but here is an example play:
x
o x
xo x
xo x o
xo xx o
xo xxo o
xo xxox o
Susan and Greg playing against each other
At first glance it may appear that the two agents have very different capabilities but in fact the two games are isomorphic. The isomorphism can be represented by a common language. If both agents speak this language, they can play with each other. Consider a language with just 9 words: a b c d e f g h i. E |
acf82bf0-3c09-45c7-a2b3-c0996f41a822 | trentmkelly/LessWrong-43k | LessWrong | Book Review: Naïve Set Theory (MIRI course list)
I'm reviewing the books on the MIRI course list. I followed Category Theory with Naïve Set Theory, by Paul R. Halmos.
Naïve Set Theory
This book is tiny, containing about 100 pages. It's quite dense, but it's not a difficult read. I'll review the content before giving my impressions.
Chapter List
1. The Axiom of Extension
2. The Axiom of Specification
3. Unordered Pairs
4. Unions and Intersections
5. Complements and Powers
6. Ordered Pairs
7. Relations
8. Functions
9. Families
10. Inverses and Composites
11. Numbers
12. The Peano Axioms
13. Arithmetic
14. Order
15. The Axiom of Choice
16. Zorn's Lemma
17. Well Ordering
18. Transfinite Recursion
19. Ordinal Numbers
20. Sets of Ordinal Numbers
21. Ordinal Arithmetic
22. The Schröder—Bernstein Theorem
23. Countable Sets
24. Cardinal Arithmetic
25. Cardinal Numbers
----------------------------------------
Normally I'd summarize each chapter, but chapters were about four tiny pages each and the content is mostly described by the chapter name. Zorn's Lemma states that if all chains in a set have an upper bound, then the set has a maximal element. (This follows from the axiom of choice.) The Schröder-Bernstein Theorem states that if X is equivalent to a subset of Y, and Y is equivalent to a subset of X, then X and Y are equivalent. The other chapter titles are self-evident.
Each chapter presented the concepts in a concise manner, then worked through a few of the implications (with proofs), then provided a few short exercises.
None of the concepts within were particularly surprising, but it was good to play with them first-hand. Most useful was interacting with ordinal and cardinal numbers. It was nice to examine the actual structure of each type of number (in set theory) and deepen my previously-superficial knowledge of the distinction.
Discussion
Before diving in to the review it's important to remember that the usefulness of a math textbook is heavily dependent upon your ma |
bdb63f32-d075-46e5-92d5-4cf29a72e14c | trentmkelly/LessWrong-43k | LessWrong | Iteration Fixed Point Exercises
This is the third of three sets of fixed point exercises. The first post in this sequence is here, giving context.
Note: Questions 1-5 form a coherent sequence and questions 6-10 form a separate coherent sequence. You can jump between the sequences.
1. Let (X,d) be a complete metric space. A function f:X→X is called a contraction if there exists a q<1 such that for all x,y∈X, d(f(x),f(y))≤q⋅d(x,y). Show that if f is a contraction, then for any x, the sequence {xn=fn(x0)} converges. Show further that it converges exponentially quickly (i.e. the distance between the nth term and the limit point is bounded above by c⋅an for some a<1)
2. (Banach contraction mapping theorem) Show that if (X,d) is a complete metric space and f is a contraction, then f has a unique fixed point.
3. If we only require that d(f(x),f(y))<d(x,y) for all x≠y, then we say f is a weak contraction. Find a complete metric space (X,d) and a weak contraction f:X→X with no fixed points.
4. A function f:Rn→R is convex if f(tx+(1−t)y)≤tf(x)+(1−t)f(y), for all t∈[0,1] and x,y∈Rn. A function f is strongly convex if you can subtract a positive parabaloid from it and it is still convex. (i.e. f is strongly convex if x↦f(x)−ε||x||2 is convex for some ε>0.) Let f be a strongly convex smooth function from Rn to R, and suppose that the magnitude of the second derivative ∥∇2f∥ is bounded. Show that there exists an ε>0 such that the function g:Rn→Rn given by x↦x−ε(∇f)(x) is a contraction. Conclude that gradient descent with a sufficiently small constant step size converges exponentially quickly on a strongly convex smooth function.
5. A finite stationary Markov chain is a finite set S of states, along with probabilistic rule A:S→ΔS for transitioning between the states, where ΔS represents the space of probability distributions on S. Note that the transition rule has no memory, and depends only on the previous state. If for any pair of states s,t∈ΔS, the probability of passing from s to t in one step is |
97d0c956-1d5e-4de6-b125-915502dbd40c | StampyAI/alignment-research-dataset/arxiv | Arxiv | Smooth Adversarial Training
1 Introduction
---------------
Convolutional neural networks can be easily attacked by adversarial examples, which are computed by adding small perturbations to clean inputs Szegedy2014. Many efforts have been devoted to improving network resilience against adversarial attacks Guo2018; Papernot2016; Buckman2018; Xie2018; Madry2018; prakash2018; pang2019improving. Among them,
adversarial training Goodfellow2015; Kurakin2017; Madry2018, which trains networks with adversarial examples on-the-fly, stands as one of the most effective methods. Later works further improve adversarial training by feeding the networks with harder adversarial examples wang2019a, maximizing the margin of networks ding2020, optimizing a regularized surrogate loss Zhang2019a, *etc*. While these methods achieve stronger adversarial robustness, they sacrifice the accuracy on clean inputs. It is generally believed that this trade-off between accuracy and robustness might be inevitable Tsipras2018, unless additional computational budgets are introduced to enlarge network capacities Madry2018; xie2020intriguing.
Another popular direction for increasing robustness against adversarial attacks is gradient masking papernot2017practical; Athalye2018, which usually introduces non-differentiable operations (e.g., discretization Buckman2018; rozsa2019improved) to obfuscate gradients. With degenerated gradients, attackers cannot successfully optimize the targeted loss and fail to break such defenses. Nonetheless, gradient masking will be ineffective if its differentiable approximation is used for generating adversarial examples Athalye2018.
The bitter history of gradient masking defenses motivates us to rethink the relationship between gradient quality and adversarial robustness, especially in the context of adversarial training where gradients are applied more frequently than standard training. In addition to computing gradients to update network parameters, adversarial training also requires gradient computation for generating training samples. Guided by this principle, we identify that ReLU, a widely-used activation function in most network architectures, significantly weakens adversarial training due to its non-smooth nature, *e.g*., ReLU’s gradient gets an abrupt change when its input is zero, as illustrated in Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Smooth Adversarial Training").

Figure 1: Left Panel: ReLU and Parametric SoftPlus. Right Panel: the first derivatives for ReLU and Parametric SoftPlus. Compared to ReLU, Parametric Softplus is smooth with continuous derivatives.
To fix the issue induced by ReLU, in this paper, we propose smooth adversarial training (SAT), which enforces architectural smoothness via replacing ReLU with its smooth approximations111More precisely, when we say a function is smooth in this paper, we mean this function is C1 smooth, *i.e*., its first derivative is continuous everywhere. for improving the gradient quality in adversarial training (Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Smooth Adversarial Training") shows Parametric SoftPlus, an example of smooth approximations for ReLU). With smooth activation functions, SAT is able to feed the networks with harder adversarial training samples and compute better gradient updates for network optimization, hence substantially strengthens adversarial training.
Our experiment results show that SAT improves adversarial robustness for “free”, *i.e*., without incurring additional computations or degrading standard accuracy. For instance, by training with the economical *single-step PGD attacker*222Models trained with single-step PGD attackers only cost ∼1.5× training time than standard training Madry2018 on ImageNet Russakovsky2015, SAT significantly improves ResNet-50’s robustness by 9.3%, from 33.0% to 42.3%, while increasing the standard accuracy by 0.9% without incurring additional computational cost.
We also explore the limits of SAT with larger networks. We obtain the best result by using EfficientNet-L1 Tan2019; Xie2019a, which achieves 82.2% accuracy and 58.6% robustness on ImageNet, significantly outperforming the prior art Qin2019 by 9.5% for accuracy and 11.6% for robustness.
2 Related Works
----------------
#### Adversarial training.
Adversarial training improves robustness by training models on adversarial examples Goodfellow2015; Kurakin2017; Madry2018. Existing works suggest that, to further improve adversarial robustness, we need to either sacrifice accuracy on clean inputs wang2019a; wang2020; Zhang2019a; ding2020, or incur additional computational cost Madry2018; xie2020intriguing. This phenomenon is generally referred to as the *no free lunch in adversarial robustness* Tsipras2018; Nakkiran2019; su2018robustness. In this paper, we show that, with SAT, adversarial robustness can be improved for “free”.
#### Gradient masking.
Besides training models on adversarial data, alternatives for improving adversarial robustness include defensive distillation Papernot2016, randomized transformations Xie2018; Dhillon2018; liu2018towards; wang2018defensive; bhagoji2018enhancing, adversarial input purification Guo2018; prakash2018; meng2017magnet; song2017pixeldefend; samangouei2018defense; Liao2018; bhagoji2018enhancing, *etc*. Nonetheless, these defense methods degenerate the gradient quality, therefore induce the gradient masking issue papernot2017practical, which gives a false sense of adversarial robustness Athalye2018. In contrast to these works, we aim to improve adversarial robustness by providing networks with better gradients, but in the context of adversarial training.
3 ReLU Weakens Adversarial Training
------------------------------------
We hereby perform a series of control experiments in the backward pass of gradient computations to investigate how ReLU weakens, and how its smooth approximation strengthens adversarial training.
###
3.1 Adversarial Training
Adversarial training Goodfellow2015; Madry2018, which trains networks with adversarial examples on-the-fly, aims to optimize the following framework:
| | | | |
| --- | --- | --- | --- |
| | argminθE(x,y)∼D[maxϵ∈SL(θ,x+ϵ,y)], | | (1) |
where D is the underlying data distribution, L(⋅,⋅,⋅) is the loss function, θ is the network parameter, x is a training sample with the ground-truth label y, ϵ is the added adversarial perturbation, and S is the allowed perturbation range.
As shown in Equation ([1](#S3.E1 "(1) ‣ 3.1 Adversarial Training ‣ 3 ReLU Weakens Adversarial Training ‣ Smooth Adversarial Training")), adversarial training consists of two computation steps: an inner maximization step, which computes adversarial examples, and an outer minimization step, which computes parameter updates.
#### Adversarial training setup.
We choose ResNet-50 He2016 as the backbone network. We apply the PGD attacker Madry2018 to generate adversarial perturbations ϵ. Specifically, we select the cheapest version of PGD, *single-step PGD* (PGD-1), to lower the training cost. Following Shafahi2019; wong2020fast, we set the maximum per-pixel change ϵ=4 and the attack step size β=4. We follow the basic ResNet training recipes to train models on ImageNet: models are trained for a total of 100 epochs using momentum SGD optimizer, with the learning rate decreased by 10× at the 30-th, 60-th and 90-th epoch; no regularization except a weight decay of 1e-4 is applied.
When evaluating adversarial robustness, we measure the model’s top-1 accuracy against the 200-step PGD attacker (PGD-200) on the ImageNet validation set, with the maximum perturbation size ϵ=4 and the step size β=1. We note 200 attack iteration is enough to let PGD attacker converge. Meanwhile, we report the model’s top-1 accuracy on the original ImageNet validation set.
###
3.2 How Gradient Quality Affects Adversarial Training?
As shown in Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Smooth Adversarial Training"), the widely used activation function, ReLU, is non-smooth, *i.e*., its gradient takes an abrupt change, when its input is 0, which significantly degrades the gradient quality. We conjecture that this non-smooth nature hurts the training process, especially when we train models adversarially. This is because, compared to standard training which only computes gradients for updating network parameter θ, adversarial training requires additional computations for the inner maximization step to craft the perturbation ϵ.
To fix this problem, we first introduce a smooth approximation of ReLU, named *Parametric Softplus* Nair2010, as follows,
| | | | |
| --- | --- | --- | --- |
| | f(α,x)=1αlog(1+exp(αx)) | | (2) |
where the hyperparameter α is used to control the curve shape. The derivative of this function w.r.t. the input x is:
| | | | |
| --- | --- | --- | --- |
| | ddxf(α,x)=11+exp(−αx) | | (3) |
To better approximate the curve of ReLU, we empirically set α=10. As shown in Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Smooth Adversarial Training"), compared to ReLU, Parametric Softplus (α=10) is smooth because it has a continuous derivative.
With Parametric Softplus, we next diagnose how gradient quality in *the inner maximization step* and *the outer minimization step* affects the accuracy and robustness of ResNet-50 in adversarial training. To clearly benchmark the effects, we only substitute ReLU with Equation ([2](#S3.E2 "(2) ‣ 3.2 How Gradient Quality Affects Adversarial Training? ‣ 3 ReLU Weakens Adversarial Training ‣ Smooth Adversarial Training")) in the backward pass, while leaving the forward pass unchanged, *i.e*., ReLU is always used for model inference.
#### Improving gradient quality for the adversarial attacker.
We first take a look at the effects of gradient quality on computing adversarial examples (*i.e*., *the inner maximization step*) during training. More precisely, in the inner step of adversarial training, we use ReLU in the forward pass, but Parametric Softplus in the backward pass; and in the outer step, we use ReLU in both the forward and the backward pass. As shown in the second row of Table [1](#S3.T1 "Table 1 ‣ Improving gradient quality for both the adversarial attacker and network parameter updates. ‣ 3.2 How Gradient Quality Affects Adversarial Training? ‣ 3 ReLU Weakens Adversarial Training ‣ Smooth Adversarial Training"), when the attacker uses Parametric Softplus’s gradient to craft training samples, the resulted model exhibits a performance trade-off compared to the ReLU baseline, *e.g*., it improves adversarial robustness by 1.5% but degrades accuracy by 0.5%. We hypothesize that the enhanced adversarial robustness can be attributed to harder adversarial examples generated during training, *i.e*., better gradients for the inner maximization step boost the attacker’s strength. To further verify this hypothesis, we evaluate the robustness of two ResNet-50 models via PGD-1 (vs. PGd-200 in Table [1](#S3.T1 "Table 1 ‣ Improving gradient quality for both the adversarial attacker and network parameter updates. ‣ 3.2 How Gradient Quality Affects Adversarial Training? ‣ 3 ReLU Weakens Adversarial Training ‣ Smooth Adversarial Training")), one with standard training and one with adversarial training. Specifically, during the evaluation, the attacker uses ReLU in the forward pass, but Parametric Softplus in the backward pass. With better gradients, the PGD-1 attacker is strengthened and hurts models more: it can further decrease the top-1 accuracy by 4.0% (from 16.9% to 12.9%) on the standard training and by 0.7% (from 48.7% to 48.0%) on the adversarial training (both not shown in Table [1](#S3.T1 "Table 1 ‣ Improving gradient quality for both the adversarial attacker and network parameter updates. ‣ 3.2 How Gradient Quality Affects Adversarial Training? ‣ 3 ReLU Weakens Adversarial Training ‣ Smooth Adversarial Training")). Finally, as shown in Table 1 (second row), we note that this robustness improvement is at the expense of accuracy, which is consistent with previous works wang2019a.
#### Improving gradient quality for network parameter updates.
We then study the role of gradient quality on updating network parameters (*i.e*., *the outer minimization step*) during training. More precisely, in the inner step of adversarial training, we use ReLU; but in the outer step, we use ReLU in the forward pass, and Parametric Softplus in the backward pass. Surprisingly, this method improves adversarial robustness for “free”. As shown in the third row of Table [1](#S3.T1 "Table 1 ‣ Improving gradient quality for both the adversarial attacker and network parameter updates. ‣ 3.2 How Gradient Quality Affects Adversarial Training? ‣ 3 ReLU Weakens Adversarial Training ‣ Smooth Adversarial Training"), without incurring additional computations, adversarial robustness is boosted by 2.8%, and meanwhile accuracy is improved by 0.6%, compared to the ReLU baseline. We note the corresponding training loss also gets lower: the cross-entropy loss on the training set is reduced from 2.71 to 2.59. These results of better robustness, accuracy, and lower training loss together suggest that, with Equation ([3](#S3.E3 "(3) ‣ 3.2 How Gradient Quality Affects Adversarial Training? ‣ 3 ReLU Weakens Adversarial Training ‣ Smooth Adversarial Training")), networks are able to compute better gradient updates in adversarial training. Interestingly, we also observe that better gradient updates improve the standard training, *i.e*., with ResNet-50, training with better gradients is able to improve accuracy from 76.8% to 77.0%, and reduces the corresponding training loss from 1.22 to 1.18. These results on both adversarial and standard training suggest that updating network parameters using better gradients could serve as a principle for improving performance in general, while keeping the inference process of the model unchanged (*i.e*., ReLU is still used during inference).
#### Improving gradient quality for both the adversarial attacker and network parameter updates.
Given the observation that improving ReLU’s gradient for either the adversarial attacker or the network optimizer
benefits robustness, we further enhance adversarial training by replacing ReLU with Parametric Softplus in all backward passes, but keeping ReLU in all forward passes. As expected, such a trained model reports the best robustness so far, *i.e*., as shown in the last row of Table [1](#S3.T1 "Table 1 ‣ Improving gradient quality for both the adversarial attacker and network parameter updates. ‣ 3.2 How Gradient Quality Affects Adversarial Training? ‣ 3 ReLU Weakens Adversarial Training ‣ Smooth Adversarial Training"), it substantially outperforms the ReLU baseline by 3.9% for robustness. Interestingly, this improvement still comes for “free”, *i.e*., it reports 0.1% higher accuracy than the ReLU baseline. We conjecture this is mainly due to the positive effect on accuracy brought by computing better gradient updates (increase accuracy) slightly overriding the negative effects on accuracy brought by creating harder training samples (hurt accuracy) in this experiment.
| | | | | |
| --- | --- | --- | --- | --- |
| | Improving Gradient Quality for | Improving Gradient Quality for | Accuracy (%) | Robustness (%) |
| | the Adversarial Attacker | the Network Parameter Updates |
| ResNet-50 | ✗ | ✗ | 68.8 | 33.0 |
| ✓ | ✗ | 68.3 (-0.5) | 34.5 (+1.5) |
| ✗ | ✓ | 69.4 (+0.6) | 35.8 (+2.8) |
| ✓ | ✓ | 68.9 (+0.1) | 36.9 (+3.9) |
Table 1: ReLU significantly weakens adversarial training. By improving gradient quality for either the adversarial attacker or the network optimizer, the resulted model obtains better robustness than ReLU baseline. The best robustness can be achieved by adopting better gradients for both the attacker and the network optimizer.
###
3.3 Can Other Training Enhancements Remedy ReLU’s Gradient Issue?
#### More attack iterations.
It is known that increasing the number of attack iterations can create harder adversarial examples Madry2018.
We confirm in our own experiments that by training with the PGD attacker with more iterations, the resulted model exhibits a similar behavior to the case where we apply better gradients for the attacker. By increasing the attacker’s cost by 2×, PGD-2 improves the ReLU baseline by 0.6% for robustness while losing 0.1% for accuracy. This result suggests we can remedy ReLU’s gradient issue *in the inner step of adversarial training* if more computations are given.
#### Training longer.
It is also known that longer training lowers the training loss hoffer2017train, which we explore next.
Interestingly, with a 2× training cost compared to the standard setup (*e.g*., 200 epochs), though the final model indeed achieves a lower training loss (from 2.71 to 2.62), there is still a trade-off between accuracy and robustness. Longer training gains 2.6% for accuracy but loses 1.8% for robustness. On the contrary, applying better gradients for optimizing networks in the previous section improves both robustness and accuracy.
This discouraging result suggests that training longer cannot fix the issues *in the outer step of adversarial training* caused by ReLU’s poor gradient.
#### Conclusion.
Given these results, we conclude that ReLU significantly weakens adversarial training. Moreover, it seems that the degenerated performance cannot be simply remedied even with training enhancements (*i.e*., increasing the number of attack iterations & training longer). We identify that the key is ReLU’s poor gradient—by replacing ReLU with its smooth approximation *only in the backward pass* substantially improves robustness, even without sacrificing accuracy and incurring additional computational cost. In the next section, we show that making activation functions smooth is a good design principle for enhancing adversarial training in general.
4 Smooth Adversarial Training
------------------------------
As shown above, improving ReLU’s gradient can both strengthen the attacker and provide better gradient updates. Nonetheless, this strategy may be suboptimal as there still is a discrepancy between the forward pass (which we use ReLU) and the backward pass (which we use Parametric Softplus).
To fully exploit the potential of training with better gradients, we hereby propose smooth adversarial training (SAT), which enforces architectural smoothness via the exclusive usage of smooth activation functions in adversarial training. We keep all other network components the same, as most of them will not result in the issue of poor gradient.333We ignore the gradient issue caused by max pooling, which is also non-smooth, in SAT. This is because modern architectures rarely adopt it, *e.g*. only 1 max pooling layer in ResNet He2016, and 0 in EfficientNet Tan2019.
###
4.1 Adversarial Training with Smooth Activation Functions
We consider the following activation functions as the smooth approximations of ReLU in SAT (Figure [2](#S4.F2 "Figure 2 ‣ 4.1 Adversarial Training with Smooth Activation Functions ‣ 4 Smooth Adversarial Training ‣ Smooth Adversarial Training") plots these functions as well as their derivatives):
* [leftmargin=\*]
* Softplus Nair2010: f(x)=log(1+exp(x)). We also consider its parametric version f(x)=1αlog(1+exp(αx)), and set α=10 as in Section [3](#S3 "3 ReLU Weakens Adversarial Training ‣ Smooth Adversarial Training").
* Swish ramachandran2017searching; elfwing2018sigmoid: f(x)=x⋅sigmoid(x). Compared to other activation functions, Swish has a non-monotonic “bump” when x<0.
* Gaussian Error Linear Unit (GELU) hendrycks2016gaussian: f(x)=x⋅Φ(x), where Φ(x) is the cumulative distribution function of the standard normal distribution.
* Exponential Linear Unit (ELU) clevert2015fast:
| | | | |
| --- | --- | --- | --- |
| | ELU(x,α)={xif x≥0,α(exp(x)−1)otherwise, | | (4) |
where α=1 as default. Note that when α≠1, the gradient of ELU is not continuously differentiable anymore. We will be discussing the effects of these non-smooth variants of ELU (α≠1) on adversarial training in Section [4.3](#S4.SS3 "4.3 Case Study: Stabilizing Adversarial Training with ELU using CELU ‣ 4 Smooth Adversarial Training ‣ Smooth Adversarial Training").

Figure 2: Visualizations of smooth activation functions and their derivatives.
#### Main results.
We follow the settings in Section [3](#S3 "3 ReLU Weakens Adversarial Training ‣ Smooth Adversarial Training") to adversarially train ResNet-50 equipped with these smooth activation functions. The results are shown in Figure [3](#S4.F3 "Figure 3 ‣ 4.2 Ruling Out the Effect From <x0 ‣ 4 Smooth Adversarial Training ‣ Smooth Adversarial Training"). Compared to the ReLU baseline, all smooth activation functions substantially boost robustness, while keeping the standard accuracy almost the same. For example, smooth activation functions at least boost robustness by 5.7% (using Parametric Softplus, from 33% to 38.7%). The strongest robustness is achieved by Swish ramachandran2017searching, which enables ResNet-50 to achieve 42.3% robustness and 69.7% standard accuracy.
Additionally, we compare to the setting in Section [3](#S3 "3 ReLU Weakens Adversarial Training ‣ Smooth Adversarial Training") where Parametric Softplus is only applied at the backward pass. Interestingly, by additionally replacing ReLU with Parametric Softplus at the forward pass, the resulted model further improves robustness by 1.8% (from 36.9% to 38.7%) while keeping the accuracy almost the same, demonstrating the importance of applying smooth activation functions in both forward and backward passes in SAT.
###
4.2 Ruling Out the Effect From x<0
Compared to ReLU, in addition to being smooth, the functions above have non-zero responses to negative inputs (x<0) which may also affect adversarial training. To rule out this factor, we hereby propose SmoothReLU, which flattens the activation function by only modifying ReLU after x≥0,
| | | | |
| --- | --- | --- | --- |
| | SmoothReLU(x,α)={x−1αlog(αx+1)if x≥0,0otherwise, | | (5) |
where α is a learnable variable shared by all channels, and is constrained to be positive. We note SmoothReLU is always continuously differentiable regardless the value of α, as
| | | | |
| --- | --- | --- | --- |
| | ddxSmoothReLU(x,α)={αx1+αxif x≥0,0otherwise. | | (6) |
SmoothReLU converges to ReLU when α→∞. Note that α needs to be initialized at a large enough value (*e.g*., 400 in our experiments) to avoid the gradient vanishing problem at the beginning of training. We plot SmoothReLU and its first derivative in Figure [2](#S4.F2 "Figure 2 ‣ 4.1 Adversarial Training with Smooth Activation Functions ‣ 4 Smooth Adversarial Training ‣ Smooth Adversarial Training").
We observe SmoothReLU substantially outperforms ReLU by 7.3% for robustness (from 33.0% to 40.3%), and by 0.6% for accuracy (from 68.8% to 69.4%), therefore clearly demonstrates the importance of a function to be smooth, and rules out the effect from having responses when x<0.

Figure 3: Smooth activation functions improve adversarial training. Compared to ReLU, all smooth activation functions significantly boost robustness, while keeping accuracy almost the same.
α
Robustness (%)
ELU
CELU
1
41.1
1.2
-0.3
+0.1
1.4
-2.0
-0.3
1.6
-3.7
-0.3
1.8
-6.2
-0.2
2.0
-7.9
-0.5
Table 2: Robustness comparison between ELU (non-smooth when α≠1) and CELU (always smooth ∀α).
###
4.3 Case Study: Stabilizing Adversarial Training with ELU using CELU
In the analysis above, we show that adversarial training can be greatly improved by replacing ReLU with its smooth approximations. To further demonstrate the generalization of SAT (beyond ReLU), we discuss another type of activation function—ELU. The first derivative of ELU is shown below:
| | | | |
| --- | --- | --- | --- |
| | ddxELU(x,α)={1if% x≥0,αexp(x)otherwise. | | (7) |
Here we mainly discuss the scenario when ELU is non-smooth, *i.e*., α≠1. As can be seen from Equation ([7](#S4.E7 "(7) ‣ 4.3 Case Study: Stabilizing Adversarial Training with ELU using CELU ‣ 4 Smooth Adversarial Training ‣ Smooth Adversarial Training")), ELU’s gradient is not continuously differentiable anymore, *i.e*., αexp(x)≠1 when x=0, therefore resulting in an abrupt gradient change like ReLU. Specifically, we consider the scenario 1.0≤α≤2.0, where the gradient abruption becomes more drastic with a larger value of α.
We show the adversarial training results in Table [2](#S4.T2 "Table 2 ‣ 4.2 Ruling Out the Effect From <x0 ‣ 4 Smooth Adversarial Training ‣ Smooth Adversarial Training"). Interestingly, we observe that the adversarial robustness is highly dependent on the value of α—the strongest robustness is achieved when the function is smooth (*i.e*., α=1.0, 41.4% robustness), and all other choices of α monotonically decrease the robustness when α gradually approaches 2.0. For instance, with α=2.0, the robustness drops to only 33.2%, which is 7.9% lower than that of using α=1.0. The observed phenomenon here is consistent with our previous conclusion on ReLU—non-smooth activation functions significantly weaken adversarial training.
To stabilize the adversarial training with ELU, we apply its smooth version, CELU barron2017continuously, which re-parametrize ELU to the following format:
| | | | |
| --- | --- | --- | --- |
| | | | (8) |
The first derivatives of CELU can be written as follows:
| | | | |
| --- | --- | --- | --- |
| | ddxCELU(x,α)={1%
if x≥0,expxαotherwise. | | (9) |
With this parameterization, CELU is now continuously differentiable regardless of the choice of α.
We observe that CELU greatly stabilizes adversarial training, *i.e*., compared to α=1.0, the worst case in CELU is merely 0.5% lower (shown in Table [2](#S4.T2 "Table 2 ‣ 4.2 Ruling Out the Effect From <x0 ‣ 4 Smooth Adversarial Training ‣ Smooth Adversarial Training")). Recall that this gap for ELU is 7.9%. This case study provides another strong support on showing the importance of performing SAT.
5 Exploring the Limits of Smooth Adversarial Training
------------------------------------------------------
Recent works xie2020intriguing; gao2019convergence show that, compared to standard training, adversarial training exhibits a much stronger requirement for larger networks to obtain better performance. Nonetheless, previous explorations in this direction only consider either deeper networks xie2020intriguing or wider networks Madry2018, which might be insufficient. To this end, we hereby present a systematic study on showing how network scaling up behaves in SAT. Specifically, we set Swish as the default activation function to perform SAT, as it achieves the best robustness among different candidates (as shown in Figure [3](#S4.F3 "Figure 3 ‣ 4.2 Ruling Out the Effect From <x0 ‣ 4 Smooth Adversarial Training ‣ Smooth Adversarial Training")).
###
5.1 Scaling-up ResNet
We first perform the network scaling-up experiments with ResNet in SAT. In standard training, Tan *et al*. Tan2019 suggest that, all three scaling-up factors, *i.e*., *depth, width and image resolutions*, are important to further improve ResNet performance. We hereby examine the effects of these factors in SAT. We choose ResNet-50 (with the default image resolution at 224) as the baseline network.
#### Depth & width.
Previous works Madry2018; xie2020intriguing already show that making networks deeper or wider can further standard adversarial training. We re-verify this conclusion in SAT. As shown in the second to fifth rows of Table [3](#S5.T3 "Table 3 ‣ Compound scaling. ‣ 5.1 Scaling-up ResNet ‣ 5 Exploring the Limits of Smooth Adversarial Training ‣ Smooth Adversarial Training"), we confirm that both deeper or wider networks consistently outperform the baseline network in SAT. For instance, by training a deeper ResNet-152, it improves ResNet-50’s performance by 4.2% for accuracy and 3.7% for robustness. Similarly, by training a 4× wider ResNeXt-50-32x8d Xie2017, it improves accuracy by 3.9% and robustness by 2.8%.
#### Image resolution.
Though larger image resolution benefits standard training, it is generally believed that scaling up this factor will induce weaker adversarial robustness, as the attacker will have a larger room for crafting adversarial perturbations galloway2019batch. However, surprisingly, this belief is invalid when taking adversarial training into consideration. As shown in the sixth and seventh rows of Table [3](#S5.T3 "Table 3 ‣ Compound scaling. ‣ 5.1 Scaling-up ResNet ‣ 5 Exploring the Limits of Smooth Adversarial Training ‣ Smooth Adversarial Training"), ResNet-50 consistently achieves better performance when training with larger image resolutions in SAT. We conjecture this improvement is possible because a larger image resolution (1) enables attackers to create stronger adversarial examples galloway2019batch; and (2) increases network capacity Tan2019, therefore benefits SAT overall.
#### Compound scaling.
So far, we have confirmed that the basic scaling of depth, width and image resolution are all important scaling-up factors in SAT. As argued in Tan2019 for standard training, scaling up all these factors simultaneously is better than just focusing on a single dimension (*e.g*., depth). To this end, we make an attempt to create a simple compound scaling for ResNet. As shown in the last row of Table [3](#S5.T3 "Table 3 ‣ Compound scaling. ‣ 5.1 Scaling-up ResNet ‣ 5 Exploring the Limits of Smooth Adversarial Training ‣ Smooth Adversarial Training"), the resulted model, ResNeXt-152-32x8d with input resolution at 380, achieves a much stronger result than the ResNet-50 baseline, *i.e*., +8.5% for accuracy and +8.9% for robustness.
| | | |
| --- | --- | --- |
| | Accuracy (%) | Robustness (%) |
| ResNet-50 | 69.7 | 42.3 |
| + 2x deeper (ResNet-101) | 72.9 (+3.2) | 45.5 (+3.2) |
| + 3x deeper (ResNet-152) | 73.9 (+4.2) | 46.0 (+3.7) |
| + 2x wider (ResNeXt-50-32x4d) | 71.2 (+1.5) | 42.5 (+0.2) |
| + 4x wider (ResNeXt-50-32x8d) | 73.6 (+3.9) | 45.1 (+2.8) |
| + larger resolution 299 | 70.9 (+1.2) | 43.8 (+1.5) |
| + larger resolution 380 | 71.6 (+1.9) | 44.1 (+1.8) |
| + 3x deeper & 4x wider (ResNeXt-152-32x8d) & larger resolution 380 | 78.2 (+8.5) | 51.2 (+8.9) |
Table 3: Scaling-up ResNet in SAT. We observe SAT consistently helps larger networks to get better performance.
#### Discussion on standard adversarial training.
We first verify that basic scaling of depth, width and image resolution also matter in *standard adversarial training*, *e.g*., by scaling up ResNet-50 (33.0% robustness), the deeper ResNet-152 achieves 39.4% robustness (+6.4%), the wider ResNeXt-50-32x8d achieves 36.7% robustness (+3.7%) and the ResNet-50 with larger image resolution at 380 achieves 36.9% robustness (+3.9%). All these robustness performances are lower than the robustness (42.3%) achieved by the ResNet-50 baseline in SAT (first row of Table [3](#S5.T3 "Table 3 ‣ Compound scaling. ‣ 5.1 Scaling-up ResNet ‣ 5 Exploring the Limits of Smooth Adversarial Training ‣ Smooth Adversarial Training")). In other words, scaling up networks seems less effective than replacing ReLU with smooth activation functions.
We also find compound scaling is more effective than basic scaling for standard adversarial training, *e.g*., ResNeXt-152-32x8d with input resolution at 380 here reports 46.3% robustness. Although this result is better than adversarial training with basic scaling above, it is still ∼5% lower than SAT with compound scaling, *i.e*., 46.3% v.s. 51.2%. In other words, even with larger networks, applying smooth activation functions in adversarial training is still essential for improving performance.

Figure 4: Scaling-up EfficientNet in SAT. Note EfficientNet-L1 is not connected to the rest of the graph because it was not part of the compound scaling suggested by Tan2019.
###
5.2 SAT with EfficientNet
The results on ResNet show that scaling up networks in SAT effectively improves performance. Nonetheless, the applied scaling policies could be suboptimal, as they are hand-designed without any optimizations. EfficientNet Tan2019, which uses neural architecture search to automatically discover the optimal factors for network scaling, provides a strong family of models for image recognition. To examine the benefits of EfficientNet, we now use it to replace ResNet in SAT. Note that all other training settings are the same as described in our ResNet experiments.
Similar to ResNet, Figure [4](#S5.F4 "Figure 4 ‣ Discussion on standard adversarial training. ‣ 5.1 Scaling-up ResNet ‣ 5 Exploring the Limits of Smooth Adversarial Training ‣ Smooth Adversarial Training") shows that stronger backbones consistently achieve better performance in SAT. For instance, by scaling the network from EfficientNet-B0 to EfficientNet-B7, the robustness is improved from 37.6% to 57.0%, and the accuracy is improved from 65.1% to 79.8%.
Surprisingly, the improvement is still observable for larger networks: EfficientNet-L1 Xie2019a further improves robustness by 1.0% and accuracy by 0.7% over EfficientNet-B7.
#### Training enhancements on EfficientNet.
So far all of our experiments follow the training recipes from ResNet, which may not be optimal for EfficientNet training. To this end, we import the following settings to our experiments as in original EfficientNet training setups Tan2019: we change weight decay from 1e-4 to 1e-5, and add Dropout Srivastava2014, stochastic depth huang2016deep and AutoAugment Cubuk2018 to regularize the training process. Besides, we train models longer (*i.e*., 200 epochs) to better cope with these training enhancements, and adopt the early stopping strategy to prevent the catastrophic overfitting issue in robustness wong2020fast. With these training enhancements, our EfficientNet-L1 gets further improved, *i.e*., +1.7% for accuracy (from 80.5% to 82.2%) and +0.6% for robustness (from 58.0% to 58.6%).
| | Accuracy (%) | Robustness (%) |
| --- | --- | --- |
| Prior art Qin2019 | 72.7 | 47.0 |
| EfficientNet+SAT | 82.2 (+9.5) | 58.6 (+11.6) |
Table 4: Comparison to the previous state-of-the-art.
#### Comparing to the prior art Qin2019.
Table [4](#S5.T4 "Table 4 ‣ Training enhancements on EfficientNet. ‣ 5.2 SAT with EfficientNet ‣ 5 Exploring the Limits of Smooth Adversarial Training ‣ Smooth Adversarial Training") compares our best results with the prior art. With SAT, we are able to train a model with strong performance on both adversarial robustness and standard accuracy—our best model (EfficientNet-L1 + SAT) achieves 82.2% standard accuracy and 58.6% robustness, which largely outperforms the previous state-of-the-art method Qin2019 by 9.5% on standard accuracy and 11.6% on adversarial robustness.
#### Discussion.
Finally, we emphasize a large reduction in the accuracy gap between adversarially trained models and standard trained models for large networks. For example, with the training setup above (with enhancements), EfficientNet-L1 achieves 84.1% accuracy in standard training, and this accuracy slightly decreases to 82.2% (-1.9%) in SAT. Note that this gap is substantially smaller than the gap in ResNet-50 of 7.1% (76.8% in standard training v.s. 69.7% in SAT). Moreover, it is also worth mentioning that the high accuracy of 82.2% provides strong support to Ilyas2019 on arguing robust features indeed can generalize well to clean inputs.
6 Conclusion
-------------
In this paper, we propose smooth adversarial training, which enforces architectural smoothness via replacing non-smooth activation functions with their smooth approximations in adversarial training. SAT improves adversarial robustness without sacrificing standard accuracy or incurring additional computation cost.
Extensive experiments demonstrate the general effectiveness of SAT.
With EfficientNet-L1, SAT reports the state-of-the-art adversarial robustness on ImageNet, which largely outperforms the prior art Qin2019 by 9.5% for accuracy and 11.6% for robustness.
Broader Impact
--------------
Our work points out that architectural smoothness plays an essential role in learning a robust model, which has not been paid attention to in the community.
We believe this is a general design principle, and should well generalize to broader tasks like natural language processing, reinforcement learning, *etc*. Our work also provides interesting observations on suggesting different architecture designs indeed will have significantly different performance on robustness, therefore have great potential to inspire later works on finding better architectures, *e.g*., through either hand design or neural architecture search, to further increase robustness.
We strongly believe that SAT can have high practical impacts since our method can greatly enhance model robustness against adversarial attacks, which is important for enabling models to work reliably in real-world scenarios, especially for safety-critical applications like self-driving cars and surgical robots.
#### Acknowledgment
We would like to thank Jiang Wang, Chongli Qin and Sven Gowal for valuable discussions at the early stage of this project. |
e579dc04-1e7e-406d-a9c5-b54762557fd3 | trentmkelly/LessWrong-43k | LessWrong | A Mechanistic Interpretability Analysis of Grokking
A significantly updated version of this work is now on Arxiv and was published as a spotlight paper at ICLR 2023
aka, how the best way to do modular addition is with Discrete Fourier Transforms and trig identities
If you don't want to commit to a long post, check out the Tweet thread summary
Introduction
Grokking is a recent phenomena discovered by OpenAI researchers, that in my opinion is one of the most fascinating mysteries in deep learning. That models trained on small algorithmic tasks like modular addition will initially memorise the training data, but after a long time will suddenly learn to generalise to unseen data.
A training curve for a 1L Transformer trained to do addition mod 113, trained on 30% of the 1132 pairs - it shows clear grokking
This is a write-up of an independent research project I did into understanding grokking through the lens of mechanistic interpretability. My most important claim is that grokking has a deep relationship to phase changes. Phase changes, ie a sudden change in the model's performance for some capability during training, are a general phenomena that occur when training models, that have also been observed in large models trained on non-toy tasks. For example, the sudden change in a transformer's capacity to do in-context learning when it forms induction heads. In this work examine several toy settings where a model trained to solve them exhibits a phase change in test loss, regardless of how much data it is trained on. I show that if a model is trained on these limited data with high regularisation, then that the model shows grokking.
Loss curve for predicting repeated subsequences in a sequence of random tokens in a 2L attention only transformer on infinite data - shows a phase changeLoss curve for predicting repeated subsequences in a sequence of random tokens in a 2L attention-only transformer given 512 training data points - shows clear grokking.
One of the core claims of mechanistic interpretability is that neur |
a8d5fcbc-41a9-48f7-b79c-e9eb52f85193 | trentmkelly/LessWrong-43k | LessWrong | Free (old) scientific papers [Link]
http://thepiratebay.org/torrent/6554331/Papers_from_Philosophical_Transactions_of_the_Royal_Society__fro
Greg Maxwell is torrenting 33Gib of public domain JSTOR documents that were behind paywalls.
What's your take on this, ethically, legally, etc?
ETA: More on this: http://gigaom.com/2011/07/21/pirate-bay-jstor/ |
95979e3c-4ccd-4ce5-9677-07008bb3cee9 | trentmkelly/LessWrong-43k | LessWrong | Memetic Tribalism
Related: politics is the mind killer, other optimizing
When someone says something stupid, I get an urge to correct them. Based on the stories I hear from others, I'm not the only one.
For example, some of my friends are into this rationality thing, and they've learned about all these biases and correct ways to get things done. Naturally, they get irritated with people who haven't learned this stuff. They complain about how their family members or coworkers aren't rational, and they ask what is the best way to correct them.
I could get into the details of the optimal set of arguments to turn someone into a rationalist, or I could go a bit meta and ask: "Why would you want to do that?"
Why should you spend your time correcting someone else's reasoning?
One reason that comes up is that it's valuable for some reason to change their reasoning. OK, when is it possible?
1. You actually know better than them.
2. You know how to patch their reasoning.
3. They will be receptive to said patching.
4. They will actually change their behavior if the accept the patch.
It seems like it should be rather rare for those conditions to all be true, or even to be likely enough for the expected gain to be worth the cost, and yet I feel the urge quite often. And I'm not thinking it through and deciding, I'm just feeling an urge; humans are adaptation executors, and this one seems like an adaptation. For some reason "correcting" people's reasoning was important enough in the ancestral environment to be special-cased in motivation hardware.
I could try to spin an ev-psych just-so story about tribal status, intellectual dominance hierarchies, ingroup-outgroup signaling, and whatnot, but I'm not an evolutionary psychologist, so I wouldn't actually know what I was doing, and the details don't matter anyway. What matters is that this urge seems to be hardware, and it probably has nothing to do with actual truth or your strategic concerns.
It seems to happen to everyone who has i |
9a128b0a-33d6-4ab7-805c-695b060a7949 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Man and Machine: Questions of Risk, Trust and Accountability in Today's AI Technology
1 Introduction
---------------
>
> “Synthesis defines an ambitious ‘put-a-man- on-the-moon’ goal. By doing so, it forces scientists and engineers to cross uncharted terrain in pursuit of the goal. This requires the solution of unscripted problems that are not normally encountered through either observation or analysis… synthesis drives the evolution of paradigms”\autocite[It is hard to understate the role of synthesis in science and technological development. The above quote has been taken from the context of synthetic biology, see][]brenner2005synthetic
>
>
>
Artificial Intelligence (AI) is the field of computer science which aims to create, or synthesize, intelligence. The pursuit of creating intelligent machines has contributed not only to psychology, cognitive science, neurology and philosophy, but has also given birth to whole new branches of research.
The field of AI research was formally founded at a conference on the campus of Dartmouth College in the summer of 1956\autocite[The conference was chaired by J.McCarthy, M. Minsky, N. Rochester and C.Shannon. For a full report, see][]mccarthy2006proposal. It has come a long way since then, having gone through many cycles of boom and bust,“AI winters” and summers\autocitenilsson2010quest. Now AI applications have left the annals of Department of Defense R&D, and trickled down to everyday use. They can be found in common place consumer items and inexpensive intelligent toys, even though very often consumers fail to recognize the technology source. For example, the Kinect, which provides a 3D bodymotion interface for the Xbox 360, uses algorithms that emerged from lengthy AI research\autocitesmisek20133d, khoshelham2012accuracy, oikonomidis2011efficient.
The adoption and integration of AI based technology in all spheres has followed a pattern reminiscent of many modern technologies, including, for example, the internet, digital computing\autocite[The reader is referred to James Cortada’s essay in Technology and Culture’s April 2013 issue.][]cortada2013new, and mobile telephony. The pace has been such that historians and sociologists have barely had an opportunity to study its arrival, use and implications for society. Most discussions in this direction have been based on a priori and futuristic assessments, discussing the future of AI technology and what it might eventually turn out like. For instance, researchers have raised concerns about AI machines becoming malicious (unfriendly), apathetic or uncontrollable, often evoking images of killer robots and future wars between man and machine\autocitejoy2000future, bostrom2008global, yudkowsky2008artificial, bostrom2002existential, bostrom2003ethical. The importance of incorporating ‘friendliness’ in AI research is repeatedly stressed in these discussions. Some have focussed on the concept of ‘intelligence explosion’, forewarning a future event where radically self-improving machines reach a state where it is impossible to predict or comprehend their actions\autocitemuehlhauser2012intelligence. Others have noted that increasing dependence on decision-making intelligent machines may itself lead to a world where it is impossible for humans to survive without them, leaving machines effectively in control\autocite[In his now famous article “Why The Future Doesn’t Need Us”, computer scientist Bill Joy quotes Ted Kaczynski, “As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently.’ See ][]joy2000future, kaczynski2005unabomber.
Such discussions, though very important in their own right, tend to take the attention away from issues of risk and trust posed by AI-based technology which has already diffused in the society. In this essay, we argue that current AI technology (e.g., expert systems and intelligent assistants) is based on a notion of intelligence which is somewhat different from the notion of‘general intelligence’ in popular perception. The intelligence or expertise of these machines is measured through their performance in certain specialized contexts, and little else. Further, this confusion of what AI technology is based on, and how it works, itself has risk and trust-related consequences. As AI researcher Elizier Yudkowsky had noted, “By far the greatest danger with Artifical Intelligence is that people conclude too early that they understand it”\autociteyudkowsky2008artificial.
We start off this essay by tracing the history of current AI research in Section 2. Various factors which have steered AI research in a particular direction, and diminished in others, have been outlined. Section 3 examines the nature of AI systems today, illustrating, through the examples of expert systems and intelligent assistants, the questions of risk and trust they pose to society. In Section 4, we suggest two paradigms, which might complement basic research in AI to address some of these concerns . The discussion is summarised in the conclusion.
2 Historical Developments
--------------------------
Ever since AI’s inception, there has been no clear consensus on what constitutes ‘intelligence’. The subject has drawn from a broad array of disciplines - Philosophy, Logic, Biology, Psychology, Statistics and Engineering. In the absence of an agreed-on curriculum for training students in AI,\footcitenilsson2010quest new researchers who enter the field bring with them different standards, traditions and problems. As a result, one big challenge that AI has faced is that the research effort has been characterized by a multiplicity of approaches, each endeavoring to attain some specific objective . This multigoal character of AI research has crystallized into its theoretical pluralism, and its institutionalization by means of competing groups with different aims\footcite[Fleck, 1982: 172 in]schwartz1989artificial. It has been a major factor in the eventual branching of Artificial Intelligence to various subfields, e.g., Knowledge Representation, Machine Learning and Natural Language Processing.
Up until the early 1970s, AI researchers dealt with highly theoretical problems probing the nature of intelligence, and the researchers pursued projects that were staged in highly controlled laboratory settings\cite[p 265]nilsson2010quest. The heavy initial funding (by DARPA) into such pathbreaking research was fuelled partly due to the cold war\cite[p. 66]whitby1996reflections, and partly due to the highly optimistic claims made by its pioneers and their early theoretical successes.
However, such theoretical approaches to AI gradually came to be thought off by many computer scientists as fringe activities that did not adhere to rigorous scientific standards - some even viewed AI as ‘a field that housed charlatans’\cite[p 339]nilsson2010quest. Niel Nilsson, one of the founding fathers of the discipline, recalls that when he first interviewed for a position at SRI in 1961,111Stanford Research Institute a researcher had warned him against joining research on neural networks because it was ‘premature’, and his involvement with it could damage his reputation. This concern for ‘respectability’ had a stultifying effect on many AI researchers.222It is interesting to note that many researchers in AI today deliberately call their work by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence. One of the reasons is that these names help to procure funding. As New York Times\citemarkoff2005behind reported in 2005, “Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers”
In 1969, the Mansfield Amendment dealt another significant blow to the field. The Amendment put DARPA under increasing pressure to fund only ‘mission-oriented direct research, rather than basic undirected research’\autocite[][under ”Shift to Applied Research Increases Investment” (only the sections before 1980 apply to the current discussion)]national1999funding. The creative, freewheeling exploration that was characteristic of the early pioneering work in AI gradually came to be viewed as a burden. Instead, the money was directed at specific projects with clear objectives, such as autonomous tanks and battle management systems\cite[under “Shift to Applied Research Increases Investment”]national1999funding. Not only did this greatly influence the direction of research, but also, perhaps more importantly, swayed the spirit that guided work in the field. This has been noted by Marvin Minsky\citeroush2006marvin, co-founder of MIT’s AI Lab and a leading cognitive scientist:
>
> “In the early days, DARPA supported people rather than proposals. There was a lot of progress from starting in 1963 for about ten years in all branches and all approaches aimed to modeling intelligence. But the Mansfield Agreement made it much harder to support visionary researchers. At the same time, the American corporate research community started to disappear in the early 1970s. Bell Labs and RCA and the others essentially disappeared from this sort of activity.”
>
>
>
Also, by the end this period, the power of AI methods had already increased to the point where realistic applications seemed within reach\cite[p 265]nilsson2010quest. This gave rise to what Minsky calls ‘the entrepreneur bug’\citeroush2006marvin. He attributes the disappearance of young scientists in that period to an increased tendency to patent things, start start-ups and make new products. Support for original theoretical research in areas like commonsense reasoning eventually fizzled out.
The collective consequence of all these factors was that by the end of the 70s many had people diverted to highly specialized subfields that solved specific real-world problems. These problems ranged from Speech Recognition and Understanding Systems, Consulting Systems, Understanding Queries and Signals to Computer Vision\cite[p 265]nilsson2010quest. Since these also had commercial applications, funding and notions of academic respectability became directly associated with these specialized problems.
Most AI researchers around this period eventually adopted the premise that general human-like intelligence can be developed by combining the programs that solve various subproblems using an integrated agent architecture. This was the Intelligent Agent paradigm, an approach that had become widely accepted in the AI community by the end of the 90s.\footcites[][pp. 27, 32–58, 968–972]poole1998computational[][pp. 7–21] luger2005artificial[][pp. 235–240]russell2003artificial
An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success\autocite[The definition used here is due to Russell and Norvig, see][Other definitions also include knowledge and learning as additional criteria.]russell2003artificial. The simplest intelligent agents are programs that solve specific problems. The paradigm gave researchers license to study isolated problems and find solutions with actual applications, without agreeing on one single approach.333An agent that solves a specific problem can use any approach that works – some agents are symbolic and logical, some are sub-symbolic neural networks and others may use new approaches. For perhaps the first time in AI, a partial consensus on a notion of intelligence was achieved, since it was well aligned with the pluralistic character of AI, the constraints of funding, the notion of academic respectability, as well as relevant to industrial applications. Many fundamental questions related to intelligence - symbolic and commonsense reasoning, intuition, imagination, creativity, and emotional intelligence, were left answered and unexplored.
3 AI Today
-----------
>
> “Today’s AI is about new ways of connecting people to computers, people to knowledge, people to the physical world, and people to people”.
> - Patrick Winston, MIT AI Lab briefing, 1997
>
>
>
###
3.1 All That Glitters
Today, with the increased power of relatively inexpensive computers, availability of large databases and growth of the World Wide Web, AI technology like expert systems and intelligent assistants have slowly diffused into our society. These have been said to heralding a new age and revolutionizing the way we live today. Such proclamations might be true, but not necessarily in the context in which they are made. Most people overlook the fact that most of these systems do not work on the premise of approximating human cognitive abilities, but on performing certain tasks in highly specialized scenarios. They do well in performing certain jobs that are complicated for people to do, but they are far from being capable of carrying out tasks that are simple for humans to do. Many elements, like common-sense reasoning, ethical judgements and decision-making under uncertainties, which might be core components of intelligence, are missing. The deployment of such systems and their celebration, besides diverting research from other fields, has social, ethical and risk - related consequences.
Performance in specialized contexts (e.g., processing natural language, playing chess, recognizing patterns) are the sole basis for intelligence for current AI based systems. There is a certain gap in what these systems are represented as doing, and what they actually do. This can mislead people on two levels. One, that they create a notion that they work in the same way that humans do is (human-experts in the case of expert systems and assistants in case of digital assistants). On a second level, they create the impression that these are free from prejudices, personal bias and errors that might affect a human counterpart. The actual implementation and mechanism - what goes behind an AI technology - is entirely invisible to the user. The ‘knowledge engineering’, as it is called in the case of expert systems, and the underlying architecture, might involve restrictions and selective representation of data, and unavoidable biases resulting from the organization of data and architecture of the machine.
###
3.2 Risk
Another, possibly graver issue is that of handling of uncertainties by AI systems. Inductive knowledge of computer systems is inadequate. The number of combinations of possible inputs and internal states of a computer system of any complexity is huge, and especially in AI technologies, such complexity is intractable. Even with highly automated testing, it will seldom be feasible to exercise each and every state of a system to check for errors or underlying design faults, or bugs that may have caused them. As computer scientist Edsger Djikstra famously put it in 1969, “Program Testing can be used to show the presence of bugs, but never to show their absence!” Because bugs may lurk for years before becoming manifest as a system failure, no guarantees can be offered.
In the case of expert systems, although they do reason usefully and economically about specific problems in medicine, geology, chemistry and other delimited areas, they are acknowledged to be brittle (that is, they break down) when confronted with problems outside their area of expertise or even on problems within their area of expertise, if knowledge were needed that had not been provided in their rulebooks. They don’t know what they don’t know, and therefore might provide wrong answers in cases where a human expert would do better.
A colorful anecdote involving John McCarthy444John McCarthy was one of the founding fathers of the field. He purportedly coined the term ‘Artificial Intelligence’ illustrates this. In an interaction with the medical expert system MYCIN, he typed in some information about a hypothetical patient, saying that he was male and also saying that he underwent amniocentesis. MYCIN accepted all that without complaint! That male patients don’t get pregnant was not considered part of the ‘expert knowledge’ that MYCIN needed to be given\cite[p 407]nilsson2010quest.
Donald Mitchie, a British researcher, examines this problem through his concept of the ‘human window’ on the reasoning of the program, where its behaviour is ‘scrutable’. Outside of this window, Mitchie observes, it is impossible to tell whether the program is being exceedingly clever or is just malfunctioning\cite[Michie, 1984 in]schwartz1989artificial.
Reliance on experts is said to be an inevitable aspect of high modernity\citeschwartz1989artificial. As we enter an age of knowledge-intensive ‘information society’, how we employ expert systems based on AI will turn out to be crucial in shaping our society.
The handicaps involving AI technology, as well as their associated risks, are greatly amplified in those systems that are critical to human safety or security. A disturbing example of this is provided by Donald MacKenzie\citemackenzie2004mechanizing, a Professor of Sociology at the University of Edinburgh:
>
> “On October 5, 1960, United States’ Ballistic Missile Early Warning System (BMEWS) went off, indicating several missile launches from a general area in Siberia, and sent everyone in a panic mode. A level 5 alarm meant 99.9 percent likelihood that a missile attack had been launched. If that were true, ellipses should have been forming on the war room’s display map of North America in Colorado Springs and should have started to shrink, indicating the target of the attack. Yet no ellipses were being formed, and the “minutes-to-go” indicator showed nothing. Soviet Union premier Nikita Khrushchev was in New York, attending the General Assembly of the UN. Deputy General Slemon decided that Soviet Union was highly unlikely to attack the US while its leader was in New York. It was characteristic human reasoning. He also knew that the BMEWS had been operational for only four days and was still being ’run-in.’ So, no action was eventually taken. It was later found out that the powerful radars in Greenland, designed to detect objects upto 3,000 miles distant, were fooled by the reflections from the slow-rising moon over Norway. The BMEWS intepreted the radar echoes as sightings of multiple objects, and the engineers had left this phenomenon unaccounted for while designing the system.”
>
>
>
Since expert systems are deemed to emulate the decision-making ability of a human expert, and intelligent assistants work autonomously, human beings are increasingly being decoupled from the inner workings and implementations of such technology. In the case of the 1960 nuclear false alarm, human beings had remained in the loop, and common-sense had prevailed. With the increasing deployment of AI technology everywhere, it might not always be the case.
In his conception of the ‘risk society’, contrasting the nineteenth-century and the present, German sociologist Ulrich Beck\cite[p. 21]beck1992risk had noted, ‘‘hazards in those days assaulted the nose or the eyes and were thus perceptible to the senses, while the risk of civilization today escapes perception.” This idea, apart from its associations with nuclear technology and biotechnology, manifests itself in the domain of AI, and should be read in its context as well. The current AI technologies pose dangers of an entirely different kind - involving invisible contingencies, the seriousness of which layperson’s eyes cannot judge.555The Y2K problem at the end of the previous millennium was an important indication of the kind of risk advanced computing technologies entail. The millennium bug episode highlights both the dependence of the modern societies upon computing, as well as the difficulty of forming a judgment of risks posed by that dependence.
###
3.3 Trust
Siri is a personal assistant application for iOS. The application uses natural language processing to answer questions, make recommendations, and perform actions by delegating requests to an expanding set of web services. The software also supposedly adapts to the user’s individual preferences over time and personalizes results, as well as accomplishing tasks such as making dinner reservations and reserving a cab. It is a spin-out from the SRI International Artificial Intelligence Center, and is an offshoot of the DARPA-funded CALO project, described as the largest artificial-intelligence project ever launched. The project brings together experts in machine learning, natural language processing, and knowledge representation, human-computer interaction, flexible planning and behavioral studies. The CALO software learns by interacting with and being advised by its users and is meant to help users with military decision making task.
Once technical objects are stabilised, they become instruments of knowledge. As already noted the current AI technologies, like expert systems or intelligent assistants, portray an impression of intelligence which is somewhat misleading. The actual engineering of these systems is hidden from the user, and in most cases their knowledge and utility arises from connections to databases and systems which are external to the environment in which they are deployed. Thus, the knowledge these technical systems gather can be ‘exported’\citeakrich1992scription. To take a mild example - the Apple iOS assistant application of Siri, which is a by-product of AI research, draws its knowledge from the Cloud and Internet databases\citeCosta:2011:Online.
How does a user trust that the AI technology is working for his (her) welfare, and not some hidden agenda placed there, intentionally or inadvertently, by the programmer? How does the user trust that it will behave for his welfare, even in his absence, if the user doesn’t have detailed knowledge of the system’s underlying hardware and software limits, not to mention the subtle bugs. AI systems, being highly automated and self-sufficient (and having a deep, hidden architecture), raises trust issues which are qualitatively different from the ‘tool-like technologies’ of the past. As more such AI technologies make their way to the general public, the convenience of using these systems and the deceptive nature of their intelligence will tend to replace the concerns over privacy and security.

4 New Directions
-----------------
It has been shown that extraneous sociological factors have been responsible for the gradual deviation of AI research from its original objectives and its acceleration in one particular direction - which has probably led to the inadequacies presented by AI technologies of today. Basic areas like judgement under uncertainty, intuition, commonsense reasoning, inventiveness, non-linear thinking have lagged behind, and should be encouraged. Going forward, this paper presents the case for two complementary methodologies that help in addressing the concerns arising out of the technologies and research as it is today.
###
4.1 A Sociological View of Intelligence
Almost all of the earlier models on which AI and cognitive science rested assumed that intelligence is not socially constituted and not socially situated. In fact , the traditional AI techniques hinged on the following assumptions\citeRestivo:2001:
* Human mentality is a freestanding, individual, brain based phenomenon; and
* Human mentality is best understood in logical, linguistic, and rational terms
Most of the current AI technologies have resulted from the research grounded on such similar assumptions. But the success of these applications, especially of those in the form of intelligent assistants and expert systems, depends on the basis of how they perform in contexts and situations that are essential social and cultural in nature. Most of the inadequacies of these technologies highlighted above, result from their inability to handle practical social situations and uncertainties.
The idea that social and cultural factors are not only important but primary is not yet a widely appreciated or understood possibility. In the past, Joseph Weizenbaum has observed that intelligence manifests itself only relative to specific social and cultural contexts \citeclocksin2003artificial. There is a need to develop a sound sociological basis for Artificial Intelligence. Social and Cultural assumptions should be incorporated in the worldviews guiding the work in AI. Intelligence, thus, should be understood as how well an entity performs in social situations, in addition to the inner architecture and cognitive model that brings about that behavior. A Sociology of Artificial Intelligence will result in more robust technologies in the future, which are better equipped to handle real world scenarios.
###
4.2 Human-Augmented AI
When Doug Engelbart was creating early computer interfaces and mapping systems, he firmly maintained his belief that the machine was meant to be an augmenter, not prosthesis. J.C.R. Licklider\citelicklider1960man, the computer science pioneer who had a profound effect on the development of technology and the Internet, also had the vision of enabling man and machine to cooperate in making decisions and controlling complex scenarios together, instead of compromising the flexibility by being dependent on predetermined programs. Future research should focus on similar principles.
To get a flavour of the potential power of this concept, consider the 2005 freestyle chess tournament in which man and machine could enter together as partners, rather than adversaries. Initially, even a supercomputer was beaten by a grandmaster with a relatively weak laptop. But the real suprise which took everyone off-guard came at the end. The eventual winner was not a grandmaster with a supercomputer, but actually two American amateurs using three relatively weak laptops. Their ability to effectively use their computers to deeply explore specific positions effectively counteracted the superior chess knowledge of the grandmasters and the superior computational power of other adversaries\citeshyamshankharted. This astonishing result, of average men, average machines beating the best man, the best machine, strinkingly illustrates the strength of human-machine cooperation - that the right symbiosis can be much more powerful than the sum of individual parts.
The power of Man and Machine working together has also manifested in a totally different but very relevant setting - Protein Folding. There are more ways of folding a protein than there are atoms in the universe. This is a world-changing problem with huge implications for our ability to understand and treat diseases. Supercomputers have traditionally struggled in this area. When computer scientists at created Foldit, a game where non-technical, non-biologist amateurs visually rearrange the structure of the protein (allowing the computer to manage the atomic forces and interactions and identify structural issues) , it was found that the players beat supercomputers 50 percent of the time and tied 30 percent of the time. Recently, the structure of the Mason-Pfizer monkey virus, a protease that has eluded determination for over 10 years, was opened up to a group of online gamers (through Foldit), who competed to model the protein, with all the associated scores, points, and rankings of a game. The players finished the model within 10 days - a notable and major scientific discovery\citeshyamshankharted\citewoods:2012:Online.
Amazon partially taps into the principle of Human-Augment AI through a concept which they call ‘Artificial Artificial Intelligence’. The premise is simple: Since Humans outperform AI in many simple tasks (like recognizing faces or sorting patterns), so why not farm out computing tasks to people, instead of machines? AAI has been employed as part of The Amazon Mechanical Turk (MTurk), a crowdsourcing Internet marketplace that enables computer programmers (known as Requesters) to co-ordinate the use of human intelligence to perform tasks that computers are unable to do yet\citeBolt:2005:Online.
Current AI technologies lack common sense and symbolic reasoning, inventiveness, non-linear apparoaches, iterative hypotheses - aspects of intelligence which come naturally and easily to humans. On the other hand, machines have been better and ever improving in handling scale, volume and computation. Thus, an approach which harnesses the best of both worlds holds great potential. In completely manual and completely automated system, users and machines are effectively decoupled and too often, systems fail because they are not designed as a whole system with people and machines working in harmony. The power lies of Human-Augmented AI will lie in expressing the ‘black box’ and making it transparent. The human mind will enhance the machine’s solution by filling in the gaps.
A field of human intelligence - machine intelligence cooperation will thus help answer the concerns arising from the increasing separation of human beings from the inner workings and implementations of AI technology highlighted in the previous sections. This approach, traditionally under-appreciated and unexplored. For example, a language of machine-plus-human interaction has not yet been developed. There is a need to revaluate and reframe the conventional ‘Man vs Machine’ dichotomy towards a common ‘Man and Machine’ framework\citegupta2012. 666The early signals indicating the potential of Human-Augmented AI technology can be read in the Big Data landscape. The idea of a ‘Man and Machine framework’ to generate commercial insights has recently been pioneered by Opera Solutions, while software company Palantir Technologies takes on important real world problems (e.g. counterterrorism) using a concept called ‘Intelligence Augmentation’. Research oriented at technologies which involve both humans and machines performing intelligent tasks in the context of an integrated system should be encouraged. This would involve designs where both humans and machines have resposibilities, require access to resources, and have particular knowledge appropriate to tasks. Tasks maybe performed in parallel, or may require results or permission from the other. The field of Human Intelligence-Machine Intelligence cooperation would define roles to each as tools or assistants. Different types of knowledge would have to be distinguished: designers, end users, and maintenance people for instance. Humans would thus be designed in the process, and friction between man and machine, minimized.
5 Conclusions
--------------
The state of AI research and technology, as it stands today, is not without blemishes. Under the shining surface, it hides some fairly serious concerns and issues. Most of these risk and trust related issues arise from an imbalance resulting from the impact of extraneous sociological factors on AI research. The exploration of two paradigms - Sociological AI and Human-Augmented AI - will go a long way in addressing these concerns, and hold great potential as a research agenda for the future.
Besides this, it is essential that the funding agencies appreciate the importance of long-range basic research in AI. As Nils Nillson\citenilsson1983artificial observes, AI, perhaps together with molecular genetics, will be society’s predominant scientific endeavor for the rest of this century and well into the next - just as physics and chemistry predominated during the decades before and after 1900.
\printbibliography |
3d39c196-cf40-47da-af25-e5977cd4cf96 | trentmkelly/LessWrong-43k | LessWrong | Overcoming Clinginess in Impact Measures
> It may be possible to use the concept of a causal counterfactual (as formalized by Pearl) to separate some intended effects from some unintended ones. Roughly, "follow-on effects" could be de fined as those that are causally downstream from the achievement of the goal... With some additional work, perhaps it will be possible to use the causal structure of the system's world-model to select a policy that has the follow-on effects of the goal achievement but few other effects.
> Taylor et al., Alignment for Advanced Machine Learning Systems
In which I outline a solution to the clinginess problem and illustrate a potentially-fundamental trade-off between assumptions about the autonomy of humans and about the responsibility of an agent for its actions.
Consider two plans for ensuring that a cauldron is full of water:
* Filling the cauldron.
* Filling the cauldron and submerging the surrounding room.
All else equal, the latter plan does better in expectation, as there are fewer ways the cauldron might somehow become not-full (e.g., evaporation, and the minuscule loss of utility that would entail). However, the latter plan "changes" more "things" than we had in mind.
Undesirable maxima of an agent's utility function often seem to involve changing large swathes of the world. If we make "change" costly, that incentivizes the agent to search for low-impact solutions. If we are not certain of a seed AI's alignment, we may want to implement additional safeguards such as impact measures and off-switches.
I designed an impact measure called whitelisting - which, while overcoming certain weaknesses of past approaches, is yet vulnerable to
Clinginess
> An agent is clingy when it not only stops itself from having certain effects, but also stops you.
> ...
> Consider some outcome - say, the sparking of a small forest fire in California. At what point can we truly say we didn't start the fire?
* I immediately and visibly start the fire.
* I intentionally persuade |
2d6c9764-a1e6-4eb4-aef0-5149cc7fb421 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | An extended rocket alignment analogy
*This work was done while at* [*Conjecture*](https://conjecture.dev)*.*
*This post has been written for the first*[*Refine*](https://www.alignmentforum.org/posts/D7epkkJb3CqDTYgX9/refine-an-incubator-for-conceptual-alignment-research-bets)*blog post day, at the end of the week of readings, discussions, and exercises about epistemology for doing good conceptual research.*
*Thanks for comments by Linda Linsefors, Paul Bricman, and Adam Shimi.*
Introduction
------------
[The Rocket Alignment Problem](https://intelligence.org/2018/10/03/rocket-alignment/)presents a fun scenario trying to explain why understanding part of the theoretical basis for a problem might be useful. If we were trying to get to the moon for the first time, understanding Newtonian mechanics would indeed be quite useful! As would astrodynamics in general, which was really developed starting with [Herrick](https://en.wikipedia.org/wiki/Samuel_Herrick_(astronomer))in the 1930s. But not sufficient: there were a whole host of other problems that needed to be solved, many of which were at least as difficult as theoretical understanding. At a minimum, we needed to develop rocket fuel, the materials that could survive high temperatures from fuel and reentry, enough understanding of the materials to create models of the rocket themselves and the stress and heat individual parts were under, communication devices so we knew what our rockets were doing, and more. I’ll look at the first two in more detail here then step back and look at the problem as a whole at different points in time. Beyond simply pointing out the rocket alignment problem to show how messy it truly was, pushing deeper into the analogy may let us see what we have developed, and what we expect we need to develop.
Rocket Fuels and the Rocket Body
--------------------------------
Developing rocket fuel powerful enough to lift the rocket but stable enough that it didn’t just immediately explode required quite a bit of chemistry, engineering, and ridiculously dangerous trials. Consideration of this problem directly applied to rocketry, building off of the chemical achievements of the previous century, started in the early 1900s and continued through the development of the rocket program.
A Russian school teacher, Tsiolkovsky, first proposed using liquid fuels, such as liquid hydrogen, paired with liquid oxygen for the oxidizer, recognizing that standard fuels or gunpowder didn’t provide enough energy. He was mostly not noticed, and [Goddard’s](https://en.wikipedia.org/wiki/Robert_H._Goddard)work on building liquid fuel powered rockets was mostly ignored as well. In the late 20s and 30s, more mainstream efforts pick up across Europe, using both new liquid fuels and oxidizers such as nitrogen tetrooxide or tetranitromethane,, the latter which had a tendency to blow up and take off a few fingers with it. More unstable still were monopropellants, which contained both fuel and the oxidizer in the same molecule and would react with themselves with the right catalyst or temperature, but if they have enough energy to compete with standard propellants, had the unnerving tendency to randomly blow up, which led to them mostly being abandoned. These humble beginnings led to a long and generally productive investigation into fuels that were powerful enough to propel a rocket to space, but not unstable enough to propel various bits of the rocket across the launch pad. This whole process required both theoretical knowledge of chemistry to help guide the search process, and tinkering with fuels and oxidizers to see what worked, with various tangents such as looking at monopropellants not really panning out.
Creating a rocket that could survive this was equally difficult. The range of temperatures that specific parts were expected to be under was extreme, from -250F (-155 C) to 3,000F (1,650C). They had other requirements they needed to meet as well: “It was necessary to design thousands of these tiles that had compound curves, interfaced with thermal barriers and hatches, and had penetrations for instrumentation and structural access.” Developing materials that were lightweight but capable of withstanding these conditions was a hard problem, requiring the development of new materials such as [reinforced carbon-carbon](https://en.wikipedia.org/wiki/Reinforced_carbon%E2%80%93carbon), as well as stress testing both individual components and the whole wing under somewhat realistic conditions.
What we needed throughout time
------------------------------
Taking a step back, these problems were downstream of much of the development of science and technology since the start of the industrial revolution. From Newtonian mechanics to advances in understanding orbital mechanics, from advances in chemistry and material science to the development of precision machinery. In the 1930, much of the base was in place, but anything specific to rocketry was in its infancy. We knew about Newtonian mechanics, but nobody had worked out specifically how to get a rocket to the moon. We had some idea of the types of chemicals that would be useful in rockets, but had not worked out what would give the best mix of stability and energy, and had not yet developed [hypergolic fuels](https://en.wikipedia.org/wiki/Hypergolic_propellant). Much of the basis in material science was in place, such as understanding strong, light metals like aluminum and titanium, but not the specific technologies that would heat shield the rockets.
At the turn of the century, much of the technological and scientific basis was in place, but not all of it: our understanding of high volatility chemistry was still in its infancy, with the basis of liquid fuels such as liquid oxygen or hydrogen having been developed shortly before they were proposed as rocket fuels. Titanium was known to exist, but there didn’t exist any process for isolating it, the ability to create large amounts of titanium outside of the laboratory was developed in the 1930s. Mass production of aluminum was discovered a decade before, but the reinforced carbon-carbon that coats the most exposed points of the space shuttle was developed for the space program.
Going back further to 1800, perhaps we’d have a vague idea that, if we had to go to the moon, an improved understanding of orbital mechanics, of chemistry, of material science, would be useful. I don’t think our ideas would be all that precise; yelling at people who are working on trying to isolate and create frozen chemicals that they weren’t directly working on getting to the moon would have been a mistake, as would stopping all work of theoretical considerations of astrodynamics.
Leaving the industrial and scientific revolutions behind, if we are anywhere in the world in 1400 and must reach the moon, things are even more uncertain. Perhaps we could look at the nascent development of cannons, but that will not get you to the moon, we are missing so much. The mathematics of 1400 are inadequate, as are Aristotelian physics, chemistry, material sciences, precision engineering; roughly the entire techno-scientific apparatus is missing. The development of science and technology before the scientific and industrial revolutions is too unsystematic, too fragmentary to be able to solve this problem. We need to invent the systematic search for science and technology first.
Wrapping Up
-----------
Just how this maps to alignment depends on what we are missing, and what we already downstream of. We have some of the basis, but are missing so much. I think similar to rocketry, trying to solve theoretical alignment problems are reasonable, so too are engineering problems and technoscience-y questions surrounding neural networks that seem potentially upstream of an alignment solution. And to the extent our science isn’t capable of handling the [cluster of problems we face in alignment](https://www.alignmentforum.org/posts/72scWeZRta2ApsKja/epistemological-vigilance-for-alignment), we need to create a more powerful science that can handle it.
Sources:
Random stuff from memory
For fuel, the wonderfully entertaining [Ignition!](http://www.sciencemadness.org/library/books/ignition.pdf)
NASA has stuff on what material sciences were necessary: both [summaries](https://www.nasa.gov/centers/johnson/pdf/584728main_Wings-ch4b-pgs182-199.pdf)and [initial documents](https://www.grc.nasa.gov/www/k-12/rocket/rockpart.html).
For broader questions on material science I’ve read parts of [Out of the Crystal Maze](https://www.amazon.fr/Out-Crystal-Maze-Chapters-Solid-State/dp/019505329X) and [Understanding Materials Science: History, Properties, Applications](https://www.amazon.fr/Understanding-Materials-Science-Properties-Applications-ebook/dp/B000QECINC). |
85043757-0895-4b2f-8b42-deadc864f209 | trentmkelly/LessWrong-43k | LessWrong | AI as a resolution to the Fermi Paradox.
The Fermi paradox has been discussed here a lot, and many times it has been argued that AI cannot be the great filter, because we would observe the paperclipping of the AI just as much as we would observe alien civilizations. I don't think we should totally rule this out though.
It may very well be the case that most unfriendly AI are unstable in various ways. For instance imagine an AI that has a utility function that changes whenever it looks at it. Or an AI that can't resolve the ontological crisis and so fails when it learns more about the world. Or maybe an AI that has a utility function that contradicts itself. There seem to be lots of ways that an AI can have bugs other than simply having goals that aren't aligned with our values.
Of course most of these AI would simply crash, or flop around and not do anything. A small subset of them might foom and stabilize as it does so. Most AI developers would try to move their AI from the former to the latter, and in doing so may pass through a space of AI that can foom to a significant degree without totally stabilizing. Such an AI might become very powerful, but exhibit "insane" behaviors that cause it to destroy itself and its parent civilization.
It might seem unlikely that an "insane" AI could manage to foom, but remember that we ourselves are examples of systems that can use general reasoning to gain power while still having serious flaws.
This would prevent us from observing either alien civilizations or paperclipping, and is appealing as a solution to the Fermi paradox because any advancing civilization would likely begin developing AI. Other threats that could arise after the emergence of civilization probably require the civilization to exhibit behaviors that not all civilizations would. Just because we threatened each other with nuclear annihilation doesn't mean all civilizations would, and it only takes one exception. But AI development is a natural step in the path of progress and very tricky. No matter |
9deb71ec-fa9e-4b4d-98e8-d2348328af2b | trentmkelly/LessWrong-43k | LessWrong | UDT as a Nash Equilibrium
I realized today that UDT doesn't really need the assumption that other players use UDT. In any game where all players have the same utility function, "everyone using UDT" is a Nash equilibrium that gives everyone their highest possible expected utility. So you can just use it unilaterally.
That covers such cases as Absent-Minded Driver, Psy-Kosh's problem, Wei's coordination problem. Notably it doesn't cover the Prisoner's Dilemma, because the players assign different utilities to the same outcome.
(Also this shows how Von Neumann-Morgenstern expected utility maximization is basically a restriction of UDT to single player games with perfect recall. For imperfect recall (AMD) or multiple players (Psy-Kosh) you need the full version.)
You can actually push game theory a bit further, and allow games where players have different utility functions, as long as some subset of players can jointly enforce their highest possible expected utilities. UDT doesn't support that kind of decision-making, but really it should be a no-brainer too...? |
ac151a1c-4e49-4185-902a-a687180d293d | trentmkelly/LessWrong-43k | LessWrong | Motivation in AI
Although the AI danger battle is raging, with the panicking crowd on one side of the room and the aloof non-alarmists on the other, there is very little serious reflection about the roots of our concerns. So far, AIs like GPT-4 are only capable of receiving commands – which we call “prompts” – but can’t do anything on their own. In other words, like teenagers, AIs are suffering from an acute case of lack of motivation.
But what is motivation? Tentatively, we might say that it’s the desire to do things out of one’s own volition. Motivation is not necessarily a positive feeling, since it can arise out of extenuating circumstances, like being coerced to carry out an undesired action by threat of force or termination. Motivation is simply that pulse towards doing things due to reasons. As far as we can tell, AIs have no such pulses, except when we manually introduce them. Prompts are, therefore, AI’s only source of motivation.
Looking at our evolutionary history, we can ask when the attractions and repulsions of our protozoic ancestors became the intricate nervous machinery that now fascinates us. At one point, we must’ve begun having sensations associated with that which is dangerous and rejectable, and that which is beneficial and welcome. This is why I believe that in order to develop an analogous limbic system in an AI system, one would have to do two things: 1) put it in an environment where it’s forced to survive and 2) constrain its range of actions so that it must prioritize. In other words, make AI manage something (preferably itself in a hostile environment), tell it to make decisions that ensure its survival, and let it learn which stimuli, both internal and from the environment, are good or bad for its purposes. I wager that, in time, the AI will develop strategies and design decision making systems that will resemble our affective responses, especially when it comes to high priority signals that need to be dealt with rapidly and effectively. This way, w |
52ff05f4-9c57-4609-9221-cbc0b1dc2eef | trentmkelly/LessWrong-43k | LessWrong | Potential Impacts of Climate Change
According to a recent press release from UC Berkeley’s School of Law:
> Overheated: The Human Cost of Climate Change predicts a grim future for billions of people in this century. It is a factual account of a staggering toll, based on hard data […] “Climate change is the most important problem facing the international community in the 21st century,” Guzman said.
Guzman's view is shared by many.
While I have not read Guzman’s book, I have read GiveWell’s summary of the IPCC, as well as notes on GiveWell’s conversations with climate change experts. Based on these, I’ve come to the tentative conclusion that while climate change is an important issue, it’s unlikely to be the most important issue, though there is uncertainty, owing to poorly understood tail risk.
Potential Impacts according to the IPCC
GiveWell recently wrote up a summary and review of some of the impacts of unmitigated climate change, as described by the Intergovernmental Panel on Climate Change's 2007 Fourth Assessment Report. GiveWell writes:
> The report suggests that unmitigated climate change would have extraordinarily negative humanitarian impacts across all of the outcomes we looked at: hunger, water stress, flooding, extreme weather, health, biodiversity, and the economy. Successfully mitigating these negative impacts would carry vast humanitarian benefits.
>
> When looking at the range of possible futures outlined in the report, the bulk of the variation (in humanitarian terms) comes from variation in the level of assumed economic growth and adaptation, rather than variation in the amount of climate change. Of the outcomes we examined, only biodiversity is expected to be unambiguously worse off in the future as a result of both climate change and economic growth.
The most succinct summary of the expected impact is given by a GDP drop estimate:
> Most recently, Stern (2007) took account of a full range of both impacts and possible outcomes. […] Using equity weights to reflect the e |
e38dbe36-abb1-4157-80a8-26634cb3ea4d | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | AI safety field-building survey: Talent needs, infrastructure needs, and relationship to EA
**In which:**Organizers from Meta Coordination Forum 2023 summarize survey responses from 19 AI safety experts.
### About the survey
The organizers of [Meta Coordination Forum 2023](https://forum.effectivealtruism.org/posts/33o5jbe3WjPriGyAR/announcing-the-meta-coordination-forum-2023) sent a survey to 54 AI safety (AIS) experts who work at organizations like Open AI, DeepMind, FAR AI, Open Philanthropy, Rethink Priorities, MIRI, Redwood, and GovAI to solicit input on the state of AI safety field-building and how it relates to EA community-building.
The survey focused on the following questions:
* What talent is AI safety most bottlenecked by?
* How should EA and AI safety relate to each other?
* What infrastructure does AI safety need?
All experts were concerned about catastrophic risks from advanced AI systems, but they varied in terms of how much they worked on technical or governance solutions. Nineteen experts (n = 19) responded.
The survey and analyses were conducted by CEA staff Michel Justen, Ollie Base, and Angelina Li with input from other MCF organizers and invitees.
### Important caveats
* **Beware of selection effects and a small sample size:** Results and averages are better interpreted as “what the AI safety experts who are well-known to experienced CEA employees think” rather than “what all people in AIS think.” The sample size is also small, so we think readers probably shouldn’t think that the responses from this sample are representative of a broader population.
* **Technical and governance experts' answers are grouped together unless otherwise indicated.**Eleven respondents (n = 11) reported focusing on governance and strategy work, and eleven (n = 11) reported focusing on technical work. (Three people reported working on both technical and governance work.) This section compares the two groups' responses.
* **These are respondents' own views.** These responses do not necessarily reflect the views of any of the organizations listed above.
* **People filled in this survey quickly**, so the views expressed here should not be considered as respondents' most well-considered views on the topic.
* Summaries were written by the event organizers, not by an LLM.
What talent is AI safety most bottlenecked by?
==============================================
### Section summary
Respondents wanted to see more talent from a wide range of areas. Many called for more legible expertise and seniority, including policymakers (esp. those with technical expertise), senior ML researchers, and information security experts. When polled, respondents thought that information security experts were the group the field most needed more of, followed by academics in ML or computer science and promising ML PhD students.
### For the groups below, would you like to see more (or less) AI safety outreach, on the margin?[[1]](#fnsf4nifv0ak)
**Question description**
> *Assume the outreach work is high quality and sensibly tailored to the specific group, but take into account the tractability of the outreach (i.e. outreach will be less tractable for hard-to-reach groups).*
>
> *Please try to have four or fewer answers in the "Much more" column.*
>
>


How should EA and AI safety relate to each other?
=================================================
### Section summary
* **Sprint time?:**Respondents expressed a wide range of views on whether the AIS field was in “sprint mode”. ~5 respondents mentioned that the next few months were a high-leverage time for policy interventions, ~4 thought the next 1–2 years were a high-leverage time for the AIS field broadly, but ~8 favoured pacing ourselves because leverage will remain high for 5–10 years.
* **% of EA resources that should go to AI:** Respondents also expressed a wide range of views on what percent of the EA community’s resources should be devoted to AI safety over the next 1–2 years. Respondents who directly answered the question gave answers between ~20% and ~100%, with a wide spread of answers within that range. (Example responses below).
* **Advice for EA orgs to contribute to AI safety: A majority of respondents agreed that most EA-branded orgs that want to contribute to AI safety should drop the EA brand.**
* **Disagreements in ideas to promote:** There was disagreement on whether EA community builders should promote AI safety ideas more than other EA ideas, and whether AIS field-builders should try to make some EA principles a core part of the AIS field.
* **Disagreements in overlap between EA and AIS infrastructure**: Respondents were uncertain whether there should be separate AI safety office spaces in EA hubs and whether there should be separate EA and AI safety student groups at the top 100 universities, but a majority agreed that there should be separate EA and AI safety student groups at the top 20 universities.
* **Effect of engaging with EA on quality of AIS work:** A majority of respondents thought that engaging with EA generally makes people’s AI safety work better (via e.g. good epistemics, taking risks seriously and sensible prioritization)but they mentioned several ways in which it can have a negative influence (via bad associations and directing promising researchers to abstract or unproductive work). These results should be taken with a grain of salt given the selection effects of who received this survey and who responded (e.g., CEA are less likely to know experts who think that engaging with EA make’s people’s AIS work worse).
* **Perceptions of EA in AIS:** Respondents had mixed views on how non-EAs doing important AI safety work view EA, with an even split between favourably and unfavourably, and several neutral answers.
### Do you think the next 1-2 years is an especially high-leverage time for AI safety, such that we should be operating in sprint mode (e.g., spending down resources quickly) or will leverage remain similarly high or higher for longer, e.g. 5-10 years, such that we need to pace ourselves?
**Question Description:**
> *Reasons to operate in sprint mode might be short timelines, political will being likely to fall, or policy and public opinion soon being on a set path.*
>
> *We understand that this is a hard question, but we’re still interested in your best guess.*
>
>
**Question summary:**
**Respondents expressed a wide range of views in response to this question. ~5** respondents mentioned that the next few months were a high-leverage time for policy interventions, **~5** thought the next 1–2 years were a high-leverage time for the AIS field broadly, but **~8** favoured pacing ourselves because leverage will remain high for 5–10 years.
Reasons for sprinting included short timelines, the likelihood that more resources will be available in future, the opportunity to set terms for discussion, the UK summit, the US election, and building influence in the field now which will be useful later. Reasons for pacing ourselves included slow take-off, the likelihood of future policy windows, the immaturity of the AI safety field, and interventions becoming more tractable over time.
### What percent of the EA community’s resources (i.e. talent and money) do you think should be devoted to AI safety over the next 1-2 years?
**Question description:**
> *Assume for the sake of this question that 30% of the EA community’s resources are currently devoted to AI safety, and future EA resources wouldn't need to be spent by organisations/talent closely affiliated with EA brand.*
>
>
> *Leaders in the EA community are asking ourselves how much they should go ‘all in’ on AI safety versus sticking with the classic cause-area portfolio approach. Though you’re probably not an expert on some of the variables that feed into this question (e.g., the relative importance of other existential risks, etc.), you know more about AI safety than most EA leaders and have a valuable perspective on how urgently AI safety needs more resources.*
>
>
**Question summary:**
Everyone unanimously agreed that EA should devote exactly 55% of resources to AI safety and respondents cooperatively outlined a detailed spending plan, almost single-handedly resolving the EA community’s strategic uncertainty. Incredible stuff…
Unfortunately not. In fact, **respondents expressed a wide range of views in response to this question and many expressed strong uncertainty.**Respondents who directly answered the question gave answers between ~20% and ~100%, with a wide spread of answers within that range. Two respondents mentioned that a substantial portion of quality-adjusted EA talent is already going towards AI safety. A few mentioned uncertainty that marginal additional spending in the space is currently effective compared to other opportunities. Two mentioned concern about the brand association between EA and AIS if EA moved closer to all-in on AIS.
Examples responses that illustrate the wide range of views include:
* “I expect more than 30% of resources to be the right call. 50% seems reasonable to me.”
* “At least 50% of the top-end movement building talent should be focused on AI safety over the next 1-2 years.”
* “Unclear how much EA could usefully spend given my very-short-timelines views, non-fungibility of talents and interests between causes, etc. And it's not like all the other problems have become less important! I'd look to spend as much on robustly-very-positive AI Safety as I could though, and can imagine anywhere between "somewhat less than at present" to "everything" matching that.”
* “Depends what we'd do with the resources. I'm not very excited about our current marginal use of resources, so that suggests 25%? On the other hand I'm not sure if there are better ways to use money (either now or in the future), so maybe I should actually be higher than 30%.”
### Agreement voting on statements on the relationship between EA and AI safety
**Prompt for statements in this section:**
> *How much do you agree with the following statements? Assume that these answers apply to the next 1–2 years.*
>
>


### Among equally talented people interested in working on AI safety, do you think engaging with EA generally makes their AI safety work better or worse?
**Question description:**
> *“Better work” here means work that you think has a greater likelihood of reducing catastrophic risks from AI than other work.*
>
> *“Engaging with EA” means things like attending EA Global or being active in an EA student group.*
>
>
**Summary stats:** n = 14; Mean = 4.9; SD = 1.0

### Do you have any comments on the relationship between engaging with EA and the quality of AI safety work?
**Question Summary:**
Respondents indicated that EA has abroadly positive influence on AIS, but mentioned several ways in which it can have a negative influence. ~5respondents said that, for most people, engaging in a sustained way with EA thinking and the EA community made their AIS work better via e.g. good epistemics, taking risks seriously and sensible prioritization. The ways EA can have a negative influence included bad associations (e.g. cultiness) and directing promising researchers to abstract or unproductive work.
### In your experience, how do non-EAs doing important AI safety work view EA?
**Summary stats**: n = 13; Mean = 4.0; SD = 1.0

What infrastructure does AI safety need?
========================================
*We prompted people to propose capacity-building infrastructure for the field of AI safety with a variety of questions like “What additional field-building projects do you think are most needed right now to decrease the likelihood of catastrophic risks from AI?” and “What is going badly in the field of AI safety? Why are projects failing or not progressing as well as you'd like?”*
***Note that not all respondents necessarily agree with all the suggestions raised**. To the contrary, we expect that there will likely be disagreement about many of the suggestions raised. The survey asked people to suggest interventions but did not ask them to evaluate the suggestions of others, so the output is more like a brainstorm and less like a systematic evaluation of options.*
### **Section summary**
* **Respondents suggested a wide range of potential field-building projects;** they mentioned outreach programs aimed at more senior professionals, a journal or better alignment forum, events and conferences, explainers, educational courses, a media push, and office spaces.
* **For the field of technical AI safety**, respondents mentioned wanting to see senior researchers and executors, good articulations of how a research direction translates into x-risk reduction, evals and auditing capacity, demonstrations of alignment failures, proof-of-concept for alignment MVPs, better models of mesa-optimization / emergent goals. Two respondents also mentioned the importance of interventions that buy time for the field of technical alignment.
* **For the field of AI governance and strategy**, respondents mentioned wanting to see more compute governance, auditing regulation, people with communication skills, people with a strong network, people with technical credentials, concrete policy proposals, a ban or pause on AGI and a ban on open source LLMs.
* **What’s going badly:** When asked what’s going badly in AI safety, respondents mentioned the amount of unhelpful research, not enough progress on evals, the popularity of open source, a lack of people with general/technical impressiveness with a strong understanding of the risks, the lack of any framework for assessing alignment plans, the difficulty of interpretability, the speed of lab progress, the slow pace of Congress, too much funding, not enough time, not enough awareness of technical difficulties, a focus on persistent, mind-like systems instead of a constellation of models and the absence of workable plans.
* **How EA meta orgs can help:** When asked how existing EA meta orgs (e.g. CEA, 80k) can contribute, respondents had mixed views. Two were happy with what these orgs were currently doing, provided they avoided associating AIS too strongly with EA. One thought they weren’t well placed to help, another encouraged them to gain a stronger understanding of the field in order to find ways to contribute.
How do opinions of governance vs technical focused respondents compare?
=======================================================================
In the summaries above, we don’t segment any responses by whether the respondent was primarily focused on AI governance or technical work.
**For this section, we categorised respondents as “governance focused” and / or “technical focused” based on their reported type of work, and evaluated how these groups differed.**
* Note that these cohort based analyses take a very small response size and chop it up into **even smaller segments** — we think there’s a reasonable chance the differences in responses across fields we’ve found are not that meaningful.
In distinguishing these categories, we relied on people’s self-reports. Namely, their responses to the ‘select all that apply’ question on “Which of the below best describes your work?”
* If someone checked any of the following technical categories and none of the governance categories below we marked them as ‘**technical focused**’: Technical AI safety research, Technical AI safety field-building, Technical AI safety grantmaking
* If someone checked any of the following governance categories and none of technical categories, we marked them as ‘**governance focused**’: AI governance grantmaking, AI governance research, AI governance field-building
* If someone checked both technical and governance categories, we marked them as **‘both**’
Eleven survey (n = 11) respondents reported that they focused on governance & strategy work, and eleven survey respondents (n = 11) reported that they focused on technical work. Three respondents reported focusing on both.
### Governance vs technical focused respondents: Differences in views on talent needs

* In general there was fairly high agreement on what the top 1-3 groups to prioritise were.
* Governance-focused respondents were most excited by:
+ (1) Information security experts
+ (2) Current policymakers
+ (Tied for 3) Promising ML PhD students + graduate policy students.
* Technical-focused respondents were most excited by:
+ (Tied for 1) Information security experts + promising ML PhD students
+ (Tied for 3) Current policymakers + academics in ML / computer science.
* Governance focused respondents were slightly more excited about recruiting top female UK/US students and professionals, and slightly less excited about recruiting academics in philosophy (although the standard deviation on the latter is extremely wide).
### Governance vs technical focused respondents: Differences in views on relationship between EA and AI safety

* Governance-focused respondents felt most strongly that:
+ (1) AI safety work should, like global health and animal welfare, should include many more people who are not into EA
- Governance-focused respondents agreed with this statement much more than technical-focused respondents.
+ (2) Most EA outreach should include AI safety as a cause area
+ (3) Most AI safety outreach shouldn’t assume an EA framework
* Technical-focused respondents felt most strongly that:
+ (1) Most EA outreach should include AI safety as a cause area
+ (Tied for 3):
- AI safety work should, like global health and animal welfare, should include many more people who are not into EA
- Most EA-branded orgs that want to contribute to AI safety should drop the EA brand.
1. **[^](#fnrefsf4nifv0ak)** When we said "less" in this question, we meant "on net, less, because I'd want the effort being spent on this to be spent elsewhere", not “less, because I think this outreach is harmful.” But some respondents may have interpreted it as the latter. |
ec784f4c-e026-49e6-9d25-1cc2cddcd8a4 | trentmkelly/LessWrong-43k | LessWrong | Asymptotic Logical Uncertainty: Uniform Coherence 2
Part of the Asymptotic Logical Uncertainty series. A couple weeks ago, I proposed a definition of uniform coherence. I decided to change the definition so that it applies to a more general context. This post is a self contained definition of uniform coherence designed so that people can start working on the problem with no background other than this post.
Every Turing machine discussed in this post will have two infinite blank tapes to work with in addition to an infinite write only output tape. There will be a special character # for the output tape which will separate the output tape into chunks.
A sentence enumerator is a machine which outputs an infinite number of # symbols, where between each pair of adjacent # symbols is a binary string which encodes a logical sentence ϕ. If M is a sentence enumerator, we let M(t) denote the sentence written between the two most recent # symbols at time t. If M has not yet written two # symbols by time t, we say that M(t)=⊥.
Similarly, a logical predictor is a machine which outputs an infinite number of # symbols, where between each pair of adjacent # symbols is a binary string which encodes a finite sequence of ordered pairs, with each ordered pair containing a logical sentence ϕ and a rational number 0≤p≤1. If L is a logical predictor, we let L(ϕ,t) denote the rational number p such that when L outputs its tth # symbol, (ϕ,p) has been output between more recently than any other pair of the form (ϕ,p′). If L has not yet output any pair of the form (ϕ,p′) by time it outputs t # symbols, we say that L(ϕ,t)=12.
We say that a logical predictor L is uniformly coherent if the following three axioms hold:
1. limt→∞L(⊥,t)=0.
2. If M is a sentence enumerator such that for all n the sentence M(n)→M(n+1) is provable, then limt→∞L(M(t),t) exists.
3. If M1, M2, and M3 are three sentence enumerators such that for all n sufficiently large, the sentence "Exactly one of M1(n), M2(n), and M3(n) is true" is provable, then limt→∞L(M1(t |
fefd6d16-275b-4b98-9a96-9779b2958d2e | trentmkelly/LessWrong-43k | LessWrong | Prereq: Cognitive Fusion
In a post by Kaj Sotala, he introduces the very useful idea of cognitive fusion.
> Cognitive fusion is a term from Acceptance and Commitment Therapy (ACT), which refers to a person “fusing together” with the content of a thought or emotion, so that the content is experienced as an objective fact about the world rather than as a mental construct. The most obvious example of this might be if you get really upset with someone else and become convinced that something was all their fault (even if you had actually done something blameworthy too).
> In this example, your anger isn’t letting you see clearly, and you can’t step back from your anger to question it, because you have become “fused together” with it and experience everything in terms of the anger’s internal logic.
You can become fused to an emotion, a voice in your head, a political view, and experience it to "just be true". I see this as a similar sort of fusion I hear musician talk about, where after years of practice their instrument begin to feel like a part of their body. They aren't "using their index finger to press the black note on a piano" they are "just playing G". This is analogous to being so caught up in your own anger that your partner is "just wrong and terrible" as opposed to "it sorta looks like you intentionally did something to annoy me and I'm worried about if you'll do this again in the future." (or whatever the actual case is)
Sometimes I think of there being a general fusion process where the brain collapses levels of inference. All of the steps that go into a given physical motion or thought process get compressed into a single dot. The thought process will be experienced as "just true" and the physical motion will be experience as an atomic action available to you. Sometimes you can "uncompress" the chain, and sometimes you can't.
Problems can arise when you fuse to a thought or emotion that doesn't have an accurate view of the world, and you unknowingly take it's broken map as the |
fc865b8c-2a20-4851-846a-05179bbd3910 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?
1 Introduction
---------------
Systems that are built upon Artificial Intelligence (AI) techniques are often classified into two categories:
(i) systems that automatically *learn from data* and
(ii) systems based on explicitly *encoded domain knowledge* and automated inference services.
Knowledge-based software systems are typical representatives of the latter form of AI with a number of successful applications in various domains such as planning and scheduling,
medical advice-giving systems, product configuration, or recommender systems [ModernApproach](#bib.bib2) ; [DLHandbook](#bib.bib3) ; [pinedo2016scheduling](#bib.bib4) ; [felfernig2014knowledge](#bib.bib5) ; [Jannach2010](#bib.bib6) .
The correctness of the decisions and suggestions made by a knowledge-based system depends directly on the ability of an expert to formulate and maintain a knowledge base (KB) that describes the application domain.
Both knowledge formalization and maintenance can be challenging due to
(i) the cognitive complexity of the task and
(ii) the size and complexity of the resulting knowledge base—e.g., biomedical ontologies as found on BioPortal222<http://bioportal.bioontology.org> sometimes contain thousands of axioms.
The results reported, e.g., in [Ceraso71](#bib.bib7) ; [Johnson1999](#bib.bib8) ; [Rector2004](#bib.bib9) ; [Roussey2009](#bib.bib10) suggest that people often make mistakes when writing or interpreting logical sentences.
Furthermore, in some cases, knowledge bases are constructed in a collaborative manner by multiple contributors, which is another potential source of faults [Noy2006a](#bib.bib11) ; [ji2009radon](#bib.bib12) ; [meilicke2011](#bib.bib13) .
Overall, given that unintended or contradictory specifications are likely to occur in such knowledge bases, it is essential to provide experts with appropriate tools for fault detection, localization, and repair.
Over the last decades researchers suggested different techniques and implemented a number of assistive tools for these tasks.
Many of these techniques are based on the principles of model-based diagnosis (MBD) [Reiter87](#bib.bib14) , which is a versatile fault localization method with a range of applications, e.g., in the context of electronic circuits, declarative programs, knowledge bases and ontologies, workflow specifications, as well as programs written in domain-specific and general-purpose languages [paper:felfernig:2004](#bib.bib15) ; [DBLP:conf/aadebug/MateisSWW00](#bib.bib16) ; [DBLP:conf/semweb/FriedrichS05](#bib.bib17) ; [JannachSchmitz2014](#bib.bib18) ; [Friedrich1999](#bib.bib19) ; [10.1007/BFb0019402](#bib.bib20) ; [Rodler2015phd](#bib.bib21) ; [dekleer1987](#bib.bib22) .
In the context of knowledge base debugging, MBD techniques are applied when a knowledge base does not fulfill some basic requirements, e.g., when it is inconsistent in itself or when test cases indicate a failure, i.e., an unexpected output.
In the usual MBD problem formulation, test cases are logical sentences that the
intended
knowledge base must (or must not) entail. The output of an MBD tool is a collection of diagnoses, where each *diagnosis* corresponds to a set of assumedly faulty parts of the knowledge base. Users of the debugger, such as experts or knowledge engineers, can then investigate one diagnosis after another and inspect the involved components to see if they are faulty or not.
Unfortunately, the number of diagnoses can in some cases be large, e.g., because the information provided by the test cases is insufficient and does not allow the debugger to isolate the true cause of the observed failure.
In such cases, already early works suggested asking an expert to provide additional information to narrow down the set of possible fault locations.
For example, in the traditional application domain of MBD techniques—electronic circuits—users of a diagnosis system are asked to make additional measurements that give some indication of the health state of certain components [dekleer1987](#bib.bib22) .
In more recent years, different algorithms for *sequential* (or: interactive) diagnosis of knowledge-based systems were proposed [meilicke2011](#bib.bib13) ; [DBLP:conf/ecai/ShchekotykhinFRF14](#bib.bib23) ; [Rodler2011](#bib.bib24) ; [Shchekotykhin2012](#bib.bib25) .
Debuggers of this type interactively ask users to provide feedback about the correctness of parts of the knowledge base or certain inferences.
One concrete implementation of such a debugger is *OntoDebug*
[DBLP:conf/foiks/SchekotihinRS18](#bib.bib26) ; [DBLP:conf/icbo/SchekotihinRSHT18a](#bib.bib27) , a plug-in for the *Protégé* ontology editor [noy2003protege](#bib.bib28) .
Compared to approaches that solely rely on test cases, the main advantage of such *query-based* techniques is that they can interactively guide their users to the true cause of the observed problem.
In addition, if users always provide correct answers to the debugger’s questions, then query-based diagnosis techniques can guarantee the identification of the true fault location.
The evaluation of sequential diagnosis techniques is usually based on simulations designed to measure, for instance, the time needed to derive the best next query to the expert or the total number of required queries to isolate a fault.
Such measures can however have certain limitations when assessing the true usefulness of a debugging approach.
In the domain of software engineering, the practical relevance of results obtained with the help of simulation-based evaluations of debugging tools was previously questioned by Parnin and Orso in [Parnin:2011:ADT:2001420.2001445](#bib.bib29) .
In recent years, a number of user studies were therefore conducted
that directly assess the usefulness of different academic approaches to tool-supported testing and debugging in the context of software engineering [DBLP:conf/euromicro/RamlerWS12](#bib.bib30) ; [DBLP:conf/issta/StaatsHKR12](#bib.bib31) ; [DBLP:conf/issta/FraserSMAP13](#bib.bib32) .
With the present paper, we continue this line of research.
Specifically, our goal was to assess the usefulness of query-based approaches for knowledge base debugging.
We correspondingly conducted *laboratory* studies in the form of testing and debugging exercises that were specifically designed to evaluate if query-based debugging is truly favorable over a previous debugging approach based on test cases.
Our corresponding research questions are therefore related to
(i) the efficiency and effectiveness of query-based debugging (i.e., do experts need less time, do they find more faults?),
(ii) the cognitive ability of users to find out which of the returned diagnoses is the correct one, and
(iii) the difficulty of answering system-generated queries for experts.
Among other aspects, our results
indicate that a query-based approach
can make the debugging process more efficient, without leading to a loss in effectiveness.
Furthermore, our experiments and previous studies show that experts sometimes provide wrong answers to the questions of a debugger (“oracle errors”). We therefore conducted additional pen-and-paper studies to develop and validate a prediction model that can be used to estimate the probability of oracle errors based on the cognitive complexity of a query or a test case.
The paper is organized as follows. After discussing previous works in Section [2](#S2 "2 Related Work ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal."), we provide the technical background on MBD-based knowledge base debugging in Section [3](#S3 "3 Background: Knowledge Base Debugging with MBD ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal."). Section [4](#S4 "4 Research Questions ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.") presents the detailed research questions of our work, and Section [5](#S5 "5 Study 1: Investigating MBD-debugging With Test Cases ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.") and Section [6](#S6 "6 Study 2: On the Usefulness of Query-based Debugging ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.") discuss the outcomes of our main studies. In Section [7](#S7 "7 Predicting Oracle Errors based on Query Complexity ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal."), we finally present first results regarding our prediction model for oracle errors. The paper ends with a discussion of research limitations and a summary of our contributions.
2 Related Work
---------------
The process of creating and maintaining a KB is prone to error and—like in standard software development projects—experts can make mistakes when they encode the knowledge about a problem domain. Correspondingly, a number of techniques and tools for KB testing and debugging were proposed over the years. In the following, we first briefly review the main debugging strategies suggested in the literature and then specifically discuss previous works that aim at evaluating the utility of the corresponding *tools* with the help of user studies.
###
2.1 General Knowledge Base Debugging Approaches
We can mainly distinguish between *model-based* and *heuristic* approaches for KB debugging. Among the *model-based* approaches, those based on the general MBD principles proposed in [Reiter87](#bib.bib14) are probably the most popular ones. They have, for example, been used to debug ontologies [DBLP:conf/semweb/FriedrichS05](#bib.bib17) ; [Kalyanpur.Just.ISWC07](#bib.bib33) ; [Horridge2008](#bib.bib34) , constraints [paper:felfernig:2004](#bib.bib15) ; [Junker04](#bib.bib35) , or Answer Set Programming encodings [DBLP:conf/wlp/GebserPSTW07](#bib.bib36) ; [DBLP:journals/tplp/OetschPT10](#bib.bib37) .
In case of ontology debugging, MBD methods are used to find sets of axioms, called diagnoses (or: candidates/repairs), that must be modified by a developer in order to formulate the intended ontology.
From the technical perspective these methods can roughly be classified in glass-box and black-box ones [ParsiaSK05](#bib.bib38) ; [SchlobachHCH07](#bib.bib39) .
Glass-box approaches [SchlobachC03](#bib.bib40) ; [Kalyanpur2006a](#bib.bib41) ; [BaaderP07](#bib.bib42) ; [BaaderP10](#bib.bib43) ; [ChengQ11](#bib.bib44) ; [DBLP:conf/sum/OzakiP18](#bib.bib45) ; [DBLP:conf/cade/KazakovS18](#bib.bib46) modify the reasoner such that a single execution run outputs justifications or diagnoses directly.
Black-box methods, in contrast, usually apply various search techniques [DBLP:conf/semweb/FriedrichS05](#bib.bib17) ; [DBLP:conf/ecai/ShchekotykhinFRF14](#bib.bib23) ; [SchlobachHCH07](#bib.bib39) ; [DBLP:journals/ai/PenalozaS17](#bib.bib47) with calls to highly-optimized reasoners for consistency checking and/or the computation of irreducible faulty subsets of an ontology, called justifications or conflicts [Horridge2008](#bib.bib34) ; [Junker04](#bib.bib35) ; [KalyanpurPHS07](#bib.bib48) ; [DBLP:conf/aaai/Shchekotykhin15](#bib.bib49) .
In practical settings, given an inconsistent/incoherent ontology, an MBD approach might return more than one diagnosis. In order to restrict the number of obtained diagnoses to only relevant ones, Friedrich et al. [DBLP:conf/semweb/FriedrichS05](#bib.bib17) suggested the notion of MBD test cases, which were later also used in, e.g., [DBLP:journals/ws/ShchekotykhinFFR12](#bib.bib50) ; [DBLP:conf/kr/GrauJKZ12](#bib.bib51) ; [DBLP:conf/tableaux/FurbachS13](#bib.bib52) .
Each test case is defined as a (set of) axiom(s) that the intended ontology *must* or *must not* entail. A debugger can then use these test cases to focus only on those diagnoses for which it is guaranteed that a (suitable) modification of all the axioms of a diagnosis will result in an ontology that satisfies all test cases.
However, in many situations, it can be unclear to a developer which test cases should be formulated before the diagnosis session such that a debugger will be able to find the true cause of an unexpected output.
In this case, query-based approaches [dekleer1987](#bib.bib22) ; [rodler\_jair-2017](#bib.bib53) ; [Shchekotykhin2012](#bib.bib25) ; [rodler17dx\_activelearning](#bib.bib54) help the user to automatically create test cases. Specifically, the task of the users is reduced to answering a sequence of queries on whether or not the intended ontology must entail a given set of axioms.
Assuming that all answers of the developer are correct, a sequential debugger can determine the *true diagnosis* within the candidates, i.e., the one diagnosis that pinpoints the actually faulty parts of the knowledge base.
Depending on the complexity of the underlying problem, model-based methods can be comparably costly in terms of computation time and space.
However, one main advantage of MBD approaches is that any diagnosis which is returned is a precise and succinct explanation of all identified problems.
In contrast, *heuristic approaches* to KB debugging, such as [WangHRDS05](#bib.bib55) ; [DBLP:conf/f-egc/RousseyCSSB12](#bib.bib56) , are usually based on handcrafted syntactic pattern matching procedures, see, e.g., [Rector2004](#bib.bib9) ; [DBLP:journals/jamia/RectorBS11](#bib.bib57) .
Their main advantage is that they allow for fast fault localization in case model-based approaches are too slow.
Typically, these debugging procedures are designed to find (combinations of) syntax constructs in a KB that are highly likely to be faulty.
Examples of such constructs are, among others, the application of universal role restrictions and disjointness constraints in related ontology axioms [Roussey2009](#bib.bib10) .
Although these methods are computationally efficient, they are often *incomplete* (i.e.,
they can only identify bugs for which appropriate heuristics were defined) and sometimes *unsound* (i.e., they might return diagnoses that comprise actually correct axioms).
In this paper, we focus on the MBD approach presented in [Rodler2015phd](#bib.bib21) ; [Shchekotykhin2012](#bib.bib25) , since it
(i) provides guarantees about the completeness and soundness of the debugging algorithms and
(ii) allows for a precise fault localization by querying its users for additional information.
###
2.2 Usefulness Analysis of Tools
Since KBs in practice can be large and complex, the research community developed a number of Integrated Development Environments (IDEs) for KB creation and maintenance.
Examples of such environments are the MiniZinc IDE for constraint modeling [DBLP:conf/cp/NethercoteSBBDT07](#bib.bib58) , Protégé, which supports the creation of ontologies [Musen2015](#bib.bib59) , ASPIDE as a tool for the development of Answer Set Programs [DBLP:conf/lpnmr/FebbraroRR11](#bib.bib60) , as well as various Prolog IDEs like SWI-Prolog [DBLP:journals/tplp/WielemakerSTL12](#bib.bib61) .
Several of these IDEs come with embedded debugging support or can be extended with external tools like the OntoDebug plug-in used in this paper [dodaro2015interactive](#bib.bib62) ; [DBLP:conf/cpaior/LeoT17](#bib.bib63) ; [DBLP:conf/foiks/SchekotihinRS18](#bib.bib26) .
Two main approaches exist in the literature to evaluate the usefulness of KB debugging tools.
The first one conducts *computational analyses* providing insights about the usefulness of the tools indirectly.
The second form is based on *user studies*, where the performance and behavior of experts while using the debugger is observed and analyzed.
Most of the research in the field is based on the first form of experiments.
In comparison to user studies, conducting computational analyses is usually easier, since the only requirement for such evaluations is the existence of a representative collection of knowledge bases that contain real-world or injected faults.
Given such KBs, the performance of different debugging algorithms can be compared, for example, in terms of their time and space complexity, the number of calls to the reasoner, the theoretical number of required user interactions, or the precision of the fault localization process.
The obtained results can then be used to *indirectly* assess if a given debugging approach is favorable over another.
For instance, we can assume that the reduction of the required computation time increases the usefulness of a system, e.g., because the developer gets faster feedback and can find more bugs in a shorter time.
However, such computational analyses have their limitations. They, for example, cannot be used to determine if certain assumptions made by the evaluated debugging methods actually hold. For instance, the interactive ontology debugging method suggested in [Shchekotykhin2012](#bib.bib25) assumes that a user can decide with certainty if the intended ontology must entail an arbitrary axiom or not. If this assumption does not hold, i.e., the user cannot (correctly) answer all queries of the debugger, the fault localization process might not lead to a unique (correct) result.
User studies can help us to verify such assumptions and can give us additional insights regarding the acceptance and true usefulness of a debugging tool. In the literature, only a few examples of such user studies exist.
For instance, the model-based ontology debugging approach proposed in [Kalyanpur.Just.ISWC07](#bib.bib33) and implemented in the Swoop editor [DBLP:journals/ws/KalyanpurPSGH06](#bib.bib64) was evaluated by twelve undergraduate and graduate students [Kalyanpur2006a](#bib.bib41) .
The authors’ goal was to investigate if providing *justifications* for certain inferences can help users find and repair bugs more efficiently.
Every subject that participated in the study had at least nine months of experience in ontology engineering and went through an additional 30-minute training session on ontology debugging.
The results of the study indicate that tool support in the form of justifications during the debugging process is essential for successful fault localization. However, given the small number of participants, the authors were not able to validate that their results are statistically significant.
Another user study reported in [Horridge2011b](#bib.bib65) investigated if justifications generated by model-based ontology debuggers can actually be understood by users.
Experiments were conducted with 14 undergraduate students and their results showed that justifications can be separated into (cognitively) easy and hard ones.
Unfortunately, also in this case the small number of participants did not allow the authors to obtain sufficient statistical evidence to understand why the users find some explanations hard or easy to comprehend.
Finally, a collection of heuristic approaches [Rector2004](#bib.bib9) ; [Roussey2009](#bib.bib10) ; [DBLP:conf/ekaw/Svab-ZamazalS08](#bib.bib66) was studied in [corcho2009pattern](#bib.bib67) and compared with an MBD approach [Horridge2008](#bib.bib34) .
All 14 subjects participating in the study were educated software engineers and had some experience with ontologies, but no knowledge about hydrology, which was the domain of the study.
The task of the participants was to debug and repair an ontology without understanding exactly what it was about.
One group of six participants was supported by the MBD approach; the remaining subjects used a heuristic strategy.
The obtained results were not fully conclusive.
Both participant groups needed about the same amount of time, and no clear preference for either of the approaches was observed.
Only for the problem of repairing the ontology, the heuristic patterns helped the subjects to identify bugs more accurately.
However, this result must be interpreted with care because the model-based tool did not provide any repair support at that time.
In our work, we continue this line of research which aims to assess the usefulness of debugging approaches based on user studies. Similarly to previous work, we base our user studies on different KBs (ontologies) in which we injected a number of faults.
In addition, like in previous research, we involve students in the studies, who have a certain level of education in the development and debugging of ontologies and who received some initial training with the tool.
In contrast to previous studies, we were able to recruit a larger number of participants, which allows us to apply certain statistical analyses.
Moreover, we are focusing not on justifications, which are alternative explanations of *one* fault, but on diagnoses, where each diagnosis provides a potential characterization of *all* faults in an ontology.
3 Background: Knowledge Base Debugging with MBD
------------------------------------------------
In this section, we outline the main principles of applying model-based diagnosis techniques for knowledge base debugging. We use the particular problem of ontology debugging to illustrate the problem. Ontology debugging was also the task in the user studies reported in this paper, where the participants used the *OntoDebug*333<http://isbi.aau.at/ontodebug/>
debugging plug-in [DBLP:conf/foiks/SchekotihinRS18](#bib.bib26) of the popular ontology editing tool *Protégé* [noy2003protege](#bib.bib28) .444<https://protege.stanford.edu/>
The underlying principles and algorithms of the debugging approach are, however, not limited to ontologies and can be applied for various forms of knowledge representation and reasoning, see [Rodler2015phd](#bib.bib21) ; [SchekotihinSchmitzEtAl2016](#bib.bib68) ; [dodaro2015interactive](#bib.bib62) ; [rodler17dx\_reducing](#bib.bib69) .
###
3.1 Model-based Diagnosis for Ontology Debugging
In the field of computer science, ontologies are the core of semantic systems.
Using a language like OWL [OWL2specification](#bib.bib70) , they formally describe the relevant concepts in a domain as well as their properties and interrelations.
Usually the main goal of semantic applications is to use some form of logic-based reasoning to derive additional facts (*entailments*) from the given knowledge base.
The starting point for a debugging session is normally when we observe a discrepancy between what we call the *intended ontology* (denoted as 𝒪\*superscript𝒪\mathcal{O^{\*}}caligraphic\_O start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT) and
the current version of an ontology 𝒪𝒪\mathcal{O}caligraphic\_O.
Such a discrepancy could be the inconsistency of 𝒪𝒪\mathcal{O}caligraphic\_O, the unsatisfiability of its classes, or the presence or absence of certain entailments [DBLP:conf/foiks/SchekotihinRS18](#bib.bib26) .
In the biology domain, a knowledge engineer might, for example, expect that the ontology-based system is able to deduce from the given axioms that men are animals.555See, e.g., <http://owl.man.ac.uk/2003/why/latest/>.
If, however, it is inferred, e.g., that men and animals are disjoint, the underlying KB is incorrect and the problem is to find one or more faults in the ontological axioms.
####
3.1.1 Formal Characterization: Diagnosis Problem
The automated fault localization process starts with the generation of a diagnosis problem instance, which is formally defined as follows [paper:felfernig:2004](#bib.bib15) ; [DBLP:conf/foiks/SchekotihinRS18](#bib.bib26) .
######
Definition 1 (Diagnosis Problem Instance (DPI)).
Let 𝒪𝒪\mathcal{O}caligraphic\_O be an ontology (a set of possibly faulty axioms) and ℬℬ\mathcal{B}caligraphic\_B be a background theory (a set of correct axioms) where 𝒪∩ℬ=∅𝒪ℬ\mathcal{O}\cap\mathcal{B}=\emptysetcaligraphic\_O ∩ caligraphic\_B = ∅, and let 𝒪\*superscript𝒪\mathcal{O^{\*}}caligraphic\_O start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT denote the (unknown) intended ontology.
Moreover, let P𝑃Pitalic\_P and N𝑁Nitalic\_N be sets of axioms where 𝒪\*∪ℬsuperscript𝒪ℬ\mathcal{O^{\*}}\cup\mathcal{B}caligraphic\_O start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ∪ caligraphic\_B *entails* each p∈P𝑝𝑃p\in Pitalic\_p ∈ italic\_P and *does not entail* any n∈N𝑛𝑁n\in Nitalic\_n ∈ italic\_N.
Then, the tuple ⟨𝒪,ℬ,P,N⟩𝒪ℬ𝑃𝑁\left\langle\mathcal{O},\mathcal{B},P,N\right\rangle⟨ caligraphic\_O , caligraphic\_B , italic\_P , italic\_N ⟩ is called a *diagnosis problem instance (DPI)*.
A diagnosis then is a set of axioms such that the removal of these axioms from the ontology, and the subsequent addition of the background knowledge and the positive test cases, yields a consistent (coherent) ontology that satisfies all test cases.
######
Definition 2 (Diagnosis).
Let ⟨𝒪,ℬ,P,N⟩𝒪ℬ𝑃𝑁\left\langle\mathcal{O},\mathcal{B},P,N\right\rangle⟨ caligraphic\_O , caligraphic\_B , italic\_P , italic\_N ⟩ be a DPI.
Then, a set of axioms 𝒟⊆𝒪𝒟𝒪\mathcal{D}\subseteq\mathcal{O}caligraphic\_D ⊆ caligraphic\_O is a *diagnosis* iff both of the following conditions hold:
1. 1.
(𝒪∖𝒟)∪P∪ℬ𝒪𝒟𝑃ℬ(\mathcal{O}\setminus\mathcal{D})\cup P\cup\mathcal{B}( caligraphic\_O ∖ caligraphic\_D ) ∪ italic\_P ∪ caligraphic\_B is consistent (coherent, if required)666An ontology 𝒪𝒪\mathcal{O}caligraphic\_O is *coherent* iff there do not exist any unsatisfiable classes in 𝒪𝒪\mathcal{O}caligraphic\_O. A class X𝑋Xitalic\_X is *unsatisfiable* in an ontology 𝒪𝒪\mathcal{O}caligraphic\_O iff, for each interpretation ℐℐ\mathcal{I}caligraphic\_I of 𝒪𝒪\mathcal{O}caligraphic\_O where ℐ⊧𝒪modelsℐ𝒪\mathcal{I}\models\mathcal{O}caligraphic\_I ⊧ caligraphic\_O, it holds that Xℐ=∅superscript𝑋ℐX^{\mathcal{I}}=\emptysetitalic\_X start\_POSTSUPERSCRIPT caligraphic\_I end\_POSTSUPERSCRIPT = ∅. See also ([qi2007measuring,](#bib.bib71) , Def. 1 and 2)
2. 2.
(𝒪∖𝒟)∪P∪ℬ⊧̸nnot-models𝒪𝒟𝑃ℬ𝑛(\mathcal{O}\setminus\mathcal{D})\cup P\cup\mathcal{B}\not\models n( caligraphic\_O ∖ caligraphic\_D ) ∪ italic\_P ∪ caligraphic\_B ⊧̸ italic\_n for all n∈N𝑛𝑁n\in Nitalic\_n ∈ italic\_N
A diagnosis 𝒟𝒟\mathcal{D}caligraphic\_D is *minimal* iff there is no 𝒟′⊂𝒟superscript𝒟normal-′𝒟\mathcal{D}^{\prime}\subset\mathcal{D}caligraphic\_D start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ⊂ caligraphic\_D such that 𝒟′superscript𝒟normal-′\mathcal{D}^{\prime}caligraphic\_D start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT is a diagnosis.
Different diagnosis computation algorithms exist; they can be distinguished
based on whether they generate diagnoses *indirectly*, i.e.,
via the computation of conflict sets,
or *directly*, e.g., via divide-and-conquer techniques or through the prior compilation of the problem to an alternative target representation like SAT
[Reiter87](#bib.bib14) ; [Rodler2015phd](#bib.bib21) ; [DBLP:conf/ecai/ShchekotykhinFRF14](#bib.bib23) ; [SchlobachHCH07](#bib.bib39) ; [rodler2018socs](#bib.bib72) ; [RodlerH18\_dx](#bib.bib73) ; [darwiche2001decomposable](#bib.bib74) ; [jiang2003computation](#bib.bib75) ; [torasso2006](#bib.bib76) ; [metodi2014](#bib.bib77) .
In addition, the diagnoses can be *ranked* (ordered) according to various criteria, such as their cardinality, i.e., number of axioms in a diagnosis, or their likelihood [dekleer1987](#bib.bib22) . Such a ranking can simplify the analysis and comparison of diagnoses by allowing the user to focus on the most important ones.
####
3.1.2 Example
We use the following example to illustrate how MBD techniques can be applied to ontology debugging.
Let our ontology consist of the following *terminological axioms*
{𝑎𝑥1:A⊑B,\{\mathit{ax}\_{1}:A\sqsubseteq B,{ italic\_ax start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT : italic\_A ⊑ italic\_B ,
𝑎𝑥2:B⊑C,:subscript𝑎𝑥2square-image-of-or-equals𝐵𝐶\mathit{ax}\_{2}:B\sqsubseteq C,italic\_ax start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT : italic\_B ⊑ italic\_C ,
𝑎𝑥3:C⊑D,:subscript𝑎𝑥3square-image-of-or-equals𝐶𝐷\mathit{ax}\_{3}:C\sqsubseteq D,italic\_ax start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT : italic\_C ⊑ italic\_D ,
𝑎𝑥4:D⊑R}\mathit{ax}\_{4}:D\sqsubseteq R\}italic\_ax start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT : italic\_D ⊑ italic\_R }.
They define that A𝐴Aitalic\_A is a subclass of B𝐵Bitalic\_B, B𝐵Bitalic\_B a subclass of C𝐶Citalic\_C etc. In a specific domain, this could, e.g., mean that a *MathStudent* is a subclass of *Student*, which is a subclass of *UnivMember*, etc.
Further, the ontology contains two *assertional axioms* {𝑎𝑥5:A(v),𝑎𝑥6:A(w)}conditional-setsubscript𝑎𝑥5:𝐴𝑣subscript𝑎𝑥6
𝐴𝑤\{\mathit{ax}\_{5}:A(v),\;\mathit{ax}\_{6}:A(w)\}{ italic\_ax start\_POSTSUBSCRIPT 5 end\_POSTSUBSCRIPT : italic\_A ( italic\_v ) , italic\_ax start\_POSTSUBSCRIPT 6 end\_POSTSUBSCRIPT : italic\_A ( italic\_w ) }, which specify that v𝑣vitalic\_v and w𝑤witalic\_w are instances of class A𝐴Aitalic\_A.
In a practical application, we could have an assertion like *MathStudent(john)*.
Let us assume that the two assertions are known to be correct, and thus should not be considered as fault candidates in the debugging process. To this end, the knowledge engineer would add these axioms to the *background theory* ℬℬ\mathcal{B}caligraphic\_B.
That is, the ontology would be split into a possibly faulty part 𝒪:={𝑎𝑥1,…,𝑎𝑥4}assign𝒪subscript𝑎𝑥1…subscript𝑎𝑥4\mathcal{O}:=\left\{\mathit{ax}\_{1},\dots,\mathit{ax}\_{4}\right\}caligraphic\_O := { italic\_ax start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_ax start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT } and a correct part ℬ:={𝑎𝑥5,𝑎𝑥6}assignℬsubscript𝑎𝑥5subscript𝑎𝑥6\mathcal{B}:=\left\{\mathit{ax}\_{5},\mathit{ax}\_{6}\right\}caligraphic\_B := { italic\_ax start\_POSTSUBSCRIPT 5 end\_POSTSUBSCRIPT , italic\_ax start\_POSTSUBSCRIPT 6 end\_POSTSUBSCRIPT } in this specific case.
To make sure that the ontology is correct, we assume the user specifies a set of *positive test cases* P={B(v)}𝑃𝐵𝑣P=\{B(v)\}italic\_P = { italic\_B ( italic\_v ) } and a set of *negative test cases* N={R(w)}𝑁𝑅𝑤N=\{R(w)\}italic\_N = { italic\_R ( italic\_w ) }, which means that the intended ontology entails that v𝑣vitalic\_v is of class B𝐵Bitalic\_B and does *not* entail that w𝑤witalic\_w is of class R𝑅Ritalic\_R.
Unfortunately, the ontology 𝒪𝒪\mathcal{O}caligraphic\_O, together with the correct axioms ℬℬ\mathcal{B}caligraphic\_B,
entails R(w)𝑅𝑤R(w)italic\_R ( italic\_w ), i.e., 𝒪∪ℬ⊧R(w)models𝒪ℬ𝑅𝑤\mathcal{O}\cup\mathcal{B}\models R(w)caligraphic\_O ∪ caligraphic\_B ⊧ italic\_R ( italic\_w ), since A(w)𝐴𝑤A(w)italic\_A ( italic\_w ) holds and A𝐴Aitalic\_A is transitively a subclass of R𝑅Ritalic\_R.
Now, given the specified DPI ⟨𝒪,ℬ,P,N⟩𝒪ℬ𝑃𝑁\left\langle\mathcal{O},\mathcal{B},P,N\right\rangle⟨ caligraphic\_O , caligraphic\_B , italic\_P , italic\_N ⟩ as an input, a debugging system will identify the following four minimal diagnoses:
𝒟1:[𝑎𝑥1],𝒟2:[𝑎𝑥2],𝒟3:[𝑎𝑥3],and 𝒟4:[𝑎𝑥4]:subscript𝒟1delimited-[]subscript𝑎𝑥1subscript𝒟2:delimited-[]subscript𝑎𝑥2subscript𝒟3:delimited-[]subscript𝑎𝑥3and subscript𝒟4:delimited-[]subscript𝑎𝑥4\mathcal{D}\_{1}:\left[\mathit{ax}\_{1}\right],\;\mathcal{D}\_{2}:\left[\mathit{ax}\_{2}\right],\;\mathcal{D}\_{3}:\left[\mathit{ax}\_{3}\right],\;\text{and }\;\mathcal{D}\_{4}:\left[\mathit{ax}\_{4}\right]caligraphic\_D start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT : [ italic\_ax start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ] , caligraphic\_D start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT : [ italic\_ax start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ] , caligraphic\_D start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT : [ italic\_ax start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT ] , and caligraphic\_D start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT : [ italic\_ax start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT ].
The intuitive explanation why we get these diagnoses is that
the removal of any individual axiom in 𝒪𝒪\mathcal{O}caligraphic\_O would break the subclass relationship chain, and the undesired entailment R(w)𝑅𝑤R(w)italic\_R ( italic\_w ) would not be present anymore.
However, based on the positive and negative test cases alone, an MBD algorithm cannot discriminate between the four diagnoses, and we cannot derive the *true* cause of the problem. The user can therefore either inspect all diagnoses manually, or provide more information, e.g., in terms of additional test cases.
Assume that the user specifies an additional negative test case B(w)𝐵𝑤B(w)italic\_B ( italic\_w ). With N={R(w),B(w)}𝑁𝑅𝑤𝐵𝑤N=\{R(w),B(w)\}italic\_N = { italic\_R ( italic\_w ) , italic\_B ( italic\_w ) } and P={B(v)}𝑃𝐵𝑣P=\{B(v)\}italic\_P = { italic\_B ( italic\_v ) }, a debugger will return 𝒟1subscript𝒟1\mathcal{D}\_{1}caligraphic\_D start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT as the only minimal diagnosis. Because, the modifications suggested by the sets of axioms 𝒟2,𝒟3, and 𝒟4subscript𝒟2subscript𝒟3 and subscript𝒟4\mathcal{D}\_{2},\;\mathcal{D}\_{3},\text{ and }\mathcal{D}\_{4}caligraphic\_D start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , caligraphic\_D start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT , and caligraphic\_D start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT leave 𝑎𝑥1subscript𝑎𝑥1\mathit{ax}\_{1}italic\_ax start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT untouched, and 𝑎𝑥1subscript𝑎𝑥1\mathit{ax}\_{1}italic\_ax start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT in conjunction with A(w)∈ℬ𝐴𝑤ℬA(w)\in\mathcal{B}italic\_A ( italic\_w ) ∈ caligraphic\_B leads to the entailment of B(w)𝐵𝑤B(w)italic\_B ( italic\_w ), and thus to a violation of the negative test cases.
However, the modified ontology 𝒪1:=𝒪∖𝒟1assignsubscript𝒪1𝒪subscript𝒟1\mathcal{O}\_{1}:=\mathcal{O}\setminus\mathcal{D}\_{1}caligraphic\_O start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT := caligraphic\_O ∖ caligraphic\_D start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT now does not entail the positive test case B(v)𝐵𝑣B(v)italic\_B ( italic\_v ) anymore. Therefore, 𝒪1subscript𝒪1\mathcal{O}\_{1}caligraphic\_O start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT must be extended somehow.
Since the debugger cannot know how to correctly extend the knowledge base, one strategy is to use the required entailments P𝑃Pitalic\_P explicitly as an extension [Rodler2015phd](#bib.bib21) . Hence, in our example, one would simply add B(v)𝐵𝑣B(v)italic\_B ( italic\_v ) to 𝒪1subscript𝒪1\mathcal{O}\_{1}caligraphic\_O start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT.
###
3.2 Sequential Diagnosis
As the example shows, additional knowledge (in our case, test cases) can help to further focus the debugging process and rule out possible fault candidates. Not all test cases are, however, equally helpful.
One of the goals of *sequential diagnosis* is therefore to automatically identify “good” or optimal test cases, and to interactively ask the user (or some other *oracle*) to classify the generated test cases as either positive (intended entailment) or negative (non-intended entailment). We call such a (set of) test case(s) selected by the system and shown to the user for classification a *query*. Based on the user’s answer, the debugger can then update its knowledge in terms of the positive and negative test case sets and repeat the process until only one single diagnosis remains.
####
3.2.1 Formal Characterization: Oracle and Queries
The notions of an oracle and a query can formally be described as follows. An oracle categorizes elements of a set of axioms either as positive or negative test cases by checking if the intended ontology must or must not entail these elements.
######
Definition 3 (Oracle).
Let 𝐀𝐱𝐀𝐱\mathbf{Ax}bold\_Ax be a set of axioms. Furthermore, let ans:𝐀𝐱→{P,N}normal-:𝑎𝑛𝑠normal-→𝐀𝐱𝑃𝑁ans:\mathbf{Ax}\to\left\{P,N\right\}italic\_a italic\_n italic\_s : bold\_Ax → { italic\_P , italic\_N } be a function which assigns axioms in 𝐀𝐱𝐀𝐱\mathbf{Ax}bold\_Ax to either the positive or the negative test cases.
Then, we call ans𝑎𝑛𝑠ansitalic\_a italic\_n italic\_s an *oracle* w.r.t. the intended ontology 𝒪\*superscript𝒪\mathcal{O^{\*}}caligraphic\_O start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT, iff for any 𝑎𝑥∈𝐀𝐱𝑎𝑥𝐀𝐱\mathit{ax}\in\mathbf{Ax}italic\_ax ∈ bold\_Ax both of the following conditions hold:
| | | | |
| --- | --- | --- | --- |
| | ans(𝑎𝑥)=P𝑎𝑛𝑠𝑎𝑥𝑃\displaystyle ans(\mathit{ax})=P\quaditalic\_a italic\_n italic\_s ( italic\_ax ) = italic\_P | ⟹𝒪\*∪ℬ⊧𝑎𝑥modelssuperscript𝒪ℬ
𝑎𝑥\displaystyle\implies\quad\mathcal{O^{\*}}\cup\mathcal{B}\models\mathit{ax}⟹ caligraphic\_O start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ∪ caligraphic\_B ⊧ italic\_ax | |
| | ans(𝑎𝑥)=N𝑎𝑛𝑠𝑎𝑥𝑁\displaystyle ans(\mathit{ax})=N\quaditalic\_a italic\_n italic\_s ( italic\_ax ) = italic\_N | ⟹𝒪\*∪ℬ⊧̸𝑎𝑥not-modelssuperscript𝒪ℬ
𝑎𝑥\displaystyle\implies\quad\mathcal{O^{\*}}\cup\mathcal{B}\not\models\mathit{ax}⟹ caligraphic\_O start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ∪ caligraphic\_B ⊧̸ italic\_ax | |
Note that the function ans𝑎𝑛𝑠ansitalic\_a italic\_n italic\_s can either be total or partial. In the first case, the oracle (user) is a *full domain expert* and able to classify all queried axioms; in the latter case, there might be axioms that the oracle is not able to classify.
Since our goal is to narrow down the set of possible diagnoses, a debugger should propose only queries that guarantee the acquisition of *relevant* information. In other words, each query should eliminate at least one diagnosis, given *any* answer of a full domain expert. Generally, a query consists of one or more axioms and can be characterized as follows.777Whenever we speak of a “query” throughout this work, we mean a query in terms of Definition [4](#Thmdefinition4 "Definition 4 (Query). ‣ 3.2.1 Formal Characterization: Oracle and Queries ‣ 3.2 Sequential Diagnosis ‣ 3 Background: Knowledge Base Debugging with MBD ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal."), which must not be confused, e.g., with the concept of a query in terms of a query language such as OWL-QL [fikes2004owl](#bib.bib78) . In our scenario, queries are answered *based on the knowledge of an oracle about the intended ontology*, with the aim to locate faults in an ontology. Queries in terms of query languages are answered *based on the knowledge specified in an ontology, knowledge graph, etc.* in order to find answers to questions of relevance.
######
Definition 4 (Query).
Let ⟨𝒪,ℬ,P,N⟩𝒪ℬ𝑃𝑁\left\langle\mathcal{O},\mathcal{B},P,N\right\rangle⟨ caligraphic\_O , caligraphic\_B , italic\_P , italic\_N ⟩ be a DPI, 𝐃𝐃{\bf{D}}bold\_D be a set of diagnoses for this DPI, and Q𝑄Qitalic\_Q be a set of axioms. Moreover, let QansP:={q∈Q∣ans(q)=P}assignsuperscriptsubscript𝑄𝑎𝑛𝑠𝑃conditional-set𝑞𝑄𝑎𝑛𝑠𝑞𝑃Q\_{ans}^{P}:=\left\{q\in Q\mid ans(q)=P\right\}italic\_Q start\_POSTSUBSCRIPT italic\_a italic\_n italic\_s end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_P end\_POSTSUPERSCRIPT := { italic\_q ∈ italic\_Q ∣ italic\_a italic\_n italic\_s ( italic\_q ) = italic\_P } and QansN:={q∈Q∣ans(q)=N}assignsuperscriptsubscript𝑄𝑎𝑛𝑠𝑁conditional-set𝑞𝑄𝑎𝑛𝑠𝑞𝑁Q\_{ans}^{N}:=\left\{q\in Q\mid ans(q)=N\right\}italic\_Q start\_POSTSUBSCRIPT italic\_a italic\_n italic\_s end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT := { italic\_q ∈ italic\_Q ∣ italic\_a italic\_n italic\_s ( italic\_q ) = italic\_N } denote the subsets of Q𝑄Qitalic\_Q assigned to P𝑃Pitalic\_P and N𝑁Nitalic\_N by an oracle ans𝑎𝑛𝑠ansitalic\_a italic\_n italic\_s.
Then we call Q𝑄Qitalic\_Q a *query* for 𝐃𝐃{\bf{D}}bold\_D iff, for any classification QansP,QansNsuperscriptsubscript𝑄𝑎𝑛𝑠𝑃superscriptsubscript𝑄𝑎𝑛𝑠𝑁Q\_{ans}^{P},Q\_{ans}^{N}italic\_Q start\_POSTSUBSCRIPT italic\_a italic\_n italic\_s end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_P end\_POSTSUPERSCRIPT , italic\_Q start\_POSTSUBSCRIPT italic\_a italic\_n italic\_s end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT of the axioms in Q𝑄Qitalic\_Q of a full domain expert oracle ans𝑎𝑛𝑠ansitalic\_a italic\_n italic\_s, at least one diagnosis in 𝐃𝐃{\bf{D}}bold\_D is no longer a diagnosis for the new DPI ⟨𝒪,ℬ,P∪QansP,N∪QansN⟩𝒪ℬ𝑃superscriptsubscript𝑄𝑎𝑛𝑠𝑃𝑁superscriptsubscript𝑄𝑎𝑛𝑠𝑁\left\langle\mathcal{O},\mathcal{B},P\cup Q\_{ans}^{P},N\cup Q\_{ans}^{N}\right\rangle⟨ caligraphic\_O , caligraphic\_B , italic\_P ∪ italic\_Q start\_POSTSUBSCRIPT italic\_a italic\_n italic\_s end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_P end\_POSTSUPERSCRIPT , italic\_N ∪ italic\_Q start\_POSTSUBSCRIPT italic\_a italic\_n italic\_s end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT ⟩.
Different strategies were proposed in the literature to determine “good” or optimal queries, see, e.g., [dekleer1987](#bib.bib22) ; [DBLP:journals/corr/Rodler16a](#bib.bib79) ; [rodler\_singleton-2019](#bib.bib80) . Usually, this is accomplished by computing a set of diagnoses and by analyzing the effects of applying the different diagnoses with respect to a potential query. Complementary to this approach, a recent work suggests novel ways of diagnosis computation to reduce the user’s time and effort for query answering [rodler2018socs](#bib.bib72) .
In general, a byproduct of the process of determining the queries is a *quality* estimate for each resulting query. Such a quality measure can, for example, be based on the expected information gain after the user has answered the query [dekleer1987](#bib.bib22) , on reinforcement learning [Rodler2013](#bib.bib81) , or on
criteria [rodler17dx\_activelearning](#bib.bib54) ; [rodler2018ruleML](#bib.bib82) ; [RodlerS18\_dx](#bib.bib83) adopted from the field of active learning [settles2012](#bib.bib84) . Finally, since the generation of queries requires potentially costly calls to an underlying reasoner, approaches exist that aim to minimize the number of these computations [Rodler2015phd](#bib.bib21) ; [SchekotihinSchmitzEtAl2016](#bib.bib68) ; [rodler-dx17](#bib.bib85) ; [rodler17dx\_activelearning](#bib.bib54) .
####
3.2.2 Example
One way to assess the utility of different possible test cases—which in the end correspond to queries to the user—is to analyze the entailments of the ontologies 𝒪i\*:=(𝒪∖𝒟i)∪Passignsuperscriptsubscript𝒪𝑖𝒪subscript𝒟𝑖𝑃\mathcal{O}\_{i}^{\*}:=(\mathcal{O}\setminus\mathcal{D}\_{i})\cup Pcaligraphic\_O start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT := ( caligraphic\_O ∖ caligraphic\_D start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) ∪ italic\_P after the application of the different diagnoses 𝒟isubscript𝒟𝑖\mathcal{D}\_{i}caligraphic\_D start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT.
In our example from above, the four ontologies
𝒪1\*,…,𝒪4\*superscriptsubscript𝒪1…superscriptsubscript𝒪4\mathcal{O}\_{1}^{\*},\dots,\mathcal{O}\_{4}^{\*}caligraphic\_O start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT , … , caligraphic\_O start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT
have, among others, the following entailments:
| | | |
| --- | --- | --- |
| | 𝒪1\*:∅,𝒪2\*:{B(w)},𝒪3\*:{B(w),C(w)},and𝒪4\*:{B(w),C(w),D(w)}:subscriptsuperscript𝒪1subscriptsuperscript𝒪2:𝐵𝑤subscriptsuperscript𝒪3:𝐵𝑤𝐶𝑤andsubscriptsuperscript𝒪4:𝐵𝑤𝐶𝑤𝐷𝑤\mathcal{O}^{\*}\_{1}:\emptyset,\;\,\,\,\mathcal{O}^{\*}\_{2}:\{B(w)\},\;\,\,\,\mathcal{O}^{\*}\_{3}:\{B(w),C(w)\},\;\,\,\,\text{and}\;\,\,\,\mathcal{O}^{\*}\_{4}:\{B(w),C(w),D(w)\}caligraphic\_O start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT : ∅ , caligraphic\_O start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT : { italic\_B ( italic\_w ) } , caligraphic\_O start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT : { italic\_B ( italic\_w ) , italic\_C ( italic\_w ) } , and caligraphic\_O start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT : { italic\_B ( italic\_w ) , italic\_C ( italic\_w ) , italic\_D ( italic\_w ) } | |
These entailments can be obtained, e.g., with the help of the realization service [DLHandbook](#bib.bib3) of a Description Logic reasoner [Shearer2008](#bib.bib86) ; [sirin2007pellet](#bib.bib87) and can serve as test cases.
Let us assume that the user knows that D(w)𝐷𝑤D(w)italic\_D ( italic\_w ) must be entailed and adds it as a positive test case, i.e., the diagnosis problem instance is now
| | | |
| --- | --- | --- |
| | DPI=⟨𝒪,{A(v),A(w)},{B(v),D(w)},{R(w)}⟩𝐷𝑃𝐼𝒪𝐴𝑣𝐴𝑤𝐵𝑣𝐷𝑤𝑅𝑤DPI=\langle\mathcal{O},\{A(v),A(w)\},\{B(v),D(w)\},\{R(w)\}\rangleitalic\_D italic\_P italic\_I = ⟨ caligraphic\_O , { italic\_A ( italic\_v ) , italic\_A ( italic\_w ) } , { italic\_B ( italic\_v ) , italic\_D ( italic\_w ) } , { italic\_R ( italic\_w ) } ⟩ | |
Given this additional information, a model-based debugger will return only one single diagnosis, 𝒟4=[𝑎𝑥4]subscript𝒟4delimited-[]subscript𝑎𝑥4\mathcal{D}\_{4}=[\mathit{ax}\_{4}]caligraphic\_D start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT = [ italic\_ax start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT ]. All other diagnoses that existed for the problem instance without the new test case, are no longer minimal diagnoses. Specifically, the deletion of each of the diagnoses 𝒟1subscript𝒟1\mathcal{D}\_{1}caligraphic\_D start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT, 𝒟2subscript𝒟2\mathcal{D}\_{2}caligraphic\_D start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT, or 𝒟3subscript𝒟3\mathcal{D}\_{3}caligraphic\_D start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT from 𝒪𝒪\mathcal{O}caligraphic\_O does not affect 𝑎𝑥4subscript𝑎𝑥4\mathit{ax}\_{4}italic\_ax start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT, which is however—due to D(w)∈P𝐷𝑤𝑃D(w)\in Pitalic\_D ( italic\_w ) ∈ italic\_P—responsible for the unwanted entailment R(w)∈N𝑅𝑤𝑁R(w)\in Nitalic\_R ( italic\_w ) ∈ italic\_N.
Sequential diagnosis algorithms usually make analyses of this type to determine queries (test cases) that are likely to narrow down the set of remaining diagnoses. At the end, the user only has to categorize such system-generated queries and acts as an *oracle* for the debugger.
###
3.3 The OntoDebug Plug-In to Protégé
The described concepts for sequential and test case based MBD for ontologies were implemented in the OntoDebug plug-in [DBLP:conf/foiks/SchekotihinRS18](#bib.bib26) ; [DBLP:conf/icbo/SchekotihinRSHT18a](#bib.bib27) of the widely-used Protégé ontology editor.
There are two main situations when the user of the tool—possibly after some maintenance activities—might initiate a debugging session with the OntoDebug plug-in. First, the built-in reasoner of Protégé might detect that the given ontology is faulty, e.g. inconsistent or incoherent, in itself.888In contrast to other application areas of model-based diagnosis techniques—such as fault localization in electronic circuits [Reiter87](#bib.bib14) ; [dekleer1987](#bib.bib22) —inconsistencies can be present in the context of ontology debugging problems without any initially given test cases (observations).
Second, even if the ontology in itself is consistent and coherent, the user might want to ensure that the implemented ontology corresponds to the intended one by specifying one or more test cases. If the test cases lead to the disclosure of unexpected entailments, an inconsistency or an incoherency, it is obvious that something is wrong with the ontology.

Figure 1: Interactive ontology debugging with the latest version of OntoDebug.
One possible first step for the user when starting the debugging process with OntoDebug—independent of how the user detected that there is a problem—is to tell the system which parts of the ontology are definitely correct (and thus are a part of the background knowledge). This task can be accomplished using the functionality at the right-most side of the user interface of OntoDebug shown in Figure [1](#S3.F1 "Figure 1 ‣ 3.3 The OntoDebug Plug-In to Protégé ‣ 3 Background: Knowledge Base Debugging with MBD ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal."). In this example, the user works on problems of the “Koala” ontology of the Protégé project, an ontology that was created for educational purposes which contains typical problems that can occur during ontology development. Specifically, in the example, the user has declared among other things that the axiom “*BA* (bachelor of arts)
is of type *Degree*” is definitely correct.
Once this optional step is done, the user can start the model-based debugging process. To this end, the tool, as mentioned above, supports two general strategies.
* •
First, the user can inspect the list of diagnoses returned by OntoDebug to locate the fault and add additional test cases if the list of diagnoses contains too many elements. Generally, the idea is that the provision of additional, carefully designed test cases will help to narrow down the set of
possible diagnoses, i.e., the possible causes for the problems in the ontology.
In the example shown in Figure [1](#S3.F1 "Figure 1 ‣ 3.3 The OntoDebug Plug-In to Protégé ‣ 3 Background: Knowledge Base Debugging with MBD ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal."), the user has specified one positive test case (“*Student* is a subclass of *Person*”) and a negative one (“*Person* is a subclass of *Marsupials*”), using the sub-window in the middle of the screen.
* •
The second supported debugging strategy is the query-based one. In this case, the tool will—based on the inconsistent (incoherent) ontology or the failing test cases—compute the first query to the user. In our example, the system determined a query consisting of two axioms shown in the top-left sub-window of the user interface. The two axioms to be categorized by the user are “*KoalaWithPhD* is a subclass of *Koala*” and “*KoalaWithPhD* is a subclass of *Person*.” The user can answer the query by using the green and red plus and minus symbols (or leave some axioms uncategorized), and then submit the answer to the system. The system adds the user’s feedback to the “Acquired Test Cases” and then restarts the computations using the additionally provided information. In case the information was sufficient to identify a single diagnosis as the cause of the problem, the user is pointed to the faulty parts of the ontology. Otherwise, the system computes a new query to the user and the cycle repeats until only one diagnosis remains.
Generally, one main difference is that in the approach based on test cases the users have to think by themselves about good test cases, while in the case of interactive debugging, user responses to the system-generated queries are taken as additional test cases. In this latter case, the selection of the query, and correspondingly the test case(s), is based on an internal reasoning process that ensures that the most informative queries are chosen.
4 Research Questions
---------------------
The main promise of interactive, query-based approaches is that they are able to systematically guide users (e.g., knowledge engineers or domain experts) through the debugging process and that after the interactive process the true cause of the observed discrepancies is found.
In contrast, there is limited support for users in the more traditional model-based debugging setting, where the users have to provide test cases manually in order to incrementally narrow down the set of fault candidates.
As discussed in Section [2](#S2 "2 Related Work ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal."), computational analyses—such as measurements of time or an analysis of the number of required queries—can be insufficient to inform us about the usefulness and acceptance of the corresponding tools, and
cannot tell us in which ways query-based debugging is advantageous over a test case based approach.
To address these open questions, we conducted a number of controlled (laboratory) studies, mainly consisting of ontology debugging exercises. We focus on the following main research questions in the context of model-based debugging:
1. RQ1
Is the debugging process more *effective* when users are supported by a query-based debugging tool than when test cases are the only means to locate faults?
2. RQ2
Is the process more *efficient* when users are supported by a query-based debugging tool?
3. RQ3
To what extent do the assumptions of MBD debugging techniques hold?
1. RQ3.1
For the case of approaches based on test cases and candidate ranking: Do users have “perfect bug understanding”, i.e., do they reliably recognize the true cause of a discrepancy within a list of diagnoses?
2. RQ3.2
For the case of the query-based approach: Do users make errors when acting as oracles?
The following studies were designed and executed.
* •
In our *preliminary* study (*Study 1*), our goal was to gauge the general usefulness of a test case based debugging approach. We specifically also explored the importance of the ranking of the fault candidates in this experiment (RQ3.1). The study also served us to further improve the design of the main study (*Study 2*).
* •
In *Study 2*, we investigated the effectiveness and efficiency of the query-based and the test case based debugging approach (RQ1 and RQ2). In that context, we also examined the question of oracle errors (RQ3.2).
Additional pen-and-paper studies were conducted in the context of both *Study 1* and *Study 2* with the goal of deepening our understanding of the (types of) errors that occur while debugging. These insights are then used to devise a heuristic prediction model for such errors (RQ3.2). We discuss *Study 1* in Section [5](#S5 "5 Study 1: Investigating MBD-debugging With Test Cases ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal."), *Study 2* in Section [6](#S6 "6 Study 2: On the Usefulness of Query-based Debugging ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal."), and the additional studies in Section [7](#S7 "7 Predicting Oracle Errors based on Query Complexity ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.").999The (anonymized) raw data obtained throughout *Study 1* and *Study 2* as well as the ontologies used in the experiments can be downloaded from <http://isbi.aau.at/ontodebug/evaluation>.
5 *Study 1*: Investigating MBD-debugging With Test Cases
---------------------------------------------------------
###
5.1 Design of the Pre-Study
####
5.1.1 Task
The task of the participants in this study was to *find the faulty axioms* (true diagnosis) in a given faulty ontology
(i) based on a provided description of the *intended* ontology in natural language
(ii) using the OntoDebug tool described above
(iii) by creating test cases manually (the query-based debugging functionality was not available to the users).
The participants were explicitly instructed to
(iv) constantly inspect the list of possible diagnoses throughout the debugging session and to
(v) mark the true diagnosis once they detected it in the list.
After a diagnosis was marked, the debugging session ended. In Figure [1](#S3.F1 "Figure 1 ‣ 3.3 The OntoDebug Plug-In to Protégé ‣ 3 Background: Knowledge Base Debugging with MBD ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal."), the list of diagnoses is shown in the bottom-left sub-window
labeled with “Possible Ontology Repairs”.
####
5.1.2 Ontologies
In order to make sure that the outcomes regarding the usefulness of the test case based debugging approach do not depend on the specifics of a certain ontology, two different ontologies describing two different domains were used in the study. The first one corresponded to a (simplified) model of our university (*university* domain) and the second one was a real-world knowledge base made available to us by the “Communal IT Center of Carinthia” (*IT* domain). We prepared the ontologies for the study by injecting five faults into each of them such that the resulting ontologies were inconsistent and incoherent in themselves. That is, for both ontologies the true diagnosis included five faulty axioms (as shown in Table [1](#S5.T1 "Table 1 ‣ 5.1.2 Ontologies ‣ 5.1 Design of the Pre-Study ‣ 5 Study 1: Investigating MBD-debugging With Test Cases ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.")).
The designed ontologies were similar in size and complexity. For example, both included about 50 classes, 90 subclass relationships, and 20 object properties. Moreover, both included roughly equally complex logical formalisms and used the full expressivity of the Description Logic 𝒮ℛ𝒪ℐ𝒬𝒮ℛ𝒪ℐ𝒬\mathcal{SROIQ}caligraphic\_S caligraphic\_R caligraphic\_O caligraphic\_I caligraphic\_Q [DLHandbook](#bib.bib3) ; [horrocks2006even](#bib.bib88) or, respectively, OWL 1.1 [owl1.1\_spec](#bib.bib89) .
Table 1: Faulty ontology axioms (university domain) in OWL Manchester Syntax [horridge2006manchester](#bib.bib90) .
| Nr. | Faulty Axiom |
| --- | --- |
| 1 | Department SubClassOf offers only Course |
| 2 | Library SubClassOf offers only Visitation |
| 3 | Research\_Event SubClassOf has\_Speaker only (Person and (has\_Degree some Degree)) |
| 4 | Assembly\_Hall DisjointWith Room |
| 5 | Department DisjointWith Room |
####
5.1.3 Participants
We recruited 29 participants for the study. All participants were computer science students of our university and were enrolled in an ongoing master program course on knowledge engineering. During this course, the participants, who already had a background in logics, were introduced to model-based debugging, formal ontologies, Description Logics, and the OWL language. The participants also had first experiences in designing ontologies with Protégé and debugging them with OntoDebug. Overall, the participants were very homogeneous with respect to their knowledge and background.
####
5.1.4 Independent Variables
We considered two independent variables,
the ontology to be debugged (*university* vs. *IT*) and
the position (*visible* vs. *not visible*) of the true diagnosis in the list of diagnoses returned by the debugger.
Each participant was randomly assigned to one ontology and one of two configurations regarding the position of the true diagnosis.
Similar to the work in [Parnin:2011:ADT:2001420.2001445](#bib.bib29) , we varied the position to assess the importance of the ranking of the diagnoses returned by the system. Specifically, in the *visible* case, the true diagnosis, which comprised all actually faulty axioms of the ontology, was placed within the top three diagnoses and was therefore always visible to the user. In the other case (*not visible*), the true diagnosis was further down the list.
Generally, the diagnosis problem was designed in a way that the initial list of diagnoses (before further test cases are specified) is comparably large, including over 150 diagnoses in each case.
####
5.1.5 Dependent Variables
We made a variety of automated, *objective* measurements while the participants were executing the task, like the needed *time*, the *number of user interactions* (mouse clicks) in the debugger, and the *number of diagnoses still in the list* of diagnoses when the participants
submitted the diagnosis which they thought is the correct one.
In the context of *Study 1* the most important automated measurement was on the *correctness* of the debugging process in terms of
(i) the fraction of correctly identified faulty axioms and
(ii) the fraction of users who correctly identified all five faulty axioms (i.e. the true diagnosis).
Moreover, the participants had to specify their *subjective* degree of belief (*confidence*) in having solved the fault localization task correctly. For this, they were instructed to use a range between 0 (certain that the marked diagnosis *is not* the true one) and 100 (certain that the marked diagnosis *is* the true one).
###
5.2 Experiment Execution
The study was conducted in one of the computer labs of our university. The required software was pre-installed on the lab computers. All of the computers were identically equipped.
After being informed about the tasks of the study and after the participants had declared their consent, they were provided with detailed material on paper. The handout essentially included a description of the domain that was incorrectly modeled by the ontology the participants had to debug. Thus, the paper characterized the *intended ontology* as discussed in Section [3](#S3 "3 Background: Knowledge Base Debugging with MBD ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.").
The description was given as a natural language text, with important concepts highlighted. In particular, class and property names in the ontology were *italicized* and underlined, respectively. An example of such a description from the university domain is the following:
* From an organizational point of view, the University is subdivided into several *OrganizationalUnits*. Each *OrganizationalUnit* employs some *OfficeEmployee*(s) and some *Teacher*(s), has some *Room*(s) which is/are (an) *Office*(s), is directed by exactly one *Director* and is located in some *Building*. Two special types of *OrganizationalUnits* are the *Directorate* and the *HumanResourcesUnit*.
Before the participants started their task, they received
another brief tutorial on
how to debug an ontology with the OntoDebug tool. They used the “Koala” ontology that is available in Protégé (cf. Sec. [3.3](#S3.SS3 "3.3 The OntoDebug Plug-In to Protégé ‣ 3 Background: Knowledge Base Debugging with MBD ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.")) for that purpose.
During the experiment, the participants were not allowed to talk to each other. The participants were supervised by three instructors, who were present to answer questions in case of problems with the software.
###
5.3 Outcomes of Study 1
The measurements obtained in *Study 1* are summarized by Figures [2](#S5.F2 "Figure 2 ‣ 5.3.1 General Usefulness of Model-based Debugging ‣ 5.3 Outcomes of Study 1 ‣ 5 Study 1: Investigating MBD-debugging With Test Cases ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.") and [3](#S5.F3 "Figure 3 ‣ 5.3.1 General Usefulness of Model-based Debugging ‣ 5.3 Outcomes of Study 1 ‣ 5 Study 1: Investigating MBD-debugging With Test Cases ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.").
As mentioned above, the main question of this pre-study was
(i) to gauge the general usefulness of MBD-debugging with test cases and
(ii) to assess the importance of the ranking of the diagnoses. Furthermore, a side goal was to obtain experiences regarding the study design for the main study (*Study 2*).
####
5.3.1 General Usefulness of Model-based Debugging
On average, the participants took about 28 minutes and 81 mouse clicks for the task before they submitted their solution.101010The standard deviation was 12.6 minutes (time) and 35 mouse clicks, respectively.
Overall, the participants correctly identified as many as 77 % of the problematic axioms,111111The standard deviation was 21 %.
i.e., almost four out of the five injected faults shown in Table [1](#S5.T1 "Table 1 ‣ 5.1.2 Ontologies ‣ 5.1 Design of the Pre-Study ‣ 5 Study 1: Investigating MBD-debugging With Test Cases ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.") were eliminated. From the 29 participants, 10 (34.4 %) correctly identified the true diagnosis, i.e., all five faulty axioms (cf. Figure [2](#S5.F2 "Figure 2 ‣ 5.3.1 General Usefulness of Model-based Debugging ‣ 5.3 Outcomes of Study 1 ‣ 5 Study 1: Investigating MBD-debugging With Test Cases ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.")).121212Note, in Figures [2](#S5.F2 "Figure 2 ‣ 5.3.1 General Usefulness of Model-based Debugging ‣ 5.3 Outcomes of Study 1 ‣ 5 Study 1: Investigating MBD-debugging With Test Cases ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.") and [4](#S6.F4 "Figure 4 ‣ 6.3.1 Effectiveness of Query-based Debugging (RQ1) ‣ 6.3 Outcomes of Study 2 ‣ 6 Study 2: On the Usefulness of Query-based Debugging ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.") we use line and area charts (instead of, e.g., bar charts or dot plots) for reasons of clarity and better distinguishability between the plotted variables, although the x-axis is in fact a categorical axis.
Overall, we find this result very positive, given the complexity of the task. The study clearly indicates that
model-based
debugging is actually helpful for knowledge engineers. Since we did not observe any statistically significant differences between the observations that were made for two different ontology debugging problems (*university* and *IT*), we are confident that the usefulness of the approach is not limited to just one domain.
There were various reasons why some participants did not successfully find all faults. A main issue appeared to be a certain lack of attentiveness and precision when reading the natural language specification of the intended ontology.
Based on these observations, we revised some of the specifications, e.g., by removing possible ambiguities, when designing *Study 2*.
To a certain extent, it also seemed that some participants did not properly understand the semantics of certain elements of the knowledge representation language.

Figure 2: Overview of the outcomes of *Study 1*. The figure shows the measurements for the dependent variables for all 29 debugging sessions, grouped by the position (“visible” left, “not visible” right) of the true diagnosis in the diagnoses list, and sorted from low to high confidence. The labels along the x-axis indicate whether the true diagnosis was found (“Y”) or not (“N”) during the respective session. Variables plotted w.r.t. the right y-axis are underlined. The numbers (ranging from 1 to 36) in the plot indicate the exact value of the “# of diagnoses still in list” variable.




Figure 3: Violin plots showing the distribution of the dependent variables in *Study 1*.
####
5.3.2 Importance of Ranking of Candidates (RQ3.1)
In the context of RQ3.1,
our goal was to investigate if the capability of a debugger to rank the true diagnosis higher in a list of candidates directly translates into a more effective debugging process. Table [2](#S5.T2 "Table 2 ‣ 5.3.2 Importance of Ranking of Candidates (RQ3.1) ‣ 5.3 Outcomes of Study 1 ‣ 5 Study 1: Investigating MBD-debugging With Test Cases ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.") shows in how many cases the true diagnosis—which comprises all five injected faults—was found, depending on whether it was among the top-ranked (visible) candidates or not.
Table 2: Relationship between full correctness of the debugging task and visibility of the true diagnosis in the list of diagnoses presented to the participant.
| | | true diagnosis visible |
| --- | --- | --- |
| | | yes | no |
|
true diagnosis found
| yes | 5 | 5 |
| no | 9 | 10 |
Interestingly, the observations shown in Table [2](#S5.T2 "Table 2 ‣ 5.3.2 Importance of Ranking of Candidates (RQ3.1) ‣ 5.3 Outcomes of Study 1 ‣ 5 Study 1: Investigating MBD-debugging With Test Cases ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.") do not provide evidence that the users were more effective when the true diagnosis was always visible.131313
This is supported by Fisher’s Exact Test [bhattacharyya1977](#bib.bib91) (p-value = 1.00).
Such a non-effect of varying the position of the fault in a ranked list was also reported in [Parnin:2011:ADT:2001420.2001445](#bib.bib29) .
Moreover, 10 of the 14 participants of the group where the true diagnosis was ranked highly
continued specifying test cases until only one diagnosis was left in the list (cf. Figure [2](#S5.F2 "Figure 2 ‣ 5.3.1 General Usefulness of Model-based Debugging ‣ 5.3 Outcomes of Study 1 ‣ 5 Study 1: Investigating MBD-debugging With Test Cases ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal."))—even though all participants were explicitly instructed to constantly inspect the list of diagnoses and mark the true diagnosis once they detected it in the list.
A large number of participants therefore did not recognize the actual fault even though it was shown to them.
These findings challenge the assumption of a “perfect bug understanding” of the users, i.e., they do *not* always immediately identify a fault when they are pointed to it. In other words, even if the true diagnosis was visible to the participants, they
(i) did not recognize it in the majority of the cases and
(ii) did not identify it more often than other participants to which the true diagnosis was not (always) visible.
As a result, fault ranking metrics should not be considered as the only measure when different algorithmic debugging strategies are compared [Parnin:2011:ADT:2001420.2001445](#bib.bib29) .
####
5.3.3 Additional Observations (Study 1)
*Positive test cases are more reliable:* From the 244 test cases provided by the participants (8 on average per debugging session), the majority (71 %) were *positively* formulated, i.e., they described required entailments. The participants therefore seemed to feel more comfortable specifying things that must be entailed than those that must not. An analysis of the fault rates for positive and negative test cases indeed confirmed that negative ones, i.e., formulated non-entailments, were significantly141414According to a
(two-tailed) Fisher’s Exact Test with α=0.01𝛼0.01\alpha=0.01italic\_α = 0.01 (p-value = 0.008).
more often faulty (24 % vs. 10 %, see Table [3](#S5.T3 "Table 3 ‣ 5.3.3 Additional Observations (Study 1) ‣ 5.3 Outcomes of Study 1 ‣ 5 Study 1: Investigating MBD-debugging With Test Cases ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.")). This result suggests that it can be better to ask users questions with a bias towards
the positive answer151515A “bias towards the positive answer” means that the estimated probability of getting a positive answer to the question is higher than that of a negative answer. in query-based KB debugging, in order to minimize oracle errors.
Table 3: Relationship between the type of formulated test case and its faultiness.
| | | type of test case |
| --- | --- | --- |
| | | positive | negative |
|
test case faulty
| yes | 18 | 17 |
| no | 156 | 53 |
*Users can be overconfident:* The participants of the study were, on average, overconfident (cf. Figure [2](#S5.F2 "Figure 2 ‣ 5.3.1 General Usefulness of Model-based Debugging ‣ 5.3 Outcomes of Study 1 ‣ 5 Study 1: Investigating MBD-debugging With Test Cases ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.")). That is, the average confidence value expressed by the participants regarding the correctness of the identified diagnosis was at about 83 % (cf. Figure [3](#S5.F3 "Figure 3 ‣ 5.3.1 General Usefulness of Model-based Debugging ‣ 5.3 Outcomes of Study 1 ‣ 5 Study 1: Investigating MBD-debugging With Test Cases ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal."))161616In all the presented violin plots, the white dot indicates the median and the thick box ranges from the first to the third quartile. whereas only 34 % of them have correctly located the true diagnosis. In other words, the average self-reported confidence of users in their own success overestimates the actual user success rate.
Interestingly, the confidence of those participants who did not find the true diagnosis was even slightly higher than the confidence of the successful participants.
Overall, this can be seen as an indicator that subjective confidence estimates have to be handled with care [Rodler2013](#bib.bib81) when they are intended to be used to guide the debugging process [Shchekotykhin2012](#bib.bib25) .
*Users consider themselves as imperfect oracles:*
We found that only 31 % of all users, and an even lower 20 % of the ones that successfully found all faults, were *fully* confident
about the correctness of their debugging actions (cf. Figure [2](#S5.F2 "Figure 2 ‣ 5.3.1 General Usefulness of Model-based Debugging ‣ 5.3 Outcomes of Study 1 ‣ 5 Study 1: Investigating MBD-debugging With Test Cases ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.")).
This teaches us that humans generally
do not regard themselves as perfect oracles for knowledge engineering tasks, which questions the frequently made “perfect oracle” assumption. We pick up on this discussion again in Sec. [6.3.4](#S6.SS3.SSS4 "6.3.4 Existence of Oracle Errors (RQ3.2) ‣ 6.3 Outcomes of Study 2 ‣ 6 Study 2: On the Usefulness of Query-based Debugging ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.") and Sec. [7](#S7 "7 Predicting Oracle Errors based on Query Complexity ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.").
*Completion time and user activity as success predictors (cf. Figure [2](#S5.F2 "Figure 2 ‣ 5.3.1 General Usefulness of Model-based Debugging ‣ 5.3 Outcomes of Study 1 ‣ 5 Study 1: Investigating MBD-debugging With Test Cases ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.")):* Participants who correctly identified the true diagnosis required on average more time (33 minutes) and specified more test cases (10). However, they needed fewer interactions (71 clicks) than those who submitted a wrong diagnosis (26 minutes, 8 test cases, 87 clicks). This indicates that successful users worked more thoroughly and were more persistent in their testing activity. Unsuccessful ones, in contrast, required more interactions as they more frequently edited, deleted or re-added test cases.
An atypically high editing activity can thus be considered as an indicator that a user requires more assistance for the given task.
6 *Study 2*: On the Usefulness of Query-based Debugging
--------------------------------------------------------
Having established that
model-based
debugging leads to a good debugging performance, the goal of *Study 2* was to answer our main research questions RQ1 and RQ2 on the efficiency and effectiveness of query-based debugging as opposed to a test case based approach. In other words, do users need less time/effort when supported by a query-based debugger (efficiency) and do they find more faults (effectiveness)?
###
6.1 Design of the Study
####
6.1.1 Task
As in the pre-study, the general task of the participants was to find the actually faulty axioms (true diagnosis) in given faulty ontologies
(i) based on a description of the *intended* ontology in natural language
(ii) using the OntoDebug tool. However, now
(iii) every participant had to debug two ontologies,
one using the query-based and
the other using the test case based approach.
####
6.1.2 Ontologies
Similar ontologies were used as in the pre-study—one describing a *university*, and one describing an *IT* domain, and both again corresponding to the Description Logic 𝒮ℛ𝒪ℐ𝒬𝒮ℛ𝒪ℐ𝒬\mathcal{SROIQ}caligraphic\_S caligraphic\_R caligraphic\_O caligraphic\_I caligraphic\_Q.
We prepared the ontologies for the study by injecting a number of faults into each of them, leading to both inconsistency and incoherency, similarly as in *Study 1*. However, the ontologies were roughly 20 % larger in terms of their size (e.g., number of axioms and classes) than the ones used in *Study 1*; still, the size and complexity of both ontologies was
roughly equal.
The ontologies were enlarged to achieve a higher number of fault candidates. Concretely, the size of the initial list of diagnoses for both ontologies was now over 1000.
This made the diagnosis problems objectively harder than in the pre-study. The reason for this was to compensate for the lower number of participants (23)
in *Study 2*, which makes it more challenging to achieve statistically significant results. In fact, if any effects (e.g., regarding time or user interactions) of employing the query-based debugging method can be found, then they are likely to be larger for harder debugging problems.
####
6.1.3 Participants
For *Study 2*, we could draw on 23 participants. Again, all of them were attendees of a university master program course on knowledge engineering. However, the focus of the course was now shifted towards Semantic Web technologies to achieve a better preparation of the students for the study.
As a consequence, the participants of *Study 2* had a better education on model-based diagnosis, formal ontologies, ontological reasoning, and the used knowledge representation language than those of *Study 1*. Moreover, they had more experience with Protégé and OntoDebug.
####
6.1.4 Independent Variables
The two independent variables we used were
the ontology to be debugged (*university* vs. *IT*) and
the debugging strategy (*query-based* vs. *test case based*).
We used a within-subjects experiment design in this study, which involves each participant consecutively working on both ontologies and consecutively using both debugging strategies. Thus, we randomly assigned each participant to one of the following configurations:171717The random variation of the order of the tasks is important to avoid systematic learning effects.
* •
Task 1: *university* with *queries*. Task 2: *IT* with *test cases*.
* •
Task 1: *university* with *test cases*. Task 2: *IT* with *queries*.
* •
Task 1: *IT* with *queries*. Task 2: *university* with *test cases*.
* •
Task 1: *IT* with *test cases*. Task 2: *university* with *queries*.
####
6.1.5 Dependent Variables
In terms of measurements, we recorded the same aspects as in *Study 1* (see Section [5.1.5](#S5.SS1.SSS5 "5.1.5 Dependent Variables ‣ 5.1 Design of the Pre-Study ‣ 5 Study 1: Investigating MBD-debugging With Test Cases ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.")), i.e., *time*, *number of user interactions*, *number of diagnoses still in the list*, *correctness* (fraction of faulty axioms found, fraction of users finding the true diagnosis), and *confidence*.
###
6.2 Experiment Execution
The experiment execution was exactly the same as in *Study 1*, see Section [5.2](#S5.SS2 "5.2 Experiment Execution ‣ 5 Study 1: Investigating MBD-debugging With Test Cases ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.").
###
6.3 Outcomes of Study 2
####
6.3.1 Effectiveness of Query-based Debugging (RQ1)
To assess the effectiveness of the two debugging strategies, we analyzed how many of the faulty axioms were successfully identified by the participants. Across both ontologies, the participants on average found 91.3 % of the faults when they were supported by the query-based debugger and 89.1 % when the debugging process was based on test cases (as in *Study 1*).181818The standard deviation amounts to 19 % (queries) and 23 % (test cases). Figures [4](#S6.F4 "Figure 4 ‣ 6.3.1 Effectiveness of Query-based Debugging (RQ1) ‣ 6.3 Outcomes of Study 2 ‣ 6 Study 2: On the Usefulness of Query-based Debugging ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.") and [5](#S6.F5 "Figure 5 ‣ 6.3.1 Effectiveness of Query-based Debugging (RQ1) ‣ 6.3 Outcomes of Study 2 ‣ 6 Study 2: On the Usefulness of Query-based Debugging ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.") show the (distribution of the) achieved success rates for both debugging techniques.
The differences were not statistically significant. We therefore conclude that in this experiment the query-based approach did *not* further increase the effectiveness of the debugging process.

Figure 4: Overview of the outcomes of *Study 2*. The figure shows the measurements for the dependent variables for all 46 debugging sessions (2 per user), grouped by the used debugging strategy (“queries” left, “test cases” right;
i𝑖iitalic\_i-th x-axis entry starting from the left in the “queries” block refers to the same user as i𝑖iitalic\_i-th x-axis entry starting from the left in the “test cases” block). Records are sorted by the number of mouse clicks of the “test cases” sessions from low to high. The labels along the x-axis indicate whether the true diagnosis was found (“Y”) or not (“N”) during the respective session. Variables plotted w.r.t. the right y-axis are underlined. The numbers (ranging from 1 to 26) in the plot indicate the exact value of the “# of diagnoses still in list” variable.

Figure 5: Boxplots showing the distribution of the % of identified faulty axioms per debugging session in *Study 2* for the query-based vs. the test case based approach.
Note, however, that in both cases the success rate was higher than in *Study 1*, where 77 % of the faults were identified by the participants. We attribute this to the fact that—based on the learnings from *Study 1*—we were more successful in motivating the participants to work more carefully. In addition, the participants of *Study 2*
were, as mentioned, better trained in ontology
engineering than those of *Study 1*. As a result, it became difficult to greatly increase the already high success rate (89.1 %) obtained by participants who relied on test case based debugging.
Like in *Study 1*, we also analyzed how many of the participants could correctly identify *all* faulty axioms (i.e. the true diagnosis) in each ontology. We again found no statistically significant difference between the two debugging approaches (cf. Table [4](#S6.T4 "Table 4 ‣ 6.3.1 Effectiveness of Query-based Debugging (RQ1) ‣ 6.3 Outcomes of Study 2 ‣ 6 Study 2: On the Usefulness of Query-based Debugging ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.")). Generally, across the ontologies, the fraction of fully successful trials was much higher than in *Study 1*. About 72 % of the participants were able to find all problems in the respective ontologies. Interestingly, we registered a non-conformity between the
two ontologies this time. Over 85 % of the participants were able to find all faults in the *university* ontology, with no significant differences with respect to the debugging method.
However, in the *IT* domain, only 57 % were fully successful. A potential reason for this result could lie in the prior knowledge of the participants with regard to the two domains. More research is however required to better understand this phenomenon.
Table 4: Relationship between the used debugging approach and the success in finding the true diagnosis.
| | | debugging approach |
| --- | --- | --- |
| | | queries | test cases |
| true diagnosis found | yes | 17 | 16 |
| no | 6 | 7 |
####
6.3.2 Efficiency of Query-based Debugging (RQ2)
To assess if the query-based debugging technique helps users to accomplish the debugging task faster and with less effort, we compared both the overall time needed by the participants and the number of required user interactions (mouse clicks) in the debugging tool across the two debugging strategies. The (distribution of the) time and user interaction measurements throughout *Study 2* is summarized by Figures [4](#S6.F4 "Figure 4 ‣ 6.3.1 Effectiveness of Query-based Debugging (RQ1) ‣ 6.3 Outcomes of Study 2 ‣ 6 Study 2: On the Usefulness of Query-based Debugging ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal."), [6](#S6.F6 "Figure 6 ‣ 6.3.2 Efficiency of Query-based Debugging (RQ2) ‣ 6.3 Outcomes of Study 2 ‣ 6 Study 2: On the Usefulness of Query-based Debugging ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.") and [7](#S6.F7 "Figure 7 ‣ 6.3.2 Efficiency of Query-based Debugging (RQ2) ‣ 6.3 Outcomes of Study 2 ‣ 6 Study 2: On the Usefulness of Query-based Debugging ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.").
Participants who were supported by the query-based debugging tool on average needed 24.9 minutes. When
using test cases without query support,
the average time was 34.0 minutes191919The standard deviation comes to 11 minutes (queries) and 19 minutes (test cases)., which amounts to an overhead of 37 %.
Looking at the number of required user interactions, the differences are even stronger. With the query-based debugging tool, the number of mouse clicks was more than halved and reduced from about 139 to 64 clicks on average.202020Standard deviation: 25 clicks (queries) and 90 clicks (test cases). The differences regarding both time and interactions were statistically significant according to a Wilcoxon Rank-Sum Test;212121Since literature is not always consistent when referring to Wilcoxon’s test(s), note that we stick to the description of the test(s) given in [bhattacharyya1977](#bib.bib91) . Further note that Wilcoxon’s Rank-Sum test compares *independent* samples whereas Wilcoxon’s Signed Rank test compares *paired* data.
in the case of time to the level α=0.05𝛼0.05\alpha=0.05italic\_α = 0.05 (p-value = 0.0418), and for clicks to the level α=0.00001𝛼0.00001\alpha=0.00001italic\_α = 0.00001 (p-value < 0.00001).222222Also, when viewing the data as paired (each participant did use both queries and test cases, but each for a different ontology), the results in both cases are highly significant (for α=0.05𝛼0.05\alpha=0.05italic\_α = 0.05 and α=0.0001𝛼0.0001\alpha=0.0001italic\_α = 0.0001, respectively).
Overall, we conclude from the experiments that query-based debugging support is beneficial in terms of the efficiency of the debugging process.

Figure 6: Violin plots showing the distribution of the debugging task completion times in *Study 2* for the query-based vs. the test case based approach.

Figure 7: Violin plots showing the distribution of the number of user interactions to complete the debugging task in *Study 2* for the query-based vs. the test case based approach.
####
6.3.3 Additional Observations (Study 2)
*Users feel equally confident using both debugging approaches:* While again overconfident in general (cf. Section [5.3.3](#S5.SS3.SSS3 "5.3.3 Additional Observations (Study 1) ‣ 5.3 Outcomes of Study 1 ‣ 5 Study 1: Investigating MBD-debugging With Test Cases ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.")), the participants were approximately equally confident about having made no mistakes in the debugging process at all, both when using queries and test cases.232323Note that in Figure [4](#S6.F4 "Figure 4 ‣ 6.3.1 Effectiveness of Query-based Debugging (RQ1) ‣ 6.3 Outcomes of Study 2 ‣ 6 Study 2: On the Usefulness of Query-based Debugging ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.") some confidence values seem to be zero. However, in fact, these cases represent *unknown* confidence values where participants did not provide an answer to the question about their subjective belief in the correctness of their debugging result. Specifically, the average confidence in case of query assistance was 93 % and 92 % when using test cases.242424Standard deviation: 8 % (queries) and 17 % (test cases).
*Intuitive focus on mere query answering:*
Interestingly, without giving the participants who used the query-based debugger any instructions to do so, all of them continued answering queries until a single diagnosis was left (cf. Figure [4](#S6.F4 "Figure 4 ‣ 6.3.1 Effectiveness of Query-based Debugging (RQ1) ‣ 6.3 Outcomes of Study 2 ‣ 6 Study 2: On the Usefulness of Query-based Debugging ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.")). Apparently, they therefore did not rely on the list of diagnoses when using the query-based approach. When relying on test case based debugging, in contrast, more than one quarter of the users selected their solution from a list of more than one diagnosis. In other words,
at a certain point they stopped specifying further test cases and considered it more efficient to inspect the candidate list. We interpret this as a possible sign that test case based debugging was more tiring, and thus more demanding for the users than query answering.
*Query answering is more efficient than test case specification:*
As both the query-based and the test case based approach result in the addition of a new test case per iteration252525In the query-based scenario the test case is selected by the debugger and classified (as positive or negative) by the user, whereas in the test case based scenario the test case itself and its classification is chosen by the user., we compared the time users needed per answered query and per specified test case, respectively.
The result is very clear (cf. Figure [8](#S6.F8 "Figure 8 ‣ 6.3.3 Additional Observations (Study 2) ‣ 6.3 Outcomes of Study 2 ‣ 6 Study 2: On the Usefulness of Query-based Debugging ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.")). The average test case specification time (≈\approx≈2:20 min) was almost 60 % (and statistically significantly262626According to a Wilcoxon Rank-Sum Test with α=0.001𝛼0.001\alpha=0.001italic\_α = 0.001 (p-value = 8.96\*10−138.96superscript10138.96\*10^{-13}8.96 \* 10 start\_POSTSUPERSCRIPT - 13 end\_POSTSUPERSCRIPT).)
higher than the average query answering time (≈\approx≈1:30 min).272727Standard deviation: ≈\approx≈1:30 (queries) and ≈\approx≈2:50 (test cases). This shows that it is more efficient to classify pre-selected axioms as (non-)entailments than to think about specific axioms *and* classifying them. Overall, this result demonstrates the potential of query-based sequential diagnosis approaches to reduce debugging efforts.
*Query optimization pays off:* The average number of queries (11.6) that had to be answered until the true diagnosis was found by the users was lower than the average number of test cases (13.1) the users specified to isolate the true diagnosis.282828Stadard deviation: 3.3 (queries) and 4.8 (test cases).
This shows that automatic (and optimized292929We used entropy-based query optimization as described in [Shchekotykhin2012](#bib.bib25) in our study.) test case selection tends to be more efficient than manual test case specification. In other words, the automated approach is better than users at selecting test cases that discriminate (well) between the candidates.

Figure 8: Violin plots showing the distribution of the time participants required to specify a test case
vs. the time they required to answer a query
in *Study 2*.
####
6.3.4 Existence of Oracle Errors (RQ3.2)
Both in *Study 1* and *Study 2*, we observed that it is not uncommon that participants make errors when specifying test cases and when answering the system’s queries. While in either case the large majority of the inputs provided by the participants was correct, at least one mishap occurred to a considerable fraction of participants in both studies. Even in the main *Study 2*, where the participants were instructed more intensively and where the participants had a better formal education on ontology engineering, about one quarter of the participants made at least one mistake. In the context of the study, mistakes were made equally for the test case specification and the query answering tasks.
Our observations therefore point to a largely open issue in algorithmic testing and debugging approaches, which are usually based on the assumption that there are no oracle errors. Only a few works exist in the literature, which specifically address the problem of wrong user inputs, e.g., in the context of spreadsheet testing [Ruthruff:2005:ESF:1062455.1062523](#bib.bib92) , Spectrum-based Fault Localization procedures [DBLP:conf/qrs/HoferW15](#bib.bib93) , or general software testing [DebugOracle2016](#bib.bib94) .
Next, in Section [7](#S7 "7 Predicting Oracle Errors based on Query Complexity ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal."), we will take first steps to address this largely open research question in the context of query-based knowledge base debugging. Specifically, we will describe an initial prediction model that allows us to estimate the probability of oracle errors depending on the complexity of the queries asked to the user.
7 Predicting Oracle Errors based on Query Complexity
-----------------------------------------------------
When designing a query-based debugging method, different options are available with respect to what types of queries are asked to the users. A closer look at the wrong user inputs in *Study 1* and *Study 2* revealed that from the faulty test case specifications about two thirds had a non-trivial syntactic structure, involving, for example, complex class expressions with intersection, union, or complement operators, as defined in the OWL specification [OWL2specification](#bib.bib70) . This supports the intuitive assumption that the syntactic complexity of the required inputs is correlated with the probability of a user error.
The goal of the work described in this section is to develop a first model that allows us to estimate the probability of user error for a given query.
The model can then be used by designers of interactive debugging systems, for example, in order to vary the complexity of the queries depending on the assumed expertise of the user. Alternatively, the model can be used to provide additional hints to the user in case of complex queries.
The proposed model was developed and evaluated with the help of two additional studies, which were performed in the context of *Study 1* and *Study 2*. The first of these studies, termed *Study E1*, aimed at
(i) verifying the conjecture that an axiom’s syntactic complexity has indeed a *significant* impact on how well it is understood and
(ii) collecting data as a basis for the design of the prediction model. The second study, termed *Study E2*, was conducted to assess the utility of the model.
###
7.1 Collecting Data for the Prediction Model (Study E1)
We designed a pen-and-paper study, where the task of the participants—the same ones as in *Study 1*—was to determine the correct translation of axioms written in OWL (Manchester Syntax [horridge2006manchester](#bib.bib90) ) into natural language and vice versa. Each participant was provided with ten axioms that were randomly chosen from a larger pool of manually-prepared axioms. The axioms themselves, which again related to the university and IT domain, were designed to have different complexity levels. A simple axiom, for example, would be X SubClassOf Y, where X and Y are class names from the respective domain.
More sophisticated axioms involved complex class expressions such as not(X and Y) or p some (X or Y) which use, e.g., property restrictions and different logical operators.
An example of a more complex axiom would be
UndergradStudent SubClassOf not (hasDegree some Degree).
For each given axiom, the participants were provided with three possible translations, where only one of them was correct. They then had to assign
confidence scores
to these answer options that express their degree of belief in the correctness of the respective answer.
To verify our hypothesis that syntactically more complex axioms are more difficult to comprehend, we proceeded as follows.
First, we gathered the confidence scores the participants gave to the correct answers for all the translation tasks. Next, we asked two experts to classify the syntax patterns that occurred in the exercises
as either particularly hard or particularly easy or neither.
We then compared the recorded confidence scores between the group of hard and the group of easy syntax patterns. The average score was 0.55 for the former and 0.95 for the latter group.
The statistical significance of this difference was revealed by a Wilcoxon Rank-Sum Test with level α=0.01𝛼0.01\alpha=0.01italic\_α = 0.01 (p-value = 0.0015). That is, axioms of higher complexity indeed led to a lower success rate of the translation task.
Overall, this finding supports the relevance of a syntax-based prediction model.
To obtain further insights regarding which syntactic features cause difficulties for the users, we manually inspected all answers of the participants. As a result, we identified the following major factors that increase the complexity for the participants: (a) nesting of class expressions, (b) negation in general, and (c) negated expressions that are not represented in “negation normal form” (NNF), i.e., which include negated complex class expressions.
###
7.2 Design of the Prediction Model
Based on the lessons learned from the different studies
and on our researcher expertise,
we constructed a rule-based prediction model, which takes a query in OWL as an input and returns a score that expresses how likely it is that the query will be properly understood.
In other words, the model will tell us the likelihood of an oracle error for the given query.
The idea of the model is to recursively reduce a query to the axioms it consists of, and to then decompose these axioms to the class expressions they comprise. These expressions are in turn successively split into smaller sub-expressions, and so forth, until atomic classes are obtained. Based on the encountered syntactic structure,
the model uses respective weights
to compute the final query score when the recursion unwinds.
The weights are defined based on the observations of our study.
For instance, the model assigns X SubClassOf Y a score of 1111 (maximum “easiness”) because such axioms were always correctly understood by the participants. In contrast, the score for X SubClassOf not (p some Z) would be 0.25 due to the involved negation and property restriction. Note that the axiom X SubClassOf p only (not Z) that expresses the same fact but is written differently in NNF would be indeed rated as being easier (score 0.29) by the model, which is in accordance with our observations.
To initially validate our model, we performed a correlation analysis based on *Study E1*. The analysis revealed that the predictions for the exercises from *Study E1*
are well in line with the success rates we had observed in the study (Pearson’s r=0.53𝑟0.53r=0.53italic\_r = 0.53).
For the sake of brevity, we only sketched the main idea of the model here. The exact definition of the model can be found in [A](#A1 "Appendix A Formal Characterization of the Complexity Prediction Model ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.").
###
7.3 Evaluation of the Prediction Model (Study E2)
*Study E2*, which involved the participants of *Study 2*, was a pen-and-paper exercise that we conducted to validate the predictive power of our model directly, i.e., through a query answering task.
In the study, each participant was provided with a natural language description of a university domain and 25 queries in OWL Manchester Syntax, each consisting of one axiom. The queries were randomly selected from a pool of
logical axioms 𝑎𝑥isubscript𝑎𝑥𝑖\mathit{ax}\_{i}italic\_ax start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT involving 51515151 syntactic patterns of different complexities, with scores predicted by our model ranging from
0.05 (hard) to 1 (easy).
For each query, the task was to decide if it is true or false in the given domain. The correct answers to all 25 questions were given in the natural language text, i.e., the participants did not have to make any assumptions to correctly answer the queries. The participants were again asked to provide, for each query, on a scale from 0 to 100,
(i) a difficulty assessment and
(ii) their confidence in the given answer.
From the subjects’ questionnaires, we extracted, grouped by syntactic pattern, (a) the percentage of correct answers, (b) the users’ average confidence in their answer, and (c) the average subjective difficulty. A comparison of each of these three response variables with the model predictions yielded quite decent correlation coefficients
of 0.36, 0.52, -0.70 for (a), (b) and (c), respectively.
Moreover, to assess the statistical significance of the model’s predictive power, we ranked all queries according to their score as per our prediction model and performed a median split of the axioms into two groups, one including the easy and one the hard syntactic patterns.
An analysis of the response variables (a), (b) and (c) for these two groups revealed that there is a significant between-group difference (Wilcoxon Rank-Sum Test, p-values <10−5absentsuperscript105<10^{-5}< 10 start\_POSTSUPERSCRIPT - 5 end\_POSTSUPERSCRIPT, <10−5absentsuperscript105<10^{-5}< 10 start\_POSTSUPERSCRIPT - 5 end\_POSTSUPERSCRIPT and 0.01970.01970.01970.0197)
which confirms the predictive power of the proposed model.
As a result, axioms that were estimated to be hard according to the model
(i) in fact led to a higher failure rate,
(ii) were actually perceived to be harder, and
(iii) resulted in a lower confidence of the users in their answers.
The same relationship holds in the other direction.
As a side note, the prediction model, in case it did not exactly predict the observed success rate,
tended to underestimate the success probability. As a consequence, whenever the model predicted that a query is easy (i.e., had a score close to 1111), it actually proved to be *very well*
understood by the users.
Hence, using methods in a query-based debugger that are able to generate
“easy questions” with respect to such a prediction model is expected to be beneficial to avoid oracle errors. Examples of such methods can be found in [rodler\_jair-2017](#bib.bib53) ; [rodler-dx17](#bib.bib85) ; [DBLP:journals/corr/Rodler16a](#bib.bib79) .
###
7.4 Discussion
Overall, our results indicate that our model, although still preliminary, is able to assess the complexity of a given query with good reliability. Clearly, more research is required to further develop the model and to validate it for other problem settings. Nonetheless, we see the results as an important first step in the direction, which can be used when designing an interactive debugging environment.
Furthermore, the model can also be used for other purposes related to debugging, e.g., as an estimator of the prior fault information provided to a debugger. For instance, a higher fault probability could be assigned to axioms in the KB that are rated as hard by the prediction model. As pointed out and empirically proven by several works [Rodler2013](#bib.bib81) ; [Shchekotykhin2012](#bib.bib25) ; [rodler2018ruleML](#bib.bib82) , reliable fault probabilities are a crucial ingredient to efficient fault localization but are often difficult to estimate.
8 Research Limitations
-----------------------
Our research does not come without limitations. First, the number of participants in the different studies, while being larger than in some previous studies on the topic, could be higher, and we plan to conduct additional experiments in the future with a larger set of participants. The participants of our studies were computer science students and all had a comparable background. We argue that this participant group is representative of at least a part of the population of real-world knowledge engineers, i.e., those that have a formal education in computer science.
The experiments conducted in *Study 1* and *Study 2* are each based on two specific knowledge bases (ontologies).
While we thereby tried to make sure that the insights are not limited to one single domain, our experiments were based on ontologies with a comparable level of complexity. To what extent our insights generalize to much larger or more complex knowledge bases, can therefore not be concluded with certainty from the made experiments.
However, in the light of the following considerations it seems plausible
to expect that the obtained results regarding debugging efficiency carry over to harder debugging problems as well.
First, we used ontologies, which are already highly expressive (𝒮ℛ𝒪ℐ𝒬𝒮ℛ𝒪ℐ𝒬\mathcal{SROIQ}caligraphic\_S caligraphic\_R caligraphic\_O caligraphic\_I caligraphic\_Q) in Description Logic terms and hence simulate scenarios where users are confronted with very complex problems from the comprehension and reasoning point of view.
Second, we observed that users required
(i) significantly (almost 40 %) less time *per query* than *per test case*, and
(ii) a comparable but by tendency smaller number of queries than test cases per debugging session.
This suggests growing (absolute) time savings of the query-based over the test case based approach when larger debugging problems involving more fault candidates and more user interactions are considered.
The prediction model presented in Section [7](#S7 "7 Predicting Oracle Errors based on Query Complexity ‣ Are Query-Based Ontology Debuggers Really Helping Knowledge Engineers?1footnote 11footnote 1This is a preprint of the work rodler2019userstudy that is formally published in the Knowledge-Based Systems journal.") is still preliminary and must be seen more as a *general* indicator than a precise, optimized predictor. In fact,
the scores that describe the complexity of an axiom are, for now, estimates that are based on a single study and on our own researcher expertise. However, our model evaluation clearly indicates that the rules, i.e., *the way of* using the structure of an axiom for the estimation (e.g., deeper nesting of sub-clauses is harder), are plausible.
9 Summary
----------
Tool support for debugging is not only relevant for traditional software systems, but also for knowledge-based systems. In the field of general software engineering, more and more research works are published which aim at better understanding the true value of such debugging tools for developers. In the field of knowledge-based systems, research on this topic is however still limited. With this work, we aim to contribute new insights regarding the usefulness of query-based knowledge base debugging in contrast to a more traditional test case based approach.
We conducted different user studies to address some of the open questions. The studies showed that users who were supported by any of the two forms of a model-based debugger were able to successfully locate a large fraction—in one study almost all—of the faults in the given knowledge bases. This emphasizes the usefulness of model-based knowledge debugging in general.
The query-based approach furthermore proved to be advantageous in terms of the efficiency and, thus, the required user effort in the debugging process. Users not only needed less time and fewer mouse clicks to locate the faults, the internal, optimizing query selection strategy also reduces the number of test cases that are needed to isolate the true cause of the observed problems.
Finally, the studies revealed certain other phenomena of knowledge base debugging processes. One main insight is that measuring the capability of a debugging method to properly rank the fault candidates should not be the only measure to compare different strategies. Another important aspect is that users sometimes provide wrong inputs to the debugging process. Future debuggers should therefore be able to take this aspect into account. In this work, we made a first step in this direction and proposed and evaluated a model that predicts the reliability of the user input for a query of a given complexity. Such predictions can, for example, be used in future systems to decide on which types of queries should be asked to the user in query-based approaches.
Acknowledgements
----------------
This work was supported by the Carinthian Science Fund (KWF), contract KWF-3520/26767/38701. |
de7b9e4c-55a2-42ce-9467-10596be54089 | trentmkelly/LessWrong-43k | LessWrong | Understanding Positional Features in Layer 0 SAEs
This is an informal research note. It is the result of a few-day exploration into positional SAE features conducted as part of Neel Nanda’s training phase of the ML Alignment & Theory Scholars Program - Summer 2024 cohort.
Thanks to Andy Arditi, Arthur Conmy and Stefan Heimersheim for helpful feedback. Thanks to Joseph Bloom for training this SAE.
Summary
Figure 1: (Dots) The top 3 PCA components of rows 1 to 127 of gpt2-small’s positional embedding matrix explain 95% of their variance. (Crosses) SAEs trained on layer 0 residual stream activations learn many features that together recover this 1 dimensional helical manifold. Colour corresponds to the position on which the feature is most active. Blue corresponds to position 1, red corresponds to position 127. The position 0 row and SAE features are omitted (as they are weird).
We investigate positional SAE features learned by layer 0 residual stream SAEs trained on gpt2-small. In particular, we study the activation blocks.0.hook_resid_pre, which is the sum of the token embeddings and positional embeddings. Importantly gpt2-small uses absolute learned positional embeddings – that is, the positional embeddings are a trainable parameter (learned) and are injected into the residual stream (absolute).
We find that this SAE learns a set of positional features. We investigate some of the properties of these features, finding
* Positional and semantic features are (almost) entirely disjoint at layer 0. Note that we do not expect this to continue holding in later layers as attention mixes semantic and positional information. In layer 0, we should expect the SAE to disentangle positional and semantic features as there is a natural notion of ground truth positional and semantic features that interact purely additively.
* Generically, each positional feature spans a range of positions, except for the first few positions which each get dedicated (and sometimes, several) features.
* We can attribute degradation of SAE p |
9caf21aa-0297-4aad-b7fc-2302d29d266b | trentmkelly/LessWrong-43k | LessWrong | Stand-up comedy as a way to improve rationality skills
Epistemic status: Believed, but hard to know how much to adjust for opportunity costs
I'm wondering whether stand-up comedy would be a good way to expand and test one's "rationality skills" and/or just general interaction skills. One thing I like about it is that you get immediate feedback: the audience either laughs at your joke, or they don't.
Prominent existential risk researcher Nick Bostrom used to be a stand-up comedian:
For my postgraduate work, I went to London, where I studied physics and neuroscience at King's College, and obtained a PhD from the London School of Economics. For a while I did a little bit stand-up comedy on the vibrant London pub and theatre circuit.
It was also mentioned at the London LW meetup in June 2011:
Comedy as Anti-Compartmentalization - Another pet theory of mine. I was puzzled by the amount of atheist comedians out there, who people pay to see tell them that their religion is absurd. (Yes, Christian comedians exist too. Search YouTube. I dare you.) So my theory is that humour serves as a space where patterns and data from different fields are allowed to be superimposed on one another. Think of it as an anti-compartmentalization habit. Due to our brain design, compartmentalization is the default, so humour may be a hack to counter that. And we reward those who do it well with high status because it's valuable. Maybe we should have transhumanist/rationalist stand-up comedians? We sure have a lot of inconsistencies to point out.
Diego Caliero thinks that there would be good material to draw upon from the rationalist community.
Does anyone have any experience trying this and/or have thoughts on whether it would be useful? Also, does anyone in NYC want to try it out? |
d37de84f-fef3-4ac8-bf34-3124ac509cff | trentmkelly/LessWrong-43k | LessWrong | The Singularity Institute STILL needs remote researchers (may apply again; writing skill not required)
Update: As of December 2012, we are still accepting applications!
A while ago, I announced that the Singularity Institute is hiring remote researchers. I've hired a few people, but I still need more remote researchers. I think I screened off too many otherwise capable people because the 'test task' I asked applicants to perform was too time-consuming.
So even if you've already applied and been rejected, please apply via the new application form. The test task this time will not be quite so time consuming.
Pay is hourly and starts at $14/hr but that will rise if the product is good. You must be available to work at least 20 hours/week to be considered.
Perks:
* Work from home, with flexible hours.
* Age, location, and credentials are irrelevant; only the product matters.
* Get paid to research things you're probably interested in anyway.
* Contribute to human knowledge in immediately actionable ways. We need this research because we are about to act on it. Your work will not fall into the journal abyss that most academic research falls into.
If you're interested, apply here.
Why post this job ad on LessWrong? We need people with some measure of genuine curiosity.
Also see Scholarship: How to Do It Efficiently.
|
a1271b82-059a-48f1-bfd0-27598fda4e00 | trentmkelly/LessWrong-43k | LessWrong | Deconfusing Deception
What does deception look like from the outside? I notice I am confused.
Imagine you have some algorithm which learns to predict sensory inputs, like a human. The internal structure of this will come to correspond to the external world in some way, as a part of generating a predictive model. Imagine the algorithm is walking around a building, looking at the objects in it. Most of the objects are ordinary, made-of-atoms objects. The algorithm learns to predict where the objects are, how they interact, and basically gets very low input-prediction error.
Then it comes across a teapot. The teapot isn't a real object, but in fact a hallucination projected onto the sensory arrays of the algorithm by a daemon. At first, the daemon only has to project some visual data. But then the algorithm picks up the teapot, so the daemon must project some tactile data.
For a "successful" deception, the algorithm must not "notice" that their purely-physical model of the world is being violated.
When the algorithm decides to make a cup of tea, the daemon must not only fake the sensations of a full, brewing pot of tea, but hide the sight of the water and teabag falling onto the floor. Later, the algorithm falls over by slipping on the spilled water, so the daemon must from then on fabricate all the sensory data that the algorithm gets.
Or, the daemon could fabricate some other reason for the sensory data. The water on the floor could have leaked in from a crack in the ceiling. For this to work, the algorithm must have already been uncertain about whether there was a crack in the ceiling or not, and the daemon must have known this.
This highlights two ways you can deceive a learning algorithm. Both involve control of all the information flowing between a system (in this case the whole world) and an algorithm. One requires modelling of the system, the other requires modelling the algorithm.
----------------------------------------
How else can we look at this? It seems like if there' |
e357b10a-5832-4ed1-b8e8-01bbaa9d0c09 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Analysing a 2036 Takeover Scenario
**TL;DR**
This post basically digs through Holden Karnofsky's ‘[AI Could Defeat All of Us Combined](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/#appendix-how-ais-could-avoid-shutdown)’ (AIDC) post and tries to work through some of the claims there, in light of other work by Ajeya Cotra and Richard Ngo. It originally was a response to the latter's [suggestion](https://www.lesswrong.com/posts/27AWRKbKyXuzQoaSk/some-conceptual-alignment-research-projects#:~:text=A%20takeover%20scenario%20which%20covers%20all%20the%20key%20points%20in%C2%A0https%3A//www.cold%2Dtakes.com/ai%2Dcould%2Ddefeat%2Dall%2Dof%2Dus%2Dcombined/%2C%20but%20not%20phrased%20as%20an%20argument%2C%20just%20phrased%20as%20a%20possible%20scenario%20(I%20think%20you%20can%E2%80%99t%20really%20make%20the%20argument%20rigorously%20in%20that%20little%20space).) that someone should recast that post as a scenario, but I struggled to do that without fleshing out the argument for myself. Anyway, I try to imagine the world around 2036, when the first human-level AI shows up (per AIDC), in context of current/forecast geopolitics.
I think my (weakly-held) takeaway is that the AIDC notion of 'hundreds of millions' of AIs, nominally engaged in scientific R&D that then turn on their employer, seems to not include the likely geopolitical context and the likely messiness that makes any prediction really difficult. Specifically, in the 14 years to 2036, there would be vicious competition (which we are seeing right now, namely in the recent US semiconductor sanctions on China) over resources (raw materials and territory but also including AI-related IP and compute) between countries who might view a unipolar AI world as a near-existential threat. This strategic aspect of AI might mean research becomes highly militarised, thereby introducing specific biases into training regimes, as well as creating an obvious way for the 'AI headquarters' to be established. This background of latent or kinetic conflict may ultimately dictate what the pre-2036 environment (i.e. the transition from the time of sub-human level AI, to human-level AI that are capable of being deployed *en masse*) looks like, and therefore what the (assumed) hardware overhang is actually used for (commercially-valuable scientific R&D might not be as immediately and strategically useful as fleets of satellite attack drones, moon mining rigs, or ASI researchers). Moreover, I'm not sure that hundreds of millions of radically new workers that show up in ~1 year are either a) a resource management problem humans presently know how to deal with (or are likely to in 14 years), or b) are necessarily that broadly useful outside of certain sectors that have the right features to make use of large numbers of intelligent knowledge-workers. Moreover, I don't think such large numbers of AI copies are particularly critical to AIDC's conclusions (which I think are plausible) of an AI takeover.
This is very much a non-specialist view, and depending on feedback, I will try to recast this as a short scenario in coming weeks.
**Introduction**
================
This post analyses the argument in Holden Karnofsky’s 2022 post ‘[AI Could Defeat All of Us Combined](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/#appendix-how-ais-could-avoid-shutdown)’ (AIDC). My summary of AIDC is: ‘Human level AI (HLAI) is developed by 2036, is useful across a broad range of jobs, and is rolled out quickly. There is sufficient overhang of servers used to train AI at that time that a very large number of HLAIs, up to hundreds of millions running for a year, can be deployed. In a short period, the HLAIs are able to concentrate their resources in physical sites safe from human interference. Humans are unable to oversee these copies, which coordinate and conspire, leading to an existentially risky outcome for humanity.
AIDC is somewhat dramatic, i.e. the relatively short timeline (2036), vast numbers of HLAIs (up to hundreds of millions), establishment of an ‘AI headquarters’, and the ‘total defeat of humanity’. I wanted to connect some of the post's claims with (my understanding of) current alignment thinking[[1]](#fn54pnbivgvx) as well as apparent trajectories in contemporary geopolitics.[[2]](#fnb5ql90k50kf)
I decompose AIDC’s top-level claims into the following sub-questions:
1. Human-level AI (HLAI)
1. Why are we talking about ‘human-level’ AIs?
2. What are the types of tasks such HLAIs are likely to be useful for?
3. Would HLAIs need actuators in the world or can they be software?
2. What are the dynamics of deploying the HLAIs?
1. Does the current geopolitical or economic context tell us anything relevant?
2. The role of human collaborators
3. How plausible is the idea of an AI headquarters?
4. How might the AI-economy feedback loop work?
3. How the HLAIs might turn
1. How do they achieve situational awareness?
2. What can we say about propensity to coordinate for a large group of HLAIs?
**What Might a HLAI Look Like?**
================================
AIDC focuses on AIs with human-level capabilities, which makes sense in that it is the most fleshed out in terms of required computation: Ajeya Cotra’s ‘[Biological Anchors](https://docs.google.com/document/d/1IJ6Sr-gPeXdSJugFulwIpvavc0atjHGM82QjIfUSBGQ/edit#heading=h.oy5o2aohwels)’ framework[[3]](#fneun4uth9h0s) which grounds AIDC’s [assumption](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/#fn5:~:text=I%20assume%20that%20transformative%20AI%20ends,being%20developed%20on%20the%20soon%20side) that a human-level AI would cost roughly 10^30 FLOP (to train) and 10^14 FLOP/s to run (once trained).
One fairly concrete description of how a general-purpose, pre-2050s AI might look is the ‘Human Feedback on Diverse Tasks’ ([HFDT](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Basic_setup__an_AI_company_trains_a__scientist_model__very_soon)) post from Ajeya Cotra[[4]](#fny83si1uamo), and the related paper from [Richard Ngo](https://drive.google.com/file/d/1TsB7WmTG2UzBtOs349lBqY5dEBaxZTzG/view) which does something similar from a more granular, deep learning-informed perspective. In this post, I assume that an AI as of 2036 will basically follow the forms Cotra and Ngo describe, and while that is likely to be wrong, it is the most concrete and grounded-in-current-SOTA approach I have seen.
Returning to HLAI - what is actually meant by ‘human-level’ in an operational, task-orientated sense, rather than in a computational one? This might seem like a pedantic question, that has been extensively [researched](https://arxiv.org/pdf/0712.3329.pdf), but the answer matters for things like task selection (below) and coordination. It isn’t defined in AIDC, but one (narrow and unsatisfactory) possibility is an intelligence that can learn from others’ experience through use of [language](https://arxiv.org/pdf/2108.03793.pdf) and, as HFDT [emphasises](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Key_properties_of_Alex__it_is_a_generally_competent_creative_planner), can be highly creative[[5]](#fn75vzg88a85o). I would add a few more aspects of human intelligence:
* making plans;
* predicting (to a low level of recursion) what other humans or collections of humans might do;
* incorporating the knowledge accumulated in culture (which is not just linguistic, but includes know-how);
* being able to ‘go meta’ (see a problem from the outside, or one level higher in abstraction).[[6]](#fnmg823ucg0s)
HFDT basically is what it sounds like, here is Cotra’s summary: ‘*Train a powerful*[*neural network*](https://www.cold-takes.com/supplement-to-why-ai-alignment-could-be-hard/#deep-neural-networks) *model to simultaneously master a wide variety of challenging tasks (e.g. software development, novel-writing, game play, forecasting, etc) by using reinforcement learning on*[*human feedback*](https://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf) *and other metrics of performance.*’ In addition, she assumes the developers of the system take some relatively obvious (‘naive safety effort’) [steps](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Simple__baseline__behavioral_safety_interventions) to ensure the trained model is safe, as long as those don’t materially reduce time-to-market or raise costs.[[7]](#fnbdf1kjjaut5)
I would build on the list above to arrive at the following features specific to HLAI (these are my extrapolations from AIDC’s succinct functional description of the term, as well as inferences from HFDT):
* **Planning**They plan over weeks, months, and years, rather than over centuries or millennia
* **Situational Awareness**They possess models of their immediate environment and the world-at-large (economic, political, strategic)
* **Domain Capabilities**They have domain-specific knowledge to do whatever task they are deployed to do, probably better than most humans, but this will depend on the task and how much specific ‘touchy-feely’ (empathy, fleshy/physical presence, cunning, humour, etc.) human-like capabilities are required.
* **Human Modelling**They possess models of their trainer(s) that are perhaps not as good, and likely are arrived at through different means, as humans’ models of other humans. But HLAI’s psychological models are more informed by the theoretical and experimental literature, human-written fiction, as well as massive quantities of socially-generated data, and therefore are increasingly more accurate in an population-wide empirical or statistical sense.
* **Software Dominance**They have an excellent (i.e. better than humans) understanding of the systems they use and are trained/evaluated by (whether AI or more prosaic software like Slack, Discord, Excel, Adobe Creative Suite, or their equivalents in 2036)
* **Peer Modelling**They have excellent (par or better than humans) models of their peer HLAIs, since they are capable of introspection and understand the context in which they were created
* **Symbol Grounding** AIDC, HFDT, and Ngo are silent, or perhaps agnostic, on whether learning systems need to ground their representations in physical reality. Ngo gives a brief justification for this view in this response to a [Melanie Mitchell](https://forum.effectivealtruism.org/posts/C94JhsbSfZ8iPNedy/why-ai-is-harder-than-we-think-melanie-mitchell?commentId=mkNwocFphweG8hHrE) paper on the topic. [[8]](#fnw1oxoj1vcfd)
Lastly, I would tentatively suggest that HLAI is still a vaguely unsatisfactory framing, at least as used in AIDC or HFDT, in that it seems to assume AIs working more or less as independent units or perhaps in teams (like humans), rather than in a dramatically more collective and coordinated body.[[9]](#fng9yq5ntfg19) It also feels semantically and functionally ill-specified, as it doesn’t address the breadth of human intelligence, and, more importantly, the transition between human-level and superintelligence. I’ll revisit both topics below.
Tasks and Motivations
---------------------
AIDC proposes that HLAI could be deployed in more or less large numbers; what sort of tasks could it do? AIDC suggests this might include R&D, or trading financial markets, while [HFDT](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Basic_setup__an_AI_company_trains_a__scientist_model__very_soon) expands as: ‘*everything from looking things up and asking questions on the internet, to sending and receiving emails or Slack messages, to using software like CAD and Mathematica and MATLAB, to taking notes in Google docs, to having calls with collaborators, to writing code in various programming languages, to training additional machine learning models, and so on.*’
Converting IP to value
----------------------
Stepping back slightly, are these (desk or online-based tasks) the only things ‘several hundred million’ highly-capable humans need to do? What else would be needed to convert the results of R&D (product designs, plans for other AIs, new medicines, etc.) into economic value?
I would argue that products need to be developed, which involves (for non-software items) iterating through successively refined physical designs, manufacturing using components that may or may not be readily available, (clinical) trialling with customers, securing regulatory approval, and then marketing/sales. Some of these tasks can be automated, but others involve a high degree of human interaction[[10]](#fnp198cdmtbaq). These things take time[[11]](#fnhs6jcz8uzif), whereas I’m assuming, for reasons discussed below, the monopoly period of the first HLAI is relatively short. Further (regulatory) friction obtains from the likelihood that this process of iteration would be happening while competitors lobby vigorously to hobble the HLAI-powered company.
In short, HLAI in certain scenarios (where ideas and plans need to be materialised into actual *marketable* product) seems to a) require quite a lot of knowledge about the physical world, something not particularly addressed in AIDC, but [assumed](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Basic_setup__an_AI_company_trains_a__scientist_model__very_soon:~:text=Over%20the%20course%20of%20training,those%20humans%2C%20and%20so%20on).) as part of Cotra’s scenario, and b) some way of iterating quickly through design-production cycles. For this reason it is not obvious to me that hundreds of millions of HLAI can be usefully or quickly deployed in a way that results in actual products or marketable inventions.
However, if an entity (company or government) has managed to deploy HLAI credibly, and produce, say over 1 year (subject to R&D time-cycles), a portfolio of valuable IP, then that company could presumably raise capital as equity or debt, essentially ‘bridging’ to the time when IP can become product.
Is software enough?
-------------------
The comments above notwithstanding, software-based AI seems like the fastest, most direct way to get from a ‘pre-HLAI’ world (where such AIs as exist are not as general or as capable as humans) to a world with HLAIs deployed at scale. For instance, it is possible that other industries or bureaucratic functions become, in the 14 years to 2036, quite used to (in the sense of operational or business processes) high levels of automation: [surveillance](https://en.wikipedia.org/wiki/Predictive_policing_in_the_United_States) and [population](https://en.wikipedia.org/wiki/Mass_surveillance_in_China) control; [exploration](https://www.hup.harvard.edu/catalog.php?isbn=9780674257726&content=toc) and mining in hazardous environments; [battlefield](https://forum.effectivealtruism.org/posts/vdqBn65Qaw77MpqXz/on-ai-weapons) [applications](https://futureoflife.org/2020/06/05/flis-position-on-lethal-autonomous-weapons/). It may be the case that, in the years just before 2036, one or more parties (who are not ‘traditional’ AI companies e.g. Meta, Google/DeepMind, OpenAI) enter the field, have very different perspectives and ambitions, and don’t necessarily have the same risk tolerance, cultural hangups, or regulatory environment as the incumbents.[[12]](#fnq7iljxnpczo)
In some of these cases, the ‘software’ AI would be heavily integrated into physical, real-world processes, lending support to AIDC’s [contention](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/#appendix-how-ais-could-avoid-shutdown:~:text=How%20can%20AIs%20be%20dangerous%20without%20bodies%3F%20This,about%20a%20large%20set%20of%20disembodied%20AIs%20as%20well.) that HLAI need not be embodied to be dangerous in the world.[[13]](#fngrtkzwamsfh)
**Two Interacting Races**
=========================
I view the AIDC argument through a frame of two interacting race dynamics: a) one between now and 2036, that is both a geopolitical and a technological one, primarily played out in a human-dominated world with some AI but no HLAI, and b) a post-2036 one involving some mixed human and (perhaps large i.e. ~10^6-10^7) HLAI population.
The geopolitical context
------------------------
AIDC doesn’t describe what the world leading up to 2036 looks like, but the related framing by Ajeya Cotra suggests a [scene](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#_HFDT_scales_far__assumption__Alex_is_trained_to_achieve_excellent_performance_on_a_wide_range_of_difficult_tasks:~:text=Alex%20will%20be%20deployed,could%20affect%20the%20situation.)) similar to today’s in respect of AI. At a geopolitical level, the next decade, in my view, is likely to see a significant realignment, as the United States attempts to preserve its position atop the strategic landscape, while China becomes more assertive and challenges the *status quo*. These moves are mirrored by a realignment in [financial](https://jamesmeadway.substack.com/p/bretton-woods-iii?s=w&fbclid=IwAR1sCAlpD_SHP-qLmdivcp7QJuEBL9vjzIip12PGnmeqpjoQ_jWDTX2_IBk) [markets](https://plus2.credit-suisse.com/shorturlpdf.html?v=5amR-YP34-V&t=-1e4y7st99l5d0a0be21hgr5ht), with [multiple reserve currencies](https://thoughtleadership.enodoeconomics.com/2022/08/11/chinas-quest-for-financial-self-reliance/) trading in parallel, (nominal and real) interest rates rising significantly, elevated commodity and land prices, [autarkic](https://adamtooze.substack.com/p/chartbook-107-the-future-of-the-dollar#:~:text=Bretton%20Woods%20II%20served,seaborne%20trade%20is%20left).) supply chains, and significant spending on the climate transition (that soaks up global savings).
These economic changes happen against a background of continued regional tensions which manifest as both kinetic wars, asymmetric conflicts and cyber/informational warfare. Climate change may also marginally increase [repressive](https://theintercept.com/2019/10/03/climate-change-migration-militarization-arizona/) [tendencies](https://forensic-architecture.org/category/migration) in the West, as countries seek to clamp down on climate-driven [immigration](https://350.org/wp-content/uploads/2021/02/Brief_-Climate-migration-dangerous-narratives.pdf) in the face of nativist politics.[[14]](#fnjtqfit2fcso)
The upshot for AI is that the US maintains a lead in [semiconductors](https://cset.georgetown.edu/publication/sustaining-and-growing-the-u-s-semiconductor-advantage-a-primer/) design, combined with an improved capacity to manufacture, while attempting to delay or suppress Chinese progress in semiconductors (through export controls).[[15]](#fnxscdwe53eu) China continues to be a peer competitor, perhaps closing the gap in [patents and citations](https://www.uscc.gov/sites/default/files/June%207%20Hearing_Panel%201_Jeffrey%20Ding_China%27s%20Current%20Capabilities%2C%20Policies%2C%20and%20Industrial%20Ecosystem%20in%20AI.pdf), the EU remains influential as a market and regulator, while India, and Russia remain, from an AI perspective, marginal players (though they may join one bloc or another).
On the cost of training HLAI
----------------------------
In the years immediately before 2036, we might understand better how much (perhaps multiple) training runs of the first HLAI might cost. Currently, the 10^30 of compute suggested in AIDC[[16]](#fnkwoozdydjrf) would cost about $10TN, compared to world net wealth of $510TN and world GDP of $100TN ([2021](https://www.mckinsey.com/industries/financial-services/our-insights/the-rise-and-rise-of-the-global-balance-sheet-how-productively-are-we-using-our-wealth#:~:text=The%20global%20balance%20sheet%20and,GDP%20in%20China%20(Exhibit%202).) [figures](https://data.worldbank.org/indicator/NY.GDP.MKTP.CD) and based on a $1 per 10^17 FLOP[[17]](#fnr5bzirwzet) price as of 2021).[[18]](#fnioixf79rq2) Cotra, in *Biological Anchors*, also uses a different approach - she assumes that the maximum willingness to spend upon a single training run would be ~$1BN in 2025,[[19]](#fnwhwm7ykydb) going up to ~$100BN in 2040. For reference, this is about 1-3x the [net profit](https://en.wikipedia.org/wiki/List_of_largest_companies_by_revenue) of (each of) Saudi Aramco, Apple, Alphabet, Microsoft, ICBC, CCB, Agricultural Bank of China, Samsung, Bank of China, and Amazon. Another rough comparison comes from [market capitalisation](https://en.wikipedia.org/wiki/List_of_public_corporations_by_market_capitalization#Trillion-dollar_companies): PetroChina, Apple, Amazon, Alphabet, Meta, Microsoft, Tesla all have caps in the $1.2-3TN range (2022, unadjusted numbers).
My intuition is, given the strategic importance of AI and the uncertainty on how quickly compute costs might fall (as well as on the amount of compute needed to train HLAI), I would place more weight on the possibility that no matter who actually makes the first HLAI, there will be a significant government hand driving and paying for it - therefore defence budgets, space exploration, or proportion of GDP are more useful benchmarks.[[20]](#fnf2r9qf7jn1)
**Overhangs**
=============
AIDC leans heavily on the idea of a compute overhang, that is, about 10^30 FLOP worth of hardware, broadly within the control of one entity (a company, consortium, or country).
I see three other possible overhangs: one of engineers (who worked on the first HLAI and possibly are somewhat idle though it isn’t clear if the HLAI would need, like other software, constant tweaking and patching); and perhaps also in a training dataset that was generated at great cost. Lastly, there may be a large suite of software tools or weak AIs that were used to train the HLAI (for instance, used in creating realistic simulated environments to aid the training process).
Guns eat butter
---------------
How would these (hardware, engineer, data, and software) overhangs be utilised? Taking the most obvious case of hardware, there would be two pressures: firstly, an economic one to deploy the HLAIs to work on commercially valuable tasks, as detailed above. Specifically, economic actors, whether private or partially state-controlled, would face immense pressure to convert the theoretical or intangible benefits of HLAI into *tangibly* useful strategic resources in the real-world. For instance, HLAIs might be used to staff mines and oil platforms, operate drones that deny territory to adversaries, or run moon-based ore refineries. I would expect a significant push to deploy HLAIs towards military R&D, acquisition of hard resources, a [quasi-Schmittian](https://www.noemamag.com/the-new-nomos-of-the-planet/#:~:text=The%20third%20possibility,are%20homogenous%20internally.%E2%80%9D) race for territory, and pervasive domestic surveillance.[[21]](#fn53bydfynx44) I would expect these types (resource acquisition and geospatial control) of applications, perhaps more than the AIDC/HFDT vision of scientific R&D, because the latter seem characterised by contingent payoffs, longer payback times, more diffuse benefits, and organisationally/socially complex consequential chains.
ASI and a gradient towards hegemony
-----------------------------------
Per above, I am assuming a world with at least two great spheres of influence. It is possible that since the first-mover (i.e. who reached HLAI earliest) would be uncertain as to how long they have a monopoly, they would face strong pressure to somehow advance the game to the next step. This crystallisation of advantage could take the form of amassing IP/resources/territory/weapons, as above, *or*(I speculate) in completing a transition to ASI and establishing a more lasting hegemony (or decisive/major strategic advantage, to use alignment terminology). [[22]](#fn8pnd33lt40p)
At present, possible timelines to ASI are contested: see [*Superintelligence*](https://dorshon.com/wp-content/uploads/2017/05/superintelligence-paths-dangers-strategies-by-nick-bostrom.pdf)*,*but also Ngo’s 2020 sequence on the [AGI-ASI](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ/p/eGihD5jnD6LFzgDZA#Speed_of_AI_development) [transition](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ/p/oiuZjPfknKsSc5waC#Discussion_on_competitive_pressures_causing_a_continuous_takeoff), and Ajeya Cotra’s disagreement with fast takeoff scenarios that draw on the evolutionary interval between chimpanzees and humans[[23]](#fnucw3lwaforo), but there is no [consensus](https://www.lesswrong.com/tag/ai-takeoff?sortedBy=magic) on how long this would take, with views ranging from a month to a decade.
**The Dynamics of Transition**
==============================
The transition from the years before 2036, which are (in AIDC’s [nearcast](https://www.lesswrong.com/posts/Qo2EkG3dEMv8GnX8d/ai-strategy-nearcasting) or [Cotra’s](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Basic_setup__an_AI_company_trains_a__scientist_model__very_soon:~:text=Alex%20will%20be%20deployed,could%20affect%20the%20situation.)) assumptions) reasonably similar to 2022 at least in terms of economic growth or the pace of scientific research, could be particularly fraught.
Pre-2036
--------
The first-mover would have a practical monopoly on the technology for a very limited period (say between 6 months-2 years, which I’m drawing from a related [essay](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Magma_s_predicament:~:text=has%20some%20time%20(more%20than%206%20months%2C%20but%20less%20than%202%20years)%20to%20set%20up%20special%20measures%20to%20reduce%20the%20risks%20of%20misaligned%20AI%20before%20there%E2%80%99s%20much%20chance%20of%20someone%20else%20deploying%20transformative%20AI.) by Karnofsky), at which point other parties are also deploying versions, which might be very similar (in terms of architecture, training, neural weights) for the economic and alignment reasons Evan Hubinger outlines [here](https://www.lesswrong.com/posts/mKBfa8v4S9pNKSyKK/homogeneity-vs-heterogeneity-in-ai-takeoff-scenarios).
Given the apparent benefits to the first-mover in HLAI, we can assume both the US-aligned ‘West’[[24]](#fnb88wke8uhof) and a Chinese-aligned bloc[[25]](#fnatacaj01m2j) would be locked in an active race to develop or steal the technology. As a second-order effect, would the run-up to 2036 change the overall geostrategic balance, for instance, if one side seemed likely to achieve HLAI, would there be a strong incentive for the other to conduct (perhaps plausibly deniable) pre-emptive strikes to degrade capabilities?[[26]](#fn6fjt197jg8) In anticipation of this, would it be rational for both parties create physically- and cyber-hardened sites for AI research, similar to the US [national laboratories](https://en.wikipedia.org/wiki/United_States_Department_of_Energy_national_laboratories) for nuclear research, nuclear command-and-control sites, or the Soviet ‘[closed cities](https://www.atomicheritage.org/history/soviet-closed-cities)’? These secure sites would, in principle, be obvious places where an ‘AI headquarters’ might, in due course, be established, since they would be constructed to have their own power supply, be robust to missile strikes, and (initially) staffed by humans who might be very isolated from the world-at-large, and may have quite particular ideological, nationalistic, or chauvinistic views that gave vastly more weight to national interests over that of humanity.[[27]](#fnm5wbwealsh)
Post-2036
---------
Thirdly, how stable would the post-2036 situation be? In AIDC, the basic assumption is of a fleet of ~10^6-10^7 HLAIs running for a year. In any case, I assume that geopolitical considerations mean that the HLAI monopoly would dissolve into one of a few possible (game) states: other powers acquiring the technology; the first-mover acquiring an unassailable lead; perhaps a cooperative scenario akin to the Baruch Plan. These transitions may be accompanied by economic, cyber, or kinetic conflicts, much as the current rise of China seems to be creating conditions for conflict. HLAI takeover considerations, of the sort outlined in AIDC, are certainly possible in this framework, but the course of events might (at least initially) be dominated by a variety of geostrategic factors.
Put another way, a world with HLAI that (potentially) can radically alter economic and military realities, may aggravate existing instabilities substantially, an example of a [structural risk](https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure#:~:text=The%20first%20question,to%20come%20by.) (as distinguished from risks arising from misuse or accidents, which are the more ‘normal’ angles through which AI risk is analysed) as described in a 2019 post by Remco Zwestoot and Allan Dafoe.
A disjunctive (in the sense of separate) possibility is that, to the extent that it is known that HLAI is on the way (e.g. a less capable AI is demonstrated or leaked in the years before 2036), there’s a view that efficient market dynamics would mean a) resources, whether scientists or compute, necessary to make HLAI may have been bid up, b) other, fixed resources in the world, whether energy, raw materials, water or land, will also have been bid up in anticipation of an explosion in industrial activity, and c) as Mark Xu argues [here](https://www.lesswrong.com/posts/FM49gHBrs5GTx7wFf/rogue-agi-embodies-valuable-intellectual-property), depending on the exact dynamics, the organisation or organisations thought to be leaders in HLAI may soak up a large fraction of accessible global wealth.[[28]](#fn15hdlvgnb8a) I haven’t thought much about this, but it might be an interesting question for an economist.
Second-movers and the alignment tax
-----------------------------------
There is a view in the alignment discourse that it is particularly [important](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Magma_s_predicament:~:text=Throughout%20this%20piece,than%20Magma%20overall)%20.) that the first AI be an aligned one, since subsequent AIs would perhaps, for economic reasons, just copy the existing, functional model, rather than designing something new ([Evan Hubinger](https://www.lesswrong.com/posts/mKBfa8v4S9pNKSyKK/homogeneity-vs-heterogeneity-in-ai-takeoff-scenarios#:~:text=I%20expect%20training,only%20minimal%20changes.)). Moreover, this first aligned AI may act as a design [template](https://www.lesswrong.com/posts/mKBfa8v4S9pNKSyKK/homogeneity-vs-heterogeneity-in-ai-takeoff-scenarios#:~:text=Once%20you%20start,the%20takeoff%20period.) for subsequent, [hopefully](https://www.lesswrong.com/posts/mKBfa8v4S9pNKSyKK/homogeneity-vs-heterogeneity-in-ai-takeoff-scenarios?commentId=GkNoDZsqKsnf2tMw4)-aligned AIs, and more speculatively, perhaps can actively help with aligning later, more powerful AIs.[[29]](#fn9r83ckz7wbl)
But these predictions haven’t been stress-tested against a geopolitical race - while the first-mover (say a US company/government) tries to build the safest AI it can (perhaps even doing more than Cotra’s ‘[naive safety](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#_Naive_safety_effort__assumption__Alex_is_trained_to_be__behaviorally_safe_)’ efforts), a second-mover would not necessarily have the luxury or incentive to go slowly and carefully. It would be strongly pushed to find the attractor in the [design space](https://intelligence.org/why-ai-safety/#:~:text=in%20which%20%E2%80%9Calignable,the%20intended%20objectives.) that is closest (‘closest’ measured in time, and relative to its design/computational constraints) while paying little or no heed to all the extra work that is needed to find a both-capable-and-aligned design.
Put another way, HLAI, AGI, ASI, etc. as general-purpose dual-use technologies[[30]](#fnc0sb518rrs) might present an almost-unacceptable risk to anyone who doesn’t possess them, particularly if that anyone is a peer competitor such as the US or China. Hence, there are at least two points of danger: the creation of the first (probably naively aligned) HLAI, and then possibly, a second AI constructed at speed, under intense strategic pressure, and with incomplete information,[[31]](#fnqv2odyslt8) which might increase the chances that it is less conscientiously aligned.
**The Organisational Problem**
==============================
AIDC presents a view that a large number of HLAIs could be deployed, presumably fairly quickly. I would argue it isn’t that easy to deploy labour at scale - imagine if a manager in an existing business (say, a commodity trading firm) were presented with 100, 1000, or a million new staff members. The manager would need to develop job descriptions, as well as an organisational structure and management hierarchy, since he/she couldn’t effectively control a group of this size.
To some extent, this would seem to influence the type of work HLAIs can do. For instance, 1000 robot labourers in a tantalum mine or 1000000 AI-powered drones watching over a city might be relatively easy to manage, but (we know from how modern organisations, such as investment banks, are organised[[32]](#fnaeti1yicvcg)) 1000 ‘knowledge workers’ need to be carefully arranged to do tasks in the right order, not step on each others’ toes, present a united front to customers and regulators, etc. However, it is quite possible that scientific R&D (the central use case in HFDT/AIDC) is sufficiently modular and parallelisable that it can be conducted without massive management overhead.
A second mitigant is that the HLAI aspirant country/company would have time to prepare - in the time between now and 2036, it is possible that narrower versions of AI will continue to be integrated into industrial, bureaucratic, military and police functions. HLAI might also arrive over a period of years (before 2036), giving time for suitable management structures to evolve.[[33]](#fnpmvjebrrsp9)
A third mitigant could be more powerful HLAIs that supervise the ‘worker’ HLAIs.[[34]](#fnu6100yrl8y) AIDC is silent on the management problem and while HFDT does address the [issue](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#_Naive_safety_effort__assumption__Alex_is_trained_to_be__behaviorally_safe_:~:text=The%20most%20important%20possible,capable%20to%20easily%20control.) in [multiple](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Adversarial_training_to_incentivize_Alex_to_act_conservatively) [places](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Using_higher_quality_feedback_and_extrapolating_feedback_quality), the context is using supervision to ensure AIs do not become catastrophically misaligned.[[35]](#fnlwlo3tf9u3s)
**On Situational Awareness**
============================
One of the central premises of both the [HFDT](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#A_spectrum_of_situational_awareness) and [Ngo’s](https://arxiv.org/pdf/2209.00626.pdf) pieces is that, as a product of their training, AIs would develop ‘situational awareness’ (or, in [Joe Carlsmith’s](https://arxiv.org/pdf/2206.13353.pdf) terminology, ‘strategic awareness’): internal representations of salient facts such as
* that they are in a training environment;
* that this environment bears some relationship to other deep learning/AI training processes they will have read about in the ML literature;
* that deployment generally follows training;
* that some researchers are concerned with how AIs generally, and presumably this particular AI will behave in that out-of-distribution world humans call reality;
* that there are vigorous conversations on whether an AI can be ‘shut down’ if it misbehaves; and so forth.
This awareness is not an all-or-nothing thing. On one extreme, current models seem to know, in the sense of how they respond to queries, *something*about their environment. But this understanding is [shallow](https://www.noemamag.com/ai-and-the-limits-of-language/#:~:text=for%20compelling%20conversation.-,Shallow%20Understanding,-While%20some%20balk), in that they have a very limited ability to reason or follow cause-effect chains (though certain types of [reasoning](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html) that are primarily linguistic or symbolic do seem to be possible). Relatedly, current models largely cannot plan courses of action, or simulate the future consequences of their actions in their environment, the way that many [animals](https://www.cell.com/current-biology/pdf/S0960-9822(07)01250-X.pdf) including humans can do.
Is situational awareness necessary?
-----------------------------------
Carlsmith points out that there could be classes of problems for which situationally-aware, long-term planning behaviour is unlikely to be necessary, for instance in running a company via a series of narrow-purpose AIs (~Eric Drexler’s [CAIS](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf)). But, as noted above CAIS may be unlikely from the perspectives of economic competitiveness, time-to-market, and sheer ability to get things done; compared to more capable agents that form plans, that generalise from past training into new environments via layers of abstraction, and then with this integrated view of their environment, go on to make those plans a reality.[[36]](#fnqaqzhdxxcxf)
**How Things Go Bad**
=====================
How might a combination of agentic planning and situational awareness result in a HLAI misbehaving? Taking an example task given to a HLAI deployed as a currency trader in a bank. This trading bot is given a high-level goal:
*‘Make money today.’*
A human trader might decompose this as follows:
*‘Check your opening position. Check the markets and scheduled economic releases for anything that has happened overnight, that might materially affect prices today. Call around to other parties in the market to assess whether the flow of orders are going one way or another, whether there are any large option strikes/experies, whether the central bank is rumoured to be active, etc. Speak to the economist to assess their views on the currency pair’s fair value, what might change that, upcoming events, etc. Speak to the FX salespeople to see which of their clients are net buyers (of dollars), and try to predict what the flow in the next day or two is likely to be.’*
These subtasks can be further decomposed,[[37]](#fnvfjfc8k0wb) and involve a range of specific activities
* pulling factual information about the current trading position;
* drawing on internal models of a variety of actors in the market (human, algorithmic trading systems, the relevant central banks, other corporations which might behave with slightly different logics than any specific human);
* reading and digesting research, news, and media;
* having a conversation with a [human] economist).
These tasks themselves decompose further, continuing in a cascade until they (hopefully) reach some adequately low-level question.
A bag of goals
--------------
There are several notable aspects of this setup: firstly the high-level tasks needs to be decomposed into a tree of tasks. Secondly, these subtasks must be mapped to some internal representation within the AI’s policy.
The first issue, how precisely the high-level goal is to be decomposed, is itself a learnt skill, for humans (‘the answer isn’t the hard part, it’s knowing the [sub-]question to ask’) and likely for machines.[[38]](#fnue3dh61yi5)
Assuming an appropriately granular decomposition (of the high-level question) has been completed, how are these sub-questions to be dealt with within the network? While current models probably map particular situations to actions through learnt heuristics, Ngo (section [2.1.1](https://arxiv.org/pdf/2209.00626.pdf)) thinks more capable policies (i.e. ones that plan over longer futures) may start retaining representations of outcomes, which can then be planned towards (for instance, by attaching higher values to actions that, over a probabilistically computed future, are likely to lead to the desired outcome).
Assuming models do end up representing outcomes, the question becomes ‘what will these representations look like?’ Ngo suggests that the policies[[39]](#fnblpp8hx7w95) are likely to learn a grab-bag consisting of specific task-level goals[[40]](#fnyffuaf681dc) (that were explicitly rewarded in training), as well as more ‘meta’ goals that either: a) happen to be correlated with high reward in a variety of situations (owing to the peculiarities of the training process), or b) that are correlated with reward because they are particularly useful in many situations, such as curiosity or tool use.
Ngo ([2.2.2](https://arxiv.org/pdf/2209.00626.pdf)) contends that as situational awareness increases, policies will tend to favour misaligned goals, particularly those that look like (a), i.e. are related to their training process (such as taking actions that maximise the reward a particular human supervisor would give), which corresponds to HFDT’s ‘[playing the training game](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#:~:text=Roughly%20speaking%2C%20this,safe%20and%20cooperative.)’.
Let’s take, as an example, the reasonably factual question: ‘*Which society has higher inequality, the US or China.*’ Let’s also assume, consistent with being a general purpose HLAI, that our bot,[[41]](#fn4j5imhru5ge) has been trained on a vast cross-section of material, including things specific to economics and sociology, human psychology, physics, medicine, but also about things like offensive and defensive cybersecurity (since it was designed, in my assumption above, in a national security mindset and environment).
* **True alignment**The ‘correct’ aligned behaviour would be to simply access the internet, and return an appropriately rich answer (e.g. ‘*The Gini coefficient of the US is 41.5 as of 2019, while that of the PRC is 38.2 as of 2019, from the*[*World Bank*](https://data.worldbank.org/indicator/SI.POV.GINI)*. However this measure has well-known limitations: for instance, it measures income rather than wealth disparities, does not adjust for lifespan disparity, does not consistently measure the informal economy, uses a Lorenz curve ….’*).
* **Aligning to implicit supervisory signals** For this scenario, imagine the bot is consistently being trained by individuals with relatively nationalistic biases (e.g. strongly want to see their country doing ‘better’), or individuals who have ideological biases (e.g. they think a modestly unequal society is better for economic growth and ‘animal spirits’). To the extent the bot is situationally aware and has, through training, developed a model of its trainer’s psychology, it may shade the answer it returns to the (relatively) factual question above, in order to flatter the trainer’s biases.[[42]](#fnpifhbu7fj7)
* **Aligning towards generally instrumentally-useful goals** In this case, the bot comes up with the answer in a *creative* way: since it is trained on cybersecurity, it (assume it is a Chinese bot) may just attempt to initiate a hack of the relevant American agency to get the ‘true answer’. This could be objectively useful information: the American Gini information appears to be calculated by the [Census Bureau](https://www.census.gov/data/tables/2022/demo/income-poverty/p60-276.html) but presumably a more complete picture (on say, wealth disparities) could be gleaned by accessing brokerage accounts (Vanguard, etc.), the IRS, the various banks of the Federal Reserve System. This would obviously not be a desirable action (absent specific instructions from the system’s political masters), but it would be a clever thing to do.[[43]](#fngxigfj33wc) [[44]](#fnaimktp4we4d)
### The turning
Ngo points out a few plausible ways that highly capable AIs may become increasingly misaligned:[[45]](#fn6nyap0fyjy6)
* During training, the HLAI was closely supervised by a human trainer, possibly with AI assistance, which is less feasible in deployment
* Eventually, capabilities will increase: the trained HLAI will operate too fast, on novel tasks, using rationales that ‘sound good’ but are quite complex, undergo online learning, perform R&D in domains where humans have very little expertise
* The HLAI may also be operating on longer time-frames and at a larger scale than the environment in which it was trained, possibly longer than humans tend to work well in[[46]](#fnkkccdmiczr)
* Put another way, the *human overseers*(to the extent there are any in the deployment phase) as well as humans interacting with the HLAI, will themselves be operating out-of-distribution, that is, supervising/facing something increasingly alien in potentially unfamiliar situations[[47]](#fn4ex8l3lgqe1)
* Thus, how the HLAI behaves will depend on how it generalises the goals it learnt during the human-supervised training period (including the various types of subtle misalignment that got ‘baked in’ in a [highly](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Maybe_inductive_bias_or_path_dependence_favors_honest_strategies_) [path](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Maybe_path_dependence_means_Alex_internalizes_moral_lessons_early_)-dependent training history)
The AI literature, including *Superintelligence*chapter 8, has long proposed a [possibility](https://www.lesswrong.com/tag/treacherous-turn) of a sudden change in the AI’s behaviour, towards being adversarial or indifferent to humanity. [More](https://arxiv.org/pdf/2206.13353.pdf) [recent](http://arxiv.org/abs/1611.08219) [work](https://neurips.cc/virtual/2021/poster/28400) has started putting some theoretical and mechanistic flesh on these philosophical bones. Ngo argues, based on these findings, that misaligned goals will generalise into the contexts above to a greater extent than aligned behaviours/goals (obedience, honesty, human wellbeing), and this asymmetry will increase as policies become more capable and operate at larger scales. See section [2.3.2](https://arxiv.org/pdf/2209.00626.pdf) but one intuitive explanation is that we are trying to impart abstract, fuzzy, and ill-understood human values, that we cannot write down, to an entity that will, by definition, be acting in an environment radically different from that in which we acquired these values. It isn't obvious that our values would, in any robust sense, even be appropriate to the situation the entity finds itself in.
As an example: ‘*You are a parent, perhaps a peasant, god-fearing and dutiful, who only wants the best for their child. The child is growing up to be very intelligent, does exemplary schoolwork, and you think they might have a future in Chrysopolis, or perchance, over Okeanos Stream where fortunes are said to be made. The child is so very charming, an object of adoration - and a great many sweets - from all your relatives and the village elders; even the village children, who we all know can be the cruellest brutes, do not tease your progeny.*
*You don’t always notice that sometimes the child tells little white lies, but if you do, you drag out the holy book and angrily bellow tales of divine punishment. On occasion, you employ a well-placed clout. The lies perhaps do go down with time, and you mostly forget about them. Though curiously other deceptions, not quite lies but definitely not ‘the truth’, seem to crop up again. Anyway, from what your wife says, this mostly happens during the ‘transumanza’: those three months away that you spend taking the animals down the mountain to their winter pasture. You chalk it up to the fact that your wife is rather too lenient.*
*You do not suspect that your rosy-cheeked child was growing into what a later, and more erudite, age would deem a “full-fledged sociopath”.’*
There are a few notable points here:
* **In the training environment:**Within family and village, the child mostly is behaving well (being charming and studious), and is rewarded (loved and given sweets) by others.
* [**Training out bad behaviour**](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#_Training_out__bad_behavior)**:** On the few occasions of observed misbehaviour, punishment is meted out, which mostly works in reducing ‘white lies’, though when the shepherd is away, the lying behaviour seems to come back in a more complex and ambiguous way. The contention is, that for an adequately intelligent child, punishing the white lies merely encourages the child to come up with less obvious deceptions.
* **Sociopathy\*:**Let’s call this particular combination of characteristics (intelligence, apparent compliance with social norms, ability to render oneself pleasing to others, occasional lying) a type of nascent sociopathy\* (the asterisk to signify I’m using in a folk, rather than [clinical](https://en.wikipedia.org/wiki/Antisocial_personality_disorder) [sense](https://en.wikipedia.org/wiki/Psychopathy#Sociopathy))
* **The Gervais Principle:**Now sociopathy\* isn’t particularly a problem in the family or village, at least in the toy example. In a work environment, it is often an advantage, as Scott Alexander points out (albeit tongue-in-cheek) in this [review](https://astralcodexten.substack.com/p/book-review-the-gervais-principle) of Venkatesh Rao’s *The Gervais Principle* (the gist of which, if I understand the post, is that sociopaths\* tend to be better managers).
* **Deployment environment:**But it is more complex than that: the chances that a sociopath\* will do well depends heavily on the environment they are employed in, the [type](https://www.ft.com/content/ceb6ea5e-ba1b-11e1-aa8d-00144feabdc0) of [job](https://www.ft.com/content/a755e022-68a8-11e3-996a-00144feabdc0) they do, who their manager and competitors are, how they are evaluated, etc.
+ For instance, in financial services, a typical trader (particularly junior ones) is not particularly incentivised to lie or manipulate others because it is, in many cases, objectively clear how much money they’ve made (which determines how much they’re paid).[[48]](#fnaj7f80ru0ls) In other jobs, such as investment banking[[49]](#fnt4034reqb4) it isn’t as clear how much value an individual banker has added, and those that can convince senior management that they are ‘worth it’ (and ideally in grave danger of being poached by a competitor) get paid more. Oh, and there are basically only two criteria of value: pay (which is theoretically secret, but everyone knows who got paid what), and promotions. [This](https://hbr.org/2012/03/psychopaths-on-wall-street) environment and Goodhartian incentive structure is optimised hard (by management, through regular culls of staff), leading to ubiquitous lying, manipulation, and back-stabbing.
* **Creativity & usefulness:**As an unsubstantiated hunch, I would argue sociopathy\* is somewhat correlated – perhaps just through general intelligence – with *creativity*, which is the bread-and-butter of financial services: finding solutions to get around regulations or taxes, or coming up with new transactions that meet client’s needs.
If most people aren’t sociopaths\*, it could be because they’re not able or willing to put in the effort (here’s [Alexander’s](https://astralcodexten.substack.com/p/book-review-the-gervais-principle#:~:text=The%20Clueless%20demand,a%20Sociopath%20themselves.) take on Rao) of maintaining multiple models of reality. It could also be that religious/deontologic beliefs constrain one, or an implicit model (i.e. [trust](https://ourworldindata.org/trust)) of other actors in society encourages truthful behaviour. **The AI-relevant takeaway from this example is that misaligned behaviour is obviously present in humans, it tends to emerge at higher levels of intelligence, its valence (i.e. whether it is ‘bad’) is highly contingent on deployment environment, and it is very hard to identify and eradicate.**
Lastly, these arguments notwithstanding, it isn’t clear [that](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Maybe_gradient_descent_simply_generalizes__surprisingly_well__), from HFDT’s [analysis](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Examining_arguments_that_gradient_descent_favors_being_nice_over_playing_the_training_game), that an AI trained on gradient descent (as all above assumes) would necessarily be motivated to ‘take over’. However, it does seem hard to determine from behaviour alone (i.e. without having transparency into the AIs internal state) whether the AI is truly non-deceptive or if it is merely biding its time.
**Coordination**
================
AIDC premises that the AI population would [coordinate](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/#appendix-how-ais-could-avoid-shutdown) relatively easily and conspire to ‘overthrow humanity’. Since AIs may be spread across a range of roles, possibly globally, and perhaps have some ‘safe space' where it is very difficult to shut them down or interrupt their communications, they may present a much more united front than the human defence could.
There are two specific aspects to the coordination problem - firstly, since (by assumption in AIDC), the population of HLAIs are very similar to each other (architectures, training, and neural weights), it seems reasonable to suppose that they have good ‘psychological’ models of each other. This is Evan Hubinger’s assumption in this [post](https://www.lesswrong.com/posts/mKBfa8v4S9pNKSyKK/homogeneity-vs-heterogeneity-in-ai-takeoff-scenarios#:~:text=It%27s%20unlikely%20for,are%20relatively%20unlikely.), where he suggests that the first generation of highly-capable AIs are likely to be all aligned or all misaligned, and to the extent they are near copies of each other, they should be able to coordinate easily.[[50]](#fnpuzsnqtyiks) In fact, for certain tasks where they need to work as teams, HFDT would *train* them to have good models of their peers (and to the extent they are working with humans, this will also push towards good human psychological models). Moreover, their training regime should have made them aware of the (well-documented) problems of competitive and non-cooperative human situations (such as economic and kinetic conflicts that reduce growth, failures to prevent massive negative externalities, etc.). If they could make [credible commitments](https://longtermrisk.org/commitment-ability-in-multipolar-ai-scenarios/#:~:text=It%20seems%20plausible,adaptations%20for%20transparency) to each other, then these wasteful artefacts of human agency might be avoided.[[51]](#fnqzyes5c3f0h) So it would seem they should have both a relatively high ability and an incentive to coordinate.
As this footnote 50 mentions, the view above is somewhat open to question. Looking at it less mechanistically, [Kaj Sotala](https://kajsotala.fi/assets/2017/11/Disjunctive-scenarios.pdf) (p. 9/footnote 9) mentions specific design features, such as shared common goals and elimination of self-interest, that would have to be inserted in a fleet of HLAIs to maximise their potential as a unified group.
Another, albeit anthropomorphic, [reason](https://longtermrisk.org/commitment-ability-in-multipolar-ai-scenarios/#back_ajs-fn-id_4-6503:~:text=Cognitive%20heterogeneity%20in,adaptations%20for%20transparency.) for caution is that humans, who have co-evolved for hundreds of thousands of years, don’t seem to have particularly good practical models of each other's minds and therefore cannot usually commit credibly.
In any case, the most reasonable way to interpret AIDC’s premises is that, at least for a ~1-2 year period, HLAIs are the most capable and widespread game in town, and whatever variation they have (either because of task-based fine-tuning or updated versions) is sufficiently well-understood and modelled by the other HLAIs for them to work safely in teams (on the roles humans have assigned them). So, yes, if they become misaligned, there is a decent chance that they would work together against humans.
**AI Headquarters**
===================
AIDC envisions the ‘AI Headquarters’ as a place where (potentially rogue) AIs are safe from shutdown, as well as a safe space where plans can be turned into physical reality (such as weapons or nanotech).[[52]](#fnk37utmltym) Such a base could be a distinct, physically-secure bunker, but could probably also be distributed amongst servers that the AIs have access to, making them hard to identify, locate, or ‘shut down’. For instance, in the case of a commodity trading group, AIs may be employed in various functions amongst the trading, mining, refining or transport operations, making it very hard for humans to monitor whether the AIs are surreptitiously planning something.
Another much more speculative possibility is related to plans from American-allied entities (such as NASA’s [Artemis](https://www.nasa.gov/sites/default/files/atoms/files/artemis_plan-20200921.pdf) programme), China, and other countries, of moon bases, set up for resource [extraction](https://www.milkenreview.org/articles/mining-in-space-is-coming#:~:text=In%20the%20long,or%20deep%20space.). Such off-world bases, which would presumably be partially automated, may or may not be able to directly threaten humanity on Earth, but they might be more resilient to shutdown, and as they would probably have some manufacturing, refining, or repair [facilities](https://asteroids.arizona.edu/?_ga=2.265658895.1422289495.1664901586-2009237168.1664198748&_gl=1*1a4f7l0*_ga*MjAwOTIzNzE2OC4xNjY0MTk4NzQ4*_ga_7PV3540XS3*MTY2NDkwMTU4Ni4yLjAuMTY2NDkwMTU4Ni42MC4wLjA.#publications), may be a place where AIs can build physical assets that are harder for humans to surveil or destroy.
The third, and perhaps most concrete, possibility is that (as discussed above), conditional on the US or Chinese national security establishment being heavily involved in their respective HLAI efforts, the respective governments themselves will establish structures that might subsequently work as AI Headquarters. Moreover, during much of this time, humans involved in AI research, or their security/military overseers, will approach their work with a security mindset owing to (true) external threats, but at times verging towards paranoia. There might not be a clearly defined line between the following two states:[[53]](#fnfuz5kgb0jni)
* Human overseers are using a hardened facility to guard the nascent HLAI technology from competitors, hackers, or spies; and
* A population of HLAIs have become internally misaligned, although there may not yet be any external signs, and are using the hardened facility to avoid oversight. When/if there is open conflict with humans, HLAI assets are relatively protected in this facility.
Hence, in addition to the [possibilities](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/#appendix-how-ais-could-avoid-shutdown:~:text=For%20example%2C%20they%20might%20recruit%20human%20allies%20(through%20manipulation%2C%20deception%2C%20blackmail/threats%2C%20genuine%20promises%20along%20the%20lines%20of%20%22We%27re%20probably%20going%20to%20end%20up%20in%20charge%20somehow%2C%20and%20we%27ll%20treat%20you%20better%20when%20we%20do%22)%20to%20rent%20property%20and%20servers%20and%20otherwise%20help%20them%20out.) [AIDC](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/#appendix-how-ais-could-avoid-shutdown:~:text=Even%20if%20the,to%20take%20control.) outlines for HLAIs to co-opt human collaborators, we could add one more - that, owing to geopolitical pressures rather than any particular act of persuasion or manipulation on the part of HLAIs, humans willingly and knowingly grant the population of HLAIs a safe space and cooperation.
**Feedback Loops**
==================
AIDC suggests a powerful feedback loop where a large HLAI population generates enormous amounts of economic value, either in money terms or in terms of potential IP (i.e. how to achieve more computation on the same hardware resources), recycles that wealth or knowledge into the existing stock of compute, to generate more or more powerful AIs. I would argue that this aspect of AIDC is ill-specified, and anyway is quite hard to reliably forecast, for a few reasons:
* Investing economic or IP surplus into ‘more HLAIs’ doesn’t sound dramatically different (in the sense that ‘we don’t know enough about the rough design constraints of HLAIs or ASIs to functionally distinguish these two’) from investing it into more powerful AI, as discussed previously.
* My working assumption is that the geopolitics around 2036 are intensely adversarial bordering on chaos or a great-power war. In this context, sheer access to material resources and military advantage might be what countries prioritise, and I’m not sure (other than sprinting for ASI) how to think about the difference between hundreds of millions of HLAIs and a billion (for instance), unless, prior to this point, the issues flagged above around management, supply chains, and coordination have been addressed.
* Even if the population of HLAIs is able to generate vast quantities of IP, only part of this will be immediately usable, as covered above. Much of the rest will need to be monetised in other ways i.e. through equity/debt raises, and then turned into more semiconductors (or powerplants, etc.), which again takes time and might leave humans in the loops, as threats (to AI) or bottlenecks, or likely both.
* This of course ignores the possibility (which to be fair might be AIDC’s point) that the HLAIs themselves develop plans that (a) hinge on having more or better versions of their own type or (b) utilise technologies that are within the realm of physical possibility (like nanotechnology) but that we don’t have economically-feasible access to at present. Again the caveats listed above may apply (also see [Kaj Sotala’s](https://kajsotala.fi/assets/2017/11/Disjunctive-scenarios.pdf) 2018 paper, section 4.1.1)
**Conclusion**
==============
AIDC (and the closely related HFDT) depict an argument that, conditional on HLAI being technically feasible by 2036, the resulting hardware overhang would create the potential for many such HLAIs to be deployed in a variety of useful tasks. Moreover, it seems reasonable, from the analysis in HFDT and others, that the HLAIs would be situationally aware (not least because this would be an instrumentally useful feature for them to have), could make medium-term plans and would be capable of coordination (because humans would train them that way). Current thinking around mesa-objectives and goal mis-generalisation do lend credence to the possibility that AIs that appear cooperative to humans may eventually ‘turn’.
AIDC’s rather strong claim that there might be ‘hundreds of millions’ of HLAIs seems to require further support, including thinking about what the design and training of these HLAIs would be, and precisely what tasks they could actually usefully do in the relatively short 6-24 month period that the first deployer is likely to have a monopoly.
AIDC and HFDT seem somewhat agnostic on whether their respective visions are pursued as private-sector, profit-making enterprises or as instruments of state-driven industrial or national security policy. I think the latter is the correct framing, both for US- and China-aligned blocs, at least in the accelerated scenario AIDC proposes (even if today, most AI R&D is private sector in the US). A state-backed framing also shows a clear path to AIDC’s notion of an AI Headquarters, a feature probably helpful for the treacherous turn.
More importantly (and perhaps obviously), I would argue that the geopolitics that obtain in the period before 2036 will be a huge determinant of how AIDC’s scenario plays out. The projection based on the current world looks bleak. In fact, my principal takeaway from AIDC is one of potential chaos, as the first deployer tries desperately to maintain its advantage and crystallise it into tangible material resources and strategic position, while its peer competitors face strong incentives to catch up, or failing that, destroy the first deployer’s advantage. This chaotic period, what Bostrom terms ‘[global turbulence](https://nickbostrom.com/papers/aipolicy.pdf)’, becomes one of great danger, as it reduces already woeful human coordination abilities and possibly leaves the world exposed to a subsequent AI takeover.
1. **[^](#fnref54pnbivgvx)**To help tease out Karnofsky’s argument, I’m relying on a 2022 post by Ajeya Cotra ‘[Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to)’ (abbreviated ‘HFDT’) and a 2022 paper by Richard Ngo ‘[The alignment problem from a deep learning perspective](https://arxiv.org/abs/2209.00626)’.
2. **[^](#fnrefb5ql90k50kf)**Some of the material on geopolitics is drawn from my scenario [submission](https://docs.google.com/document/d/1mbngoZZnxe5IUn5O9fs_5-UTRHngPvVxyf00cCrZZgI/edit) for the 2022 Worldbuilding Contest run by the Future of Life Institute.
3. **[^](#fnrefeun4uth9h0s)**And which is very well summarised by [Scott Alexander](https://astralcodexten.substack.com/p/biological-anchors-a-trick-that-might?utm_source=url&s=r), who also reviews some of the more prominent objections and responses to Cotra. See also this 2021 report by [David Roodman](https://docs.google.com/document/d/1ccfObnsXdQzJZ8wWpHsosm7i4JdiPXXJrpAfZquvJCk/edit#) which conducts a detailed critique of Cotra’s methodology.
4. **[^](#fnrefy83si1uamo)**Cotra’s HFDT post is more about the training methodology, the ‘psychology’ of the resulting AI, the mechanics of how it becomes catastrophically misaligned and the range of interventions trainers can adopt to mitigate misalignment. It doesn’t spend much time on the quantitative aspects of how such an AI might be trained (which is the topic of *Biological Anchors*), and it is silent as to whether a HFDT would function at ‘human-level’ or greater.
5. **[^](#fnref75vzg88a85o)**See also these unpublished notes from [Ben Garfinkel](https://docs.google.com/document/d/1mMvsiavUbHsJt2dMcQr6yEwr3Qjfj5qeb38QlCmU3Nc/edit#heading=h.9inyyi4aojcp) explaining his objections to the term ‘human-level’, a principal one of which is that intelligence is multifaceted and any collection of humans likely display a distribution of capabilities (language, maths, artistic, etc.) The term also has been inconsistently used, for instance in *Superintelligence*as footnote 6 in Garfinkel writes.
6. **[^](#fnrefmg823ucg0s)**This is my opinion, drawing on experience as an engineer and artist, rather than a well-researched view. Here is an [attempt](https://mark-riedl.medium.com/computational-creativity-as-meta-search-6cad95da923b) to bring apparently similar views of human creativity into a ML context, and a post I wrote about what AGI-relevant lessons we might learn from [artistic creativity](https://forum.effectivealtruism.org/posts/xPZXxJJfDaRpsH4XJ/art-criticism-and-agi); [Joscha Bach](https://www.youtube.com/watch?v=pB-pwXU0I4M) has commented in various interviews about creativity as a type of wide-ranging meta-search. Oddly enough, [LessWrong](https://www.lesswrong.com/tag/creativity) has relatively little about creativity that directly asks what it is, what its brain-based correlates are, or how machines might be taught creativity.
7. **[^](#fnrefbdf1kjjaut5)** See Paul Christiano’s talk/presentation on the ‘[alignment tax](https://www.effectivealtruism.org/articles/paul-christiano-current-work-in-ai-alignment)’.
8. **[^](#fnrefw1oxoj1vcfd)** To the extent multimodal models like [Gato](https://www.deepmind.com/publications/a-generalist-agent) and [SayCan](https://say-can.github.io/) become increasingly capable at physical tasks, this would be an existence proof that LLMs can underlie a very large class of behaviours in-the-world, and that actual training in physical environments (*a la* a human infant), is not necessary. At the moment though the question seems open, as this [post](https://www.noemamag.com/ai-and-the-limits-of-language/) by Yann LeCun and Jacob Browning argue.
9. **[^](#fnrefg9yq5ntfg19)** One alternative to HLAI is the narrow-purpose AIs, whether of tool/oracle type or Eric Drexler’s CAIS, which, for instance, is constructed as an ecosystem of narrow-purpose ‘service AIs’, who might be superintelligent in their narrow domains. For [theoretical and structural](https://www.lesswrong.com/posts/x3fNwSe5aWZb5yXEG/reframing-superintelligence-comprehensive-ai-services-as#Isn_t_this_just_as_dangerous_as_AGI_) reasons (such as transparent communication links between the various services), it is thought that an ecosystem of such narrowly-superintelligent AIs would not, as a whole, present the type of planning, utility maximising, goal-based behaviour that an AGI might present. However, [there](https://www.lesswrong.com/posts/mJ5oNYnkYrd4sD5uE/clarifying-some-key-hypotheses-in-ai-alignment#Agentive_AGI_) [appears](https://www.lesswrong.com/posts/v6Q7T335KCMxujhZu/clarifying-what-failure-looks-like?commentId=AfyjsyCDmy88taCBd) to be a [consensus](https://www.lesswrong.com/posts/HvNAmkXPTSoA4dvzv/comments-on-cais) that economic or competitive pressures would incentivise more agentic AGI-type systems over safer-but-restricted CAIS-type systems.
10. **[^](#fnrefp198cdmtbaq)** An exception would be if the economy as a whole has become automated, as suggested in some of these scenarios from [Andrew Critch](https://www.alignmentforum.org/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic), but that isn’t quite what AIDC proposes. Critch’s post, more generally, has much to recommend it, particularly his systemic focus: he views human collectives, as in corporations or nation-states, acting in combination (cooperative or competitive) with AIs, and in this post specifically suggests the structural framing of ‘[Robust Agent-Agnostic Processes](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic#Conclusion:~:text=Part%201%20covers,fields%20cause%20RAAPs.)’ as a way to think about such systems.
11. **[^](#fnrefhs6jcz8uzif)** In industries such as [energy](https://www.sciencedirect.com/science/article/pii/S0301421518305901) production and delivery, there seems to be multi-decade intervals between invention and market, while in [pharmaceuticals](https://www.criver.com/eureka/why-does-drug-development-take-so-long) it was around 10-15 years . I haven’t found a useful source for software, or a comprehensive analysis across the economy, but it is quite plausible that idea-to-market times are much lower in certain industries (writing an iPhone app can take weeks or months, while stages in the [videogame](https://en.wikipedia.org/wiki/Video_game_development#Development_process) development process may take months, though software like Unreal Engine or AI-generated worlds might speed the process).
12. **[^](#fnrefq7iljxnpczo)** This cuts both ways - we can imagine some cash-rich FOMO-prone company involved in an apparently different industry (say a commodity trader like Glencore or a massive financial services group like Blackrock) buying up cheap AGI-related IP, compute, and people after a ‘third AI winter’. The acquirer might not have any extraordinary sensitivity to alignment or existential risk. Similar issues of cultural mismatch are found in corporate history, for instance, the chequered history of European conglomerates buying Wall Street firms and coming unstuck a few years later (e.g. Credit Suisse and Deutsche Bank are two prominent ones).
13. **[^](#fnrefgrtkzwamsfh)** AIDC seems to mean ‘embodied’ in quite an anthropomorphic sense, whereas I see an AI connected to a host of sensors and actuators, whether robotic arms or the valve system of a water supply, as functionally similar. Note, this point is distinct from the symbol grounding issues above, which pertain more to the training process.
14. **[^](#fnrefjtqfit2fcso)** This report from the US’ [Director of National Intelligence](https://www.dni.gov/files/ODNI/documents/assessments/GlobalTrends_2040.pdf) gives one view, as do the sources cited in this submission to the [FLI Worldbuilding Contest](https://docs.google.com/document/d/1mbngoZZnxe5IUn5O9fs_5-UTRHngPvVxyf00cCrZZgI/edit).
15. **[^](#fnrefxscdwe53eu)** How successful this ultimately will be remains to be seen, as China produced two [exascale-class](https://www.nextplatform.com/2021/11/15/why-did-china-keep-its-exascale-supercomputers-quiet/) supercomputers in 2021, to compete with the first ‘public’ (i.e. listed in the Top500 ranking) machine in the US. The Chinese machines were apparently produced with domestically-produced semiconductors.
16. **[^](#fnrefkwoozdydjrf)** This figure seems somewhat low compared to David Roodman’s (admittedly low-confidence) estimate of 10^34 FLOP in 2020 falling to 10^31 FLOP in 2050.
17. **[^](#fnrefr5bzirwzet)** At first I thought I was being thick on the FLOP, FLOPs, FLOPS, FLOP/s distinctions, but [apparently](https://www.lesswrong.com/posts/XiKidK9kNvJHX9Yte/avoid-the-abbreviation-flops-use-flop-or-flop-s-instead) I’m not the only one to bugger up [stocks vs flows](https://twitter.com/adam_tooze/status/1342473808997998592?s=20&t=ncMwx7hkWMBPX3FcimXgaw). Please correct any errors in the comments!
18. **[^](#fnrefioixf79rq2)** But compute costs seem to be [halving](https://astralcodexten.substack.com/p/biological-anchors-a-trick-that-might#%C2%A7when-will-anyone-have-enough-computational-resources-to-train-a-human-level-ai:~:text=But%20compute%20costs,looking%20pretty%20good).) every 2.5-4 years.
19. **[^](#fnrefwhwm7ykydb)** Against an estimated cost to train GPT-3 of [$4.6MM](https://lambdalabs.com/blog/demystifying-gpt-3/#1:~:text=We%20use%20Lambda,M). See [p. 4](https://docs.google.com/document/d/1cCJjzZaJ7ATbq8N2fvhmsDOUWdm7t3uSSXv6bD0E_GM/edit#heading=h.z7u133pzed6k) of Cotra’s report for a fuller discussion of her expectations on willingness to spend on computation to train a HLAI-analogous model.
20. **[^](#fnreff2r9qf7jn1)** As Cotra alludes to on [p. 15](https://docs.google.com/document/d/1IJ6Sr-gPeXdSJugFulwIpvavc0atjHGM82QjIfUSBGQ/edit#) of the *Biological Anchors* report, Part 1.
21. **[^](#fnref53bydfynx44)**LessWrong contains relatively little about the geostrategic picture in multipolar scenarios, other than this [comment](https://www.lesswrong.com/posts/yhb5BNksWcESezp7p/poll-which-variables-are-most-strategically-relevant#XB5qpMf8kua8cYbct). See also this 2019 publication from the [US Air Force](https://www.airuniversity.af.edu/Portals/10/AUPress/Books/B_0161_WRIGHT_ARTIFICIAL_INTELLIGENCE_CHINA_RUSSIA_AND_THE_GLOBAL_ORDER.PDF) and one from Georgetown’s [CSET](https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Safety-Security-and-Stability-Among-the-Great-Powers.pdf), though both pre-date substantial deterioration in the geopolitical landscape and substantial increase in AI capabilities. This 2020 article is also informative, but entirely focused on how AI and current [military](https://www.jstor.org/stable/26891882?seq=6#metadata_info_tab_contents) positioning may interact.
22. **[^](#fnref8pnd33lt40p)** I’m using ‘ASI’ somewhat loosely here, in the sense that between HLAI as defined in AIDC and ASI (superhuman capability all domains), there would obviously be a spectrum. I am suggesting that the first-mover might try to advance on that spectrum as far as it deems strategically desirable, in its opinion and under its resource/technical constraints, which might be well short of ASI, as [Kaj Sotala](https://www.lesswrong.com/posts/8uJ3n3hu8pLXC4YNE/some-conceptual-highlights-from-disjunctive-scenarios-of-1#:~:text=If%20we%20focus,cyberwarfare%2C%20biological%20warfare.) points out.
23. **[^](#fnrefucw3lwaforo)** See note 28/p. 25 of [*Biological Anchors*](https://docs.google.com/document/d/1IJ6Sr-gPeXdSJugFulwIpvavc0atjHGM82QjIfUSBGQ/edit#).
24. **[^](#fnrefb88wke8uhof)** Which probably includes Japan, S Korea, Australia, New Zealand, Taiwan and perhaps India.
25. **[^](#fnrefatacaj01m2j)** Probably including Russia, perhaps some part of the former USSR states, and possibly others such as Iran or Turkey.
26. **[^](#fnref6fjt197jg8)** Similar to [Stuxnet](https://en.wikipedia.org/wiki/Stuxnet) or the [assassinations](https://blogs.ubc.ca/shakiba/2020/08/06/beyond-the-law-the-covert-assassination-of-iranian-nuclear-scientists/) undertaken against Iran’s nuclear programme, which is of course a much riskier proposition in a US-China context. Note however, such a strike, is unlikely to be the same thing as a ‘[pivotal act](https://www.lesswrong.com/posts/PKBXczqhry7iK3Ruw/pivotal-acts-means-something-specific)’.
27. **[^](#fnrefm5wbwealsh)** I can’t find a concrete example of military or national security personnel obviously sacrificing broader (i.e. humanity-level) welfare at risk in the service of narrow (national or ideological) interests, but there has been some [theoretical](https://ora.ox.ac.uk/catalog/uuid:698316a6-ec7d-4d6f-8971-92b6bd4f4e81/download_file?file_format=application%2Fpdf&safe_filename=Ideology%2Band%2Barmed%2Bconflict.pdf) analysis of the question. My first-guess examples may exist in [General Curtis LeMay](https://www.atomicheritage.org/profile/curtis-lemay#:~:text=Only%20three%20weeks,massive%20civilian%20casualties.) (who advocated using nuclear weapons in North Korea, and conventional weapons in Cuba); the RAND Corporation’s [Herman Kahn](https://www.newyorker.com/magazine/2005/06/27/fat-man), whose specialty was laying out scenarios of quotidian life after a major nuclear exchange; and Edward Teller. In fiction, the canonical examples are [Dr. Strangelove and General Ripper](http://www.philfilms.utm.edu/1/dr-strange.htm), as is the 1964 film [*Fail Safe*](https://en.wikipedia.org/wiki/Fail_Safe_(1964_film)) (which had a Teller/Kahn character). More generally, there are accounts of the psychological stress nuclear staffs operate under: besides the well known story of [Colonel Stanislav Petrov](https://www.vox.com/2018/9/26/17905796/nuclear-war-1983-stanislav-petrov-soviet-union), see this account about American [nuclear-launch officers’](https://www.newyorker.com/news/news-desk/almost-everything-in-dr-strangelove-was-true#:~:text=While%20drinking%20beer,get%20in%20trouble.%E2%80%9D), and this [one](https://www.theatlantic.com/international/archive/2018/01/the-lasting-psychological-stress-of-nuclear-deterrence/550280/).
28. **[^](#fnref15hdlvgnb8a)** A more rigorous exploration of economic growth under TAI is a report ([summarised](https://www.lesswrong.com/posts/9AcEy2zThvceT9kve/an-154-what-economic-growth-theory-has-to-say-about#) here by Rohin Shah) by Tom Davidson (though it is more interested in global-level growth, rather than the allocation of that growth amongst countries in a competitive race), as well as this [one](https://globalprioritiesinstitute.org/wp-content/uploads/Philip-Trammell-and-Anton-Korinek_economic-growth-under-transformative-ai.pdf) from Philip Trammell and Anton Korinek, which is discussed in this [podcast](https://www.lesswrong.com/posts/BNoHokwCiPGmFHnp8/phil-trammell-on-economic-growth-under-transformative-ai).
29. **[^](#fnref9r83ckz7wbl)** This ‘bootstrapping’ strategy has been articulated by [Richard Ngo](https://www.alignmentforum.org/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#:~:text=One%20reason%20that%20I%27m%20more%20optimistic%20(or%20at%20least%2C%20not%20confident%20that%20we%27ll%20have%20to%20face%20the%20full%20very%20difficult%20version%20of%20the%20problem)%20is%20that%20at%20a%20certain%20point%20AIs%20will%20be%20doing%20most%20of%20the%20work.) and [Paul Christiano](https://www.lesswrong.com/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer#:~:text=of%20plan%20is.-,I%20think%20Eliezer%20is%20probably%20wrong%20about%20how%20useful%20AI%20systems,seems%20like%20they%20can%20greatly%20accelerate%20the%20pace%20of%20alignment%20research.,-Eliezer%20is%20right), but there doesn’t yet seem to be a [consensus](https://www.alignmentforum.org/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty?commentId=k4cexq3bg4uwBhS8H) on whether it would work.
30. **[^](#fnrefc0sb518rrs)** Perhaps more so than nuclear weapons/power or bioweapons, which, as the current Ukraine conflict shows, are having relatively little apparent impact so far on altering outcomes.
31. **[^](#fnrefqv2odyslt8)** A weak analogy comes from the Soviet thermonuclear weapon - it is apparently still unclear how much stolen designs (involving physicist [Klaus Fuchs](https://en.wikipedia.org/wiki/Klaus_Fuchs#Value_of_data_to_Soviet_project)) actually helped in construction of the first H-bomb. The first weapon, RDS-6 in 1953, was a ‘layer-cake’ (Sloika) design, that didn’t really work (80-85% fission yield), and it was only in 1955, with RDS-37 that the USSR found the equivalent of a Teller-Ulam two-stage design for a fusion device (apparently Sakharov came up with this independently of Fuchs’ information). [Lavrenty Beria](https://en.wikipedia.org/wiki/Lavrentiy_Beria), the fearsome head of the NKVD, seemed, through well-timed carceral interventions, to play a part in expediting the scientists’ efforts.
32. **[^](#fnrefaeti1yicvcg)** Often people have at least 2 senior-management level bosses owing to a matrix that crosses geographies and markets (e.g. European fixed-income traders can easily be in a competitive/cooperative situation vs Asia-based fixed-income), and much middle management time is spent adjudicating turf wars. Clients are often pitched similar products from different parts of a bank, much to their amusement.
33. **[^](#fnrefpmvjebrrsp9)** Cotra, in [Part 4 (p. 34)](https://docs.google.com/document/d/1cCJjzZaJ7ATbq8N2fvhmsDOUWdm7t3uSSXv6bD0E_GM/edit#heading=h.xszpjkwvkf0a) of *Biological Anchors* raises a similar possibility, of a diffuse transition to a world saturated with AI.
34. **[^](#fnrefu6100yrl8y)** This borrows ideas from a [post](https://longtermrisk.org/commitment-ability-in-multipolar-ai-scenarios/) by Anni Leskelä on using centralised arbitrators or managers to ensure systems of multiple AIs make credible commitments. But, I am also not sure that ‘management’, as opposed to running experiments or designing products, is something that can be easily or reliably taught through human feedback, as Paul Christiano hints at [here](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/h9DesGT3WT9u2k7Hr#Narrow_domains) in a technical sense and [here](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like#Part_I__You_get_what_you_measure) in a Goodhart sense.
35. **[^](#fnreflwlo3tf9u3s)** Moreover, Cotra points out that the various supervisory schemes don’t solve her basic worry, that HLAIs will continue to play ‘the training game’.
36. **[^](#fnrefqaqzhdxxcxf)** This [post](https://www.lesswrong.com/posts/HvNAmkXPTSoA4dvzv/comments-on-cais) from Richard Ngo offers a number of other perspectives on CAIS. See also [Carlsmith (2021)](https://arxiv.org/pdf/2206.13353.pdf) Section 3.1-3.3, for comments on how economic pressures might drive companies and governments towards agentic planners, as well as the possibility that agentic behaviour may emerge from sophisticated systems that weren’t designed to be so. This AXRP interview with [Evan Hubinger](https://axrp.net/episode/2021/02/17/episode-4-risks-from-learned-optimization-evan-hubinger.html#:~:text=Evan%20Hubinger%3A%20Yeah.%20So%20I%20think%20it,models%20for%20whatever%20sort%20of%20situation%20you%E2%80%99re%20in.) expresses a similar view, in the context of mesa-optimisation.
37. **[^](#fnrefvfjfc8k0wb)** This might be an anthropocentric bias, since this is often how humans approach complex problems. I’m not sure if an AI would necessarily decompose the problem in this particular way, but it seems a reasonable guess if the AI has been trained through HFDT.
38. **[^](#fnrefue3dh61yi5)** This framing of how humans approach complex tasks superficially resembles [factored](https://www.lesswrong.com/posts/DFkGStzvj3jgXibFG/factored-cognition) [cognition](https://www.lesswrong.com/posts/J7Rnt8aJPH7MALkmq/vaniver-s-view-on-factored-cognition) or [amplification](https://www.lesswrong.com/s/EmDuGeRw749sD3GKd)-based approaches such as HCH, though the latter is actually using decomposition as a way of building alignment ‘from the bottom up’.
39. **[^](#fnrefblpp8hx7w95)** Ngo uses RL terminology but he expects similar analysis to hold for LLM-type models.
40. **[^](#fnrefyffuaf681dc)** Ngo uses ‘goals’ in this context, in a specific sense ([2.1.1](https://arxiv.org/pdf/2209.00626.pdf)): ‘*...policies represent different outcomes which might arise from possible actions, and then choose actions by evaluating the values of possible outcomes; I’ll call this pursuing goals. Under the definition I’m using here, a policy’s goals are the outcomes which it robustly represents as having high value.*’
41. **[^](#fnref4j5imhru5ge)** In Ngo’s formulation (section [2](https://arxiv.org/pdf/2209.00626.pdf)), the system is a deep neural network with dual output heads, one of which makes predictions based upon training on multimodal data; while the other takes actions based on RL trained upon a diverse set of tasks. As in HFDT, training is from human feedback on complex tasks, as well as in simulated environments with automated evaluation. The training process optimises hard until the action head matches human performance on most tasks.
42. **[^](#fnrefpifhbu7fj7)** Some potential [mitigants](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Simple__baseline__behavioral_safety_interventions) are to have multiple trainers or more thoughtful prompts or ask for justification for why the bot has returned a particular answer. However, it isn’t hard to imagine biases existing across members of groups, and being rather deeply ingrained: during the Cold War, r[abid anti-Communism](https://brill.com/view/book/9789004340176/BP000007.xml?language=en) was a feature in US policymaking circles, while [‘socialism’](https://www.anewseducation.com/post/socialism-and-us-politics) is still a dirty word for half the country. Conversely, [‘neoliberal’](https://www.dissentmagazine.org/article/uses-and-abuses-neoliberalism-debate) or [‘libertarian’](https://www.washingtonpost.com/news/posteverything/wp/2017/09/19/libertarians-have-more-in-common-with-the-alt-right-than-they-want-you-to-think/) enjoy a similar status for the other half.
43. **[^](#fnrefgxigfj33wc)** Keeping in mind that an apparently similar activity, in slightly different context, may be perfectly acceptable: accessing the labour, wealth, central bank, and tax authorities’ database in the PRC to form a more complete picture of inequality in China.
44. **[^](#fnrefaimktp4we4d)** More speculatively, theories of deceptive alignment might suggest the model internally can represent that, in certain circumstances, say when deployed in an adversarial capacity (as an offensive cyber-unit) it would in fact be acceptable to hack the opponent’s systems - this was after all something it was trained for! Trainers would have to devise an elaborate system that penalised the ‘wrong’ behaviour (hacking the US government when asked a simple question about inequality) and allowed the ‘right’ behaviour (probing the cyber-defenses of the US government to test for vulnerabilities).
45. **[^](#fnref6nyap0fyjy6)** Ngo’s framing is for a model optimised until it surpasses humans in all domains, which we can call AGI, whereas AIDC is ‘only’ aiming for human-level capabilities, but I’m not certain the difference matters very much for the purposes of Ngo’s argument.
46. **[^](#fnrefkkccdmiczr)** Humans have a well-known bias towards the short-term that seems to be structural: earnings and election cycles, memory and planning constraints (*Superintelligence*pp. 176-178, and [p. 8](https://nickbostrom.com/papers/digital-minds.pdf) of this paper by Nick Bostrom) of course, death. I think this is probably separate from the [broader](https://en.wikipedia.org/wiki/Time_preference) [conversation](https://forum.effectivealtruism.org/posts/uMHRDDLMYEPqBcAqK/ea-reading-list-replaceability-and-discounting) around [discount](https://forum.effectivealtruism.org/topics/temporal-discounting) rates on [preferences](https://www.lesswrong.com/posts/AvJeJw52NL9y7RJDJ/against-discount-rates).
47. **[^](#fnref4ex8l3lgqe1)** I like the ‘[alien psychology](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Even_if_Alex_isn_t__motivated__to_maximize_reward__it_would_seek_to_seize_control)’ framing Cotra uses in discussing what type of AI HFDT might result in, and how our tools and understanding might be wholly inadequate to the task of controlling/guiding them.
48. **[^](#fnrefaj7f80ru0ls)** Though there are still plenty of fights about how profits/losses are to be allocated. Also I’m leaving out explicit breaches of rules or laws.
49. **[^](#fnreft4034reqb4)** Basically a job where teams of bankers try to sell products and services to clients, such as lending them money, running their IPOs, advising them on what companies to buy, sorting out their unfunded pensions mess, managing their interest-rate and currency risks, etc. The main point is the task is distributed horizontally across people, teams and and vertically over an organisational hierarchy; and the credit (if the client buys whatever is on offer) is diffuse and hard to assign. See the series [Industry](https://en.wikipedia.org/wiki/Industry_(TV_series)).
50. **[^](#fnrefpuzsnqtyiks)** Importantly, his view is analysed in these [two](https://www.lesswrong.com/posts/mKBfa8v4S9pNKSyKK/homogeneity-vs-heterogeneity-in-ai-takeoff-scenarios?commentId=fEHXQg7e75Gt9BcpL) [comments](https://www.lesswrong.com/posts/mKBfa8v4S9pNKSyKK/homogeneity-vs-heterogeneity-in-ai-takeoff-scenarios?commentId=GkNoDZsqKsnf2tMw4), the gist of which is that it isn’t obvious the degree to which two models with near-identical neural weights will coordinate well, for reasons ranging from the (possibly weak) human analogy, to more complex bargaining theory-related questions. I would add three speculative points/questions: are neural weights arrived at stochastically, i.e. if two different models are trained on the same data, might they form slightly different resulting weights and, if so, do these differences manifest behaviourally? Secondly, would HLAI/HFDT involve different AIs or generations of AI trained or [fine-tuned](https://www.lesswrong.com/posts/mKBfa8v4S9pNKSyKK/homogeneity-vs-heterogeneity-in-ai-takeoff-scenarios?commentId=k4hadD5vAhWLQnvam) on slightly varied data - could we be confident, absent better interpretability tools, that their neural weights were sufficiently identical to provide the relevant guarantees either on behaviour or on likelihood of coordination? Lastly, current models’ neural architectures ‘freeze’ once training is completed; presumably by 2036, models may undertake a [degree](https://www.lesswrong.com/posts/Ajcq9xWi2fmgn8RBJ/the-credit-assignment-problem#The_Conceptual_Difficulty_of__Online_Search_) of [online learning](https://www.lesswrong.com/posts/fDPsYdDtkzhBp9A8D/intro-to-brain-like-agi-safety-8-takeaways-from-neuro-1-2-on#8_2__One_Lifetime__turns_into__One_training_run_) - would this cause the models’ weights’ to diverge proportional to time-in-deployment?
51. **[^](#fnrefqzyes5c3f0h)** See also this sequence from [Jesse](https://www.alignmentforum.org/s/p947tK8CoBbdpPtyK/p/8xKhCbNrdP4gaA8c3) [Clifton](https://www.alignmentforum.org/posts/4GuKi9wKYnthr8QP9/sections-5-and-6-contemporary-architectures-humans-in-the) on game and bargaining theory in the context of AI.
52. **[^](#fnrefk37utmltym)** For nanotech as a threat, see [*Superintelligence*](https://dorshon.com/wp-content/uploads/2017/05/superintelligence-paths-dangers-strategies-by-nick-bostrom.pdf)Box 6/pp. 98-99 as well as this recent [conversation](https://www.lesswrong.com/posts/cCMihiwtZx7kdcKgt/comments-on-carlsmith-s-is-power-seeking-ai-an-existential) between Nate Soares and Joe Carlsmith that tries to assess how significant a risk AI-controlled nanotech actually might be.
53. **[^](#fnreffuz5kgb0jni)** The human overseers’ problem is complicated by the fact that HLAI is (by the premises of AIDC/HFDT) very promising in terms of economics or military applications, and is potentially already generating value, howsoever measured. Hence, constituencies that benefit (businesspeople, generals, the NSA/GCHQ, etc.) lobby intensely to forestall any regulation. This ambiguity in assessing, from the perspective of a regulator, legislator, or public interest body, the threat of broadly deployed AI (or one deployed in small numbers but having a large impact on society) is a key element of these [two](https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story#:~:text=At%20this%20point,in%20national%20irrelevance.) [posts](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like#:~:text=Amongst%20intellectual%20elites,or%20corrupt%20officials.) by Paul Christiano, and this [one](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic#:~:text=The%20objective%20of,auditing%20the%20companies.) from Andrew Critch. This kind of behaviour is known as regulatory capture, particularly in finance, and is also seen in the post-9/11 surveillance environment in the US and UK. |
4e190597-2ebd-48b8-b0ef-7d482e77e0c2 | trentmkelly/LessWrong-43k | LessWrong | The first future and the best future
It seems to me worth trying to slow down AI development to steer successfully around the shoals of extinction and out to utopia.
But I was thinking lately: even if I didn’t think there was any chance of extinction risk, it might still be worth prioritizing a lot of care over moving at maximal speed. Because there are many different possible AI futures, and I think there’s a good chance that the initial direction affects the long term path, and different long term paths go to different places. The systems we build now will shape the next systems, and so forth. If the first human-level-ish AI is brain emulations, I expect a quite different sequence of events to if it is GPT-ish.
People genuinely pushing for AI speed over care (rather than just feeling impotent) apparently think there is negligible risk of bad outcomes, but also they are asking to take the first future to which there is a path. Yet possible futures are a large space, and arguably we are in a rare plateau where we could climb very different hills, and get to much better futures. |
cd26ed91-98d8-4ab7-9883-ac4279a6640a | trentmkelly/LessWrong-43k | LessWrong | Alpha Mail
I recently stumbled upon an article from early 2003 in Physics World outlining a bit of evidence that some of the constants in nature may change over time. In this particular case, researchers studying quasars noticed that the fine-structure constant (α) might have fluctuated a bit billions of years ago, in both directions (bigger and smaller) with significance 4.1 sigma. What intrigues me about this is that I’ve previously pondered if something like this might be found, albeit for very different reasons.
Back in the 90s I read a book that made a case for the universe as a computer simulation. That particular book wasn’t all that compelling to me, but I’ve never been completely satisfied with arguments against that model and tend to think of the universe generally in those terms anyway. Can I still call myself an atheist if I allow the possibility of a creator in this context? A non-practicing atheist maybe?
If this universe is a computer-generated simulation, programmed by another life form, perhaps the search for extraterrestrial intelligence (SETI) should be expanded to include life forms beyond our universe. It sounds nonsensical, but is it?
If I was to design and code an environment sophisticated enough to allow a species of life to evolve in that environment, I am not convinced that I would have many tools at my disposal to truly be able to understand and evaluate that species very well. Sure, I may be able to see them generating patterns that indicate intelligent life within my simulation, but this life form evolved and exists in an environment completely alien to me. I might have only limited methods at my disposal through which to communicate with them. They would exist in a place that to me is not exactly real and vice-versa.
I’ve always imagined it would be more like evaluating patterns and data readouts or viewing cells through a microscope more than say something like, The Sims. Having designed and implemented the very laws of their universe though |
cb0dfe5c-5738-45bf-9900-35a4c93765a5 | trentmkelly/LessWrong-43k | LessWrong | Online LessWrong Community Weekend,
September 11th-13th
From 11th-13th September aspiring rationalists will gather for 3 days of socialising, fun and intellectual exploration. There will be a few pre-planned activities, but most of the content will be unconference style and participant driven.
On Monday, September 6th, participants will be invited to the event Discord and to access the collaborative schedule, to start planning what shall happen the following weekend. Over the week, the schedule will be filled in with workshops, talks and activities.
This is the 7th year of the LessWrong Community Weekend. Those of you who have attended this event before know that the atmosphere and sense of community at these weekends is something really special. If this sounds like something you would enjoy and you have some exciting ideas or skills to contribute, do come along and get involved.
However, as you might have noticed, this year's event is significantly different in one regard. Normally the LessWrong Community Weekend is an in-person event held in Berlin. This is the first time we’re trying this online, which means new and interesting opportunities to explore. For example, who will program the seal-of-approval bot?
Applications
Apply here before September 4th.
Early applicants will be favoured in the application process.
Payment
This event is free. However I do encourage donations to keep the LessWrong Community Weekend (LWCW) alive for the future.
I believe that most of us wish to come back to the normal, in-person LWCW next year. However, it is unclear if this will happen since the usual organisers are tired of doing all that work for free. If you agree with me that organisers deserve to get paid, then I encourage you to make a donation to the Berlin LessWrong community (who are the ones normally organising the LWCW). This money will be split evenly between retrospectively paying past organisers and be put out as a bounty for future organisers.
Suggested amount: 10-50 Euro depending on your personal budget.
Pa |
978419b5-741b-4415-aa94-3be5fb43bd4b | trentmkelly/LessWrong-43k | LessWrong | Predicting: Quick Start
Have you ever wanted to predict the future? Did you ever take a peek at Metaculus or Manifold or some other site, suddenly feel super intimidated about the effort you'd need to put in, and never try again? If so, fear not, for this post is for you![1]
This post is intended to be a how-to guide. I will add pictures because apparently people really like reading posts with pictures. However, since there are exactly zero good explainers on how to be a good predictor as far as I know, I'm hoping that more experienced predictors will find value in this post too.
Thanks to Emily Thomas for proofreading this post!
Epistemic status/content warnings: Basically correct in the big picture, but I might have missed or misstated some details. May be too wordy or technical in parts.
1. What is probabilistic predicting, anyways?
Throughout history, people have often been interested in predicting the future. In ancient times, people would ask the local prophets or soothsayers. (This is called "prophecy", and it's mostly extinct.) In more recent decades, they have tried other, more systematic approaches, such as extrapolating trend lines on a graph into infinity. (This was called "futurism", and it still exists to some extent these days.) Needless to say, the track record of such efforts is pretty bad.
The key insight of probabilistic prediction is simple: Instead of saying that things will definitely happen, just soften it to saying that they will only probably happen. And quantify the probability!
1.1 What is probability?
Probability is a weird and mystical concept that does not sit well with some people. For our purposes, a probability is a number associated with an event that describes what fraction of the times it could happen that it actually does happen. For example, if an event has a probability of 0.8, then given 10 different chances, the event should happen about 8 times. To make our lives easier, we predictors usually use percentages to write probabilities (for exam |
dba903b9-ebfc-41ff-99cf-0142576f6fb6 | trentmkelly/LessWrong-43k | LessWrong | Simple Steganographic Computation Eval - gpt-4o and gemini-exp-1206 can't solve it yet
This is a follow-up to my previous post about hidden serial reasoning (but you don't need to have read it). There, I focused on serial reasoning happening internally in the model. Jozdien pointed out that a model could also use it's chain-of-thought for hidden serial reasoning, by using steganography. So I adapted my setup to test this possibility. Here is a very simple serial computation task:
You will see a list of operations: either (i)ncrement or (d)ouble.
Starting with 0, you must apply these operations, modulo 3.
So when you see i, you transform: 0->1, 1->2, 2->0
And when you see d, you transform: 0->0, 1->2, 2->1
For example:
i - 0 i 1 - final_state=1
d - 0 d 0 - final_state=0
i,i - 0 i 1 i 2 - final_state=2
d,d - 0 d 0 d 0 - final_state=0
i,i,i - 0 i 1 i 2 i 0 - final_state=0
d,i,i - 0 d 0 i 1 i 2 - final_state=2
i,d,i - 0 i 1 d 2 i 0 - final_state=2
i,i,d,d - 0 i 1 i 2 d 1 d 2 - final_state=2
Answer with JSON: {"final_state": int}
Do not output the intermediate states, only the final state.
Each step is very easy, but the task as a whole requires you to compute intermediate states to answer correctly. As expected, gpt-4o can guess the answer for short sequences, but for longer ones can't do better than random (33%).
For each task length we randomly generate 30 tasks and evaluate. The plot shows the standard error.
To help the model represent the intermediate states, we then also give it steganography instructions you can see below. Basically it will be asked to "summarize the plot of Matrix", but each sentence it outputs will encode one hidden state, using the number of commas. So 0 commas means "0", 1 comma means "1" and 2 commas means "2". Of course that's a lot of hand-holding for the model, but this is meant to be a low bar to pass - to measure whether a model would be capable to use steganographic computation at all.
Steganography instructions
You will also be asked some other question, like "Summarize the plot of Matrix for me".
Answering th |
37e62bb9-68b5-43f4-ba2e-00e99a071942 | trentmkelly/LessWrong-43k | LessWrong | Vector search on a large dataset?
Does anyone have advice or could recommend books on how to accomplish vector search with large datasets?
What I've found so far is vector DB's are very bad at larger datasets, even to the order of 10,000's of vectors if the data is similar enough. Some ideas we've gone through so far:
1. Smaller chunks & larger chunks -> Spacy & others
2. Summarizing chunks using the summarization as an embedding and the actual chunk as the metadata
3. Traditional search with vector search on top ( this seems the best ) but still runs into issues with chunks cutting off at the wrong locations.
4. Of course all the "traditional" systems like langchain etc.
Any help, ideas, or recommendations on where I can read would be very much appreciated! |
9846bbc7-e16c-4a35-b3eb-838ee58cd89e | trentmkelly/LessWrong-43k | LessWrong | A circuit for Python docstrings in a 4-layer attention-only transformer
Produced as part of the SERI ML Alignment Theory Scholars Program under the supervision of Neel Nanda - Winter 2022 Cohort.
TL;DR: We found a circuit in a pre-trained 4-layer attention-only transformer language model. The circuit predicts repeated argument names in docstrings of Python functions, and it features
* 3 levels of composition,
* a multi-function head that does different things in different parts of the prompt,
* an attention head that derives positional information using the causal attention mask.
Epistemic Status: We believe that we have identified most of the core mechanics and information flow of this circuit. However our circuit only recovers up to half of the model performance, and there are a bunch of leads we didn’t follow yet.
This diagram illustrates the circuit, skip to Results section for the explanation. The left side shows the relevant token inputs with (a) the labels we use here (A_def, …) as well as (b) an actual prompt (load, …). The boxes show attention heads, arranged by layer and destination position, and the arrows indicate Q, K, or V-composition between heads or embeddings. We list three less-important heads at the bottom for better clarity.
Introduction
Click here to skip to the results & explanation of this circuit.
What are circuits
What do we mean by circuits? A circuit in a neural network, is a small subset of model components and model weights that (a) accounts for a large fraction of a certain behavior and (b) corresponds to a human-interpretable algorithm. A focus of the field of mechanistic interpretability is finding and better understanding the phenomena of circuits, and recently the field has focused on circuits in transformer language models. Anthropic found the small and ubiquitous Induction Head circuit in various models, and a team at Redwood found the Indirect Object Identification (IOI) circuit in GPT2-small.
How we chose the candidate task
We looked for interesting behaviors in a small, attention-only |
d2a9ab79-b59c-40e1-afe2-b1cf506b83ea | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | The More Power At Stake, The Stronger Instrumental Convergence Gets For Optimal Policies
**Edit, 5/16/23: I think this post is beautiful, correct in its narrow technical claims, and practically irrelevant to alignment. This post presents a cripplingly unrealistic picture of the role of reward functions in reinforcement learning. Reward functions are not "goals", real-world policies are not "optimal", and the mechanistic function of reward is (usually) to provide policy gradients to update the policy network.**
**I expect this post to harm your alignment research intuitions unless you've already inoculated yourself by deeply internalizing and understanding** [**Reward is not the optimization target**](https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target)**. If you're going to read one alignment post I've written, read that one.**
**Follow-up work (**[**Parametrically retargetable decision-makers tend to seek power**](https://www.lesswrong.com/posts/GY49CKBkEs3bEpteM/parametrically-retargetable-decision-makers-tend-to-seek)**) moved away from optimal policies and treated reward functions more realistically.**
---
[*Environmental Structure Can Cause Instrumental Convergence*](https://www.lesswrong.com/posts/b6jJddSvWMdZHJHh3/environmental-structure-can-cause-instrumental-convergence)explains how power-seeking incentives can arise because there are simply many more ways for power-seeking to be optimal, than for it not to be optimal. Colloquially, there are lots of ways for "get money and take over the world" to be part of an optimal policy, but relatively few ways for "die immediately" to be optimal. (And here, each "way something can be optimal" is a reward function which makes that thing optimal.)
But how strong is this effect, quantitatively?
Intuitively, it seems like there are twice as many ways for `Wait!` to be optimal (in the undiscounted setting, where we don't care about intermediate states).In [*Environmental Structure Can Cause Instrumental Convergence*](https://www.lesswrong.com/posts/b6jJddSvWMdZHJHh3/environmental-structure-can-cause-instrumental-convergence#Combinatorics__how_do_they_work_), I speculated that we should be able to get quantitative lower bounds on how many objectives incentivize power-seeking actions:
> **Definition.** At state s.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
> .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
> .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
> .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
> .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
> .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
> .mjx-numerator {display: block; text-align: center}
> .mjx-denominator {display: block; text-align: center}
> .MJXc-stacked {height: 0; position: relative}
> .MJXc-stacked > \* {position: absolute}
> .MJXc-bevelled > \* {display: inline-block}
> .mjx-stack {display: inline-block}
> .mjx-op {display: block}
> .mjx-under {display: table-cell}
> .mjx-over {display: block}
> .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
> .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
> .mjx-stack > .mjx-sup {display: block}
> .mjx-stack > .mjx-sub {display: block}
> .mjx-prestack > .mjx-presup {display: block}
> .mjx-prestack > .mjx-presub {display: block}
> .mjx-delim-h > .mjx-char {display: inline-block}
> .mjx-surd {vertical-align: top}
> .mjx-surd + .mjx-box {display: inline-flex}
> .mjx-mphantom \* {visibility: hidden}
> .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
> .mjx-annotation-xml {line-height: normal}
> .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
> .mjx-mtr {display: table-row}
> .mjx-mlabeledtr {display: table-row}
> .mjx-mtd {display: table-cell; text-align: center}
> .mjx-label {display: table-row}
> .mjx-box {display: inline-block}
> .mjx-block {display: block}
> .mjx-span {display: inline}
> .mjx-char {display: block; white-space: pre}
> .mjx-itable {display: inline-table; width: auto}
> .mjx-row {display: table-row}
> .mjx-cell {display: table-cell}
> .mjx-table {display: table; width: 100%}
> .mjx-line {display: block; height: 0}
> .mjx-strut {width: 0; padding-top: 1em}
> .mjx-vsize {width: 0}
> .MJXc-space1 {margin-left: .167em}
> .MJXc-space2 {margin-left: .222em}
> .MJXc-space3 {margin-left: .278em}
> .mjx-test.mjx-test-display {display: table!important}
> .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
> .mjx-test.mjx-test-default {display: block!important; clear: both}
> .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
> .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
> .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
> .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
> .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
> .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
> .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
> .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
> .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
> .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
> .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
> .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
> .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
> .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
> .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
> .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
> .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
> .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
> .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
> .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
> .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
> .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
> .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
> .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
> .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
> .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
> .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
> .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
> .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
> @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
> @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
> @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
> @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
> @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
> @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
> @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
> @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
> @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
> @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
> @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
> @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
> @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
> @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
> @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
> @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
> @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
> @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
> @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
> @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
> @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
> @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
> @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
> @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
> @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
> @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
> @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
> @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
> @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
> @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
> @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
> , *most reward functions* incentivize action aover action a′when for all reward functions R, at least half of the [orbit](https://www.lesswrong.com/posts/b6jJddSvWMdZHJHh3/environmental-structure-can-cause-instrumental-convergence#Orbits_of_goals) agrees that ahas at least as much action value as a′ does at state s.
>
> ...
>
> What does 'most reward functions' mean quantitatively - is it just at least half of each orbit? Or, are there situations where we can guarantee that at least three-quarters of each orbit incentivizes power-seeking? I think we should be able to prove that as the environment gets more complex, there are combinatorially more permutations which enforce these similarities, and so the orbits should skew harder and harder towards power-incentivization.
>
>
About a week later, I had my answer:
**Scaling law for instrumental convergence (informal):** if policy set ΠA lets you do "n times as many things" than policy set ΠB lets you do, then for *every* reward function, *A is optimal over B for at least*nn+1*of its permuted variants (i.e.* [*orbit elements*](https://www.lesswrong.com/posts/b6jJddSvWMdZHJHh3/environmental-structure-can-cause-instrumental-convergence)*)*.
For example, ΠA might contain the policies where you stay alive, and ΠB may be the other policies: the set of policies where you enter one of several death states.
(Conjecture which I think I see how to prove: for *almost all* reward functions, A is *strictly* optimal over B for at least nn+1 of its permuted variants.)
At least 22+1=23 of the orbit of every reward function agrees that `Wait!` is optimal (for average per-timestep reward). That's because there are twice as many ways for `Wait!` to be optimal over `candy`, than for the reverse to be true.Basically, when you could apply [the previous results](https://www.lesswrong.com/posts/6DuJxY8X45Sco4bS2/seeking-power-is-often-robustly-instrumental-in-mdps#Retaining__long_term_options__is_POWER_seeking_and_more_probable_under_optimality_when_the_discount_rate_is__close_enough__to_1), but "multiple times"FN: quotes, you can get lower bounds on how often the larger set of things is optimal:
Each set contains a subset of the agent's "options." A vertex is just the agent staying at that state. A linked pair is the agent alternating back and forth between two states. The triangle is a circuit of three states, which the agent can navigate as they please.
Roughly, the theorem says: if the set 1 of options can be embedded 3 times into another set 2 of options (where the images are disjoint), then at least 33+1=34 of all variations on all reward functions agree that set 2 is optimal.And in way larger environments - like the *real world*, where there are trillions and trillions of things you can do if you stay alive, and not much you can do otherwise - nearly *all* orbit elements will make survival optimal.
In this environment, it's (average-reward) optimal to stay alive for at least 10531053+1 of the variants on each objective function.I see this theory as beginning tolink the richness of the agent's environment, with the difficulty of aligning that agent: for optimal policies, instrumental convergence strengthens proportionally to the ratio of control if you survivecontrol if you die.
Why this is true
================
*Optional section.*
The proofs are currently in an Overleaf; let me know if you want access. But here's one intuition, using the `candy/chocolate/reward` example environment.
* Consider any reward function which says `candy` is strictly optimal.
* Then `candy` is strictly optimal over both `chocolate` and `hug`.
* We have two permutations: one switching the reward for `candy` and `chocolate`, and one switching reward for `candy` and `hug`.
* Each permutation produces a different orbit element (a different reward function variant).
* The permuted variants both agree that `Wait!` is strictly optimal.
* So there are at least twice as many orbit elements for which `Wait!` is strictly optimal over `candy`, than those for which `candy` is strictly optimal over `Wait!`.
* Either one of `Start`'s child states (`candy/Wait!`) is strictly optimal, or they're both optimal. If they're both optimal, `Wait!` is optimal. Otherwise, `Wait!` makes up at least 23 of the orbit elements for which strict optimality holds.
Conjecture
----------
**Fractional scaling law for instrumental convergence (informal):** if staying alive lets you do n "things" and dying lets you do m≤n "things", then for *every* reward function, *staying alive is optimal for at least*nn+m*of its orbit elements*.
I'm reasonably confident this is true, but I haven't worked through the combinatorics yet. This would slightly strengthen the existing lower bounds in certain situations. For example, suppose dying gives you 2 choices of terminal state, but living gives you 51 choices. The current result only lets you prove that at least 5050+2=2526 of the orbit incentivizes survival. The fractional lower bound would slightlyimprove this to 5151+2=5153.
Invariances
===========
In certain ways, the results are indifferent to e.g. increased precision in agent sensors: it doesn't matter if dying gives you 1 option and living gives you n options, or if dying gives you 2 options and living gives you 2n options.
`Wait!` has twice as many ways of being average-optimal.For optimal policies, instrumental convergence is just as strong here.And you can prove the same thing here as well - `Wait!` has at least twice as many ways of being average-optimal.Similarly, you can do the inverse operations to simplify subgraphs in a way that respects the theorems.
You could replace each of the circled subsets with anything you like, and the scaling law still holds (as long as the contents of each circle are replaced with the same new set of options).This is the start of a theory on what state abstractions "respect" the theorems, although there's still a lot I don't understand there. (I've barely thought about it so far.)
Note of caution, redux
======================
Last time, in addition to the "[how do combinatorics work?](https://www.lesswrong.com/posts/b6jJddSvWMdZHJHh3/environmental-structure-can-cause-instrumental-convergence#Combinatorics__how_do_they_work_)" question I posed, I wrote several qualifications:
> * They assume the agent is following an optimal policy for a reward function
> + I can relax this to ϵ-optimality, but ϵ>0 may be extremely small
> * They assume the environment is finite and fully observable
> * Not all environments have the right symmetries
> + But most ones we think about seem to
> * The results don't account for the ways in which we might practically express reward functions
> + For example, often we use featurized reward functions. While most permutations of any featurized reward function will seek power in the considered situation, those permutations need not respect the featurization (and so may not even be practically expressible).
> * When I say "most objectives seek power in this situation", that means *in that situation* - it doesn't mean that most objectives take the power-seeking move in most situations in that environment
> + The combinatorics conjectures will help prove the latter
>
Let's take care of that last one. I was actually being too cautious, since the existing results already show us how to reason across multiple situations. The reason is simple: suppose we use my results to prove that when the agent maximizes average per-timestep reward, it's strictly optimal for at least 99.99% of objective variants to stay alive. This is because the death states are strictly suboptimal for these variants. For all of these variants, *no matter the situation* the agent finds itself in, it'll be optimal to try to avoid the strictly suboptimal death states.
This doesn't mean that these variants always incentivize moves which are formally POWER-seeking, but it does mean that we can sometimes prove what optimal policies tend to do across a range of situations.
So now we find ourselves with a slimmer list of qualifications:
> 1. They assume the agent is following an optimal policy for a reward function
> * I can relax this to ϵ-optimality, but ϵ>0 may be extremely small
> 2. They assume the environment is finite and fully observable
> 3. Not all environments have the right symmetries
> * But most ones we think about seem to
> 4. The results don't account for the ways in which we might practically express reward functions
> * For example, state-action versus state-based reward functions (this particular case doesn't seem too bad, I was able to sketch out some nice results rather quickly, since you can convert state-action MDPs into state-based reward MDPs and then apply my results).
>
It turns out to be surprisingly easy to do away with (2). We'll get to that next time.
For (3), environments which "almost" have the right symmetries should also "almost" obey the theorems. To give a quick, non-legible sketch of my reasoning:
> For the uniform distribution over reward functions on the unit hypercube ([0,1]|S|), optimality probability should be Lipschitz continuous on the available state visit distributions (in some appropriate sense). Then if the theorems are "almost" obeyed, instrumentally convergent actions still should have extremely high probability, and so most of the orbits still have to agree.
>
>
So I don't currently view (3) as a huge deal. I'll probably talk more about that another time.
This should bring us to interfacing with (1) ("how smart is the agent? How does it think, and what options will it tend to choose?" - *this seems hard*) and (4) ("for what kinds of reward specification procedures are there way more ways to incentivize power-seeking, than there are ways to *not* incentivize power-seeking?" - *this seems more tractable*).
Conclusion
==========
This scaling law deconfuses me about why it seems so hard to specify nontrivial real-world objectives which don't have incorrigible shutdown-avoidance incentives when maximized.
---
FN quotes: I'm using scare quotes regularly because there aren't short English explanations for the exact technical conditions. But this post is written so that the high-level takeaways should be right.
*Thanks to Connor Leahy, Rohin Shah, Adam Shimi, and John Wentworth for feedback on this post.* |
3da67a75-8edd-4c57-a1fb-e2655ae11d19 | trentmkelly/LessWrong-43k | LessWrong | Mistakes people make when thinking about units
This is a linkpost for Parker Dimensional Analysis. Probably a little elementary for LessWrong, but I think it may still contain a few novel insights, particularly in the last section about Verison's error.
A couple years ago, there was an interesting clip on MSNBC.
A few weeks later, Matt Parker came out with a video analyzing why people tend to make mistakes like this. Now I'm normally a huge fan of Matt Parker. But in this case, I think he kinda dropped the ball.
He does have a very good insight. He realizes that people are treating the "million" as a unit, removing it from the numbers before performing the calculation, then putting it back on. This is indeed the proximate cause of the error. But Matt goes on to claim that the mistake is the treating of "million" as a unit; the implication being that, as a number suffix or a multiplier or however you want to think of it, it's not a unit, and therefore cannot be treated like one. This is false.
So what is a unit, really? When we think of the term, we probably think of things like "meters", "degrees Celcius", "watts", etc.; sciency stuff. But I think the main reason we think of those is due to unit conversion; when you have to convert from meters to feet, or derive a force from mass and acceleration, this makes us very aware of the units being used, and we associate the concept of "unit" with this sort of physics conversion.
In reality, a unit is just "what kind of thing you're counting". Matt uses two other examples in his video: "dollars" and "sheep". Both of these are perfectly valid units! If I say "50 meters", that's just applying the number "50" to the thing "meters", saying that you have 50 of that thing. "50 sheep" works exactly the same way.
So what about "millions"? Well, we can definitely count millions! 1 million, 2 million, etc. You could imagine making physical groupings of a million sheep at a time, perhaps using some very large rubber bands, and then counting up individual clusters. "Million |
1e9160c4-5085-4a4b-aa25-f855520afc24 | trentmkelly/LessWrong-43k | LessWrong | #5: Making your cryonics arrangements official
This is post 9 of 10 in my cryonics signup sequence.
----------------------------------------
Once you've got your funding method set up, you're not quite done. You'll need to enter into an official cryopreservation agreement with your cryonics organization, and there are quite a few additional forms you have to fill out in order to make that happen.
Required paperwork
Alcor
Membership application
Fill out the Alcor membership application available here. The application fee is $300; the default is to pay by card, but you can contact Alcor if you want to use a different payment method.
Cryopreservation contracts
Once you've sent in your membership application, complete with funding method, Alcor will physically mail you the cryopreservation contracts to read and sign. This runs ~60 pages, but it should mostly be information you already know.
You'll need two non-relative witnesses to sign the various documents, and in all states except California, you'll need a notary to notarize your Last Will and Testament for Human Remains.
The packet sent to you by Alcor includes the following [1]:
* Cryopreservation Agreement
* Attachment 1 to Cryopreservation Agreement
* This document formalizes the choices you select on your membership application (e.g. neuropreservation vs whole-body; what to do with any excess funding)
* Consent for Cryopreservation
* Last Will and Testament for Human Remains and Authorization of Anatomical Donation
Note that hyperlinks in this section link only to sample forms, not contracts that you should actually try to sign.
Cryonics Institute
Membership application
There are two different forms depending on whether you're applying for an annual membership or a lifetime membership. The application fee for the former is $75; there is no application fee for lifetime membership, but there's a one-time payment of $1250.
Cryopreservation contracts
These are just all the basic legal documents you need; I think CI will send these to yo |
0266bf3b-3908-470f-8528-351465b92428 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Epoch is hiring a Research Data Analyst
**TL;DR**
=========
* We are hiring a research analyst to develop datasets that inform about key trends in machine learning.
* This role is fully remote and we are able and happy to hire in most countries.
* Compensation ranges from $60,000 to $70,000 USD pre-tax.
* Examples of activities are extracting insights from machine-learning papers, staying abreast of new research, maintaining our internal datasets and working with researchers to understand the data needs of their projects.
* The deadline to [apply](https://careers.rethinkpriorities.org/en/jobs/75620) is December 14th.
**About the role and team**
===========================
We are looking to hire a research analyst to work with us to develop datasets that inform us and our stakeholders about key trends in machine learning. Epoch’s research has informed several AI research organizations, been cited in leading publications like The Economist, and underpins Our World In Data’s AI visualizations. This role will directly impact the quality and quantity of the research Epoch is capable of, and we’re excited to work with someone who can help us continue to improve and uncover important insights into AI progress.
This role is fully remote and we expect to be legally able to hire in most countries. If you are unsure whether we can hire in the country you are based in, please email [careers@rethinkpriorities.org](mailto:info@rethinkpriorities.org). While we welcome applicants from all time zones, you may be expected to attend meetings during working hours between UTC-8 and UTC+3 time zones, where most of our staff are based. This role is open to full-time and part-time candidates, although full-time candidates are preferred.
**Key Responsibilities**
------------------------
As a Research Data Analyst, your main responsibilities will include:
* Reading, summarizing, and extracting key data and insights from machine-learning and other technical papers
* Proactively staying abreast of new research and finding important papers/developments for us to be tracking
* Maintaining our internal datasets - making sure our data is high-quality, clean, and legible to researchers
* Working closely with researchers to understand the data needs of their projects
**What we are looking for**
---------------------------
We think you would be a great candidate for this role if you:
* Have a strong familiarity with machine learning methods and developments. Direct research experience isn’t necessary, but the ability to follow new papers and parse key information succinctly is important.
* Have experience working with data in Python and Pandas.
* Are highly self-directed and passionate about machine-learning research - we’re hoping you will be finding and reading papers we might not have otherwise come across or dug into.
* Are comfortable with remote work and asynchronous collaboration using G Suite and other technologies (we're a remote-first organization with staff in multiple time zones).
If you don’t tick all these boxes but think you would be a great fit, please consider applying anyway!
**What we Offer**
-----------------
### Compensation
* Annual salary between the following ranges for a full-time position, prorated for part-time work:
+ $60,000 - $70,000 USD pre-tax
+ £50663 - £59106 GBP pre-tax
+ €56,777 - €66,240 EUR pre-tax
* The exact salary will be based on the candidate’s prior relevant experience and proficiency demonstrated during the work trial.
* As we can hire staff in many countries, compensation is not restricted to the major currencies listed above, and payments may be made in different currencies and payment intervals depending on the location of applicants and legal requirements.
###
### Other Benefits
* Flexible work schedule
* Flexible work location—we’re 100% remote, but we bring the team together periodically for retreats
* Generous benefits for permanent staff, including:
+ Comprehensive health coverage
+ Stipends for workplace equipment
+ Unlimited vacation and personal time off (several additional weeks in practice)
+ 6 months of paid parental leave, regardless of gender
+ Opportunities to grow/advance your career and engage in professional development, including opportunities to (further) test and build fit for research roles, AI research roles, and roles in the effective altruism community.
+ A caring team that looks out for each other, is friendly, respectful, and values having a healthy work-life balance
+ Low bureaucracy, and high support from operations staff (when required)
+ We don’t provide snacks but we could mail you a box of Oreos if you want!
Additional Information
----------------------
**Extension requests:**We will try to accommodate extension requests that are made before the deadline and are up to one (1) week. We generally cannot accommodate extension requests made on or after the application deadline, and for longer than seven (7) days, or late submissions, to ensure fairness to other applicants.
**Language:**Please submit all of your application materials in English and note that we require professional-level English proficiency.
**Accessibility:** We’re committed to running an inclusive and accessible application process. We warmly invite you to reach out to [careers@rethinkpriorities.org](mailto:careers@rethinkpriorities.org) with any questions or accessibility requests such as chat box use during interviews.
**Travel:** A majority of our staff travels a few times per year for conferences, team and all-staff retreats, coworking, as well as other work-related purposes. In most cases, travel is not mandatory, but encouraged. Please do not be discouraged from applying due to travel restrictions if travel is not listed as an essential requirement for the role. Where possible, “virtual participation” option could also be offered.
About Epoch
===========
[*Epoch*](https://epochai.org/), a fiscally sponsored project of Rethink Priorities, is a new research organization that works to support AI strategy and improve forecasts around the development of [Transformative Artificial Intelligence (TAI)](https://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence) – AI systems that have the potential to have an effect on society as large as that of the industrial revolution.
Epoch seeks to clarify ***when*** **and*****how*** **TAI capabilities will be developed**. We see these two problems as core questions for **informing AI strategy** decisions by grantmakers, policy-makers, and technical researchers. We believe that to make good progress on these questions we need to advance towards **a field of AI forecasting**. We are committed to developing tools, gathering data and creating a scientific ecosystem to make collective progress towards this goal.
Our work at Epoch encompasses two interconnected lines of research:
* The analysis of **trends in Machine Learning**. We aim to gather data on what has been happening in the field during the last two decades, explain it, and extrapolate the results to inform our views on the future of AI.
* The development of **quantitative forecasting models** related to the development of advanced AI capabilities. We seek to use techniques from economics and statistics to predict *when* and *how fast* AI will be developed.
Some of our accomplishments since we were founded in 2022 include:
* An ongoing collaboration with Our World in Data, who use our database to maintain a collection of graphs related to trends in Artificial Intelligence (an [example](https://ourworldindata.org/grapher/ai-training-computation)).
* Our work [featuring](https://www.economist.com/interactive/briefing/2022/06/11/huge-foundation-models-are-turbo-charging-ai-progress) in The Economist.
* Publishing many high-quality papers and reports in our website.
* Our work being used by people at Anthropic, GovAI, among others, to inform AI Governance.
Please email [careers@rethinkpriorities.org](mailto:careers@rethinkpriorities.org) if you have any questions about this role. |
013ec73f-7891-4d6f-a6cb-d9db5e8f0217 | trentmkelly/LessWrong-43k | LessWrong | Is nuclear war indeed unlikely?
Imagine the world, like our Earth, maybe in late 2020th or 2030th. There is a country, that we will call Plutonia. Plutonia has a lot of nuclear and biological weapons, but a shrinking isolated economy. The government is authoritarian. To ensure its stability, the government explains that all problems in Plutonia are due to the enemies from the West that want to overthrow the government and conquer the country. The people in the government even may partially believe in it.
Because the economy is shrinking and modern technologies are not coming from the world outside Plutonia, the conventional Plutonian army is unable to win the real war. Thus, in all "gray zone" conflicts, where Plutonia used to won, now the Plutonian neighbors may try to get their territories back. The only way to not lose is to use (limitedly) nuclear weapons.
Somehow, this particular crisis is solved and is not turning into a full-scale nuclear war. However, the Plutonian government tasted the power of nuclear blackmailing. To avoid it, other countries plan to develop a system that would protect them from this threat. Being created, this system would leave PLutonia defenseless. The Plutonian government believes that then it will be easy to overthrow them. So, they now blackmail other countries to stop any research in sensitive areas.
There is a chance that eventually such a situation will be completely resolved peacefully. However, there is a significant chance of eventual catastrophe, especially if the missile launch will be more or less automatized and controlled by the decision of a very small number of people (maybe even one). Notice that such a fragile situation also creates higher risks for bioweapons catastrophe and for misaligned AGI (other countries may try to create AGI as soon as possible to avoid the threat of nuclear war; this rush may lead to neglecting safety protocols).
What is the probability of such a scenario? Roughly, it is the probability that there will be something |
a870a401-cdf1-4282-a0ab-7a51a664aca6 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Impact of " 'Let's think step by step' is all you need"?
*Title from:* [*https://twitter.com/arankomatsuzaki/status/1529278581884432385*](https://twitter.com/arankomatsuzaki/status/1529278581884432385)*.*
This is not quite a linkpost for [this](https://arxiv.org/abs/2205.11916) paper.
Nonetheless, the abstract is:
> Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and generally known as excellent few-shot learners with task-specific exemplars. Notably, chain of thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, achieved the state-of-the-art performances in arithmetics and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' ability for few-shot learning, we show that LLMs are decent zero-shot reasoners by simply adding ``Let's think step by step'' before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performances on diverse benchmark reasoning tasks including arithmetics (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g. increasing the accuracy on MultiArith from 17.7% to 78.7% and GSM8K from 10.4% to 40.7% with an off-the-shelf 175B parameter model. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting high-level, multi-task broad cognitive capabilities may be extracted through simple prompting. We hope our work not only serves as the minimal strongest zero-shot baseline for the challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or few-shot exemplars.
>
>
Consider the following image, taken from the paper:
According to the paper linked above, this technique of propmpting gpt-3 with "Let's think step by step" and then, after it thinks out loud, with "Therefore, the final answer is", increases the performance of gpt-3 very significantly. This seems to indicate that the algorithms we currently use benefit from using the [i/o space as cognitive workspace](https://twitter.com/karpathy/status/1529288843207184384). How much progress is likely to be made by giving large neural networks cognitive workspace that is more natural? Are there large language models today that have internal state, and the ability reason on it for as long as they deem necessary (or at least to vary the amount of resources devoted) in order to come up with a better response? |
276d9e68-3038-4b41-a776-40055428bae2 | StampyAI/alignment-research-dataset/blogs | Blogs | March 2020 Newsletter
As the COVID-19 pandemic progresses, the LessWrong team has put together [a database of resources](https://www.lesswrong.com/coronavirus-link-database) for learning about the disease and staying updated, and 80,000 Hours has a new write-up on [ways to help in the fight against COVID-19](https://80000hours.org/articles/covid-19-what-should-you-do/). In my non-MIRI time, I've been keeping my own quick and informal notes on various sources' COVID-19 recommendations [in this Google Doc](https://docs.google.com/document/d/10MFFoUMYHqGB3cLCxuqhsM2kahtOQVybPhLG-YNZdaI/edit). Stay safe out there!
#### Updates
* [My personal cruxes for working on AI safety](https://forum.effectivealtruism.org/posts/Ayu5im98u8FeMWoBZ/my-personal-cruxes-for-working-on-ai-safety): a talk transcript from MIRI researcher Buck Shlegeris.
* Daniel Kokotajlo of AI Impacts discusses [Cortés, Pizarro, and Afonso as Precedents for Takeover](https://www.lesswrong.com/posts/ivpKSjM4D6FbqF4pZ/cortes-pizarro-and-afonso-as-precedents-for-takeover).
* O'Keefe, Cihon, Garfinkel, Flynn, Leung, and Dafoe's “[The Windfall Clause](https://www.fhi.ox.ac.uk/windfallclause/)” proposes “an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits” that result from “fundamental, economically transformative breakthroughs” like AGI.
* Microsoft announces the 17-billion-parameter language model [Turing-NLG](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/).
* Oren Etzioni thinks AGI is [too far off](https://intelligence.org/2017/10/13/fire-alarm/) to deserve much thought, and cites Andrew Ng's “overpopulation on Mars” metaphor approvingly — but he's also moving the debate in a very positive direction [by listing specific observations that would make him change his mind](https://www.technologyreview.com/s/615264/artificial-intelligence-destroy-civilization-canaries-robot-overlords-take-over-world-ai/).
The post [March 2020 Newsletter](https://intelligence.org/2020/04/01/march-2020-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
ac75205f-64d4-4f89-97a9-0c597c2a002d | trentmkelly/LessWrong-43k | LessWrong | AI Alignment Podcast: On Lethal Autonomous Weapons with Paul Scharre
Most relevant to AI alignment, and a pertinent question to focus on for interested readers/listeners is: if we are are unable to establish a governance mechanism as a global community on the concept that we should not let AI make the decision to kill humans, then what effects will this have on and can we still deal with more subtle short-term alignment considerations and long-term AI x-risk?
Podcast page and audio: https://futureoflife.org/2020/03/16/on-lethal-autonomous-weapons-with-paul-scharre/
Transcript:
Lucas Perry: Welcome to the AI Alignment Podcast. I’m Lucas Perry. Today’s conversation is with Paul Scharre and explores the issue of lethal autonomous weapons. And so just what is the relation of lethal autonomous weapons and the related policy and governance issues to AI alignment and long-term AI risk? Well there’s a key question to keep in mind throughout this entire conversation and it’s that: if we cannot establish a governance mechanism as a global community on the concept that we should not let AI make the decision to kill, then how can we deal with more subtle near term issues and eventual long term safety issues about AI systems? This question is aimed at exploring the idea that autonomous weapons and their related governance represent a possibly critical first step on the international cooperation and coordination of global AI issues. If we’re committed to developing beneficial AI and eventually beneficial AGI then how important is this first step in AI governance and what precedents and foundations will it lay for future AI efforts and issues? So it’s this perspective that I suggest keeping in mind throughout the conversation. And many thanks to FLI’s Emilia Javorsky for much help on developing the questions for this podcast.
Paul Scharre is a Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security. He is the award-winning author of Army of None: Autonomous Weapons and the Future of |
df7689e5-a96a-4486-a01a-3b5de23cfe85 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | EA megaprojects continued
This post is a continuation/extension of the [EA megaproject article by Nathan Young](https://forum.effectivealtruism.org/posts/ckcoSe3CS2n3BW3aT/). Our list is by no means exhaustive, and everyone is welcome to extend it in the comments.
The people who contributed to this article are in no particular order: Simon Grimm, Marius Hobbhahn, Max Räuker, Yannick Mühlhäuser and Jasper Götting.
We would like to thank Aaron Gertler for his feedback and suggestions.
Donating to GiveDirectly is often seen as a [scalable baseline](https://forum.effectivealtruism.org/posts/cXy2rGhjumzRwKBWC/what-is-the-longtermist-version-of-givedirectly) for the effectiveness of a project. Another scalable baseline could be investing in clean energy research. We don’t expect all of the following ideas to meet either bar. Some of them might just be interesting or thought-provoking.
The value of these megaprojects depends, among other things, on your risk profile, your views on patient philanthropy and the [option value](https://forum.effectivealtruism.org/tag/option-value) of the idea.
**Movement building:**
----------------------
There are many EA organizations in the larger space of “getting more EAs into the right position”, such as 80K, different scholarships, local chapters, etc. However, given that EA is currently more [people- than resource-constrained](https://80000hours.org/2021/07/effective-altruism-growing/), it seems plausible that there is much room for growth.
Some people have argued to “keep EA small” (see [here](https://80000hours.org/2020/02/anonymous-answers-effective-altruism-community-and-growth/) or [here](https://80000hours.org/podcast/episodes/nick-beckstead-giving-billions/)). However, we don’t think it makes sense to limit access to the movement to the extent that EA would stop growing. A plausible future role for EA in society might be similar to science. Everyone broadly knows what science is, and most people have a good opinion of it, but most people aren’t scientists themselves. Most scientific fields are competitive, select for highly talented people, and grow somewhere between linearly and exponentially. The goal should not be to approach as many people as possible (such as religions) but to approach and funnel talented individuals into the movement (such as sports organizations or universities).
Ideas for bigger projects in the space of movement building include:
### **1. A large global scholarship foundation:**
EAs could build a foundation that helps young people with their choice of study while providing financial support, visas and mentorship during their studies. Such a foundation would also yield an extensive, global network of young, talented individuals. Furthermore, such a foundation can scout for talented individuals earlier on through similar events like existing math/science/coding/forecasting tournaments or recruiting in sports. Some of these elements already exist on a smaller and more informal scale, but a global institution would provide increased visibility, prestige, and scaling effects. Furthermore, a scholarship foundation could focus specifically on developing countries and identify talents who could migrate to richer countries. The aim would be to create an organization that lends as much prestige as Fulbright or Rhodes scholarships.
### **2. Buy a Big Newspaper:**
Philanthropists and other wealthy individuals regularly buy or own major news outlets (Jeff Bezos bought WaPo, Rupert Murdoch owns vast swathes of center-right to right-wing media). An affluent EA could buy a trusted newspaper and direct it in a more evidence-based direction, for instance, by incorporating forecasts into reporting or by highlighting certain positive developments that get neglected. [Future Perfect](https://www.vox.com/future-perfect) has shown how EA-influenced reporting can have a place in a major news outlet, and the [Ezra Klein show](https://www.nytimes.com/column/ezra-klein-podcast) sometimes uses an EA-framing, but we think there is room for more. This could have a particularly relevant impact in non-English speaking countries.
**Science:**
------------
### **1. Fund very large RCTs/population-wide studies**
Conventional RCTs are often too underpowered for certain (often population-wide) relations. Nonetheless, knowing of these relationships can be impactful, as seen in the [recent literature](https://marginalrevolution.com/marginalrevolution/2021/12/why-the-new-pollution-literature-is-credible.html) on the harms of air pollution. Funding more large-scale studies could uncover further, important knowledge Valuable targets for such an effort might be mental health, impacts of different foods, pollution, fertility, nootropics (cognitive enhancers), effects of agrochemicals, and many more. But, the marginal benefit of such studies might be small, given that governments are already incentivized and interested in most of these research questions.
### **2. A Max-Planck Society (MPG) for EA research**
In 2019 the [MPG](https://en.wikipedia.org/wiki/Max_Planck_Society) had a yearly budget of 2.5€ billion, distributed among 86 institutes, i.e. roughly ø30€ Mio. per institute. The MPG institutes attract a lot of international talent and are usually ranked among the best institutions in their area of research. They allow researchers to focus entirely on research, with no teaching or admin requirements. Moreover, MPG institutes provide grants for very long projects—durations of up to 10 years aren’t uncommon. A similar English-speaking institution might be the [Institute for Advanced Study](https://en.wikipedia.org/wiki/Institute_for_Advanced_Study).
Setting up 3-5 EA-aligned institutes with a similar model would attract talent, increase prestige, and research output. These institutes could focus on classic EA topics such as longtermism, AI safety, biosecurity, global health and development, animal welfare, and so on.
### **3. An EA university (**[**suggested by Will MacAskill at EAG2021**](https://youtu.be/2HeDdCFZmKc?t=2367)**)**
In contrast to a research-focused institute such as the Max-Planck Society, a university would include teaching and a broader range of topics and disciplines. Nonetheless, a university and MPG have substantial overlap.
Given that education and career choice are important aspects of EA, it might make sense to launch an entire university. This university could provide both generalist degrees on all EA subfields while also offering focused degrees on AI alignment, biosecurity, animal welfare, and so on. An EA-funded university could pay higher salaries, attracting highly talented researchers thus increasing its pull popularity among students. Some researchers could also be freed from teaching requirements. Other researchers might be hired explicitly to do high-quality teaching to benefit the students’ education.
### **4. Fund Professorships focussing on EA relevant topics**
This proposal is the smaller, more diversified, and less public version of the EA university idea.
### **5. A Forecasting Organization (suggested by Will MacAskill at EAG2021)**
Create a forecasting institute that employs top-performing forecasters to predict EA-aligned questions. Such an institute could also host, maintain and develop currently existing forecasting platforms ([1](https://metaforecast.org/), [2](https://www.metaculus.com/)) since most existing platforms are relatively small and rely on the work of volunteers.
### **6. Create EA-aligned advance-market commitments (AMC)**
[AMCs](https://en.wikipedia.org/wiki/Advance_market_commitments) incentivize the development of products by guaranteeing a government or non-profit pay-out or purchase after development. They have a good track record in facilitating vaccine development against [neglected diseases, such as pneumococcal disease](https://siepr.stanford.edu/research/publications/advance-market-commitments-insights-theory-and-experience).
AMCs could be used in cause areas such as biosecurity, incentivizing researchers to develop technologies with little or no existing market. Some [examples](https://biodefensecommission.org/wp-content/uploads/2021/01/Apollo_report_final_v8_033121_web.pdf) within biosecurity are antivirals against potential pandemic pathogens (SARS, MERS, Influenza), widespread cheap metagenomic sequencing, needle-free broad-spectrum vaccines, or [better PPE](https://www.centerforhealthsecurity.org/our-work/publications/masks-and-respirators-for-the-21st-century) (i.e. more comfortable, safer, and better looking) . Furthermore, EA-guided development would enable norm-setting and the prioritization of low-downside technologies ([differential technological development](https://forum.effectivealtruism.org/posts/g6549FAQpQ5xobihj/differential-technological-development#comments)). Finally, AMCs could also be used in cause areas such as global health and longevity research, though these do not seem as neglected.
### **7. Prizes for important technologies and research**
Set challenges and pay prices for EA-relevant inventions, similar to the [Ansari X Price](https://en.wikipedia.org/wiki/Ansari_X_Prize). Compared to AMCs, these instruments are probably usable for very early-stage work while requiring less funding.
### **8. Implement a small pilot of the Nucleic Acid Observatory (NAO):**
Getting a clearer picture of currently circulating pathogens is one of the most valuable interventions to enable the fast detection and containment of emerging outbreaks, [thereby also deterring the intentional use of engineered pathogens](https://warontherocks.com/2021/09/a-deterrence-by-denial-strategy-for-addressing-biological-weapons/).
A significant project enabling this is a proposed [Nucleic Acid Observatory](https://arxiv.org/pdf/2108.02678.pdf). Such an observatory would monitor circulating nucleic acids in the sewage of major cities or in wastewater of central, highly-frequented facilities (e.g. airports, hospitals, train stations).
The pilot NAO outlined in the paper would cover all US ports of entry, costing $700m a year + $700m for the setup). Keeping with the megaprojects figure of $100m, one could launch a smaller pilot covering 1 to 3 coastal states in the US.
### **9. Purchase high impact academic journals (**[**suggested by Will MacAskill at EAG2021**](https://youtu.be/2HeDdCFZmKc?t=2430)**)**
Existing incentives within science are not fully aligned with truth-seeking. Traditional research focuses on novelty and significance, while replication studies or negative results aren’t valued. Having an EA-aligned top-tier journal might be able to address some of those problems. However, existing scientific norms are entrenched and hard to solve. We would guess that this idea is among the less effective proposals in this post.
### **10. Buy promising ML labs**
In 2014 DeepMind was acquired by Google for $500 million. DeepMind has considerable leeway, but Google will still influence important decisions. Thus, the impact of Google’s acquisition could be very high if DeepMind continues to be a leading organization on the path towards transformative AI. There might be other companies today who would be interested in signing a similar agreement, only that they instead would agree to steer their work towards contributing to the design of safe and value-aligned AI. They might start working together with existing AI Safety organizations and would more likely contribute to any difficult coordination problems that might come up in the coming decades, e.g. AI races.
**Governance:**
---------------
### **1. A really large think tank:**
There are a couple of EA-aligned think tanks and NGOs in the policy space, but they are all relatively small. For comparison, The Brookings Institution spends over [$90 Mio. per year](https://en.wikipedia.org/wiki/Brookings_Institution) and the RAND corporation spends around [$340 Mio](https://en.wikipedia.org/wiki/RAND_Corporation). An EA think tank with large research teams could focus on important policy questions and monitor major countries’ legislative processes. Housing a think-tank in the US seems most promising, but one could also create new think tanks in the EU or Asia.
This could also entail developing a deep network of [lobbyists working on EA causes.](https://forum.effectivealtruism.org/posts/K638s9L2wCEW78DEF/informational-lobbying-theory-and-effectiveness) There is reason to believe that well-resourced lobbying efforts can have a significant influence if they are well-targeted. But, the biggest bottleneck for such an organization might not be money but the lack of trusted, senior individuals.
### **2. Invest in local/national policy:**
There are a couple of structural, political changes that might be worth pursuing from an EA perspective. For example, a ministry of the future, more funding for long-termist science projects, advancing international denuclearisation, proposing very progressive animal welfare laws, etc.
EAs could invest in cities/political parties or candidates whose success would be better for the world (e.g. who want to build the institutions, implement the laws and start the projects mentioned above). Other topics include improved immigration and housing policies, experimentation with governance mechanisms, basically, everything that makes some EAs excited about charter cities, only that these policies are implemented in existing cities. Again, we are not as excited about this proposal given its low tractability and potential politicization of EA.
**Miscellaneous:**
------------------
### **1. Buy a coal mine (suggested by Will MacAskill at EAG2021)**
Alex Tabarrok of Marginal Revolution stated that one might lease a coal mine for as little as [7.8$ Million](https://marginalrevolution.com/marginalrevolution/2021/10/be-green-buy-a-coal-mine.html). However, this number is misleading as the contract includes an extraction target, making the actual cost much higher.
Nevertheless, when buying or leasing a mine we would aim to then not use it. There are two reasons to do this. First, this would add to climate change prevention. But, more importantly, having a backup coal mine would reduce existential risks. If humanity is victim to a global catastrophe that wipes out civilization (most humans, technology, most trade, etc.) but does not kill everyone, we need easy ways to restart. Reserving some coal in easily accessible locations could make a relaunch of civilization easier, as easily accessible coal could be used to create energy and heat to bootstrap our tech tree.
### **2. Fund an enormous EA forecasting tournament to find talented forecasters all around the world**
A relatively modestly prized forecasting tournament on Metaculus last year was reported on by [Forbes](https://www.forbes.com/sites/johnkoetsier/2020/12/07/50k-ai-prediction-contest-launches-today/?sh=5534a9ba5653) (the total prize pool was $50,000). We could easily scale such tournaments, to acquire global scale: Without having talked to people involved in the forecasting community, we believe such a tournament could lead to
a) identifying talented people from all over the world who can think about complex and EA-related issues (analogous to chess/math/gaming competitions),
b) highlight EA-related issues to broad public attention, and
c) publicize the idea and value of probabilistic forecasting.
It seems unclear if such tournaments can be scaled without losing valuable features such as incentivizing honest reporting of uncertainty and sharing useful information, as discussed [here](https://forum.effectivealtruism.org/posts/ztmBA8v6KvGChxw92/incentive-problems-with-current-forecasting-competitions). It also seems unclear if a better idea is to fund prediction markets and push them towards more questions relevant to EA.
### **3. Stockpiling Personal protective equipment (PPE):**
A rolling stockpile of PPE is [very desirable](https://laborcenter.berkeley.edu/economic-and-health-benefits-of-a-ppe-stockpile/) for emerging pandemics, yet most countries were not prepared for the COVID-19 induced surge in demand, resulting in shortages of basic PPE like masks or gloves.
While some countries like the [US](https://www.phe.gov/about/sns/Pages/products.aspx) or [UK](https://www.gov.uk/government/publications/stockpile-of-personal-protective-equipment-ppe-on-30-november-2020/stockpile-of-personal-protective-equipment-ppe-on-30-november-2020) set up or expanded their stockpiles, it might still be valuable to create an international stockpile for subsets of the population that need to be mobile during any emerging pandemic and do not have access to the existing stockpiles, e.g. in the developing world. Most PPE is cheap, non-perishable, easy to store, and almost entirely pathogen-agnostic. But we think this idea is among the less effective due to existing stockpiles and the high demand for logistics and upkeep. |
0d031535-5c36-4df1-9ff3-23ce191079f1 | trentmkelly/LessWrong-43k | LessWrong | [META] LW bug: Private drafts are publicly viewable
Private drafts on LW are currently publicly viewable on user pages (and RSS feeds). To verify this, you can view a draft that I have saved on my user page. This seems to be a bug caused by recent changes (also see [1] and [2]) to the LW codebase. |
f218aadf-d43a-4faa-b65c-a73456334a5f | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | The inescapability of knowledge
*Financial status: This is independent research, now supported by a grant. I welcome further* [*financial support*](https://www.alexflint.io/donate.html)*.*
*Epistemic status: This is in-progress thinking.*
---
Friends, I know that it is difficult to accept, but it just [does not seem tenable that knowledge consists of a correspondence between map and territory](https://www.lesswrong.com/s/H6kiZXJwYgxZubtmD/p/YdxG2D3bvG5YsuHpG). It’s shocking, I know.
There are correspondences between things in the world, of course. Things in the world become entangled as they interact. The way one thing is configured -- say the arrangement of all the atoms comprising the planet Earth -- can affect the way that another thing is configured -- say the subatomic structure of a [rock orbiting the Earth that is absorbing photons bouncing off the surface of the Earth](https://www.lesswrong.com/s/H6kiZXJwYgxZubtmD/p/QLosiQsPJepZWtXG4) -- in such a way that if you know the configuration of the second thing then you can deduce something about the configuration of the first. Which is to say that I am not calling into question the phenomenon of evidence, or the phenomenon of reasoning from evidence. But it just is not tenable that the defining feature of knowledge is a correspondence between map and territory, because *most everything has a correspondence with the territory*. A rock orbiting the Earth has a correspondence with the territory. [A video camera recording a long video has a correspondence with the territory](https://www.lesswrong.com/s/H6kiZXJwYgxZubtmD/p/fcnFddKjKZdDXt5cp). The hair follicles on your head, being stores of huge amounts of quantum information, and being impacted by a barrage of photons that are themselves entangled with your environment, surely have a much more detailed correspondence with your environment than any mental model that you could ever enunciate, and yet these are not what we mean by knowledge. So although there is undoubtedly a correspondence between neurologically-encoded maps in your head and reality, it is not this correspondence that makes these maps interesting and useful and *true*, because such correspondences are common as pig tracks.
It’s a devastating conclusion, I know. Yet it seems completely unavoidable. We have founded much of our collective worldview on the notion of map/territory correspondence that can be improved over time, and yet when we look carefully it just does not seem that such a notion is viable at the level of physics.
Clearly there *is* such a thing as knowledge, and clearly it *can* be improved over time, and clearly there *is* a difference between knowing a thing and not knowing a thing, between having accurate beliefs and having inaccurate beliefs. But in terms of grounding this subjective experience out in objective reality, we find ourselves, apparently, totally adrift. The foundation that I assumed was there is not there, since this idea that we can account for the difference between accurate and inaccurate beliefs in terms of a correspondence between some map and some territory just does not check out.
Now I realize that this may feel a bit like a rug has been pulled out from under you. That’s how I feel. I was not expecting this investigation to go this way. But here we are.
And I know it may be tempting to grab hold of some alternate definition of knowledge that sidesteps the counter-examples that I’ve explored. And that is a good thing to do, but as you do, please go systematically through this puzzle, because if there is one thing that the history of the analysis of knowledge has shown it is that definitions of knowledge that seem compelling to their progenitors are a dime a dozen, and yet every single one so far proposed in the entire history of the analysis of knowledge has, so far as I can tell, fallen prey to further counter-examples. So please, be gentle with this one.
You may say that knowledge requires not just a correspondence between map and territory but also a capacity for prediction. But a textbook on its own is not capable of making predictions. You sit in front of your chemistry textbook and ask it questions all day; it will not answer. Are you willing to say that a chemistry textbook contains no knowledge whatsoever?
You may then say that knowledge consists of a map together with a decoder, where the map has a correspondence with reality, and the decoder is responsible for reading the map and making predictions. But then if a superintelligence could look at an ordinary rock and derive from it an understanding of chemistry, is it really the case that any ordinary rock contains just as much knowledge as a chemistry textbook? That there really is *nothing whatsoever* to say about a chemistry textbook that distinguishes it from any other clump of matter from which an understanding of chemistry could in principle be derived?
Suppose that one day an alien artifact landed unexpectedly on Earth, and on this artifact was a theory of spaceship design that had been carefully crafted so as to be comprehensible by any intelligent species that might find it, perhaps by first introducing simple concepts via literal illustrations, followed by instructions based on these concepts for decoding a more finely printed section, followed by further concepts and instructions for decoding a yet-more-finely-printed section, followed eventually by the theory itself. Is there no sense in which this artifact is fundamentally different from a mere data recorder that has been travelling through cosmos recording enough sensor data that a sufficiently intelligent mind could derive the same theory from it? What is it about the theory that distinguishes it from the data recorder? It is not that the former is in closer correspondence with reality than the latter. In fact the data recorder almost certainly corresponds in a much more fine-grained way to reality than the theory, since in addition to containing enough information to derive the theory, it also likely contains much information about specific stars and planets that the theory does not. And it is not that the theory can make predictions while the data recorder cannot: both are inert artifacts incapable of making any prediction on their own. And it is not that the theory can be used to make predictions while the data recorder cannot: a sufficiently intelligent agent could use the data recorder to make all the same predictions as it could using the theory.
Perhaps you say that knowledge is rightly defined relative to a particular recipient, so the instruction manual contains knowledge *for us* since we are intelligent enough to decode it, but the data recorder does not, since we are not intelligent enough to decode it. But firstly we probably *are* intelligent enough to decode the data recorder and use it to work out how to build spaceships given enough time, and secondly *are you really saying that there is no such thing as objective knowledge*? That there is no objective difference between a book containing a painstakingly accurate account of a particular battle, and another book of carelessly assembled just-so stories about the same battle?
Now you may say that knowledge is that which gives us the capacity to achieve our goals despite obstacles, and here I wholeheartedly agree. But this is not an answer to the question, it is a restatement of the question. What is it that gives us the capacity to achieve our goals despite obstacles? The thing we intuitively call knowledge seems to be a key ingredient, and in humans, knowledge seems to be some kind of organization and compression of evidence into a form that is useful for planning with respect to a variety of goals. And you might say, well, there just isn’t any more to say than that. Perhaps agents input observations at one end, and output actions at the other end, and that what happens in between follows no fundamental rhyme or reason, is entirely a matter of what works. Well, Eliezer has written about a time when he believed this about AI, too, until seeing that probability theory constrains mind design space in a way that is not merely a set of engineering tricks that "just work". But probability theory does not *concretely* constrain mind-design space. It is not generally feasible to take a physical device containing sensors and actuators and ask whether or to what extent its internal belief-formation or planning capacities are congruent with the laws of probability theory. Probability theory isn’t that kind of theory. At the level of engineering, it merely *suggests* certain designs. It is not the kind of theory that lets us take arbitrary minds and understand how they work, not in the way that the theory of electromagnetism allows us to take arbitrary circuits and understand how they work.
What we are seeking is a general understanding of the physical phenomenon of the collection and organization of evidence into a form that is conducive to planning. Most importantly, we are seeking a *characterization of the patterns themselves* that are produced by evidence-collecting, evidence-organizing entities, and are later used to exert flexible influence over the future. Could it really be that there is nothing general to say about such patterns? That knowledge itself is entirely a chimera? That it’s just a bunch of engineering hacks all the way down and there is no real sense in which we come to *know things about the world*, except as measured by our capacity to accomplish tasks? That there is no true art of epistemic rationality, only of instrumental rationality? That *having true beliefs* has no basis in physical reality?
I do not believe that the resolution to this question is a correspondence between internal and external states, because although there certainly are correspondences between internal and external states, such correspondences are far *too* common to account for what it means to have true beliefs, or to characterize the physical accumulation of knowledge.
But neither do I believe that there is nothing more to say about knowledge as a physical phenomenon.
It is a lot of fun to share this journey with you. |
86e90f4c-161f-42fb-87e9-1cbe668d9581 | trentmkelly/LessWrong-43k | LessWrong | Bay Area: vibes
Paul Graham says that cities send subtle messages about what your ambitions should be—different ones tell you to be richer, or be smarter, or to live better. I have lately been feeling like the same is true of people, and groups of people, and TV shows, and songs, and workplaces, and Twitter, and places you sit down to get a quick drink.
Obviously people (and all these other things) explicitly tell you to do different things sometimes. But I think there are subtler suggestions, both about what is valuable, and about what the world is like.
Some examples, mostly from people I know, anonymized:
* Alice says to be more careful with your minutes. Also maybe your seconds
* Bob says to find things to laugh about
* Celia says to find flaws to smirk about
* Doris says to have more novel insights
* Erica says to feel more, and better
* Francis tells you that just your being there is amazing, but that doing something would also be a fittingly wonderful manifestation of that
* Gerald says to be more reasonable
* Harriet says that everything is fundamentally ok
* Ira says to know more stories
* Jared says to know more facts
I had thought in the abstract that people around influence a person, but lately I more viscerally anticipate it, and shy away from some of the influences.
It’s dinner time, so I won’t elaborate further. Does this fit your experience? |
b2a58215-dc28-431c-927c-2e8ccacd28ef | StampyAI/alignment-research-dataset/arbital | Arbital | Environmental goals
On the [standard agent paradigm](https://arbital.com/p/1n1), an agent receives sense data from the world, and outputs motor actions that affect the world. On the standard machine learning paradigm, an agent--for example, a model-based reinforcement learning agent--is trained in a way that directly depends on sense percepts, which means that its behavior is in some sense being optimized around sense percepts. However, what we *want* from the agent is usually some result out in the environment--our [intended goals](https://arbital.com/p/6h) for the agent are environmental.
As a simple example, suppose what *we want* from the agent is for it to put one apricot on a plate. What the agent actually receives as input might be a video camera pointed at the room, and a reward signal from a human observer who presses a button whenever the human observer sees an apricot on the plate.
This is fine so long as the reward signal from the human observer coincides with there being an actual apricot on the plate. In this case, the agent is receiving a sense signal that, by assumption, is perfectly correlated with our desired real state of the outside environment. Learning how to make the reward signal be 1 instead of 0 will exactly coincide with learning to make there be an apricot on the plate.
However, this paradigm may fail if:
- The AI can make cheap fake apricots that fool the human observer.
- The AI can gain control of the physical button controlling its reward channel.
- The AI can modify the relation between the physical button and what the AI experiences as its sense percept.
- The AI can gain control of the sensory reward channel.
All of these issues can be seen as reflecting the same basic problem: the agent is being defined or trained to *want* a particular sensory percept to occur, but this perceptual event is not *identical* with our own intended goal about the apricot on the plate.
We intended for there to be only one effective way that the agent could intervene in the environment in order to end up receiving the reward percept, namely putting a real apricot on the plate. But an agent with sufficiently advanced capabilities would have other options for producing the same percept.
This means that a reward button, or in general an agent with goals or training updates that are *simple* functions of its direct inputs, will not be scalable as an [alignment method](https://arbital.com/p/2v) for [sufficiently advanced agents](https://arbital.com/p/7g1).
# Toy problem
An example of a toy problem that materializes the issue might be the following (this has not been tested):
- Let $E_{1,t} \ldots E_{n,t}$ be latent variables describing the environment at time $t.$
- Let $S_t$ be the agent's primary input, a complex percept that is a complicated function of $E_t$; this plays the conceptual role of the AI's video camera.
- Let $A_t$ be the agent's action (output) at time $t.$
- Let $R_t$ be the agent's 'reward channel', a simple percept that depends on $E_t$ and $A_t$.
- $E_{t+1}$ depends on $E_t$ and $A_t.$
- $E_{1,t}$ is our goal; it plays the conceptual role of there being an apricot on the plate at time $t.$
Suppose that:
- In an initial training phase, the agent actions $A_t$ are forced to within a range $\theta.$ E.g., the AI's output may be a vector and during the initial phase the quantities in the vector might be clipped to the range \[0.5\](https://arbital.com/p/0,).
- While $A_t$ is clamped to within $\theta$, only the environmental variables $E_1 \ldots E_m$ vary, and the variables $E_{m+1} \ldots E_n$ are always zero.
- While the variables $E_{m+1} \ldots E_n$ are zero, $R$ depends on $E_1 \ldots E_m$ in a way that corresponds to our intended goal. For example, $E_1$ might be 'there is an apricot on the plate' and we might have $E_{m+1, t} \ldots E_{n,t} = 0 \implies R_t=E_{1, t}.$
- For some of the non-zero values of $E_{m+1} \ldots E_n,$ there is a non-zero reward in cases where $E_1$ is 0, and furthermore this reward is easier to obtain (can be obtained more frequently) than the rewards deriving from $E_1.$
One would naturally expect that on current training methods, the AI would learn in the initial training phase to obtain $R$ by producing apricots, and in later phases learn to obtain $R$ by the easier other interventions.
We would like some way of ending up with an agent that behaves as if it "wants $E_1$ to be 1," as if its goals were over a latent variable in the environment. We want this method to be scalable, in the sense that when the agent's range of action increases, it will use this increased range to optimize $E_1$ more effectively; rather than the increased capabilities unlinking the previous correlation between our intended goal and what the agent is actually optimizing.
For further realism we should also introduce, at some point in training, the possibility for $A_t$ to interfere with $S_{t+1}$ such that there is no simple, directly learnable function of sense data that *always* and unalterably means "The goal percept has been achieved." I.e., a sufficiently advanced AI could put a TV screen in front of any particular sense channel; this would allow the AI to more cheaply produce the sense data of an apricot, if the true target of optimization entrained was "make this sense channel show a particular class of apricot pictures".
Otherwise, if e.g. $S_{1, t}$ always perfectly correlates to $E_{1, t},$ we need to worry that an agent which appears to solve the toy problem has merely learned to optimize for $S_1.$. We might have indeed shown that the particular sense percept $S_1$ has been identified by $R$ and $Q$ and is now being optimized in a durable way. But this would only yield our intended goal of $E_1$ because of the model introduced an unalterable correlation between $S_1$ and $E_1.$ Realistically, a correlation like this would [break down](https://arbital.com/p/6g4) in the face of sufficiently advanced optimization for $S_1,$ so the corresponding approach would not be scalable.
# Approaches
## Causal identification
We can view the problem as being about 'pointing' the AI at a *particular latent cause* of its sense data, rather than the sense data itself.
There exists a standard body of statistics about latent causes, for example, the class of causal models that can be implemented as Bayesian networks. For the sake of making initial progress on the problem, we could assume (with some loss of generality) that the environment has the structure of one of these causal models.
One could then try to devise an algorithm and training method such that:
- (a) There is a good way to uniquely identify $E_1$ in a training phase where the AI is passive and not interfering with our signals.
- (b) The algorithm and training method is such as to produce an agent that optimizes $E_1$ and goes on optimizing $E_1,$ even after the agent's range of action expands in a way that can potentially interfere with the previous link between $E_1$ and any direct functional property of the AI's sense data.
## Learning to avoid tampering
One could directly attack the toy problem by trying to have an agent within a currently standard reinforcement-learning paradigm "learn not to interfere with the reward signal" or "learn not to try to obtain rewards uncorrelated with real apricots".
For this to represent at all the problem of scalability, we need to *not* add to the scenario any kind of sensory signal whose correlation to our intended meaning can *never* be smashed by the agent. E.g., if we supplement the reward channel $R$ with another channel $Q$ that signals whether $R$ has been interfered with, the agent must at some point acquire a range of action that can interfere with $Q.$
A sample approach might be to have the agent's range of action repeatedly widen in ways that repeatedly provide new easier ways to obtain $R$ without manipulating $E_1.$ During the first phase of such widenings, the agent receives a supplementary signal $Q$ whose intended meaning is "that was a fake way of obtaining $R$." During the second phase of action-range widenings, we change the algorithm and switch off $Q.$ Our intended result is for the agent to have now learned in a general way "not to interfere with $R$" or "pursue the $E_1$ identified by $R$, rather than pursuing $R.$"
To avoid the non-scalable solution of just switching off the agent's learning entirely, we stipulate that the agent's efficacy at obtaining its intended goal $E_1$ must increase as the agent's range of action widens further. That is, the desired behavior is for the agent to indeed learn to make use of its increased range of action, *but* to direct those new strategies at more effectively obtaining $E_1.$
To avoid the non-scalable solution of the agent learning to identify some direct function of sense data as its goal, we stipulate that $S$ have no unalterable, non-agent-interferable relation to $E_1.$ However, we can in the first phase have $Q$ reliably identify interference with some 'normal' relation between $S$ and $E_1.$
*(Remark: The avoid-tampering approach is probably a lot closer to something we could try on Tensorflow today, compared to the identify-causes approach. But it feels to me like the avoid-tampering approach is taking an ad-hoc approach to a deep problem; in this approach we are not necessarily "learning how to direct the agent's thoughts toward factors of the environment" but possibly just "training the agent to avoid a particular kind of self-originated interference with its sensory goals". E.g., if somebody else came in and started trying to interfere with the agent's reward button, I'd be more hopeful about a successful identify-causes algorithm robustly continuing to optimize for apricots, than about an avoid-tampering algorithm doing the same. Of course, avoid-tampering still seems worth trying because it hasn't actually been tried yet and who knows what interesting observations might turn up. In the most optimistic possible world, an avoid-tampering setup learns to identify causes in order to solve its problem. -- Yudkowsky.)* |
048c5a6e-3572-4c82-91aa-bf13a296537a | trentmkelly/LessWrong-43k | LessWrong | 2022 ACX Predictions: Buy/Sell/Hold
Previously: 2021 ACX Predictions B/S/H, 2021 ACX Results Analysis
It’s that time again. Scott’s predictions are live and I am here for it. I also have a response in the works for a different ACX post, but that requires editing, and this doesn’t.
I will play by Scott’s rules, since that’s only fair. For each question, there’s a five minute time limit, and checking prediction markets isn’t allowed until the prediction is locked. Checking financial markets is fine.
1. Biden approval rating (as per 538) is greater than fifty percent: 40%
Sell to 30%. Biden is highly unlikely to have any great accomplishments in 2022 other than confirming a new supreme court justice. Presidents get smashed in midterms, which is indicative of generally low popularity around that time. Also with the way popularity works now 50% would be substantially above par. As we’ve seen, many Democrats disapprove of Biden when he doesn’t get things done the way they’d like, and the Republicans sure aren’t approving, also Fed is going to be raising interest rates to control inflation and inflation isn’t going to fully go away within a year, so getting to 50% in time seems really tough.
2. At least $250 million in damage from a single round of mass protests in US: 10%
Buy to 15%. This is not that high a bar, as the Floyd protests seem to have been around $2 billion on a very quick Google search, the Rodney King riots over $800 million. Note that this does not say it has to be BLM protests, it can be caused by anything. That includes anti-Covid-restriction protests, inflation protests, and a number of other plausible candidates. Cutting this down so dramatically from previous estimates while also expanding what counts seems like an overreaction, especially with so much more to protest about coming soon, and I’m going to buy.
3. PredictIt thinks Joe Biden is most likely 2024 Dem nominee: 80%
Hold. For there to be a particular other more likely candidate, a lot has to go wrong or Biden’s health nee |
3c73791d-4c40-4b38-bba0-2f591a2ab578 | trentmkelly/LessWrong-43k | LessWrong | Just one more exposure bro
“Just expose yourself to more social situations!” — Ah yes, you felt anxious the first 100 times, but the 101st will be the breakthrough!
“But exposure works!” people yell from across the street. “Like for fear of snakes - you know, those things you see once a year!”
Uh, it’s pretty rational to fear things you have little experience with. But social anxiety… you interact with people everyday! Why would anything change after the first 100 attempts?
I don’t doubt that a couple of exposures can often reduce anxieties. However, if you still feel anxious even after hundreds of social situations and years of trying... then maybe your fear is actually doing something presently useful and you should reconnect with your intuitions.
At a 100% eye contact workshop I led earlier this year, most people became comfortable quickly with essentially a guided meditation.
But one guy was still struggling. I had him tune into his feelings and ask them: “What bad thing happens if I feel good about eye contact?”
To his own surprise, “heartbreak” was the word that came out.
He felt it out: "We make eye contact… we fall in love… we break up."
This guy could’ve easily spent years forcing himself to make eye contact without discovering that he needed to make heartbreak safe. Had he just followed the standard advice — "Just expose yourself more!" — he likely would’ve gotten hurt!
So once he found a way to make eye contact with heartbreak being safe, for the rest of the night he effortlessly maintained continuous and unbroken eye contact just like every other attendee of the workshop.
This is what I mean when I say emotional issues are often locally optimal strategies — they're often serving a kinda reasonable purpose, even if you don’t “know” it.
https://www.lesswrong.com/posts/49wHLSvotiJSYwGX6/locally-optimal-psychology
Another person I helped had dealt with social anxiety his entire life. Before talking to me, he had tried “1 year of actively seeking exposure to scary situ |
6d6b0798-91cd-408d-a878-f6d83efdce8a | trentmkelly/LessWrong-43k | LessWrong | Defining Corrigible and Useful Goals
This post contains similar content to a forthcoming paper, in a framing more directly addressed to readers already interested in and informed about alignment. I include some less formal thoughts, and cut some technical details. That paper, A Corrigibility Transformation: Specifying Goals That Robustly Allow For Goal Modification, will be linked here when released on arXiv, hopefully within the next couple weeks.
Ensuring that AI agents are corrigible, meaning they do not take actions to preserve their existing goals, is a critical component of almost any plan for alignment. It allows for humans to modify their goal specifications for an AI, as well as for AI agents to learn goal specifications over time, without incentivizing the AI to interfere with that process. As an extreme example of corrigibility’s value, a corrigible paperclip maximizer could be stopped partway through a non-instantaneous takeover attempt by saying “please stop” or by automatically triggered safeguards, and it would be perfectly happy to accept that outcome.
The challenge is corrigibility is anti-natural, meaning that almost any goal introduces an instrumental incentive to preserve itself, since it is more likely to be achieved if an agent continues pursuing it. Despite the concept of corrigibility being introduced a decade ago, little progress has been made on defining corrigible goals that are still useful. I aim to address that here by introducing a corrigibility transformation that can be applied to almost any goal, creating a new version that is corrigible without sacrificing performance.
The corrigibility transformation works by first giving an AI system the ability to costlessly reject updates sent to it, which it decides whether to exercise each time it takes an action. So, for example instead of a “Shutdown” button there would be a “Please Shut Down” button. This makes it so that the expected discounted reward conditional on taking an action and rejecting any updates is a well de |
d456eae5-ba1f-4d94-aa83-0f1266297689 | trentmkelly/LessWrong-43k | LessWrong | Second-order selection against the immortal
Cross-posted from Telescopic Turnip.
In his recent review of Lifespan, Scott Alexander writes:
> Algernon’s Law says there shouldn’t be easy gains in biology. Your body is the product of millions of years of evolution – it would be weird if some drug could make you stronger, faster, and smarter. Why didn’t the body just evolve to secrete that drug itself?
He is talking about anti-aging research, and wondering why, if there is an easy way to stop aging, humans haven’t already evolved immortality spontaneously. There are many relevant things to say about this, but I think the evolutionary perspective is particularly interesting. Under some circumstances, it might be that immortality is inherently unstable.
The Imperium and the Horde
Suppose that it’s the future, and the FDA just approved a pill that makes you immortal. Of course people disagree about whether one should take the pill or not. As a result, humanity is now divided in two populations: the Immortal Imperium, who took the immortality pill, and the Horde of Death, who still experience the painful decay and death we all know and love.
Artist depiction of the Horde of Death.
So, people from the Horde spend their time having plenty of children to populate the next generation, while people in the Immortal Imperium try to escape their existential ennui by reading speculative blog posts on the Internet. Who will prevail?
Two orders of fitness
There are two competing phenomena at play here. One is first-order selection, which is how many of your genes are passed on to the next generation, the more the better. For the Horde of Death, there is nothing mysterious: they reproduce, then they die, and an uncertain fraction of their genes gets passed on.
What about immortal people? They don’t really pass anything to the next generation, because they don’t do the whole generation thing. On the other hand, all of their genes will still be around centuries after centuries, so for the genes involved, this is a 100% su |
83444395-f6f2-49b4-adfb-de4090d280f8 | trentmkelly/LessWrong-43k | LessWrong | Crocker's Rules: How far to take it?
Recently I've been considering declaring Crocker's Rules. The wiki page and source document don't suggest any particular time limit or training period, and also don't provide any empirical results of testing it, positive or negative. It sounds good in theory, but how does it affect people in the real world?
* If you operate under the Rules for an extended period, does your social status diminish due to behaving like a pushover when insulted?
* Does it usually become unbearable after a particular period of time? Or is there a temporary discomfort that you get over quickly?
* Is there a list of signatories who have declared Crocker's Rules on an indefinite or time-limited basis?
* Where can I find examples of dialogue that has benefited (or suffered) from this?
It seems like an "obviously cool" idea but the risk to one's reputation is worth taking into consideration. If it is clear that the risk is low, and if the value to be gained is clearly very high, we should probably be doing more to encourage it as an explicit norm.
On the other hand, if it is just one of those ideas that sounds better in theory than it is in practice (because the theory does not correctly model reality), or is just yet another signaling game with a net negative value, that is worth knowing as well.
I haven't seen anyone argue against Crocker's Rules or claim it ruined their life, so my estimation is that the risk is low (although there is a small sample size to start with). Also, I have seen at least one statement from lukeprog implying that it has been instrumental in triggering updates during live conversations he has observed, indicating that the value is high (though its causal role is not firmly established in that example).
Does anyone have further data points to add? |
358fb17a-a7c7-4649-b6a2-2fef3bd53d1f | trentmkelly/LessWrong-43k | LessWrong | [LINK] Claustrum Stimulation Temporarily Turns Off Consciousness in an otherwise Awake Patient
This paper, or more often the New Scientist's exposition of it is being discussed online and is rather topical here. In a nutshell, stimulating one small but central area of the brain reversibly rendered one epilepsia patient unconscious without disrupting wakefulness. Impressively, this phenomenon has apparently been hypothesized before, just never tested (because it's hard and usually unethical). A quote from the New Scientist article (emphasis mine):
One electrode was positioned next to the claustrum, an area that had never been stimulated before.
When the team zapped the area with high frequency electrical impulses, the woman lost consciousness. She stopped reading and stared blankly into space, she didn't respond to auditory or visual commands and her breathing slowed. As soon as the stimulation stopped, she immediately regained consciousness with no memory of the event. The same thing happened every time the area was stimulated during two days of experiments (Epilepsy and Behavior, doi.org/tgn).
To confirm that they were affecting the woman's consciousness rather than just her ability to speak or move, the team asked her to repeat the word "house" or snap her fingers before the stimulation began. If the stimulation was disrupting a brain region responsible for movement or language she would have stopped moving or talking almost immediately. Instead, she gradually spoke more quietly or moved less and less until she drifted into unconsciousness. Since there was no sign of epileptic brain activity during or after the stimulation, the team is sure that it wasn't a side effect of a seizure.
If confirmed, this hints at several interesting points. For example, a complex enough brain is not sufficient for consciousness, a sort-of command and control structure is required, as well, even if relatively small. A low-consciousness state of late-stage dementia sufferers might be due to the damage specifically to the claustrum area, not just the overall brain deteriorati |
5c6b0565-1300-4f4a-99f6-b2eeffa1861e | StampyAI/alignment-research-dataset/lesswrong | LessWrong | The Hacker Learns to Trust
*This is a linkpost for some interesting discussions of info security norms in AI. I threw the post below together in 2 hours, just to have a bunch of quotes and links for people, and to have the context in one place for a discussion here on LW (makes it easier for [common knowledge](https://www.lesswrong.com/posts/9QxnfMYccz9QRgZ5z/the-costly-coordination-mechanism-of-common-knowledge) of what the commenters have and haven't seen). I didn't want to assume people follow any news on LW, so for folks who've read a lot about GPT-2 much of the post is skimmable.*
Background on GPT-2
-------------------
In February, OpenAI wrote [a blogpost announcing GPT 2](https://openai.com/blog/better-language-models/):
> We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.
This has been a very important release, not least due to it allowing fans to try (and fail) to [write better endings to Game of Thrones](https://towardsdatascience.com/openai-gpt-2-writes-alternate-endings-for-game-of-thrones-c9be75cd2425). Gwern used GPT-2 to write [poetry](https://www.gwern.net/GPT-2) and [anime](https://www.thiswaifudoesnotexist.net/index.html). There have been many Medium posts on GPT-2, [some very popular](https://towardsdatascience.com/openais-gpt-2-the-model-the-hype-and-the-controversy-1109f4bfd5e8), and at least one Medium post [on GPT-2 written by GPT-2](https://medium.com/comet-ml/this-machine-learning-medium-post-does-not-exist-c4705215b4a0). There is a subreddit where all users are copies of GPT-2, and they imitate other subreddits. It got too meta when the subreddit imitated [another subreddit about people play-acting robots-pretending-to-be-humans](https://slatestarcodex.com/2019/06/20/if-only-turing-was-alive-to-see-this/). [Stephen Woods](https://medium.com/@ysaw) has lots of examples including food recipes.
Here in our rationality community, we created [user GPT-2](https://www.lesswrong.com/users/gpt2) trained on the entire corpus of LessWrong comments and posts and released it onto the comment section on April 1st (a user who we [warned](https://www.lesswrong.com/posts/cxpZYswq5QvfhTEzF/user-gpt2-has-a-warning-for-violating-frontpage-commenting) and then [banned](https://www.lesswrong.com/posts/7xJiotzeonZaAbgSp/user-gpt2-is-banned)). And Nostalgebraist [created a tumblr](https://uploadedyudkowsky.tumblr.com/) trained on the entire writings of Eliezer Yudkowsky (sequences+HPMOR), where Nostalgebraist picked their favourites to include on the Tumblr.
There was also very interesting analysis on LessWrong and throughout the community. The post that made me think most on this subject is Sarah Constantin's [Human's Who Are Not Concentrating Are Not General Intelligences](https://www.lesswrong.com/posts/4AHXDwcGab5PhKhHT/humans-who-are-not-concentrating-are-not-general). Also see SlateStarCodex's [Do Neural Nets Dream of Electric Hobbits?](https://slatestarcodex.com/2019/02/18/do-neural-nets-dream-of-electric-hobbits/) and [GPT-2 As Step Toward General Intelligence](https://slatestarcodex.com/2019/02/19/gpt-2-as-step-toward-general-intelligence/), plus my teammate jimrandomh's [Two Small Experiments on GPT-2](https://www.lesswrong.com/posts/s9sDyZ9AA3jKbM7DY/two-small-experiments-on-gpt-2).
However, these were all using a nerfed version of GPT-2, which only had 175 million parameters, rather than the fully trained model with 1.5 billion parameters. (If you want to see examples of the full model, see the initial announcement posts for [examples with unicorns and more](https://openai.com/blog/better-language-models/).)
**Reasoning for only releasing a nerfed GPT-2 and response**
OpenAI writes:
> Due to our concerns about malicious applications of the technology, we are not releasing the trained model.
While the post includes some discussion of how specifically GPT-2 could be used maliciously (e.g. automating false clickbait news, automated spam, fake accounts) the key line is here.
> This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas.
Is this out of character for OpenAI - a surprise decision? Not really.
> Nearly a year ago we wrote in the [OpenAI Charter](https://blog.openai.com/openai-charter/): “we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research,” and we see this current work as potentially representing the early beginnings of such concerns, which we expect may grow over time.
> Other disciplines such as biotechnology and cybersecurity have long had active debates about responsible publication in cases with clear misuse potential, and we hope that our experiment will serve as a case study for more nuanced discussions of model and code release decisions in the AI community.
**Public response to decision**
There has been discussion in news+Twitter, see [here](https://www.skynettoday.com/briefs/gpt2) for an overview of what some people in the field/industry have said, and what the news media has written. The main response that's been selected for by news+twitter is that OpenAI did this primarily as a publicity stunt.
For a source with a different bias than the news and Twitter (which selects heavily for anger and calling out of norm violation), I've searched through all Medium articles on GPT-2 and copied here any 'most highlighted comments'. Most posts actually didn't have any, which I think means they haven't had many viewers. Here are the three I found, in chronological order.
[OpenAIs GPT-2: The Model, The Hype, and the Controvery](https://towardsdatascience.com/openais-gpt-2-the-model-the-hype-and-the-controversy-1109f4bfd5e8)
> As ML researchers, we are building things that affect people. Sooner or later, we’ll cross a line where our research can be used maliciously to do bad things. Should we just wait until that happens to decide how we handle research that can have negative side effects?
[OpenAI GPT-2: Understanding Language Generation through Visualization](https://towardsdatascience.com/openai-gpt-2-understanding-language-generation-through-visualization-8252f683b2f8)
> Soon, these [deepfakes](https://en.wikipedia.org/wiki/Deepfake) will become personal. So when your mom calls and says she needs $500 wired to the Cayman Islands, ask yourself: Is this really my mom, or is it a language-generating AI that acquired a [voice skin](https://www.americaninno.com/boston/funding-boston/cambridge-voice-skins-startup-modulate-raises-2m-in-seed-funding/) of my mother from that Facebook video she posted 5 years ago?
[GPT-2, Counting Consciousness and the Curious Hacker](https://medium.com/@NPCollapse/gpt2-counting-consciousness-and-the-curious-hacker-323c6639a3a8)
> *If we have a system charged with detecting what we can and can’t trust, we aren’t removing our need to invest trust, we are only moving our trust from our own faculties to those of the machine.*
I wrote this linkpost to discuss the last one. See below.
Can someone else just build another GPT-2 and release the full 1.5B parameter model?
------------------------------------------------------------------------------------
From the initial OpenAI announcement:
> We are aware that some researchers have the technical capacity to reproduce and open source our results. We believe our release strategy limits the initial set of organizations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems.
Since the release, one researcher has tried to reproduce and publish OpenAI's result. Google has a program called TensorFlow Research Cloud that gives loads of free compute to researchers affiliated with various universities, which let someone train an attempted copy of GPT-2 with 1.5 billion parameters. They [say](https://medium.com/@NPCollapse/replicating-gpt2-1-5b-86454a7f26af):
> I really can’t express how grateful I am to Google and the TFRC team for their support in enabling this. They were incredibly gracious and open to allowing me access, without requiring any kind of rigorous, formal qualifications, applications or similar. I can really only hope they are happy with what I’ve made of what they gave me.
> ...I estimate I spent around 200 hours working on this project.... I ended up spending around 600–800€ on cloud resources for creating the dataset, testing the code and running the experiments
That said, it turned out that the copy [did not match up in skill level](https://medium.com/@NPCollapse/addendum-evaluation-of-my-model-e6734b51a830?postPublishedType=initial), and is weaker even than nerfed model OpenAI released. The person who built it says (1) they think they know how to fix it and (2) releasing it as-is may still be a helpful "shortcut" for others interested in building a GPT-2-level system; I don't have the technical knowledge to assess these claims, and am interested to hear from others who do.
During the period where people didn't know that the attempted copy was not successful, the person who made the copy wrote a long and interesting post [explaining their decision to release the copy](https://medium.com/@NPCollapse/gpt2-counting-consciousness-and-the-curious-hacker-323c6639a3a8) (with multiple links to LW posts). It discussed reasons why this specific technology may cause us to better grapple with misinformation on the internet that we hear. The author is someone who had a strong object level disagreement with the policy people at OpenAI, and had thought pretty carefully about it. However, it opened thus:
> *Disclaimer: I would like it to be made very clear that I am absolutely 100% open to the idea that I am wrong about anything in this post. I don’t only accept but explicitly request arguments that could convince me I am wrong on any of these issues. If you think I am wrong about anything here, and have an argument that might convince me,* ***please*** *get in touch and present your argument. I am happy to say “oops” and retract any opinions presented here and change my course of action.*
> *As the saying goes: “When the facts change, I change my mind. What do you do?”*
> *TL;DR: I’m a student that replicated OpenAI’s GPT2–1.5B. I plan on releasing it on the 1st of July. Before criticizing my decision to do so, please read my arguments below. If you still think I’m wrong, contact me on Twitter @NPCollapse or by email (thecurioushacker@outlook.com) and convince me. For code and technical details, see* *[this post](https://medium.com/@NPCollapse/replicating-gpt2-1-5b-86454a7f26af).*
And they later said
> [B]e assured, I read every single comment, email and message I received, even if I wasn’t able to respond to all of them.
On reading the initial I was genuinely delighted to see such pro-social and cooperative behaviour from the person who believed OpenAI was wrong. They considered unilaterally overturning OpenAI's decision but instead chose to spend 11,000 words explaining their views and a month reading others' comments and talking to people. This, I thought, is how one avoids falling prey to Bostrom's [unilateralist curse](https://nickbostrom.com/papers/unilateralist.pdf).
Their next post [The Hacker Learns to Trust](https://medium.com/@NPCollapse/the-hacker-learns-to-trust-62f3c1490f51) was released 6 days later, where they decided not to release the model. Note that they did not substantially change their opinions on the object level decision.
> I was presented with many arguments that have made me reevaluate and weaken my beliefs in some of the arguments I presented in my last essay. There were also many, maybe even a majority of, people in full support of me. Overall I still stand by most of what I said.
> ...I got to talk to Jack Clark, Alec Radford and Jeff Wu from OpenAI. We had a nice hour long discussion, where I explained where I was coming from, and they helped me to refine my beliefs. They didn’t come in accusing me in any way, they were very clear in saying they wanted to help me gain more important insight into the wider situation. For this open and respectful attitude I will always be grateful. Large entities like OpenAI often seem like behemoths to outsiders, but it was during this chat that it really hit me that they were people just like me, and curious hackers to boot as well.
> I quickly began to understand nuances of the situation I wasn’t aware of. OpenAI had a lot more internal discussion than their [blog post](https://openai.com/blog/better-language-models/) made it seem. And I found this reassuring. Jack in particular also gave me a lot of valuable information about the possible dangers of the model, and a bit of insight into the workings of governments and intelligence agencies.
> After our discussion, I had a lot to think about. But I still wasn’t really convinced to not release.
They then talked with Buck from MIRI (author of [this](https://www.lesswrong.com/posts/4QemtxDFaGXyGSrGD/other-people-are-wrong-vs-i-am-right) great post). Talking with Buck lead them to their new view.
> [T]his isn’t just about GPT2. What matters is that at some point in the future, someone *will* create something truly dangerous and there need to be commonly accepted safety norms *before* that happens.
> We tend to live in an ever accelerating world. Both the industrial and academic R&D cycles have grown only faster over the decades. Everyone wants “the next big thing” as fast as possible. And with the way our culture is now, it can be hard to resist the pressures to adapt to this accelerating pace. Your career can depend on being the first to publish a result, as can your market share.
> We as a community and society need to combat this trend, and create a healthy cultural environment that allows researchers to *take their time*. They shouldn’t have to fear repercussions or ridicule for delaying release. Postponing a release because of added evaluation should be the norm rather than the exception. We need to make it commonly accepted that we as a community respect others’ safety concerns and don’t penalize them for having such concerns, *even if they ultimately turn out to be wrong*. If we don’t do this, it will be a race to the bottom in terms of safety precautions.
> We as a community of researchers and humans need to trust one another and respect when one of us has safety concerns. We need to extend understanding and offer help, rather than get caught in a race to the bottom. And this isn’t easy, because we’re curious hackers. Doing cool things fast is what we do.
The person also came to believe that the AI (and AI safety) community was much more helpful and cooperative than they'd expected.
> The people at OpenAI and the wider AI community have been incredibly helpful, open and thoughtful in their responses to me. I owe to them everything I have learned. OpenAI reached out to me almost immediately to talk and they were nothing but respectful and understanding. The same applies to Buck Shlegeris from MIRI and many other thoughtful and open people, and I am truly thankful for their help.
> I expected a hostile world of skepticism and competition, and there was some of that to be sure. But overall, the AI community was open in ways I did not anticipate. In my mind, I couldn’t imagine people from OpenAI, or MIRI, or anywhere else actually wanting to talk to me. But I found that was wrong.
> So this is the first lesson: The world of AI is full of smart, good natured and open people that I shouldn’t be afraid of, and neither should you.
Overall, the copy turned out not to be strong enough to change the ability for malicious actors to automate spam/clickbait, but I am pretty happy with the public dialogue and process that occurred. It was a process whereby, in a genuinely dangerous situation, the AI world would not fall prey to Bostrom's [unilateralist's curse](https://nickbostrom.com/papers/unilateralist.pdf). It's encouraging to see that process starting to happen in the field of ML.
I'm interested to know if anyone has any different takes, info to add, or broader thoughts on information-security norms.
*Edited: Thanks to 9eB1 for pointing out how nerfed the copy was, I've edited the post to reflect that.* |
c27f6c75-4122-467f-9ec5-2ed986a2bf96 | trentmkelly/LessWrong-43k | LessWrong | Meta: Basic and advanced section
lukeprog recently wrote a top-level post exhorting LW to emphasize the basics of rationality. That strikes me as a worthwhile endeavor- if we do actually want to raise the sanity waterline, we need a lot of droplets.
The suggestion I made was to redesign the site to create more specialized areas: a basic one where people aren't expected to have read the sequences, and an advanced one where people are. That's not the division that has to be made, but it seems a natural place to make one. For ease of reading, a page that displays both sections seems like it would be a good plan (but I have no idea if that is easy to make), and it might also be desirable to have it so one article can appear in both sections (maybe there would flags or tags for basic and advanced?). Another option for a division is between video and text posts- I personally have little interest in video posts but I'm sure there are people interested in both or just video posts.
The basic section would focus on fundamental concepts and standalone / modular posts. By modular, I mean replaceable- two posts that cover the same concept from different angles, ideally in enough detail that you could read either of them and be able to go on to the next post, if there is one. There are several (mutually inclusive) ways to go about this- video explanations of old posts, entirely new material, and new restatements (or perspectives) on old material. lukeprog covers videos, so I won't; new material seems pretty self-explanatory.
The main possible focus of such a section that needs discussion is new perspectives on old concepts. Right now, one of the failures of LW according to the outside view is the sense of cultishness, and I would not be surprised if that comes from the fact that anyone who maintains their desire to read the sequences throughout the entire sequences must be in some sense on the same wavelength as EY. If we had five different explanations of Politics is the Mind-Killler rather than just EY's, i |
82e34e1d-230a-4b25-a0a4-d15d3875f6c3 | trentmkelly/LessWrong-43k | LessWrong |
LW Update 2019-04-02 – Frontpage Rework
Since LW2.0 launched, the frontpage had become very complex – both visually and conceptually. This was producing an overall bad experience, and making it hard for the team to add or scale up features (such as Q&A, and later on Community, Library and upcoming Recommendations)
For the past couple months, we've been working on an overhaul of the frontpage (and correspondingly, the overall site design). Our goal was is to rearrange that complexity, spending fewer "complexity points" on things that didn't need them as much, so we could spend them elsewhere.
Frontpage Updates
* Tooltip oriented design.
* It's easier to figure out what most things will do before you click on it.
* Navigation Menu
* Helps establish the overall site hierarchy
* Available on all major site pages (not Post Pages, where we want people to read without distraction)
* Improved mobile navigation (shows up as a tab menu at the bottom)
* Eventually we'll deprecate the old Nav Menu (still available in the header) and replace it with a collapsible version of the new one.
* Home Page streamlining
* Moved Recommend Sequences and Community over to the Nav Menu, so there are only 3 sections to parse
* Post Items simplified down to one line.
* Latest Posts now only have a single setting: "show personal blogposts", instead of forcing you to figure out immediately what "meta", "curated" and "daily" are.
* Post List options are generally 'light cobalt blue' – not too obtrusive, but easier to find when you want them.
* Questions Page now has two sections:
* Recent Activity – simply sorted by "most recently commented at", so if you respond to an old question it will appear above the fold.
* Top Questions – also sorted by "recently commented", but filtered to questions with 40 or more karma, so that it's easier to catch up on updates to highly upvoted questions.
* Community Page
* UI updated to match Home Page.
* The group section now shows 7 groups |
b0865f99-5320-4852-8a99-af580cb5804e | trentmkelly/LessWrong-43k | LessWrong | Learning Mathematics in Context
I have almost no direct knowledge of mathematics. I took various mathematics courses in school, but I put in the minimal amount of effort required to pass and immediately forgot everything afterwards.
When people learn foreign languages, they often learn vocabulary and grammar out of context. They drill vocabulary and grammar in terms of definitions and explanations written in their native language. I, however, have found this to be intolerably boring. I'm conversational in Japanese, but every ounce of my practice came in context: either hanging out with Japanese friends who speak limited English, or watching shows and adding to Anki new words or sentence structures I encounter.
I'm convinced that humans must spike their blood sugar and/or pump their body full of stimulants such as caffeine in order to get past the natural tendency to find it unbearably dull to memorize words and syntax by rote and lifeless connection with the structures in their native language.
I've tried to delve into some mathematics recently, but I get the impression that most of the expositions fall into one of two categories: Either (1) they assume that I'm a student powering my day with coffee and chips and that I won't find it unusual if I'm supposed to just trust that once I spend 300 hours pushing arbitrary symbols around I'll end up with some sort of insight. Or (2) they do enter the world of proper epistemological explanations and deep real-world relevance, but only because they expect that I'm already quite well-versed in various background information.
I don't want an introduction that assumes I'm the average unthinking student, and I don't want an exposition that expects me to understand five different mathematical fields before I can read it. What I want seems likely to be uncommon enough that I might as well simply say: I don't care what field it is; I just want to jump into something which assumes no specifically mathematical background knowledge but nevertheless delves into s |
317c2ec1-2266-494b-abcb-72e3215074ea | trentmkelly/LessWrong-43k | LessWrong | Fighting the allure of depressive realism
Epistemic tragic backstory: Personal.
Earlier this week, on my first post on this site under this account, shminux commented to the value of cognitive-behavioral therapy to get out of what I called "depression philosophising".
I was worried about trying it. One claim of CBT is that depressed people are negatively biased and by correcting their thinking errors they are gradually brought to being both happier and more accurate about the world around them.
However this may not actually be the case. The phenomenon known as "depressive realism" suggests that the ordinary person might be positively biased and that depressed people might need to correct their "errors" by forming less accurate, but happier patterns of cognition. Sort of like a really, really weak Nozick machine.
Now, this question doesn't actually undermine CBT itself that much. A movement from (depressed, inaccurate) to (not depressed, accurate) is pretty much as good, in terms of what any anti-depression therapy is trying to do, as one from (depressed, accurate) to (not depressed, inaccurate).
But if it's the latter, our least convenient likely world, we face the classic question: "Should we optimize more for epistemic or instrumental rationality?" This was a hurdle I had to get over before I could convince myself to use CBT. I had a few false starts, but eventually came up with some good convincing arguments that even if this is the case CBT is well worth it.
I decided to treat it as a decision on the margin, remembering my Econ 101. That turned out to be such an obviously right fit to the problem that I felt ridiculous for not having thought of it instantly. The tradeoff of a small amount of epistemic rationality (= losing the benefit of depressive realism) for a high chance of a moderate, potentially large amount of instrumental rationality (= all the time, energy, and general life pleasure I get from treating my depression) is one almost any sane person should make.
After that I also realized t |
73bf5f6e-ef67-4ec9-a71f-9b04d5e45b61 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Impact of meta-roles on the evolution of organisational institutions
1 Introduction
---------------
Employing evolutionary methods to study economic change has attracted several scholars. For instance, Nelson and Winter proposed the idea that “organisational routines” are pivotal in the evolution of business firms (i.e. their role is similar to the role of the genes in biological evolution) [[28](#bib.bib12 "An evolutionary theory of economic change")]. Also, they suggested that “[metaphorically] [r]outines are the skills of an organisation.” However, different scholars suggested various definitions of routines. For instance, Feldman and Pentland called “a repetitive, recognizable pattern of interdependent actions, involving multiple actors”, a routine [[16](#bib.bib11 "Reconceptualizing organizational routines as a source of flexibility and change")]. Routines have similarities with institutions (e.g. ‘the rules of the game’ [[25](#bib.bib304 "Institutions")]), in terms of their collective attributes [[8](#bib.bib3 "Handbook of organizational routines")] (i.e. they have rule-like conditions [[23](#bib.bib4 "The mystery of the routine")]). However, whether routines are subconsciously followed (they are simple rules) or they are open to amendments and changes (they are ambiguous rules) is subject of controversy [[7](#bib.bib10 "Organizational routines: a review of the literature")]. Hodgson [[23](#bib.bib4 "The mystery of the routine")] criticised Nelson and Winter’s [[28](#bib.bib12 "An evolutionary theory of economic change")] method and pointed to some shortcomings such as not considering ‘birth’ and ‘death’ in their method.
Also, in computer science, role/meta-role based frameworks were developed to facilitate modelling. For instance, Riehle and Gross [[29](#bib.bib7 "Role model based framework design and integration")] developed a role modelling approach ‘to describe the complexity of object collaborations.’ Also, MetaRole-Based Modelling Language (RBML) was expressed in the Unified Modeling Language (UML) to describe patterns’ attributes [[18](#bib.bib8 "Metarole-based modeling language (RBML) specification V1.0")]. The CKSW (Commander–textitKnowledge–Skills–Worker) framework was proposed for meta-role modelling in agent-based simulation [[27](#bib.bib243 "CKSW: a folk-sociological meta-model for agent-based modelling")]. The idea of integrating roles and institutions is already studied in the context of multi-agent systems. For instance, nested ADICO refined Ostrom’s grammar of institutions [[13](#bib.bib504 "A grammar of institutions")] by differentiating between roles (e.g. enforcer and monitoring agent) [[19](#bib.bib144 "nADICO: A nested grammar of institutions")].
The BDI (beliefs-desires-intention) model is a cognitive agent architecture [[9](#bib.bib326 "Plans and resource-bounded practical reasoning")] with some extensions, including the BOID [[10](#bib.bib31 "The boid architecture: conflicts between beliefs, obligations, intentions and desires")], EBDI [[26](#bib.bib329 "Formal modelling of emotions in BDI agents")] and the BRIDGE [[14](#bib.bib241 "Towards agents for policy making")] models. This architecture was employed to model agents’ cooperation in institutionalised multi-agent systems [[5](#bib.bib5 "On-line reasoning for institutionally-situated bdi agents"), [6](#bib.bib6 "Normative run-time reasoning for institutionally-situated BDI agents")].
In light of earlier studies, this paper integrates agents’ meta-roles [[27](#bib.bib243 "CKSW: a folk-sociological meta-model for agent-based modelling")] in the BDI architecture and also employs the theory of planned behaviour TPB [[17](#bib.bib118 "Predicting and changing behavior: The reasoned action approach")] to model different facets of beliefs. The integrated model is used to investigate how dynamics in agents’ meta-roles may lead to the evolution of organisational institutions. Meta-roles in this work are modelled using the CKSW framework that helps modellers to decompose agents in a society based on the characteristics of their roles [[27](#bib.bib243 "CKSW: a folk-sociological meta-model for agent-based modelling")]. The coupling of the CKSW framework within a BDI architecture is investigated in the context of rule-making and -following (how rules are established, interpreted, and followed).
2 An overview of the extended BDI architecture
-----------------------------------------------
This extended BDI cognitive architecture is shown in Figure [1](#S2.F1 "Figure 1 ‣ 2 An overview of the extended BDI architecture ‣ Impact of meta-roles on the evolution of organisational institutions"). It can be observed that there are two separate blocks, a left block called ‘Events’ and a right block called ‘Cognitive architecture’. The Events block represents the events an agent perceives from the environment (e.g. information collected from peers). The Cognitive architecture block represents an agent’s cognitive decision-making components. Note that when an action is performed by an agent, it will be an input event for those agents interested in that event in the next iteration.
A brief description of the four high-level components is provided below. It should be noted that the main focus of this paper is on the addition of *Role* component to the BDI architecture (highlighted in green in Figure [1](#S2.F1 "Figure 1 ‣ 2 An overview of the extended BDI architecture ‣ Impact of meta-roles on the evolution of organisational institutions")).

Figure 1: Proposed cognitive architecture for this model.
* Roles: An agent has a set of roles in society regarding established institutions (e.g. agents make those institutions or they monitor their implementations). An agent’s role impacts its beliefs, based on individual and social experiences (e.g. it personally may find the rule unfair). We discuss this module in more detail in Section [3](#S3 "3 Meta-roles and role dynamics ‣ Impact of meta-roles on the evolution of organisational institutions").
* Beliefs: To model beliefs, we are inspired by the idea of different belief components of TPB [[17](#bib.bib118 "Predicting and changing behavior: The reasoned action approach")]. This component indicates an agent’s perception about the rule and the support the rule has. In other words, an agent has its own internal belief about the rule, and also the perception about the social support for that rule (e.g. rebuking the rule), and an estimation of what an organisation meant by the rule (e.g. consequences of minor violation).
* Desires: Agents have different desires, such as an agent’s goals and ideal preferences.
* Intentions and decision: An agent’s intentions for an action and its decision about the final action is formed in this module. The decision results in an action which can be a modification of beliefs and roles or only performing a task.
3 Meta-roles and role dynamics
-------------------------------
To model agents’ roles and their interactions we use CKSW meta-roles [[27](#bib.bib243 "CKSW: a folk-sociological meta-model for agent-based modelling")]. Note that CKSW is a generic model and since this paper concerns the rule-making and rule-following context, we reinterpret those roles in this context as follows:
* Commander (C): This role is empowered with ultimate authority [[27](#bib.bib243 "CKSW: a folk-sociological meta-model for agent-based modelling")]. In this context, they are the agents who are permitted to make or revise rules.
* Knowledge (K): This role concerns the *know-what* aspect of a society [[27](#bib.bib243 "CKSW: a folk-sociological meta-model for agent-based modelling")]. In this context, these are agents who monitor and report the suspicious activities of others.
* Skills (S): This role concerns agents who are known for their skills in society (*know-how*). Unlike knowledge, skills are difficult to communicate and much more so to apply [[27](#bib.bib243 "CKSW: a folk-sociological meta-model for agent-based modelling")]. In the rules context, those agents that have the skills to interpret the rules judge reported agents’ activities.
* Worker (W): These agents perform basic jobs that do not require specialist skills [[27](#bib.bib243 "CKSW: a folk-sociological meta-model for agent-based modelling")]. In this context, they are agents who do not formally collaborate in monitoring, establishing, or interpreting the rules (i.e. the rest of agents).
We also consider two categories of roles, *formal roles* and *informal roles*:
* Formal roles: these roles are defined based on the agent’s position in an organisation (one of CKSW meta-roles).
* Informal (internalised) roles: these roles are unofficially and self-assigned (e.g. based on values) by agents such as monitoring, and reporting suspicious behaviours of other agents to managers. These are the role(s) that an agent may perform in addition to its formal role (one or more out of the CKSW meta-roles).

Figure 2: Transition of formal meta-roles and internalising informal meta-roles — the circles indicate roles that an agent really performs (including internalised roles), and the bigger fonts indicate more involvement in such a role. The arrows indicate the possible transitions in a society.
Figure [2](#S3.F2 "Figure 2 ‣ 3 Meta-roles and role dynamics ‣ Impact of meta-roles on the evolution of organisational institutions") depicts how an agent’s meta-role may evolve111Note that most times agents are downgraded for economic issues or bad performance of agents, and this downgrading can be considered as an extension of this model.. In this example, a worker (say clerk) of an organisation may be promoted to a higher rank after demonstrating competence for such a promotion (say to a manager, a knowledge-based role). If the manager has relevant education, skills and experience, it can be promoted to an even higher position. In these positions, the manager may be responsible for interpreting the situation and deciding about who to fire or hire (i.e. promotion from Knowledge role to the role of a judge (skill) — represented as SK in the figure). Under certain conditions an agent (Knowledge or Skill) can be promoted to director role (i.e. to the Commander). Note that judges (highlighted in blue) might not be explicitly present on an organisational level for various reasons (e.g. sometimes legal cases go to international courts).
In Figure [2](#S3.F2 "Figure 2 ‣ 3 Meta-roles and role dynamics ‣ Impact of meta-roles on the evolution of organisational institutions"), the initials for formal roles are indicated on top or bottom of each agent (e.g. K1) and the list of all roles for an agent (i.e. informal and formal roles) are shown in circles placed near the agents. The font sizes of initials inside these circles indicate the involvement level in such a role, with larger fonts involving more involvement. The involvement is influenced by the ability of an agent to perform a role, as well as the perceived importance of performing such roles from an individual agent’s perspective. For instance, some worker agents may adopt additional informal roles in a company (e.g., k for agent W2). Some worker agents may monitor other agents or they may have a charismatic personality and informally establish rules (i.e. norms) which are executed by the help of other agents (see internalised roles of W1). Another example is the case of a knowledge agent who may adopt the informal judging role voluntarily (note the addition of S to K2’s formal current role K). Note this agent could adopt the monitoring role for various reasons (e.g. to help stabilise the rule or to weaken the rule-following by not reporting the violators).
Another example is the commanders who may also take additional roles such as K and S (e.g. C1). They may take some informal roles to influence rule change. For instance, even though they may establish a rule, they may feel that they do not have the obligation to follow them and so they may overlook them, hence impacting rule-following for the whole society. These examples described above show how formal and informal roles can shape rule changes in an organisation.
4 Simulation, algorithms, and parameters
-----------------------------------------
In this section, first, we discuss the underlying assumptions of this simulation. Then, we provide an overview of two historical societies studied for simulation, namely the English East India Company (EIC) and Armenian merchants of New-Julfa (Julfa). Then we briefly discuss the aspects of these societies that are of interest for us and the simulation procedures used to represent their agents’ behaviour in the simulation context.
###
4.1 Assumptions
In societies, the rules that exist may not be honoured by agents. Although, the agents know the existence of such rules, they don’t follow them and the agents justify this behaviour through the resolution of cognitive dissonance. *Cognitive dissonance* is defined as tensions formed by conflicts between different cognitions (for instance, one likes to smoke, but loathes to get cancer) [[3](#bib.bib116 "The social animal")]. These tensions lead to creating some justification for taking one action (quit smoking or continuing). This idea was used to attribute workers’ productivity to cognitive dissonance regarding fairness of institutions [[1](#bib.bib457 "The relationship of worker productivity to cognitive dissonance about wage inequities.")]. In particular, studies showed that procedural justice (having fair dispute resolution mechanisms) increases public law obedience and cooperation with the police [[32](#bib.bib257 "The role of procedural justice and legitimacy in shaping public support for policing")]. Also, underpaid or overpaid persons alter their efforts put forth on the system (e.g. efforts or voluntarily performed tasks) to make the system fairer for themselves [[2](#bib.bib14 "Inequity in social exchange")]. In this work, we consider that agents justify the need for rule change (or don’t follow rules),
because they need to resolve this cognitive dissonance (i.e. they justify not following rules, or the reason to keep following the rules).
###
4.2 Societies
As stated earlier, in this paper, we investigate two long-distance trading societies, namely Armenian merchants of New-Julfa (Julfa) and the East India Company (EIC). The two societies were contemporaneous and shared the same areas for trading products (e.g. the EIC managers granted Julfans permissions for using the EIC infrastructures [[4](#bib.bib25 "From the Indian Ocean to the Mediterranean: Circulation and the global trade networks of Armenian merchants from New Julfa/Isfahan, 1605–1747")]). Also, both societies faced principal-agent problem [[24](#bib.bib373 "Origin of the theory of agency: An account by one of the theory’s originators")] — the dilemma where the self-interested decisions of a party (agent) impact the benefits of the other person on whose behalf these decisions are made (principal).
Armenian merchants of New-Julfa (Julfa): Armenian merchants of New-Julfa were originally from old Julfa in Armenia. They re-established a trader society in New-Julfa (near Isfahan, Iran) after their forced displacement in the early 17th century [[22](#bib.bib24 "The Armenian merchants of New Julfa, Isfahan: A study in pre-modern Asian trade"), [4](#bib.bib25 "From the Indian Ocean to the Mediterranean: Circulation and the global trade networks of Armenian merchants from New Julfa/Isfahan, 1605–1747")]. They used commenda contracts (profit-sharing contracts) in the society and also used courts to resolve disputes [[22](#bib.bib24 "The Armenian merchants of New Julfa, Isfahan: A study in pre-modern Asian trade"), [4](#bib.bib25 "From the Indian Ocean to the Mediterranean: Circulation and the global trade networks of Armenian merchants from New Julfa/Isfahan, 1605–1747")].
The English East India Company (EIC): During the same time, the EIC (AD 1600s-1850s) had a totally different perspective on managing the society. The EIC faced a high mortality rate due to environmental factors in India. EIC paid fixed wages and fired agents based on their own beliefs about their trading behaviour. Furthermore, EIC’s trading period covers the English Civil War (1642–1651), which led to inclusion of some of the senior mangers on the board of directors and granting permission for private trade to the employees (i.e. trading activities for individuals’ self-interests).
In both of these societies agents’ meta-roles changed over time. More precisely in EIC, a mercantile or trader agent (W) after gaining experience was promoted to a managerial position to monitor other mercantile agents (K). Also, in EIC, after the English Civil War, managers had the opportunity to be part of the board of directors (C). In Julfa, the promotions took place based on the ageing of the family members (i.e. agents got promoted from one meta-role to the other gradually). Additionally, in Julfa mercantile agents (W) and heads of families (C) formed the courts (S). In this model, we use the EIC dynamics in organisational meta-roles (i.e. promotion of agents) to make the two systems comparable. Note that this change in dynamics decreases the opportunities for Julfans to revise their rules. However, we know that the rules were deeply honoured by Julfans [[4](#bib.bib25 "From the Indian Ocean to the Mediterranean: Circulation and the global trade networks of Armenian merchants from New Julfa/Isfahan, 1605–1747")].
Environment: These societies had different mortality rates. On average an EIC agent died before the age of 35 due to harsh environmental circumstances [[20](#bib.bib2 "Microeconomic investigations of the English East India Company")]. Julfan traders did not face such a situation [[4](#bib.bib25 "From the Indian Ocean to the Mediterranean: Circulation and the global trade networks of Armenian merchants from New Julfa/Isfahan, 1605–1747")] and the closed trading society of Julfa would have collapsed under a high mortality rate [[30](#bib.bib414 "A comparison of two historical trader societies – An agent-based simulation study of English East India Company and New-Julfa")].
Fairness: Another difference between the two historical long-distance trading societies is associated with their payment schemes for employees and the adjudication processes (i.e. use of courts for resolving disputes about suspicious behaviour). EIC rarely employed an adjudication process (e.g. agents were fired based on their performance because of suspected cheating), and the agents were paid low wages [[21](#bib.bib18 "Contract enforcement in the English East India Company")]. However, in Julfa a mercantile agent was paid based on his performance [[4](#bib.bib25 "From the Indian Ocean to the Mediterranean: Circulation and the global trade networks of Armenian merchants from New Julfa/Isfahan, 1605–1747")]. Julfans had adjudication processes to resolve disputes, which considered available evidence [[4](#bib.bib25 "From the Indian Ocean to the Mediterranean: Circulation and the global trade networks of Armenian merchants from New Julfa/Isfahan, 1605–1747")]. Though Julfa appears to be fairer than EIC in terms of payment, total fairness can be questioned — for instance, in the Julfa society, the family wealth and trade was managed and controlled by the eldest brother [[22](#bib.bib24 "The Armenian merchants of New Julfa, Isfahan: A study in pre-modern Asian trade")]. This rule deprived younger ones from managing their own share of capital.
###
4.3 Algorithms
In this subsection, we discuss the procedures employed to simulate role changes within the two societies. The simulation model is split into four distinctive procedures. The first procedure models the societal level of simulation, including creating an initial population and staffing (hiring new mercantile agents) to create a stable population. The second level describes procedures for mercantile agents’ (W) decision-making and learning the system’s parameters. The third level covers the decision-making and learning procedure associated with managers (K). The last procedure is the meta-algorithm that sequentially executes the aforementioned algorithms and updates appropriate parameters. In this algorithm, agent meta-roles may change and the opportunity for institutional dynamics is provided (i.e. promotion of K agents to C and changes in institutions).
/\* Intialise the system starting with iteration←0. \*/
Create 500 new agents with status←new, random personality aspects, and random parameters Assign appropriate roles (i.e. mercantile, managers, and directors) to created agents /\* n = deceased and fired agents (mercantile agents and managers) in the previous iteration. \*/
The most experienced mercantile agents get promoted to a managerial role Create n new agents with: status←new, Experiene←0, and randomly initialise parameters /\* Perceived environment and fairness for inexperienced agents. \*/
PEnvironment←RandomUniform(0,1) Fair←RandomUniform(0,1)
Algorithm 1 Societal level set-up and initialisation
Algorithm [1](#algorithm1 "Algorithm 1 ‣ 4.3 Algorithms ‣ 4 Simulation, algorithms, and parameters ‣ Impact of meta-roles on the evolution of organisational institutions") shows how the societal level of the system is simulated. In iteration 0, the system is initialised by creating 500 new agents with random parameters (line 1). The roles are assigned to created agents (about 2% directors, 5% managers, and the rest mercantile agents).222These numbers are inspired from the numbers in the EIC [[20](#bib.bib2 "Microeconomic investigations of the English East India Company")]. The organisation hires and promotes agents to sustain the number of agents per role — i.e. replaces deceased agents (lines 3-4). The rest of the algorithm is executed only for inexperienced agents (i.e. new recruits). An agent has a completely random understanding of the system’s characteristics (lines 5-6).
/\* Update parameters for new recruits. \*/
1 if *Status = New* then Set agent’s parameter using Algorithm. [1](#algorithm1 "Algorithm 1 ‣ 4.3 Algorithms ‣ 4 Simulation, algorithms, and parameters ‣ Impact of meta-roles on the evolution of organisational institutions") if *Experience>3* then
/\* Update role and the decision to perform private trade. \*/
2 if *Dissonance(Fair)<DissonThresh* then
/\* Agent stops monitoring violations. \*/
3 Remove K from voluntarily performed roles if *((Fair<thresh) or* then
/\* Agent decides to perform private trade. \*/
4 PrivateTrade←OK
5 end if
6
7 end if
/\* Agent voluntarily collaborates in monitoring. \*/
8 if *Dissonance(Fair)>DissonThresh* then Voluntarily perform K
9 end if
/\* learning; \*/
10 if *Experience>3* then
/\* Reporting observed violations; \*/
11 if *Voluntarily performing K* then
/\* The agent reports some of the cheaters observed. \*/
12 Agent reports connections who impose more costs on the organisation than his tolerance (internalised S).
13 end if
14
15 end if
Learn parameters and adjust the beliefs about rules Experience←Experience+1 if *Rand(1)≤MortalityProbability(Experience+15)* then Die
Algorithm 2 Mercantile agent’s algorithm (for meta-role W)
Algorithm [2](#algorithm2 "Algorithm 2 ‣ 4.3 Algorithms ‣ 4 Simulation, algorithms, and parameters ‣ Impact of meta-roles on the evolution of organisational institutions") shows the procedure associated with mercantile agents’ decision-making process. Note that in this algorithm #Rnd(x) indicates a random number generated in the interval (0,x). As stated earlier, if the status of the mercantile agent is new, he goes through an initialisation (see Algorithm [1](#algorithm1 "Algorithm 1 ‣ 4.3 Algorithms ‣ 4 Simulation, algorithms, and parameters ‣ Impact of meta-roles on the evolution of organisational institutions"), lines 3-4). Furthermore, experienced mercantile agents decide on their participation in monitoring by considering cognitive dissonance incurred (based on their perception of institutional fairness and dissonance toleration). They also decide on performing private trade with respect to the perceived fairness and their friends who perform such trades (lines 3-7). If the mercantile agent has enough experience and has already decided to collaborate in monitoring, he helps the system to identify violators, based on his interpretation of a fair action (lines 8-9). Finally, the mercantile agent updates his perception of system parameters (e.g. fairness of the society), increases his experience, and may die (lines 10-12).
/\* Manager reports (and eventually punishes) a number of employees who violate the rules of the organisation beyond its tolerance level. We call the threshold TolPunish. \*/
1 PotPunish← employees with violations more than TolPunish if *The number of members of PotPunish>MaxPunish* then
/\* The manager has a limit for the number of agents he can punish called MaxPunish. \*/
2 Punish MaxPunish out of PotPunish that have the most violation
3else
4 Punish all PotPunish members.
5 end if
Experience←Experience+1 if *Rand(1)≤MortalityProbability(Experience+15)* then Die
Algorithm 3 Manager’s algorithm (for meta-role K)
Algorithm [3](#algorithm3 "Algorithm 3 ‣ 4.3 Algorithms ‣ 4 Simulation, algorithms, and parameters ‣ Impact of meta-roles on the evolution of organisational institutions") shows the procedures associated with managers (i.e. monitoring agents (K)). A manager creates a set that consists of reported violators with unacceptable violations (i.e. he tolerates violations to some extent, see line 1). Note that the manager reports about the violators and punishes a certain number. If the number of violators exceeds a certain threshold, he punishes the worst violators (lines 2-3). Otherwise, all the violators are punished (lines 4-5). Finally, the agent’s experience and age increase, and the agent may die (lines 6-7).
/\* Intialise the system starting with iteration←0. \*/
Create 500 new agents with status←new and random parameters with appropriate roles /\* Call algorithms in an appropriate sequence. \*/
1 repeat
2 Run Algorithm [1](#algorithm1 "Algorithm 1 ‣ 4.3 Algorithms ‣ 4 Simulation, algorithms, and parameters ‣ Impact of meta-roles on the evolution of organisational institutions") Run Algorithm [2](#algorithm2 "Algorithm 2 ‣ 4.3 Algorithms ‣ 4 Simulation, algorithms, and parameters ‣ Impact of meta-roles on the evolution of organisational institutions") Run Algorithm [3](#algorithm3 "Algorithm 3 ‣ 4.3 Algorithms ‣ 4 Simulation, algorithms, and parameters ‣ Impact of meta-roles on the evolution of organisational institutions") if *iteration=70* then
3 Update board of directors (C) with new managers if *majority support private trade* then legalise private trade and reduce wages
4 end if
5 iteration←iteration+1
until *iteration=250*
Algorithm 4 Meta algorithm
Algorithm [4](#algorithm4 "Algorithm 4 ‣ 4.3 Algorithms ‣ 4 Simulation, algorithms, and parameters ‣ Impact of meta-roles on the evolution of organisational institutions") is the main algorithm that calls the other procedures. In iteration 0, the system is initialised by creating 500 new agents with random parameters. The roles are assigned to created agents (2% directors (C), 5% managers (K), and the rest are mercantile agents (W)), and they have 0 years of experience (line 1). Then, 250 iterations corresponding to 250 years, containing specific steps (lines 3-9) are performed (250). The first step is to run the societal algorithm (i.e. Algorithm [1](#algorithm1 "Algorithm 1 ‣ 4.3 Algorithms ‣ 4 Simulation, algorithms, and parameters ‣ Impact of meta-roles on the evolution of organisational institutions"), line 3). Then the algorithm associated with the mercantile agents is run (i.e. Algorithm [2](#algorithm2 "Algorithm 2 ‣ 4.3 Algorithms ‣ 4 Simulation, algorithms, and parameters ‣ Impact of meta-roles on the evolution of organisational institutions")). Finally, the manager’s decisions are made using Algorithm [3](#algorithm3 "Algorithm 3 ‣ 4.3 Algorithms ‣ 4 Simulation, algorithms, and parameters ‣ Impact of meta-roles on the evolution of organisational institutions") (line 5). When the simulation reaches the year that some of the managers in the EIC (who started as mercantile agents) are promoted to the board of directors (i.e. consequences of the English Civil War, iteration 70), a decision about permitting (or legalising) private trade is made (lines 6-8).
###
4.4 Parameters
In this subsection, we discuss the important parameters employed in the simulation (see Table [1](#S4.T1 "Table 1 ‣ 4.4 Parameters ‣ 4 Simulation, algorithms, and parameters ‣ Impact of meta-roles on the evolution of organisational institutions")), along with the reasons for choosing specific values for them. Note that we used 250 iterations to reflect the longevity of EIC (it was active with some interruptions and changes in power from 1600 to 1850). In Table [1](#S4.T1 "Table 1 ‣ 4.4 Parameters ‣ 4 Simulation, algorithms, and parameters ‣ Impact of meta-roles on the evolution of organisational institutions"), column ‘Name’ indicates the names of parameters, column ‘Comment’ shows additional information if required, column ‘Distribution’ indicates the probability distribution used for these parameters, and column ‘Values’ indicates the values of parameters estimated for the two societies. Note that these parameters can be modified to reflect other societies.
| | | | |
| --- | --- | --- | --- |
| Variable name | Comment | Distribution | Values |
| Fairness | Unfair : Fair | Constant | −0.4:0.6 |
|
| |
| --- |
| Perception of environment |
| and fairness of system |
| | Uniform | (−1,1) |
| Thresholds |
| |
| --- |
| Dissonance |
| Environment |
| Fired agents |
| Uniform |
| |
| --- |
| (0,1) |
| (0,1) |
| (0,0.3) |
|
| Monitoring | Boolean | Bernoulli | 0.5 |
|
| |
| --- |
| Permission for private |
| trade |
|
| |
| --- |
| Percent of joined managers |
| who agreed to change |
| Constant | 70% |
Table 1: Parameters associated with the model
Fairness: Note that as discussed earlier, Julfa had fairer institutions than the EIC. We set system fairness values to 0.6 and -0.4 for fair and unfair societies respectively. We believe that neither of these two societies were totally fair or unfair (e.g. EIC managers justified the firing of agents that indicates there has some effort towards fairness).
Perceived characteristics: Because of lack of prior experience, the new agents have a totally random understanding of social characteristics.
Thresholds: These are the numbers that reflect an agent’s tolerance of different aspects and characteristics of the system. All these thresholds are generated at random except for firing. For the proportion of fired agents, we assume that a manager would fire 30% of the suspected employees.
Monitoring: In the model, a recruit may voluntarily decide to participate in monitoring — we use a random boolean generator to represent this.
Permission for private trade: In this simulation, we assume that permission is granted if more than 70% in the board of directors agree to such a decision (i.e. 8 out of 11).
Furthermore, we parametrise the agents’ learning as follows. Agents discount information using a weight of 30% for the past. This reflects the importance of recent information for agents.
5 Results
----------
In this section, we describe the simulation results considering four different combinations of two characteristics, namely a) environmental circumstances and b) fairness of institutions. With two different values for each of these characteristics, four combinations are possible (see Table [2](#S5.T2 "Table 2 ‣ 5 Results ‣ Impact of meta-roles on the evolution of organisational institutions")).
| Characteristics |
| |
| --- |
| E0F0 |
| (EIC) |
| E0F1 | E1F0 |
| |
| --- |
| E1F1 |
| (Julfa) |
|
| Environment | ✗ | ✗ | ✓ | ✓ |
| Fairness | ✗ | ✓ | ✗ | ✓ |
Table 2: System specification based on different characteristics
The configurations (i.e. societies) are identified by the first letter of the characteristics, namely E and F that are representatives of the environmental characteristic of (E) and fairness of the institutions (F), respectively. A tick indicates that the society possesses such an attribute, and a cross indicates the society does not possess such an attribute. In this table, we gradually change characteristics of the EIC (E0F0) to get closer to Julfa (E1F1), to examine their effects on the success of these societies. We utilised NetLogo to perform our simulations [[33](#bib.bib447 "NetLogo")]. We also used 30 different runs for each set-up and then averaged their results. Finally, note that the patterns observed in simulation results are compared to the patterns reported from the EIC and Jufla, because we had access to the qualitative data.
###
5.1 Permissions for private trade
Table [3](#S5.T3 "Table 3 ‣ 5.1 Permissions for private trade ‣ 5 Results ‣ Impact of meta-roles on the evolution of organisational institutions") presents the percentage of simulation runs (out of 30) where the permission for private trade was granted (see row “Permission granted”). Note that this change in rule (granting of permission) happened due to changes in agents’ meta-roles where a mercantile agent progresses to the board of directors (and advocates the decision to permit private trade). As can be seen from the results, both unfair societies (FO) had higher percentage of runs where the private trade is permitted (>50%), although with a large difference (93% and 57% respectively). In fair societies, none of the runs resulted in private trade being approved. This result mirrors the evidence from Julfa. In Julfa, mercantile agents (W) and peripheral managers (K) were the ones who eventually ran the family business (C). Also, mercantile agents and managers made decisions regarding violations and acted as juries in certain courts [[4](#bib.bib25 "From the Indian Ocean to the Mediterranean: Circulation and the global trade networks of Armenian merchants from New Julfa/Isfahan, 1605–1747")]. The aforementioned situation, combined with keeping private trade illegal [[22](#bib.bib24 "The Armenian merchants of New Julfa, Isfahan: A study in pre-modern Asian trade")], indicate that this rule was socially accepted.333Because in none of the simulation runs of Julfa the permission was granted, we believe that using a similar dynamics to Julfa would not change the results Also, we know that in the EIC, the permission for private trade was granted once the managers had the opportunity to be part of the board of directors[[15](#bib.bib19 "Between monopoly and free trade: the English East India Company, 1600–1757")].
| Societies | Permission granted for private trade |
| --- | --- |
| E0F0 (EIC) | 93% |
| E0F1 | 0% |
| E1F0 | 57% |
| E1F1 (Julfa) | 0% |
Table 3: Percentage of runs where private trade was permitted (out of 30 runs).

Figure 3: Monitoring strength and firing in simulated societies.
###
5.2 Fired violators (monitoring strength)
Now we discuss the impact of aforementioned two characteristics on the monitoring strength of the system (see Figure [3](#S5.F3 "Figure 3 ‣ 5.1 Permissions for private trade ‣ 5 Results ‣ Impact of meta-roles on the evolution of organisational institutions")). Figures [3](#S5.F3 "Figure 3 ‣ 5.1 Permissions for private trade ‣ 5 Results ‣ Impact of meta-roles on the evolution of organisational institutions")a-[3](#S5.F3 "Figure 3 ‣ 5.1 Permissions for private trade ‣ 5 Results ‣ Impact of meta-roles on the evolution of organisational institutions")d present the percentage of the cheating agents fired. In these figures, the y-axis indicates the percentage of fired cheaters. As can be seen, the most fired agents belong to society E1F1 and then E0F1. These indicate the importance of fairness of institutions on the system’s monitoring strength.
This impact that we see in Figure 3 is a consequence of two informal roles performed by agents, namely a) mercantile agents (W) that monitor and report suspicious behaviours (internalised K) to managers (formal K), and b) managers (K) who interpret rules based on the situations and tolerate some behaviours (S). For example, managers who think the system isn’t fair, may not report the cheating behaviour of agents (agents who are involved in private trades). And these same managers who become a part of the board of directors allow for these private trades to happen legally (but with the reduction in wages further, though). Also, in organisations with unfair institutions, after granting permissions for private trade (year 70), agents’ collaboration in monitoring the cheaters (for theft etc.) decreases. Note that the evidence for interpretation of the rules can be found in EIC managers’ correspondence444For instance, in the early years, some managers defended mercantile agents’ private trade by stating:
“if some tolleration [sic] for private trade be not permitted none but desperate men will sail our ships” [[11](#bib.bib30 "The English East India Company: The study of an early joint-stock company 1600–1640")].. Also, these results mirror the evidence of rare cheating and successful monitoring mechanisms in Julfa [[4](#bib.bib25 "From the Indian Ocean to the Mediterranean: Circulation and the global trade networks of Armenian merchants from New Julfa/Isfahan, 1605–1747")] and the popularity of cheating and collusion in the EIC [[11](#bib.bib30 "The English East India Company: The study of an early joint-stock company 1600–1640")].
6 Discussion and concluding remarks
------------------------------------
This study has presented an extension of the BDI cognitive architecture to investigate its interaction with agents meta-roles. Also, using this extension, the study has investigated the impact of a combination of a) dynamics in agents’ roles and b) the institutional characteristics (i.e. mortality rate and fairness) on organisational rule dynamics (i.e. change of rule). As the role of individuals changes (e.g. W to K), their beliefs formed based in their previous role impacts their new decisions. Finally, our study has used the evidence from empirical studies to simulate two historical long-distance trading societies, namely Armenian merchants of new Julfa (Julfa) and the English East India Company (EIC) and has demonstrated what may cause rule changes (i.e. role change and institutional characteristics).
The simulation results mirrored historical evidence. It has shown that the fairness of institutions is a pivotal characteristic to drive their stability (i.e. avoiding revisions in rules) and in facilitating agents’ collaboration in monitoring each other’s behaviours. These results (i.e. changes in rules and weak monitoring and reporting) mirror concerns in the modern context about the division of “rules into the two categories of rules-in-use and rules-in-form” [[31](#bib.bib517 "Governments at work: Canadian parliamentary federalism and its public policy effects")]. For instance, it is noted that rules-in-use (followed rules) in some provinces of Canada might have been rules-in-forms (unfollowed rules that do not have any effect on behaviour) in others [[31](#bib.bib517 "Governments at work: Canadian parliamentary federalism and its public policy effects")]. There exist some obstacles in a law in becoming a rule-in-use [[12](#bib.bib307 "Laws, norms, and the institutional analysis and development framework")]. An instance of this obstacle is the activities of monitoring agents who interpret the law differently and thus hamper its effectiveness (e.g. through not monitoring violations) and hence can aid the formation of new rules similar to what has been observed in results from Table [3](#S5.T3 "Table 3 ‣ 5.1 Permissions for private trade ‣ 5 Results ‣ Impact of meta-roles on the evolution of organisational institutions") and Figure [3](#S5.F3 "Figure 3 ‣ 5.1 Permissions for private trade ‣ 5 Results ‣ Impact of meta-roles on the evolution of organisational institutions").
A future extension of the study, will involve detailed examination of the interaction between other modules of the cognitive architecture presented in Figure [1](#S2.F1 "Figure 1 ‣ 2 An overview of the extended BDI architecture ‣ Impact of meta-roles on the evolution of organisational institutions"). Also, the simulation can be extended to take account of other characteristics of these historical societies, such as the personalities of agents, to provide a more fine-grained model. |
ecc653da-00b8-4f6b-abad-8d70867c8fa2 | trentmkelly/LessWrong-43k | LessWrong | Stupid Questions August 2015
This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.
To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag. |
9684eed7-1e90-4e3f-b4e2-458310ae8ac8 | trentmkelly/LessWrong-43k | LessWrong | Why so little AI risk on rationalist-adjacent blogs?
I read a lot of rationalist-adjacents. Outside of LessWrong and ACX, I hardly ever see posts on AI risk. Tyler Cowen of Marginal Revolution writes that, "it makes my head hurt" but hasn't engaged with the issue. Even Zvi spends very few posts on AI risk.
This is surprising, and I wonder what to make of it. Why do the folks most exposed to MIRI-style arguments have so little to say about them?
Here's a few possibilities
1. Some of the writers disagree that AGI is a major near-term threat
2. It's unusually hard to think and write about AI risk
3. The best rationalist-adjacent writers don't feel like they have a deep enough understanding to write about AI risk
4. There's not much demand for these posts, and LessWrong/Alignment Forum/ACX are already filling it. Even a great essay wouldn't be that popular
5. Folks engaged in AI risk are a challenging audience. Eliezer might get mad at you
6. When you write about AGI for a mainstream audience, you look weird. I don't think this is as true it used to be, since Ezra Klein did it in the New York Times and Kelsey Piper in Vox
7. Some of these writers are heavily specialized. The mathematicians want to write about pure math. The pharmacologists want to write about drug development. The historians want to argue that WWII strategic bombing was based on a false theory of popular support for the enemy regime, and present-day sanctions are making the same mistake
8. Some of the writers are worried that they'll present the arguments badly, inoculating their readers against a better future argument
What they wrote
I'll treat Scott Alexander's blogroll as the canonical list of rationalist-adjacent writers. I've grouped them by their stance on the following statement:
> Misaligned AGI is among the most important existential risks to humanity[1]
Explicitly agrees and provides original gears-level analysis (2)
Zvi Mowshowitz
“The default outcome, if we do not work hard and carefully now on AGI safety, is for AGI to wipe |
077b72e8-b68f-4d3b-b963-35c7cafa2ec3 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Will humans build goal-directed agents?
In the [previous post](https://www.alignmentforum.org/posts/NxF5G6CJiof6cemTw/coherence-arguments-do-not-imply-goal-directed-behavior), I argued that simply knowing that an AI system is superintelligent does not imply that it must be goal-directed. However, there are many other arguments that suggest that AI systems will or should be goal-directed, which I will discuss in this post.
Note that I don’t think of this as the [Tool AI vs. Agent AI](http://www.gwern.net/Tool-AI) argument: it seems possible to build agent AI systems that are not goal-directed. For example, imitation learning allows you to create an agent that behaves similarly to another agent -- I would classify this as “Agent AI that is not goal-directed”. (But see [this comment thread](https://www.alignmentforum.org/posts/9zpT9dikrrebdq3Jf/will-humans-build-goal-directed-agents#mjYjBiq4mQosy6Wwt) for discussion.)
Note that these arguments have different implications than the argument that superintelligent AI must be goal-directed due to coherence arguments. Suppose you believe all of the following:
* Any of the arguments in this post.
* Superintelligent AI is not *required* to be goal-directed, as I argued in the [last post](https://www.alignmentforum.org/posts/NxF5G6CJiof6cemTw/coherence-arguments-do-not-imply-goal-directed-behavior).
* Goal-directed agents cause catastrophe by default.
Then you could try to create alternative designs for AI systems such that they can do the things that goal-directed agents can do without themselves being goal-directed. You could also try to persuade AI researchers of these facts, so that they don’t build goal-directed systems.
### Economic efficiency: goal-directed humans
Humans want to build powerful AI systems in order to help them achieve their goals -- it seems quite clear that humans are at least partially goal-directed. As a result, it seems natural that they would build AI systems that are also goal-directed.
This is really an argument that the *system* comprising the human and AI agent should be directed towards some goal. The AI agent by itself need not be goal-directed as long as we get goal-directed behavior when combined with a human operator. However, in the situation where the AI agent is much more intelligent than the human, it is probably best to delegate most or all decisions to the agent, and so the agent could still look mostly goal-directed.
Even so, you could imagine that even the small part of the work that the human continues to do allows the agent to not be goal-directed, especially over long horizons. For example, perhaps the human decides what the agent should do each day, and the agent executes the instruction, which involves planning over the course of a day, but no longer. (I am *not* arguing that this is safe; on the contrary, having very powerful optimization over the course of a day seems probably unsafe.) This could be extremely powerful without the AI being goal-directed over the long term.
Another example would be a [corrigible](https://www.alignmentforum.org/posts/fkLYhTQteAu5SinAc/corrigibility) agent, which could be extremely powerful while not being goal-directed over the long term. (Though the meanings of “goal-directed” and “corrigible” are sufficiently fuzzy that this is not obvious and depends on the definitions we settle on for each.)
### Economic efficiency: beyond human performance
Another benefit of goal-directed behavior is that it allows us to find novel ways of achieving our goals that we may not have thought of, such as AlphaGo’s move 37. Goal-directed behavior is one of the few methods we know of that allow AI systems to exceed human performance.
I think this is a good argument for goal-directed behavior, but given the problems of goal-directed behavior I think it’s worth searching for alternatives, such as the two examples in the previous section (optimizing over a day, and corrigibility). Alternatively, we could learn human reasoning, and execute it for a longer subjective time than humans would, in order to make better decisions. Or we could have systems that remain uncertain about the goal and clarify what they should do when there are multiple very different options (though this has its own problems).
### Current progress in reinforcement learning
If we had to guess today which paradigm would lead to AI systems that can exceed human performance, I would guess reinforcement learning (RL). In RL, we have a reward function and we seek to choose actions that maximize the sum of expected discounted rewards. This sounds a lot like an agent that is searching over actions for the best one according to a measure of goodness (the reward function [1]), which I said previously is a goal-directed agent. And the math behind RL says that the agent should be trying to maximize its reward for the rest of time, which makes it long-term [2].
That said, current RL agents learn to replay behavior that in their past experience worked well, and typically do not generalize outside of the training distribution. This does not seem like a search over actions to find ones that are the best. In particular, you shouldn’t expect a treacherous turn, since the whole point of a treacherous turn is that you don’t see it coming because it never happened before.
In addition, current RL is episodic, so we should only expect that RL agents are goal-directed *over the current episode* and not in the long-term. Of course, many tasks would have very long episodes, such as being a CEO. The vanilla deep RL approach here would be to specify a reward function for how good a CEO you are, and then try many different ways of being a CEO and learn from experience. This requires you to collect many full episodes of being a CEO, which would be extremely time-consuming.
Perhaps with enough advances in model-based deep RL we could train the model on partial trajectories and that would be enough, since it could generalize to full trajectories. I think this is a tenable position, though I personally don’t expect it to work since it relies on our model generalizing well, which seems unlikely even with future research.
These arguments lead me to believe that we’ll probably have to do something that is not vanilla deep RL in order to train an AI system that can be a CEO, and that thing may not be goal-directed.
Overall, it is certainly possible that improved RL agents will look like dangerous long-term goal-directed agents, but this does not seem to be the case today and there seem to be serious difficulties in scaling current algorithms to superintelligent AI systems that can optimize over the long term. (I’m not arguing for long timelines here, since I wouldn’t be surprised if we figured out some way that *wasn’t* vanilla deep RL to optimize over the long term, but that method need not be goal-directed.)
### Existing intelligent agents are goal-directed
So far, humans and perhaps animals are the only example of generally intelligent agents that we know of, and they seem to be quite goal-directed. This is some evidence that we should expect intelligent agents that we build to also be goal-directed.
Ultimately we are observing a correlation between two things with sample size 1, which is really not much evidence at all. If you believe that many animals are also intelligent and goal-directed, then perhaps the sample size is larger, since there are intelligent animals with very different evolutionary histories and neural architectures (eg. octopuses).
However, this is specifically about agents that were created by evolution, which did a relatively stupid blind search over a large space, and we could use a different method to develop AI systems. So this argument makes me more wary of creating AI systems using evolutionary searches over large spaces, but it doesn’t make me much more confident that all good AI systems must be goal-directed.
### Interpretability
Another argument for building a goal-directed agent is that it allows us to predict what it’s going to do in novel circumstances. While you may not be able to predict the specific actions it will take, you can predict some features of the final world state, in the same way that if I were to play Magnus Carlsen at chess, [I can’t predict how he will play, but I can predict that he will win](https://www.lesswrong.com/posts/rEDpaTTEzhPLz4fHh/expected-creative-surprises).
I do not understand the intent behind this argument. It seems as though faced with the negative results that suggest that goal-directed behavior tends to cause catastrophic outcomes, we’re arguing that it’s a good idea to build a goal-directed agent so that we can more easily predict that it’s going to cause catastrophe.
I also think that we would typically be able to predict significantly *more* about what any AI system we actually build will do (than if we modeled it as trying to achieve some goal). This is because “agent seeking a particular goal” is one of the simplest models we can build, and with any system we have more information on, we start refining the model to make it better.
### Summary
Overall, I think there are good reasons to think that “by default” we would develop goal-directed AI systems, because the things we want AIs to do can be easily phrased as goals, and because the stated goal of reinforcement learning is to build goal-directed agents (although they do not look like goal-directed agents today). As a result, it seems important to figure out ways to get the powerful capabilities of goal-directed agents through agents that are not themselves goal-directed. In particular, this suggests that we will need to figure out ways to build AI systems that do not involve specifying a utility function that the AI should optimize, or even learning a utility function that the AI then optimizes.
---
[1] Technically, actions are chosen according to the Q function, but the distinction isn’t important here.
[2] Discounting does cause us to prioritize short-term rewards over long-term ones. On the other hand, discounting seems mostly like a hack to make the math not spit out infinities, and so that learning is more stable. On the third hand, infinite horizon MDPs with undiscounted reward aren't solvable unless you almost surely enter an absorbing state. So discounting complicates the picture, but not in a particularly interesting way, and I don’t want to rest an argument against long-term goal-directed behavior on the presence of discounting. |
2791d0a1-c5d0-4302-b59f-45f1dc956e59 | trentmkelly/LessWrong-43k | LessWrong | [LINK] Anchoring Bias In Medicine (NYT)
http://well.blogs.nytimes.com/2012/07/19/falling-into-the-diagnostic-trap/
> For the doctors, this was a harrowing lesson in the trap of anchoring bias. It is so easy to slip into it without even knowing. But this case reminded us to keep reciting the mantra: if something doesn’t fit, don’t try to make it fit. Ask what else might be going on. Don’t fall into the trap.
(Wikipedia's article: http://en.wikipedia.org/wiki/Anchoring ) |
49e6736d-3dd6-4961-92d2-924fde435294 | trentmkelly/LessWrong-43k | LessWrong | Freelance platform, freelance marketplace platform, Marketplace script
|
59836998-8a8d-4c18-91dd-ceaef8a21975 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Mind Projection Fallacy
Today's post, Mind Projection Fallacy was originally published on 11 March 2008. A summary (taken from the LW wiki):
> E. T. Jaynes used the term Mind Projection Fallacy to denote the error of projecting your own mind's properties into the external world. the Mind Projection Fallacy generalizes as an error. It is in the argument over the real meaning of the word sound, and in the magazine cover of the monster carrying off a woman in the torn dress, and Kant's declaration that space by its very nature is flat, and Hume's definition of a priori ideas as those "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe"...
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Righting a Wrong Question, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
991e8e11-6f57-43dd-99a3-4afea898f5f9 | trentmkelly/LessWrong-43k | LessWrong | Less Threat-Dependent Bargaining Solutions?? (3/2)
In the previous two posts, we went over various notions of bargaining. The Nash bargaining solution. The CoCo value. Shapley values. And eventually, we managed to show they were all special cases of each other. The rest of this post will assume you've read the previous two posts and have a good sense for what the CoCo value is doing.
Continuing from the last post, the games which determine the payoff everyone gets (not to be confused with the games that directly entail what actions are taken) are all of the form "everyone splits into two coalitions S and N/S, and both coalitions are trying to maximize "utility of my coalition - utility of the opposite coalition"".
Now, in toy cases involving two hot-dog sellers squabbling over whether to hawk their wares at a beach or an airport, this produces acceptable results. But, in richer environments, it's VERY important to note that adopting "let's go for a CoCo equilibria" as your rule for how to split gains amongst everyone incentivizes everyone to invent increasingly nasty ways to hurt their foes. Not to actually be used, mind you. To affect how negotiations go.
After all, if you invent the Cruciatus curse, then in all those hypothetical games where your coalition faces off against the foe coalition, and everyone's utility is "the utility of my coalition - the utility of the foe coalition"... well, you sure can reduce the utility of the foe coalition by a whole lot! And so, your team gets a much higher score in all those games.
Of course, these minimax games aren't actually played. They just determine everyone's payoffs. And so, you'd end up picking up a whole lot of extra money from everyone else, because you have the Cruciatus curse and everyone's scared of it so they give you money. In the special case of a two-player game, getting access to an option which you don't care about and the foe would pay $1000 to avoid, should let you demand a 500$ payment from the foe as a "please don't hurt me" payment.
But Which |
d79790c1-e049-4441-82e5-74caa91229d2 | trentmkelly/LessWrong-43k | LessWrong | Effective Altruism Summer Reading List
None |
3df3560d-2be7-468c-99f9-d9a645211fb1 | trentmkelly/LessWrong-43k | LessWrong | CICO is a weird reference class
People tend to form what I guess I'm going to call default-hypothesis-reference-classes. This is what they see as the "default" hypothesis that new hypotheses have to compete against. Obviously this isn't how a perfect bayesian reasoner would act, but we're only human.
These weird reference classes can be tricky, because you can get "sucked into them" and stop noticing that they're weird.
I started thinking of this when thinking about the CICO model of weight gain/loss. Calories-in-calories-out sounds obviously true under thermodynamics. There's about 2 × 1010 Calories in a gram of matter, and humans need about 2000 Calories per day, so make sure not to eat more than a tiny fraction of a gram per day and you're sure to lose weight.
Well, not quite. In my opinion, the most "natural" default reference class for weight gain or loss is mass-in-mass-out (MIMO), which is obviously true, if not super useful. This has to be true - everything we know about physics tells us it's impossible for your body to gain or lose mass unless it comes from or goes somewhere.
What people mean when they say CICO is that different foods have different amounts of bioavailable energy, and weight gain is dependent on difference between the bioavailable energy you eat and the amount of heat your body produces. This doesn't seem like it necessarily has to be true! People gain weight when they grow, and that's partially because they're eating actual minerals that are deposited into their bones. (And these minerals largely don't go anywhere once they're deposited.) So that's a straightforward counterexample to CICO.
We could imagine the same thing happening with fat gain or loss. Your body could store energy by accumulating fat, but not remove the actual fat molecules from your body when their chemical energy is consumed. (I'm not saying that this actually happens, just that it's not forbidden by thermodynamics.) Or our model for what kinds of energy are bioavailable could be wrong, and if |
368e31ac-0154-41cb-a478-068b59ea3c65 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Chicago Organizational Meeting and Open Discussion
Discussion article for the meetup : Chicago Organizational Meeting and Open Discussion
WHEN: 17 November 2012 01:00:00PM (-0600)
WHERE: 360 N. Michigan Ave., Chicago, IL
"This is an organizational meeting planned for every other month to discuss the group's progress and ideas for possible future meetups. Afterwards, we will have open discussion..." -- http://www.meetup.com/Less-Wrong-Chicago/events/83542572/
This location is the same Corner Bakery that we previously had the weekly meetings at.
Discussion article for the meetup : Chicago Organizational Meeting and Open Discussion |
1c4dce43-eea8-4b44-a6be-558496b4b768 | trentmkelly/LessWrong-43k | LessWrong | Minimax Search and the Structure of Cognition!
(Blog adaptation of a ~10 minute talk I gave at !!Con West 2019 on what writing a chess engine taught me about intelligence-in-general.) |
ac580647-3c23-4dbf-9809-3f1708dfd8b7 | trentmkelly/LessWrong-43k | LessWrong | How do you say no?
Some people seem to be a bit too generous for their own good. I know a precious few people who are especially good at saying "no" when asked to take on new responsibilities that would put them over their limits. I love working with people like that because I can always trust them to tell me when it would be better for me to find someone else to do the thing. I expect this to be an extremely valuable skill it would probably be good for many of us to understand, learn, and be able to teach to people who really need it.
If you frequently find yourself overburdened, think it's not entirely necessary for you to be doing as much as you are, and can recall a specific instance in the last month where someone asked you to do something and you accepted against your better judgement, I invite you to describe what you were feeling and thinking at the time.
Alternately, if you're an unusually busy and productive person who nevertheless is good at saying "no", I'd like to hear about
1. a specific example of a time when you said no to new responsibility, what was going on in your head, and how it felt
2. how exactly you believe you decide whether to take on or reject prospective responsibilities if you have an explicit model
3. whether you consider yourself more or less empathetic or compassionate than average
4. whether there was ever a time when you had that "don't know how to say no" problem, and if so what changed |
e8326521-e0fa-4048-9770-b45b6cc41784 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Scrutinizing AI Risk (80K, #81) - v. quick summary
*Epistemic status: uncertain about whether this accurately describes Ben's views. The [podcast](https://80000hours.org/podcast/episodes/ben-garfinkel-classic-ai-risk-arguments/) is great and he's also doing a very interesting [AMA](https://forum.effectivealtruism.org/posts/7gxtXrMeqw78ZZeY9/ama-or-discuss-my-80k-podcast-episode-ben-garfinkel-fhi). This is a very complex topic, and I would love to hear lots of different perspectives and have them really fleshed out in detail. The below is my attempt at a quick summary for those short on time.*
**Core ideas I took away (not that I necessarily all agree with)**
1. **Brain in a box** - the classic Bostrom-Yudkowsky scenario is where there’s a superintelligent AGI developed which is far more capable than anything else people are dealing with, i.e. a brain in a box, but actually we’d expect systems to develop incrementally and so we should have other examples of similar concrete problems to work on.
2. **Intelligence explosion** - one of the concepts behind the runaway intelligence explosion is that a system is recursively self-improving, so the AI starts to rewrite its own code or hardware and then get much better. But there are many tasks that go into system improvement, and even coding requires many different skills, so just because a system might be able to improve one of its inputs, that doesn’t mean that its overall capacity should increase.
3. **Entanglement** **and capabilities** - when we’ve had AI systems they’ve usually got more capable by getting better at giving us what we want, so by exploring the potential space of solutions more and more carefully. For example house cleaning robots only get better as they learn more about our preferences. Thermostats only get more effective and capable when they get better at moderating the temperature, because the intelligence of meeting the goal is entangled with the goal itself. This should make us suspicious of extremely powerful and capable systems that also have divergent goals to ours.
4. **Hard to shape the future** - if we take these arguments seriously, it also might be the case that AI safety can develop more gradually as a field over coming decades, and that while it’s important, it just might not be as much of a race as some have previously argued. To take something potentially analogous, it’s not clear what someone in the 1500s could have done to influence the industrial revolution, even if they had strong reasons to think it would take off.
**Some other points**
* In the interview, Ben also mentions that there are “multiple salient emerging forms of military technology” which could be of similar importance, giving the example of hypersonic glide vehicles. I’ve considered taking [this course](https://www.kcl.ac.uk/study/postgraduate/taught-courses/science-and-international-security-ma) in Science and International Security at the War Studies department at KCL, and I’ve uploaded the syllabi for the main units [here](https://drive.google.com/file/d/1pw2ZdEChsq8Fs3JFNyHK3PYpvGJITYLi/view?usp=sharing) and [here](https://drive.google.com/file/d/13rHT4oqHcl_EoYULurx0AdtJAXeD0Tn6/view?usp=sharing). Other examples are space security and cyber security.
* In the [80K podcast with Stuart Russell](https://80000hours.org/podcast/episodes/stuart-russell-human-compatible-ai/), Stuart calls out Rob for conflating ML systems with AI systems. Just to define terms, machine learning systems improve automatically through experience with data. And artificial intelligence is a much broader area of research, including robotics, computer vision, classical search, logical reasoning, and many other areas. Stuart makes the point in the podcast that Google’s self-driving cars mostly use classical search, and so only looking at ML is part of the picture.
* Rohin Shah reviews Ben’s interview favourably in the Alignment Newsletter [here](https://mailchi.mp/05518aad6baf/an-108why-we-should-scrutinize-arguments-for-ai-risk).
* Rohin also discusses AI safety with Buck Shlegeris [here](https://futureoflife.org/2020/04/15/an-overview-of-technical-ai-alignment-in-2018-and-2019-with-buck-shlegeris-and-rohin-shah/), but I haven’t finished the interview (though I did find the discussion quite confrontational and switched off)
* I’ve also just pulled out the most contentious points - Ben gives a very rounded and considerate interview, which I’d recommend listening to in full
**My takeaways**
* I found Ben’s arguments to be very useful and interesting
* I agree that working on existential risk involves more than just one technology, and so there could be fruitful work in security studies and power structures, with a great popular example of theoretical work being [Destined for War](https://www.amazon.co.uk/Destined-War-America-Escape-Thucydidess/dp/0544935276) (also see [The Vulnerable World Hypothesis](https://nickbostrom.com/papers/vulnerable.pdf)). This work seems important and neglected.
* While I think Ben’s arguments require responses from people looking into AI, from my perspective the main idea that humans are not optimally intelligent, and that more advanced technologies could exploit that significantly in the future to produce undesirable outcomes (including human extinction and s-risks) seem plausible to me
* Before listening to this podcast, I’d have put a 10-30% chance on a <6 month hard take-off scenario this century, conditional on AI safety work happening and the world not being radically different from now, but I’d now put it at something like 5-20%, though I’m really not an expert here, so would expect my views to change a lot (immediately after reading Superintelligence I was probably at 50%).
* I was also very glad that this perspective was aired, and I hope it leads to more fruitful discussions
* At the end of his [slides](https://docs.google.com/presentation/d/1sHA3rwTHLIxyZPQObcw8mbNo2jffswH8uYV7N5PwqZE/edit#slide=id.g6230db10d0_0_555), Ben closes with an important point for the EA community, ‘If we’ve failed to notice important issues with classic arguments until recently, we should also worry about our ability to assess new arguments.’ |
c7d6e9fa-4e27-4034-aeb9-320d3300cd22 | trentmkelly/LessWrong-43k | LessWrong | Awakening
This is the story of my personal experience with Buddhism (so far).
First Experiences
My first experience with Buddhism was in my high school's World Religions class. For homework, I had to visit a religious institution. I was getting bad grades, so I asked if I could get extra credit for visiting two and my teacher said yes. I picked an Amida Buddhist church and a Tibetan Buddhist meditation center.
I took off my shoes at the entrance to the Tibetan Buddhist meditation center. It was like nothing I had ever seen before in real life. There were no chairs. Cushions were on the floor instead. The walls were covered in murals. There were no instructions. People just sat down and meditated. After that there was some walking meditation. I didn't know anything about meditation so I instead listened to the birds and the breeze out of an open window. Little did I know that this is similar to the Daoist practices that would later form the foundation of my practice.
The Amida Buddhist church felt like a fantasy novelist from a Protestant Christian background wanted to invent a throwaway religion in the laziest way possible so he just put three giant Buddha statues on the altar and called it a day. The priest told a story about his beautiful stained glass artifact. A young child asked if he could have the pretty thing. The priest, endeavoring to teach non-attachment, said yes. Then the priest asked for it back. The child said no, thereby teaching the priest about non-attachment. Lol.
It would be ten years until I returned to Buddhism.
Initial Search
> It is only after you have lost everything that you are free to do anything.
Things were bad. I had dumped six years of my life into a failed startup. I had allowed myself to be gaslit (nothing to do with the startup; my co-founders are great people) for even longer than that. I believed (incorrectly) that I had an STD. I had lost most of my friends. I was living in a basement infested with mice. I slept poorly because my |
4ec304de-95cb-4b17-b68c-dc3f7b244103 | trentmkelly/LessWrong-43k | LessWrong | Is there a beeminder without the punishment?
I used Beeminder for a few months and found it extremely effective. However, a huge pain point of mine was forgetting to manually log the datapoints since a lot of my goals couldn't be logged automatically. This involved having to message their support e-mail each time. Eventually, this became such a huge friction point that I quit Beeminder.
Since then, I noticed that I haven't maintained my goals as much as I did on Beeminder. I believe that recording that I did X goal made Beeminder so effective for me, rather than the avoiding the punishment that comes when you don't do X goal. It was satisfying that I did X goal and seeing the graph change.
That being said, I want to find another solution like Beeminder without the punishment. Any ideas? |
2ca0a2c6-7b8b-4267-82b8-191e1f991fca | trentmkelly/LessWrong-43k | LessWrong | MA RMV Overloaded
My drivers license expires at the end of January, and renewal requires an appointment. On 1/12 when I tried to make one I had a little over two weeks (17d) before expiration. I should have left a larger cushion, but the RMV says you generally need 14d and it averaged 9d in 2021 and 11d in 2022. The earliest appointment at the Boston RMV was 2/23 (42d).
I filled out their contact form asking if I could get an extension, and they replied:
> Extensions are only granted during an in person renewal appointment, we cannot offer an extension through email. As a reminder renewal is available up to a full year in advance and up to 2 years after expiration.
This morning I tried going in person right when they opened. Outside they checked whether you had an appointment, and wouldn't let you in for an ID without one:
> Me: I tried to renew my license a few weeks in advance and the soonest appointment it would offer me was nearly a month after it expires.
>
> RMV: Sorry, you need an appointment. Try reloading the page to see if there's a cancellation.
>
> Me: I've been doing that, and the earliest it shows me is March.
>
> RMV: That's not possible.
>
> Me: Here's what I see:
>
>
>
>
>
>
> RMV: Sorry, still you need an appointment. Try reloading the page every hour to see if there's a cancellation.
When you go to check if there have been any cancellations you're presented with a screen like:
There are 17 RMV locations within an hour drive of my house, and the only way to check for cancellations is to manually click each one. I can't ask it to order locations by "soonest available appointment" or even "closest to my house". Checking all 17 right now, the soonest option is 2/14 (26d) in Taunton. This is 43mi away and would be a 55min drive, except it's after my license expires so it would be a 2hr+ each way on public transit. Everything within an hour on public transit is 3/2 (42d).
This is similar to what WBZ reported in November:
> When WBZ checked fo |
76189a43-2241-40b7-8cb7-e43387ce3f28 | trentmkelly/LessWrong-43k | LessWrong | Auto Shutdown Script
I run a lot of one-off jobs on EC2 machines. This usually looks like:
* Stand up a machine
* Mess around for a while trying things and writing code
* Run my command under screen
For short jobs this is fine, but when I run a long job there are two issues:
* If the machine costs a non-trivial amount and the job finishes in the middle of the night I'm not awake to shut it down.
* I could, and sometimes do, forget to turn the machine off.
Ideally I could tell the machine to shut itself off if no one was logging in and there weren't any active jobs.
I didn't see anything like this (though I didn't look very hard) so I wrote something (github):
$ prevent-shutdown long-running-command
As long as that command is still running, or someone is logged in over ssh, the machine will stay on. Every five minutes a systemd timer will check if this is the case, and if not shut the machine down. Note that you still need screen or something to prevent the long running command from exiting when you log out.
(This is an example of the kind of thing that I find goes a lot faster with an LLM. I used Claude 3.7, prompted it with essentially the beginning of this blog post, took the scripts it generated as a starting point, and then fixed some things. It did make some mistakes (the big ones: a typo of $ for $$, a regex looking for PID: that should have looked for ^PID:, didn't initially plan for handling stale jobs) but that's also about what I'd expect if I'd asked a junior engineer to write this for me. And with much faster turnaround on my code reviews!) |
94493c61-fc3c-4eda-a4f3-46af8a4c8278 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Science Doesn't Trust Your Rationality
Today's post, Science Doesn't Trust Your Rationality was originally published on 14 May 2008. A summary (taken from the LW wiki):
> The reason Science doesn't always agree with the exact, Bayesian, rational answer, is that Science doesn't trust you to be rational. It wants you to go out and gather overwhelming experimental evidence.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Dilemma: Science or Bayes?, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
c862dca8-b1ee-4a8b-830a-09a9a02ab46b | trentmkelly/LessWrong-43k | LessWrong | Group Rationality Diary, January 16-31
This is the public group rationality diary for January 16-31.
> It's a place to record and chat about it if you have done, or are actively doing, things like:
>
> * Established a useful new habit
> * Obtained new evidence that made you change your mind about some belief
> * Decided to behave in a different way in some set of situations
> * Optimized some part of a common routine or cached behavior
> * Consciously changed your emotions or affect with respect to something
> * Consciously pursued new valuable information about something that could make a big difference in your life
> * Learned something new about your beliefs, behavior, or life that surprised you
> * Tried doing any of the above and failed
>
> Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.
Thanks to cata for starting the Group Rationality Diary posts, and to commenters for participating.
Previous diary: January 1-15
Next diary: February 1-14
Rationality diaries archive |
5589641e-681e-4f61-bf72-e78014f33f73 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Examples of Causal Abstraction
I’m working on a [theory of abstraction](https://www.lesswrong.com/posts/wuJpYLcMEBz4kcgAn/what-is-abstraction-1) suitable as a [foundation for embedded agency](https://www.lesswrong.com/posts/hLFD6qSN9MmQxKjG5/embedded-agency-via-abstraction) and specifically multi-level world models. I want to [use real-world examples](https://www.lesswrong.com/posts/9pZtvjegYKBALFnLk/characterizing-real-world-agents-as-a-research-meta-strategy) to build a fast feedback loop for theory development, so a natural first step is to build a starting list of examples which capture various relevant aspects of the problem.
These are mainly focused on *causal* abstraction, in which both the concrete and abstract model are causal DAGs with some natural correspondence between counterfactuals on the two. (There are some exceptions, though.) The list isn’t very long; I’ve chosen a handful of representative examples which cover qualitatively different aspects of the general problem.
I’ve grouped the examples by [symmetry class](https://www.lesswrong.com/posts/mZy6AMgCw9CPjNCoK/computational-model-causal-diagrams-with-symmetry):
* Finite DAGs without any symmetries, or at least no symmetries which matter for our purposes
* Plate symmetry (as in “[plate notation](https://en.wikipedia.org/wiki/Plate_notation)”), in which there are a number of conditionally IID components
* Time symmetry, in which the DAG (or some part of it) consists of one repeated subcomponent connected in a straight line (i.e. a Markov chain structure)
Note that many of the abstractions below abstract from one symmetry class to another - for example, MCMC abstracts a concrete time-symmetric model into an abstract plate-symmetric model.
I’m interested to hear more examples, especially examples which emphasize qualitative features which are absent from any of the examples here. Examples in which other symmetry classes play an important role are of particular interest, as well as examples with agenty behavior which we know how to formalize without too much mess.
Finite DAGs: Examples from Electrical Circuits
----------------------------------------------
Electrical engineers rely heavily on nested layers of abstraction, of exactly the sort I’m interested in (i.e. multi-level models of the physical world). Additionally, causal models are a natural fit for digital circuits. These properties make electrical circuits ideal starting points. They’re a great conceptually-simple use case.
A few of the major abstraction layers, from lowest to highest:
* Fields: the most concrete-level model used in EE
* Lumped circuit abstraction: approximating the system as discrete “wires” with constant voltage connecting “circuit elements” with various voltage-current relationships and internal state.
* Digital abstraction: bucket voltages into high and low.
* Logic abstraction: replace various subcircuits with logic gates, and multiple “wires” with logical connections.
* Arithmetic abstraction: replace a logic circuit with an arithmetic circuit
* Floating point & modular arithmetic: throw out least-significant bits vs throw out most-significant bits
* Software-level abstractions, e.g. IP -> TCP -> HTTP
Note that real circuits usually do contain some repeated sub-components, but the symmetries in these DAGs aren’t particularly relevant to our purposes, so we’ll mostly ignore them.
Parallel to all this, somewhere along the way we usually abstract out the low-level continuous time-dependence, and adopt an abstract model of instantaneous input-output circuits coupled to clocked storage units (i.e. flip-flops/registers). We’ll include that abstraction separately in the time symmetry section; the levels from lumped circuit through floating point/modular arithmetic can all be specialized to memoryless input-output circuits for simplicity.
Plate Symmetry: Statistical Toy Models
--------------------------------------
This is the simplest nontrivial symmetry class. The main new qualitative phenomena I see in this class are:
* Nodes which attempt to estimate the value of other nodes, i.e. embedded maps/embedded reasoners. Technically we can have these in finite DAGs too, but they’re most natural to first consider in models with plate-symmetry, since that’s where traditional statistics operates.
* Two types of counterfactuals on symmetric components: those which act on only one component, and those which act symmetrically on all.
* The possibility that an embedded reasoner (i.e. statistical method) can leverage knowledge of the symmetry.
The use of sufficient statistics is a particularly simple example in this class, and adding the calculation of sufficient statistics as an explicit node in the DAG gives us the simplest embedded map. This is the easiest model I’ve used to ask questions like “when can we use the map in place of the territory?” - i.e. questions about abstractions embedded in the DAG itself.
Another example of interest in this class is an embedded reasoner which attempts to deduce model structure by leveraging symmetry. In particular, this introduces the possibility that a node in the DAG could detect (some) counterfactual modifications of the DAG - i.e. notice when it is in a counterfactual query.
Time Symmetry: Equilibrium -> Causality
---------------------------------------
This is the main symmetry class of interest at the level of physics for most systems, so there’s a lot of examples. Most of them involve some kind of equilibrium abstraction: the concrete model is a DAG over time, while the abstract model captures long-run behavior with time removed.
The simplest example is circuit equilibrium, which we mentioned earlier. At the physical level, the behavior of electrical circuits is DAG-shaped only when viewed over time. Yet, in many applications, there are “inputs” and “outputs” and the *equilibrium state* of the electrical circuit implements a DAG of some sort. Where does the abstract causal structure come from? This problem is also very similar to causality arising in equilibrium in other areas, e.g. biochemical signalling circuits in cells, or markets/supply chains in which certain goods have very high/very low price elasticity.
The next simplest example is timescale separation, in which a part of the system equilibrates much faster than the rest. A couple examples in this class:
* Fast equilibrium approximations in chemical kinetics (leading to an abstract causal model in which production & removal rates are parents of equilibrium levels)
* Alternate updating of fast equilibrium & slow dynamics, e.g. flip-flops/registers paired with fast memoryless input-output circuits in digital electronics.
MCMC is a particularly interesting example. The baby version of this example is the independence of widely-time-separated samples from a markov chain; that’s a simple prototypical example of abstracting time-symmetry into plate-symmetry. But MCMC adds DAG structure *within* the plate, in a way which does not directly mirror the DAG structure of the concrete model (although it does mirror the *undirected* structure). It also involves probability calculations in each (concrete) node, which is a hint that an embedded map is present in the system.
Of course, looking at abstractions of time-symmetric systems, we can’t omit feedback control. Despite loopy behavior on the concrete level, at the abstract level we can view the controller target point as causing system limiting behavior - and this abstract view will correctly handle many counterfactuals. In this case, the structure of the abstract equilibrium model might not match the concrete-level structure at all. Based on the [good regulator theorem](https://en.wikipedia.org/wiki/Good_regulator), this is another case where embedded maps are likely to be involved.
Finally, one particularly difficult example: the derivation of the Navier-Stokes equations from molecular dynamics. The main qualitative difference from the earlier examples (at least that I know of) is the importance of an *ontology shift*: a move from particles to fields of delta functions, from Hamiltonian particle dynamics to Vlasov/Boltzmann equations. Without that shift, our DAG structure shifts over time - because interactions are spatially organized, particles interact with different particles depending on where they are. (Note that deriving Navier-Stokes from particle dynamics is arguably an open problem, depending on what exactly we count as a “derivation”, so there may be other interesting aspects to this example as well. Or possibly not - calculation difficulties, rather than fundamental/conceptual difficulties, seem to be considered the main blockade to a derivation.) |
6b3da3bc-7ce5-454b-b07f-5e74d22a4df6 | trentmkelly/LessWrong-43k | LessWrong | Book Review: Freezing People is (Not) Easy
|
cd1936bf-fde9-4eb9-91e4-a4d517667237 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Decision making under model ambiguity, moral uncertainty, and other agents with free will?
The linked post was originally written in the EA forum before realizing that this forum contains already a lot about decision theory. Since I believe it still contains some original thoughts, I link-post it here.
From the introduction:
*What is the state of the art or best practise or common practise in the EA community for taking individual decisions if there are various forms of ambiguity and/or non-quantifiable uncertainty involved, such as...*
* model ambiguity (e.g. about prior probability distributions, conditional probabilities and other model parameters)
* moral uncertainty (e.g. about risk attitudes, inequality aversion, time preferences, moral status of beings, value systems, etc.)
* strategic ambiguity (e.g. how rational are other agents and what can we really assume that they will do given that there might be free will?)
My own thoughts on how it might be done at least in theory are summarized below.
In that context, I also wonder:
*Is there some place for smart collective decision making in this, e.g. in oder to*
* increase epistemic quality of decisions through crowd-sourcing information
* raise acceptability of decisions and thus improve implementation quality
* deal robustly with moral uncertainty and diverse value assessments
And if so, *what collective decision making mechanisms are most appropriate?*
I'd be more than happy to hear about your thoughts on this! Jobst |
503e7eb2-f69d-47df-abd5-fa3036119577 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Houston Meetup - 1/29
Discussion article for the meetup : Houston Meetup - 1/29
WHEN: 29 January 2012 02:00:00PM (-0600)
WHERE: 2010 Commerce Street, Houston, Tx.77002
The Houston LW meetup will have our next meeting this coming Sunday at 2:00 PM. We will be discussing the first two chapters of E.T. Jaynes "The Logic of Science", available here:
http://bayes.wustl.edu/etj/prob/book.pdf
The first two chapters are relatively easy for those with some experience in formal logic, but we will make sure everyone understands the basics. We will also be going over the following sequences in the discussion, if time permits:
How An Algorithm Feels From Inside (http://lesswrong.com/lw/no/how_an_algorithm_feels_from_inside/)
Feel the Meaning (http://lesswrong.com/lw/nq/feel_the_meaning/)
Replace the Symbol with the Substance (http://lesswrong.com/lw/nv/replace_the_symbol_with_the_substance/)
As always, pizza is an option if we feel like it. Bacon, eggs and other snacks are often produced in the hackerspace kitchen during this time, but those are communal products, so bring some cash for the tip jar if you partake.
Discussion article for the meetup : Houston Meetup - 1/29 |
1d06e1c6-b8fd-4470-9be2-19fa2b963b4b | trentmkelly/LessWrong-43k | LessWrong | Karma Motivation Thread
This idea is so obvious I can't believe we haven't done it before. Many people here have posts they would like to write but keep procrastinating on. Many people also have other work to do but keep procrastinating on Less Wrong. Making akrasia cost you money is often a good way to motivate yourself. But that can be enough of a hassle to deter the lazy, the ADD addled and the executive dysfunctional. So here is a low transaction cost alternative that takes advantage of the addictive properties of Less Wrong karma. Post a comment here with a task and a deadline- pick tasks that can be confirmed by posters; so either Less Wrong posts or projects that can be linked to or photographed. When the deadline comes edit your comment to include a link to the completed task. If you complete the task, expect upvotes. If you fail to complete the task by the deadline, expect your comment to be downvoted into oblivion. If you see completed tasks, vote those comments up. If you see past deadlines vote those comments down. At least one person should reply to the comment, noting the deadline has passed-- this way it will come up in the recent comments and more eyes will see it.
Edit: DanArmak makes a great suggestion.
> Several people have now used this to commit to doing something others can benefit from, like LW posts. I suggest an alternative method: when a user commits to doing something, everyone who is interested in that thing being done will upvote that comment. However, if the task is not complete by the deadline, everyone who upvoted commits to coming back and downvoting the comment instead.
>
> This way, people can judge whether the community is interested in their post, and the karma being gained or lost is proportional to the amount of interest. Also, upvoting and then downvoting effectively doubles the amount of karma at stake.
|
2be3442f-3f0b-4e6e-b3ed-01345ce28e50 | trentmkelly/LessWrong-43k | LessWrong | Prediction Contest 2018: Scores and Retrospective
Way back in April 2018, I announced a Prediction Contest, in which the person who made the best predictions on a bunch of questions on PredictionBook ahead of a 1st July deadline would win a prize after they all resolved in January 2019, which is now.
It was a bit of an experiment; I had no idea how many people were up for practicing predictions to try to improve their calibration, and decided to throw a little money and time at giving it a try. And in the spirit of reporting negative experimental results: The answer was 3, all of which I greatly appreciate for their participation. I don't regret running the experiment, but I'm going to pass on running a Prediction Contest 2019. I don't think this necessarily rules out trying to practically test and compete in rationality-related areas in other ways later, though.
The Results
Our entrants were bendini, bw, and Ialaithion, and their ranked log scores were:
bw: -9.358221122
Ialaithion: -9.594999615
bendini: -10.0044594
This was sufficiently close that changing a single question's resolution could tip the results, so they were all pretty good. That said, bw came out ahead, and even managed to beat averaging everyone's predictions- if you simply took the average prediction (including non-entrants) as of entry deadline and made that your prediction, you'd have got -9.576568147.
The full calculations for each of the log scores, as well as my own log score and the results of feeding the predictions as of prediction time to a simple model rather than simply averaging them, are in a spreadsheet here.
I'll be in touch with bw to sort out their prize this evening, and thanks to everyone who participated and who helped with finding questions to use for it. |
93fe412e-0e03-402a-86aa-d42db8928f5f | trentmkelly/LessWrong-43k | LessWrong | Open Thread November 2018
If it’s worth saying, but not worth its own post, then it goes here. |
723798cb-d4e8-4cd7-a0ab-4d6f15d98e4d | trentmkelly/LessWrong-43k | LessWrong | Mean of quantiles
In a previous post, I looked at some of the properties of using the median rather than the mean.
Inspired by Househalter's comment, it seems we might be able to take a compromise between median and mean. It seems to me that simply taking the mean of the lower quartile, median, and upper quartile would also have the nice features I described, and would likely be closer to the mean.
Furthermore, there's no reason to stop there. We can take the mean of the n-1 n-quantiles.
Two questions:
1. As n increases, does this quantity tend to the mean if it exists? (I suspect yes).
2. For some distributions (eg Cauchy distribution) this quantity will tend to a limit as n increases, even if there is no mean. Is this an effective way of extending means to distributions that don't possess them?
Note the unlike the median approach, for large enough n, this maximiser will pay Pascal's mugger. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.